On Tuesday 17 June 2025, the Victorian Women’s Trust proudly presented the final instalment of our 2025 Trust Women: Lunch Break Sessions, a six-part policy webinar series designed to break down some of the most important gender equality policy challenges facing Australia today.
As technology advances, so do its risks — especially for women and marginalised groups. AI reflects gender bias, while tech-facilitated abuse surges. In AI and Tech-Facilitated Abuse: What Does the Future Hold?, our expert panel discussed the different types of digital abuse, such as deepfakes and nudification apps, the policies currently protecting our privacy and safety, and what more needs to be done to ensure equity and security in the digital world.
Speakers:
- Bec Martin, Facilitator & Director, Evolve
- Ani Lamont, Assistant Manager, Gender & Technology team, eSafety Commission
- Moderated by Dr Emma McNicol, VWT Board, Feminist Philosopher & Researcher
From February to June 2025, we hosted expert-led discussions on key issues such as nuclear energy, early childhood education, abortion access, housing, youth mental health, and tech-facilitated abuse. Each session offered insights from leading thinkers, advocates, and policy experts, helping us better understand the blockers to progress and, more importantly, the pathways forward. View the other webinars in this series.
Glossary of Terms
- Artificial Intelligence (AI): The ability of a computer system to perform tasks that would normally require human intelligence, such as learning, reasoning, and making decisions.
- Catfishing: Luring someone into a relationship using a fake online identity, often to scam them.
- Chat GPT: An artificial intelligence (AI) chatbot that uses natural language processing to create humanlike conversational dialogue.
- Child Sexual Abuse Material (CSAM): Material that depicts the sexual abuse or exploitation of children and young people under 18.
- Deepfake: An extremely realistic – though fake – image or video that shows a real person doing or saying something that they did not actually do or say. Deepfakes are created using artificial intelligence software that draws on a large number of photos or recordings of the person. Deepfakes have been used to create fake news, celebrity pornographic videos and malicious hoaxes.
- Faceswapping: Superimposing pictures of someone’s face onto another person’s body.
- Generative AI or Gen AI: Generative AI, often referred to as Gen AI, is an emerging field within AI that creates new content such as text, images, voice, video, and code by learning from data patterns. Notable examples include ChatGPT and Google’s Bard.
- Image-based abuse (IBA): Sharing, or threatening to share, an intimate image or video without the consent of the person shown.
- Machine learning: A branch of artificial intelligence (AI) that focuses on developing algorithms that allow computer systems to learn from data without being explicitly programmed.
- RRE: Respectful Relationships Education.
- Sharenting: The practice of parents sharing details, photos, and videos of their children online, often on social media platforms.
- Synthetic Child Sexual Exploitation Material: Images that sexualise children or young people under 18 that have been created using AI.
- TFGBV: Technology-facilitated gender-based violence. An act of violence perpetrated by one or more individuals that is committed, assisted, aggravated and amplified in part or fully by the use of information and communication technologies or digital media against a person on the basis of gender.
Further Resources
- Report to eSafety
- eSafety Parents
- The draw of the ‘manosphere’: understanding Andrew Tate’s appeal to lost men (The Conversation article by Ben Rich and Eva Bujalka)
- Why a professor of fascism left the US: ‘The lesson of 1933 is – you get out’ (The Guardian article by Jonathan Freedland)
- What are ‘nudification’ apps and how would a ban in the UK work? (The Guardian explainer article by Rachel Hall)
- AI chatbots and companions – risks to children and young people (eSafety article)
- Misogyny in the metaverse: is Mark Zuckerberg’s dream world a no-go area for women? (The Guardian article by Laura Bates)
- How you can protect children from online harm (eSafety resource)
- Child Safety Review (Deloitte resources)
- Life Skills (TESSA resources)
- Deepfake Nudes & Young People: Navigating a New Frontier in Technology-facilitated Nonconsensual Sexual Abuse and Exploitation (Thorn study)
- Children and Generative Artificial Intelligence (GenAI) in Australia: The Big Challenges (report by Digital Child)
- Financial abuse: How to protect customers and stop banking products from being weaponised (COBA interview with Catherine Fitzpatrick)
- Preventing Tech-based Abuse of Women (eSafety Grants Program)
- Supporting young men online (eSafety research report)
- Beacon (cyber safety app)
- Stop It At The Start (Australian violence against women prevention campaign)
Transcript
Note: Transcript is provided for reference only, and has been edited for clarity. Please confirm accuracy before quoting.
Dr Emma McNicol:
Okay, let’s get started. So good afternoon. My name is Emma McNicol. I’m tuning in today from the University of Melbourne campus. I’d like to acknowledge the traditional owners of the unceded lands on which I sit today, the Wurundjeri Woiwurrung and Bunurong peoples. I pay respect to Elders past, present and future and acknowledge the importance of Indigenous knowledge and the wonderful work of my Indigenous colleagues in the space specifically of gender based violence.
This webinar is our final instalment of the Trust Women Lunch Break sessions. The Victorian Women’s Trust’s 2025 Gender Equality Policy webinar series. I’m here today taking the place of Mary Crooks, and I’m a board member of the VWT. And it’s a pleasure to be here. Each session that we’re looking at in this lunch break series has explored a different topic.
We’ve looked at energy, abortion access, housing, tech-facilitated abuse, and teen mental health. Today, we’re looking at the relationship between gender based violence and technology. Specifically, we’re looking at how AI reflects gender bias and tech facilitated abuse. And we’re going to explore what policies protect privacy and safety, and what is needed to ensure equity and security in the digital world.
Now all these sessions have been and today will also be recorded. And if you have to go away or, you know, go do something, that’s fine. We encourage you to catch up via our website. Our team will pop a link in the chat. And we will have a short Q&A towards the end. We invite attendees to post their questions through the Q&A function. For those taking part in the chat discussion, we thank you for keeping the conversation respectful. The chat will be moderated to ensure it’s a safe and inclusive space for all. Hopefully no trolls will rock up today.
So, it’s my pleasure to introduce you to our panel. Ani Lamont is the Assistant Manager of the Gender and Technology team at the eSafety Commission. Ani is a gender based violence prevention specialist with over a decade of experience working in both Australia and overseas. Ani is currently the Assistant Manager, Gender and Tech with the eSafety Commission and leads their body of prevention work. Prior to this, Ani has worked for Our Watch, the Equality Institute, the What Works to Prevent Violence Against Women Global Program, and the United Nations Partners for Prevention.
It’s also my pleasure to introduce Bec Martin, the facilitator and director at Evolve. Bec is passionate about ensuring children and young people have safe online experiences and develop positive relationships with technology. With a background as a digital technologies coach, Bec brings a tech-positive approach to the digital safety and wellbeing education space. Thanks for being here today, Bec and Ani.
Let’s get our conversation started. Who is Andrew Tate and what issues does he pose for young people today?
Bec Martin:
So, Andrew Tate is somebody that a lot of young people and children are engaging with in online platforms like TikTok and also YouTube shorts. He has some really extreme beliefs, misogynistic views and sort of hyper masculine ideals that appeal to young boys and men who are trying to make sense of where they might fit in.
And some of the views shared and promoted to kids through algorithms can really shape their views and beliefs on what healthy relationships with women look like. And healthy relationships with themselves as well in general.
Ani Lamont:
And I’ll just jump on the back of that. You know, Andrew Tate at the moment is a highly publicised figurehead that sits at the top of a much broader and bigger manosphere, which is itself made up of several components. And to, you know, pick up on the point that Bec just mentioned, the attitudes and beliefs and ideas that are being promoted are harmful to women and misogynistic, and actively advocate for the use of violence against women.
But they’re also sites – a lot of my job is sifting through the manosphere – they’re sites where boys and men are being encouraged to harm themselves, and harm each other. And they are enacting violence against each other and are encouraging each other to commit suicide. So I think when we talk about the Andrew Tates, we actually need to be thinking about it much more broadly.
Dr Emma McNicol:
It’s very interesting, Ani, your point. I mean, I was not aware of that, that the manosphere involves encouraging male-to-male violence as well. And it makes me think about an article I read last night on Marci Shore, that professor who left her post at Yale in the US because of Trump and moved to Canada. And she was saying that she worried a civil war is brewing in America. And part of that is because there is just this violence in the culture and in the air at the moment. And it’s not man on woman necessarily. It’s man to man. Very interesting. So Tate and the manosphere are encouraging male on male violence. I wasn’t aware of that.
Anyway, let’s talk about AI. How can an AI program like ChatGPT perpetuate gender bias? And are we right to be concerned? Maybe this one’s for Ani. Thanks Ani.
Ani Lamont:
Sure. So when we’re talking about ChatGPT or AI, I think it’s first of all important to distinguish that there are actually many different types of AI, and AI is being integrated into a range of products and services that have the potential to be beneficial.
ChatGPT is just one example of how AI technology is being applied and used, and it comes under an umbrella term of generative AI. The difference between generative AI and other forms of AI is that its models can create new outputs instead of just making predictions and classifications based on machine learning. So I think when we start talking about AI and its impacts on the world, on violence against women, gender equality, we need to be looking at how we can maximise the potential benefits whilst mitigating the potential harms.
You know, would it be great for us to have AI help us find a cure to breast cancer? Of course. Should that come at the expense of women’s safety or progress towards gender equality? Absolutely not. So when it comes to, you know, how things like ChatGPT perpetuate gender bias.
You know, with ChatGPT in particular, there are obviously very well reported and discussed biases in the data sets that underpin the systems, and how that leads to biases in the information and responses it provides. But the factors that I’m more interested in looking at in relation to AI and its relationship and impact on gender biases and gender equality are actually sort of deeper and broader. You know, so if we start thinking about it across the multiple levels of our society, of our social interactions, you know, the accessibility and increasing ease of access to AI tools to generate deepfakes and deepfake pornography – to the point where even trained journalists and broadcasters cannot distinguish what is real anymore.
These are being used and are having very real world impacts on women in politics and women in media and women in leadership positions. And we’re seeing that worldwide that female candidates across the world are having deepfakes used to ridicule them, undermine them. They’re being sexualised in ways that are all designed for women to be put back in their box. And, you know, that’s problematic on any level.
But the global evidence base on violence against women, and Australia’s ‘Change the Story’, our national framework, makes clear that there are links between the presence of female leaders, the ability of women to participate equally in every single form of political and economic life, and then rates of violence against women. So I think if we’re thinking about deepfakes, if the cycle of women opting out or choosing to leave politics as a result of having deepfakes targeted at them continues to increase, then yes, in the future that is going to have an impact on the prevalence rates of violence against women.
Dr Emma McNicol:
I just wanted to ask something about the way the systems are running.
Ani Lamont:
Yeah, please.
Dr Emma McNicol:
I’m not sure how much you know about AI itself. I mean, is the gender bias of ChatGPT, is it a sexist program because it was created by an incel tech bro? Or is it, sort of, the systems themselves and their default operation, you know, their default MO? I mean, this is probably a really technical question about how AI works.
Ani Lamont:
Yeah listen, this actually might be a question… someone else can answer this better. My background is in gender based violence, and I always am very open with people that I am not a tech specialist in terms of technical elements like that. But as a quick answer to your question, I think it’s all of the above. Like I think it’s the fact that data sets and machine learning models – I mean, any data set that we had in the world prior to any of this being created already reflected the gender gaps that existed.
And so, yes that’s a component. Do I think it’s a component that the tech industry is a male dominated industry and that therefore sets the parameters around how and why tools like ChatGPT get developed and deployed? Absolutely. Yes. I hope that’s helpful, but I’m sorry I’m not an expert.
Dr Emma McNicol:
It’s a really tricky question. I mean, it also, if I’m correct, AI uses everything at its disposal. It ransacks the internet, right? If you think about the fact that we live in a patriarchal world and all of those resources that it is ransacking are themselves likely to have a gender bias, then it stands to reason that all those processing kind of mechanisms will reflect that.
But anyway, Bec, I want to hear from you. You work with school groups and parents to educate communities about tech safety. What are some. And I know I’m scared. I don’t really want to know the answer. But anyway, what are some of the emerging trends when it comes to tech facilitated abuse and our darling young community members?
Bec Martin:
Yeah. So we work across primary schools and secondary schools. And I think I’m going to touch upon something called image based abuse. And I’ll give you a bit more of an elaboration on the different types of image based abuse in a moment, because it might be unfamiliar to the audience today.
But some things that we see are sort of ranking and rating of girls, oftentimes in things like group chats or on websites. We have students who will take photos of their female counterparts and create deepfake nudes using nudify or undress apps. Now, if you’re unfamiliar with those apps, these are AI programs that take a photo of somebody that’s fully clothed and within a few clicks creates a very realistic deepfake nude of that person without their consent. And so kids are really engaging with these software programs and trying them out on themselves and their peers and on their female counterparts. They then might rank those images as well. So combining those two harms together.
Oftentimes in schools where we have that occurring, we know that it’s peer-to-peer but it also can be student to teacher. So we work with a lot of female teachers who have told us that they’ve been upset to find out that students have created deepfake nude images or deepfake pornography of them. Which is another type of image based abuse.
Sometimes our female teachers are actually unaware that there’s support services available for them too. That they don’t have to just put up with it because it’s happened in the school ecosystem. We see young people engaging with AI companion apps. So using AI as a friend but demonstrating some of the things that they might be viewing in pornography. We know that most of the behaviour in pornography is showing aggression and derogatory behaviour towards women. So boys using these companion apps, creating an AI girlfriend, but then trialing the things that they’re seeing in porn on that companion app. Talking about choking them, speaking to them in a derogatory way.
And then as well, for our parents and carers, we see, you know, images that have been posted on parents’ social media accounts that are taken and put through these same undress or nudify apps to create what we call synthetic child abuse material. And so that’s really flooding the space in terms of child sexual abuse material. It can be really difficult for law enforcement to tell the difference between what is a case of real life abuse and what is an image that’s been created using AI. So that’s just a very small snapshot of what we see.
And what I’ll say is this. The biggest headache for primary schools and secondary schools at the moment, and where all of these things sort of converge, is through group chats. Which is where these images inevitably get flicked around.
Dr Emma McNicol:
Oh wow Bec, you have described dystopia. This is terrifying. I didn’t know that AI or the internet was capable of such disgusting things. I probably should have known, but I didn’t. I’m sure I’m not alone. I’m not the only person in the audience. Tell me, how do you go about it? How? What’s the best approach for working with young people and working with schools? And what are schools getting wrong?
Bec Martin:
That is a huge question. And with all things in this space, we look at it in two different ways. And Ani and I were talking about this this morning. So, in a lot of cases we come to this space and are responding. We’re responding because something’s happened. And so in that instance it’s about working with the instigator and the target, and making sure that the targets of this type of harm know that there’s support mechanisms out there and that there are things that we can do to get those images taken down. Because retraumatisation and hyper vigilance for targets of online abuse is definitely something that can follow them throughout their schooling journey and into adult life.
When we talk about working with the instigators, you know, misuse of these apps is a symptom of a much, much broader problem. Which, you know, people on this call would be aware of. That we’ve got misogynistic views out there. And when combined with these programs, we’ve got teenagers who have an underdeveloped prefrontal cortex geared towards peer-to-peer connection, fitting in. And then you combine that with apps where they can make nudes very quickly. They can make catastrophic decisions in very short amounts of time. So there’s a responding element and minimising harm for the target. And they might also choose to pursue legal action on the instigator.
But then there’s a huge piece around prevention. And that’s where we have to be working. We need to be addressing this. It’s part of a much larger, broader societal issue on how we view women. What’s right, what’s wrong. And that’s far beyond what’s just legal.
Dr Emma McNicol:
How do we educate young women about these risks without ever letting them feel that they’re vulnerable or responsible for the tech violence that they have endured?
Bec Martin:
Yeah, I think that there’s been a… and look this is twofold. So one of the things that comes out when we talk to girls about deepfake IBA is that there’s a sense of relief. The silver lining for them is, “Oh well, at least now if I have my actual nudes shared, I can say that it could be a fake image.” So we do hear that.
Victim blaming is really problematic. Again, in that preventative space where people with good intentions are saying things like, “Don’t share a nude, because if your nude gets shared, then your life is over. You’ll never get the AFL draft, and your job prospects and relationship prospects will be doomed forever.” And that’s really problematic. When about 87% of year nine to year 12 students report some engagement with sending and receiving nudes, and that image based abuse is a very real thing that happens to our teenagers. When and if a girl’s nude is shared, we don’t want kids thinking that their life is over. And that applies to our young boys who have been sexually extorted as well.
The messaging that having your nude shared is the worst thing that can happen to you is so, so dangerous and problematic when it happens. So I’d say that’s an implication for practice that people working in this space need to be mindful of.
Dr Emma McNicol:
Wonderful. Thanks so much Bec. Ani, are some groups more vulnerable than others?
Ani Lamont:
I can absolutely answer that and would love to, but if I can just pick up on the prevention aspect in relation to the last question. Because I think that’s really the key and where the conversation needs to start moving, both in Australia and globally as well.
I mean, very rightly so. The issue of technology facilitated gender based violence, and AI specific GBV, has rightly started at the response end. But trying to bring things upstream so that we’re not just responding once the ambulance has gone off the cliff and we are proactively working, in the way that we have… And when I say we and given this audience, you know, we as feminists, we as policy makers, we as women who have advocated and staunchly worked for gender equality in every area of life. We have built the infrastructure or have been in the process of building the infrastructure for prevention work. And at the moment, technology facilitated gender based violence is kind of being seen as this separate thing or we’re not so sure how to deal with that at the moment.
But really what we’re needing is a rapid integration of TFGBV and AI specific messaging and tools as Bec just described, integrated into our existing prevention systems and infrastructure. And that includes, picking up on Bec’s point as well, you know, the cornerstone of prevention work is about flipping the script. So it’s not just about looking at women’s experiences and responses to the violence, which is absolutely necessary. But it’s flipping it around to look at well what, why do some men and boys use violence and others don’t?
And so for people in the audience, I think one of the key things that anybody can do in this space is have the conversations with the young people, but particularly the young men and boys. And not just the boys, the men in your life as well, around these issues and the impacts that they have.
But particularly with young people, you know, I think that there’s now a normality around talking to kids around screen times. But now it’s about taking that conversation further to talk about, well, what is the content you’re viewing? Are you looking at Andrew Tate? Are algorithms driving you down into the manosphere where you’re being told, you know, to… There’s a practice in the manosphere called bone smashing, which is like boys deforming themselves. I know.
And, you know, boys and kids don’t have the critical literacy or don’t have the tools to unpick those messages. And Bec and I were talking about this before. One of the challenges is that parents, caregivers, the people around young people feel their own sense of challenge because it feels scary because of the technology. But really, and I hope Bec and I can, you know, agree on this. Don’t worry so much about whether you feel, you know, thingy about the technology. It’s about having those conversations, particularly with men and boys, around “Well, how is this messaging harmful for you? How is this messaging actually maybe making you lonelier, more isolated?” So yeah, sorry, I just wanted to pick up on that before we move on.
Dr Emma McNicol:
Thanks, Ani. I think it’s a really important point about prevention and sort of how far back does prevention need to go? Do we need to be sitting down with young people and talking about tech? Or do we even need to be going earlier and talking about misogyny, even before tech? You know, it seems like… I guess my question is every generation has always been scared of emerging tech. You know, even films or TV, you know, was going to, you know, spoil the minds of the youth. Is tech a kind of normal expression or just an arm of sexism? Or is it in fact making it worse? You know, is it developing, extending and really consolidating misogyny? Or is it just one of these technological advancements that we’ve consistently seen?
Ani Lamont:
Yeah, it’s an interesting question. I think, you know, to frame it up for the realm of research and programmatic work that we have on TFGBV, this is a nascent field, you know. This has only existed for a few years. So we’re trying to work out those questions.
But observationally, what I would say is that the data around forms of technology facilitated gender based violence looks similar to other patterns we see in relation to other forms of violence against women and girls. But it does look different. And to your question around is it just consolidating or is it a tool for misogyny? Yes. But the problem with this tool is the rapid scaling up of misogyny. So for example, in violence prevention work we use social norms theory, which traditionally has envisaged if you can influence the 7 to 9 people around someone, you can start to shift their attitudes and behaviours.
So on that basis, we’ve spent the last ten years in Australia building a body of work around respectful relationships education in schools. You know, thinking that a classroom format provides that. But then, you know, with ChatGPT, with messaging tools, etc. A kid walks out of his mandated RRE class where he’s been given all of this information about why not to commit acts of violence against women or misogyny. And suddenly there are 40,000 people, who that kid’s never even met, who are giving them contrary messaging to what they have just received in that classroom.
So again, I think it’s, you know, something to tease out. It’s to, you know, what comes first, the misogyny or the tech? I would say you need to address both concurrently, because the reality of the world we live in and will continue to live in is those two things will not be separated regardless. But what we need to do is address the way that it scales up and amplifies misogyny.
Dr Emma McNicol:
Absolutely, I think it’s a really important question. I mean, it’s the chicken or the egg. It’s a beautiful answer. And I think you’ve nailed it when you said that basically, tech is expediting the kind of development of this misogyny, you know, it’s enabling.
Ani, now that we’re talking about what to do with young people, and rights and responsibilities for educators and community leaders. I mean, what are some of the policy measures that are currently in place to protect communities? And where do you see gaps?
Ani Lamont:
So, I mean, in a global sense, because part of what’s challenging with this issue is that it is a global problem. So policy discussions are happening globally on the development of AI and whether it should occur in open or closed environments. So, to explain that: whether AI tools should be able to be built and developed by anyone or by corporate entities, or whether they should be considered public goods or a form of public infrastructure.
In Australia, we’re looking at the risks, benefits and potential impacts of generative AI across such a huge body of government departments and industries that, you know, the list goes on, from the Attorney-General’s Department to ACMA and, most interestingly, and Bec and I were also talking about this, ASIO and the major security agencies, which have now formally recognised that online misogyny is the number one predictor for other forms of violent extremism against the state. And in real world, you know, IRL violence. So the players that are looking at what and how we regulate are broad.
And there are a variety of regulatory approaches that everyone is considering ourselves included, whether you take a soft approach, so to speak, which relies on voluntary codes, on working with industry. Versus, say, a more stick based approach, which is, you know, harder policy options backed by legislation, mandatory requirements, etc.
Dr Emma McNicol:
Fantastic.
Ani Lamont:
Yeah. And just to press that, as we currently stand in Australia. So we operate under the Online Safety Act, which was the world’s first online safety act. It does include protections and provisions around AI generated materials that cause harm. So I just want to stress that if you or someone you know is experiencing deepfakes created using AI, being imitated online, etc., etc., they are covered under current legislation. You can come to eSafety for assistance in having content removed.
Dr Emma McNicol:
Wonderful. Thank you Ani. I think we want people to be able to take away practical stuff today. So the VWT will distribute links, and places you can go if you do, say, have something, an image distributed of you. But if anybody, attendees today wanted to share anything, please feel free to put it in the chat as well. And we can add that to our lists that we would distribute.
Bec, I’ve got some questions for you. Two, in fact. The first one is, is AI making it easier to perpetrate gender based violence and how?
Bec Martin:
Yeah. So, I think I’ll just build on something that Ani said earlier as well, around coming out of that classroom and then seeing the exact opposite represented in online spaces like YouTube and TikTok. And I think what’s equally powerful is that we have this real normalisation of behaviour through group chats. I’m harping on about group chats today. But when they see, you know, huge volumes of their peers from their own school and other schools echoing, you know, misogynistic views towards women, that’s really powerful. And so I agree with Ani, yeah, these apps are the symptom, but they absolutely are spaces where messages are reinforced. And, you know, group chats in themselves are echo chambers just like manosphere content is.
So, something that we talk about when we work with young people is we don’t use the term perpetrators in schools. So we’d use that term for someone over the age of 18. But the language that we choose to use at Evolve is instigator / target, or someone who might be engaging in these programs, or someone who might be experiencing this type of online abuse or harm. And we feel that those terms really keep the focus on the behaviour rather than the person. We know kids make mistakes. We don’t want them to be labelled and have that label follow them around for life.
So what we see, things that make it easy. There are, you know, big gaps, to touch on what Ani was speaking about earlier too. You know, legal and policy gaps. We’re dealing with billion dollar tech industries that are rapidly producing hundreds, if not thousands, of these programs. And it’s really challenging for the legislation to keep up. It’s great to see that laws are being strengthened around, you know, digitally altered images and nudes, but the onus is still unfortunately on the target to report.
We want to see more pressure on those tech platforms and I’ll tell you why. So it is extremely easy for young people to find these types of programs. Google Play and Apple App Store. A quick search of certain terms will bring up thousands of these programs. They initially look like they’re sort of fashion apps for trying on different clothes, or as a body image or airbrush filter to change the shape of your body. But it’s really clear that there’s a nefarious purpose. They also come with harmful taglines that promote violence against women. Things like “Why would you bother taking her out on a date when you can just use our app to get her nudes?” In a lot of cases, because these apps are marketed as fashion apps, they’re often rated about age four plus. So kids can easily download them, potentially without even setting off a parental control permission.
And what we see is that they’re very, very easy to use. So within a few clicks, kids can create nude images and then on share them through something like, you know, a group chat. And when they’re trying to fit in and there’s that peer-to-peer need for connection. We’re being a little bit more impulsive when we’re a teenager. Our brain is developing. We’re maybe not listening to the adults in our lives. The opportunity to do real catastrophic harm to that target is there. And it just means that kids can make really big mistakes in very short amounts of time. So, the fact that these apps are available through those platforms, that’s something that we’d really like to see change.
Because the fact that you can pick up your phone and download it and start playing really speaks to young people’s curiosity, and they’re curious about bodies. And so they will play with these apps on themselves. And they’re also engaging with them peer-to-peer. The statistics say that this is a gendered issue that disproportionately harms women and girls, but I suspect it’s done peer to peer, you know, boy to boy. And it’s underreported because they do things like they make nude stickers of each other, and then they send those around in the group chat. And because it’s a sticker, it’s perceived as only a joke. So I think that that speaks to what Ani was talking about earlier on in this session too, where we see harms against women, we also see the same powers at play, that harm our young boys as well.
Dr Emma McNicol:
Yeah. I mean, are some young people copping it more? Do we know much about LGBTQ youth? And do we assume that… I assume that women, the young girls, are experiencing it more than boys?
Bec Martin:
Yeah. There’s a fantastic study that came out from Thorn in March this year, and I’ll share it in the chat. And what it showed is that absolutely, people in marginalised communities, LGBTQIA+, definitely face more harm because they’re often online more seeking that connection. And so image based abuse rates are higher amongst those communities, according to this study. But what the study also shows is that 1 in 10 boys, teenagers, that includes boys aged 13 to 14, have been the target of deepfake IBA themselves. So, I suspect that there’s some underreporting happening when it comes to boys and men as well. Sorry, Ani, I think you wanted to add on to that.
Ani Lamont:
No, no, no, no. It’s so interesting to hear about the specific data around deepfakes. Because, you know, what we have at the sort of national prevalence level is, you know, we know that 1 in 2 Australians have experienced some form of technology facilitated abuse. But similar to other patterns of gender based violence, it is gendered in its nature and impact. And women are more likely to experience it within the context of existing domestic and family violence or coercive dating relationships. Whereas men are more likely to experience one off instances from friends, work colleagues, other men, you know.
And yeah, picking up on what you were saying, particularly when we look at the data available at the national level, young women aged 18 to 24 in particular, report much higher and repeated rates of tech abuse compared to their male counterparts. They’re also more likely to experience particular types of it. Which include things like image based abuse and other forms of sexual violence. But what’s most sort of chilling about it is that it is most likely to be from someone that they know, that they trust. Someone that it is occurring within the context of broader emotional or physical abuse. So yes, it is definitely sort of targeting some groups more than others.
And to also raise, you know, very much for queer communities, because of the way that it co-occurs with other forms of online and real world violence. You know, LGBTIQ communities are most likely to experience this. And in particular, even though the data set that we have is quite small for a range of methodological reasons, transgender communities, their rate of experience is over 90%. And again, it becomes higher when you’re looking particularly at transgender youth. So again, it kind of replicates the patterns of violence that we already know.
Dr Emma McNicol:
And so with tech facilitated abuse, is AI the main mechanism that people are using for gender based violence? Or is it, I mean… what are the programs and what other forms is it taking?
Ani Lamont:
So I think again, you know, this whole area is a new space within the broader field of gender based violence research. So, in terms of typologies or what is being used more or less in terms of acts of TFGBV, we don’t have strong data to point that way.
But what I would say from the reporting data that we have through eSafety is that whilst things like AI get a lot of press coverage and feel very scary to the public. And you know, the other example is stalking apps and stalkerware that can be downloaded in three clicks by any, you know, usually man who wants to track a partner. Whilst those things get more coverage, the stuff that we see is actually still more likely to be lower tech options that are being used by perpetrators or people who, you know, young people who may go down that pathway.
So it’s things like, you know, the sharing of nudes that were shared in a particular context that was initially consensual. Or it’s things like using Facebook, Instagram, other social media sites to keep track or abreast of where someone you’re dating or, you know, in a partnership with is going. Using geo location tagging and sharing. The more prevalent forms are still low tech.
But again, part of the difficulty in this comes to Bec’s point around how things like Google Play and various websites can step up in these spaces. That’s where the problem is at the moment. But, you know, within ten minutes you can get information and access to things like AI, which previously would have been considered quite high tech or most people wouldn’t know how to do. Perpetrators are now finding that easily.
Dr Emma McNicol:
I mean, it makes me think of that work that Catherine Fitzpatrick is doing illuminating digital and financial abuse. For example, she alerted all the banks to the fact that perpetrators were using bank transfers, even when there was an IVO out. They were using, you know, the child support payment, or they would transfer $0.37 just to say something disgusting to the partner.
So, yes, I mean, perpetrators are and I’m not talking about teenagers. I know, thank you Bec for telling me that we don’t call a teenage person that does it a perpetrator. But adult perpetrators of family violence are… they’re very resourceful, we know that much. So, yeah, they’ll use tech however they can.
We’ve got some wonderful questions emerging in the chat. One regards whether culturally and linguistically diverse communities are any more at risk, or at any risk? I suppose this is part of a broader question. So far, what we’ve sort of found out through this conversation is that tech facilitated abuse is consistent with patterns of gender based violence we see. You know, women are more at risk. Queer communities are at risk. Cis, hetero men less at risk. Are there any aberrations in what we know about it? Is there any way that tech facilitated abuse disrupts or changes these patterns? And if you can talk about CALD at some point, that would be wonderful.
Bec Martin:
We’re really fortunate. We do a lot of work with recent arrivals. And so, oftentimes those young people are the ones using the device, because they’re navigating the internet for their parents and carers. Because they’ve got better English skills than potentially their parents.
Something that we find really challenging in the space is educating these communities about some of these online harms because of cultural sensitivity. So it can be really challenging for us when we’ve got certain cultures in the room where there might be a female audience. So mums come along, but the translator is male. And so in certain cultures it’s very uncomfortable for audiences. For me to be saying something about pornography, about image based abuse and trying to explain what that is in relation to things like, you know, nude images or sexually explicit images. That’s a real challenge.
And there’s not always things like the budget to run, you know, groups in ways that are culturally sensitive. Because they might have enough money to do one cybersafety session for the year. And we need to talk to everybody in that session. And so that’s a real juggle for us and something that we do see in this space. Is that, yeah, informing CALD communities of these harms in a way that feels culturally appropriate to them is something that we really try hard to do. Because they need to know that’s happening. And I said to the principal at one of these schools, I said “Are you experiencing this?” and they said, “Absolutely. We’re really glad you’ve talked about it.” So the kids are engaging with it. But then, yeah, letting parents know in a way that feels comfortable and safe for them is certainly a consideration. Sorry, Ani.
Ani Lamont:
No, it’s such a good point. And it actually connects to the question of, you know, are we seeing any sort of differences in this? And one area is in relation to, and this is based on very emerging data, particularly through a program.
So we have the Preventing Tech-based Abuse of Women Grants Program, through which we’ve funded Settlement Services International, one of the biggest providers of services for new migrants and refugees, to develop exactly these training resources and tools on the ground that can work. That have been built by and for community. But one of the interesting things that we are seeing from the emerging data that is different, is because of that factor where young people within the family are more comfortable with the technology and are being placed in the situation of having to set up the banking apps, set up whatever. There are, I don’t want to say higher prevalence rates, but it is seeming more common to have family violence within the context of children being violent or using tech facilitated abuse towards their parents. Which is its own sort of issue.
But in relation to CALD services and issues more broadly, I think one of the things to sort of point out around that as well is that again, you know, it’s one of the clearest examples that we have around how this issue builds on existing gender inequality. Particularly when it comes to digital literacy and access in and of itself. Because, you know, if you think about how gender norms play across all of this, if the only person in the house who knows how to set up the banking services, the access to your Centrelink, the access to anything nowadays, is the primary male household head, that is going to create gender inequalities that create risk factors for violence.
Bec Martin:
So, something else that we consider is when we’ve got particularly refugees or recent arrivals who are escaping generational trauma or have come from places where freedoms have been severely restricted and impacted. Freedom is massively important to those families. And so oftentimes the tech use within their home is pretty wildly unmoderated and unrestricted, because freedom is a really important thing for them to have. And I think that there’s a misconception too, that these families don’t have a lot of tech in their home. They have a huge range of tech in their home, just like everybody else. And so I think being mindful as well of like misconceptions that we can make around, you know, each family’s individual circumstances is really important as well when we’re talking to these communities, as is with any community.
Dr Emma McNicol:
Yeah, absolutely. So important in our policies and guidelines to be aware of the bias when we’re creating them and what expectations we might hold about a family, for example, that have a different background to our own.
Now, I’m bearing in mind that we did want to finish up on time. We’ve got a couple of questions to get through. So I unfortunately have to encourage you guys to be a little bit quicker with your answers. I’m so sorry. So the first question is, do the laws concerning tech facilitated abuse differ from state to state, or are they the same all over Australia? It’s a wonderful question. Thank you, Eleanor.
Ani Lamont:
So, to a certain extent, no. In relation to what we’ve just spoken about, it’s covered under the Online Safety Act, which is federal. But in terms of things like cyber crime, so more like, you know, scammer kind of stuff, which can be related, there are different laws at state and territory level. So, it depends on what you’re experiencing. But you should be covered under federal law. Yeah.
Dr Emma McNicol:
So I’m going to bring together two questions now. So, someone’s asked what tips and what should be covered when running a harm minimisation session in schools for older primary and early secondary schools in this space? I’m going to bring this together with another question, which I think is really a clever one by Molly Jeffrey. And that is, do you have thoughts about how this war against men might be pushing young men and boys towards the manosphere, and how can we counteract this? So I guess the question is, how do we educate young people on this without making young boys feel like, I don’t know, I guess that they’re the problem and kind of encouraging them.
Ani Lamont:
Do you want to jump in Bec?
Bec Martin:
I’ll take it.
Ani Lamont:
Yeah.
Bec Martin:
No. Something that we do with our work with young people is we don’t want to position boys as likely perpetrators of harm, first and foremost. So, there’s ways that we can discuss these things in a way that feels really comfortable for them. Naturally, teenagers sitting in front of us know far more, or think they know far more about social media than we do.
And so positioning them as educators rather than me being the presenter. I’m there to facilitate them supporting each other. And unpacking things in a way that feels tech positive, gender neutral, so that you don’t have kids sitting there feeling like they’re targeted before you’ve even begun. And there’s even things that we will do in terms of… We run what’s called a digital dilemma, which is a scenario where kids can unpack a problem and explore different pathways to how that problem might end up. Not just the right answer, all the ways we can unpack a problem.
And we use things like gender neutral names. So you’re never positioning the instigator as a boy and the target as the girl. Because they pick up on that quickly and disengage. So, simple things like that, where it’s unclear what gender the instigator and target are, is a really great way of not making anybody feel like they’re being targeted.
Ani Lamont:
Yeah. Sorry. I just wanted to build off that. And I think, everything Bec just said. We, you know, we did a piece of research work earlier this year looking at men and boys’ online experiences. And I think the key is to, again, work from the centre point of, these spaces are being harmful to men and boys themselves. And starting by trying to unpick that peer to peer violence first as the epicentre. From which we then start to spin out and look at well, how is that then making you very isolated and lonely and incapable of having, you know, functional relationships with women? Which, when you spend time in the manosphere is actually what people want.
But I also just wanted to pick up on something in that question as well. That, you know, are we pushing boys into this? Are we creating it? And I think we need to be really careful that we don’t fall into that line of thinking, though I understand why it’s there. Because I don’t think that the women worldwide who are experiencing this, experiencing violence at the hands of this, should ever be held responsible for the violence that is being perpetrated against them.
I think that there are certain negative players. Andrew Tate is just one example, but there are many players who do make money off this as well. Who are leveraging a point that brings in an audience and a buck. So, I just wanted to pick up on that: we don’t blame women.
Dr Emma McNicol:
Yeah, it’s really interesting. I have a four year old boy, and I grew up in an all female household. And it was easy in that context to see men as potential perpetrators. I now have a son who is completely innocent and naive. And I… but I know that he might do something harmful, you know, with an app at some point.
How do I… I mean, what’s the best thing I can do as a parent with a young son? Bearing in mind the reality that we might give young children access to devices. What are some of the… what do you think some of the really good barriers or thresholds we should have in place are? Maybe this one’s for Bec.
Bec Martin:
So, one of our pillars in our parent carer session is participate. And so, so often our kids as they grow up get handed screen time because we need to go and do something, right? We’ve got to cook dinner, take a meeting, we’re busy. And so it’s not surprising that as kids grow up, they view that screen time as a really private activity. And oftentimes the only time they might hear from us is when we’re coming over to tell them to turn off or something’s gone wrong. And so we quickly are positioned as the enemy.
And so we want to flip that script and make kids feel like our involvement in their online world is a positive one. And we do that by co viewing and co playing. Now I’m not saying, Emma, that you’ve got to sit down in grade three and start doing Roblox every time your kiddo wants to get into it. Because you’ve got better things to do than that. But what I would suggest is maybe when you’re watching a Friday night movie, it’s their movie choice. And you’re having conversations around things like gender inequity that you might spot in that fairy tale movie. And we’re trying to not be on our phone at the same time. We’re co viewing for a portion of that material.
And it can be things like co playing. So maybe on a Sunday afternoon it’s a tour of the Minecraft world on the big screen. So, changing that relationship where the adult is the baddie. We are a positive part of their online world and we’re there to help and we understand. And we can spot some things from a safety perspective as well, particularly on games like Roblox. Or content on YouTube shorts if they’re engaging with Andrew Tate or manosphere content. Because we’re side by side and we’ve been invited into that space, we can spot some of those things that might be problematic or harmful.
Dr Emma McNicol:
That’s so helpful Bec. I’ve been wondering about this for ages. I’m really glad we had this chat.
Alright Ani, I’d love to hear from you.
Ani Lamont:
Well, I just wanted to add that eSafety’s website has an entire section that is specifically for parents and caregivers. Which includes the information that you need, but also specific conversation guides.
The other really main resource site is the last iteration of Stop It At The Start, which is our main prevention campaign in Australia. It is focused on exactly this and engaging parents and caregivers and having this conversation. And again, their website has a whole suite of resources, particularly to help you have those conversations and to set up those new norms and parameters, as Bec was saying. But I think the main thing is don’t feel too… don’t feel scared. Have a conversation with the young people in your life, not about how long they’re on the screen, but what they’re seeing on the screen. What’s being fed to them.
Bec Martin:
I’ve just put in the chat the Beacon Cyber Safety App, which pulls together resources from trusted organisations like eSafety and Common Sense Media. And you can type in questions like – and as a parent, it’s the best resource out there, I think, and it’s on your phone – “My child has seen pornography. How do I have a conversation about it? How do I make Roblox safer? What are the risks on YouTube?” And you can also report harm: image based abuse, cyberbullying, online grooming all get reported through that app. So it’s a real one stop shop for parents and carers and it’s on your phone. So when they come to you and you’re on fire internally, you can stay calm and go and do some research.
Dr Emma McNicol:
That’s so helpful. Now we’ve got a couple of minutes left. We’ve got one question left. It’s all coming together perfectly.
Now, this question is practical. Ava has asked what are the reporting services available for teachers who are victimised by image based abuse or tech facilitated abuse in general. But just as a reminder, please, if you think of anything, and thank you for your contributions to the chat, add in links and we will bring them together and distribute them. I personally want to look up heaps of these. So thanks everybody for, you know, bringing all your resources together. I really appreciate it.
Now Bec, what do you think? Teachers who are victimised, what are the best support services?
Bec Martin:
Yeah. So I think first of all, letting teachers know that, you know, they’re believed just in the same way we say to kids. And also that they have the right to be protected so they don’t have to put up with it.
So, reporting to eSafety. Collecting evidence is really important and can help eSafety. But reporting to the platform as well to make sure that, content is taken down before it potentially spreads as well. So yeah, first and foremost that they have the right to report and I’d be going straight to eSafety in that situation.
Ani Lamont:
Yeah, come to us is, I think, the main thing. So in terms of… we have a couple of main reporting schemes. For cyber bullying material aimed at a child. For cyber abuse material aimed at an adult. For non-consensual image based abuse. And for illegal and restricted content, so child sexual abuse material and pornography. Each of those schemes has slightly different legal barriers, shall we say. But you don’t need to worry about that. You just need to go to the eSafety website. And there is an online form where you can quite simply put in what’s going on. And that will go to our investigators.
With that, I always provide the warning as well. If you are currently within an experience of violence, particularly domestic, family or sexual, please delete your search history and cache after going there. And using that online form, our investigators and our support team can reach out to you and help triage to have content removed. Or to work directly with tech platforms according to what’s going on with you. So I hope that’s helpful.
Dr Emma McNicol:
Wonderful. That is so helpful. You guys are experts in the field, but you’ve also offered really, really concrete, practical advice. I personally feel a bit safer and realise how anxious I am about all of this stuff as a mum. So yeah, this has been really, kind of helpful to me.
And yeah, it’s been pretty heavy, some of the stuff we’ve talked about. So I encourage everyone to do some deep breathing and have a cuppa after this. And don’t despair. Young people are good and we can get there thanks to the work of people like Bec and Ani.
I want to thank Bec and Ani for coming together with us today, and for sharing expertise and being so lovely and generous and candid with your suggestions and so practical. Thank you, everybody, for joining us for our lunchtime series. Thanks, Bec and Ani, for everything you’re doing. Really appreciate it.
Ends