Artificial intelligence is now present in all kinds of smart devices. It’s in our phones, our computers, and the self-serve checkout at the local supermarket (is that a lady finger banana or a Cavendish?). AI is also deeply embedded in workplaces, and has become a friend/lover/companion to millions of lonely hearts. (Case in point: OpenAI, the maker of ChatGPT, boasts over 800 million weekly users.)
And the technological innovations just keep coming: constantly evolving AI weapons technology, bespoke soundtrack suggestions to suit your current mood, eerily accurate text predictions in your draft emails. AI is ever-present, always listening, and ready to respond.
While the many conveniences of AI are frequently touted by the major players in the tech space, the downsides of farming out our administrative burdens, our copyediting, and even our creativity or erotic imaginations receive far less attention.
The Hidden Cost of AI
To sustain AI infrastructure, tech companies around the globe operate immense data centres that consume huge swathes of land and vast quantities of water and electricity. In 2024, at a Senate inquiry into AI adoption in Australia, Chief Scientist Dr Catherine Foley commented on the large amount of energy needed to train generative AI models:
“…[training] a model like GPT-3…[is estimated] to use about 1½ thousand megawatt hours…[which is] the equivalent of watching about 1½ million hours of Netflix.”
The consequences of environmental degradation, of course, have a gendered dimension too.
And no matter what form it takes, AI is never neutral. Trained on flawed data, it reflects bias and the uneven distribution of power, parroting back racist, sexist, and ableist views.
In June 2025, Australian writer and journalist Madison Griffiths sounded the alarm on the growing risk of generative AI models like ChatGPT being used by abusers, creating a sycophantic feedback loop that helps perpetrators rationalise and defend their actions. ‘Nudify’ apps, which digitally strip people of their clothing, make image-based abuse even more accessible through the swift generation of deepfakes.
Australia’s eSafety Commissioner, Julie Inman Grant, wrote on the subject:
“There is compelling and concerning data that explicit deepfakes have increased on the internet as much as 550% year on year since 2019. It’s a bit shocking to note that pornographic videos make up 98% of the deepfake material currently online and 99% of that imagery is of women and girls.”
While AI can bring a range of benefits to our lives by automating systems for greater efficiency, or indeed worker safety, those at the forefront of the industry, such as Elon Musk, are far more often driven by profit than by social values. Morgan Stanley predicts that ‘revenue from GenAI could reach about $1.1 trillion in 2028, a jump from $45 billion in 2024’.
Imagine if it were the other way around. What if AI were intertwined with a social outlook that placed intrinsic human value above capitalist outcomes? What would the future hold then?
What is Feminist AI?
Today’s AI systems are all too often driven by Eurocentric and male-centred market logic, disregarding cultural diversity and the ecological and social impacts that follow. Feminist AI (FAI) offers a different perspective, operating at the intersection of feminist philosophy and artificial intelligence. FAI challenges existing AI systems by exploring how feminist values, such as freedom and equity, can positively influence their development. Human-centric and justice-aligned, FAI seeks to deliver equality while correcting inequities. It extends feminist criticism by challenging dichotomies and re-evaluating human-technology relationships from multiple perspectives.
In short, a feminist approach to AI ensures that the experiences and perspectives of diverse groups are represented by involving those groups in the development of AI systems and ensuring they retain autonomy over their own experiences.
Who is leading the feminist AI movement?
Feminist AI is not a mainstream concept, but it could be. By flipping the approach to one that places people before profit, we could begin to see genuinely egalitarian outcomes. In Australia, and around the world, researchers and entrepreneurs are putting this framework into action by adopting a feminist approach to AI.
Global initiatives:
Chayn
A global feminist not-for-profit based in the UK, Chayn is developing FAI tools using existing AI software. They are currently developing an AI letter-writing tool for survivors of gender-based violence to request the removal of impersonation accounts and images shared without consent. They also work with survivors to test self-advocacy tools.
Argentinian group of researchers and technologists
Laura Alemany and colleagues propose to develop a tool that overcomes technical barriers to bias assessment in human language technologies.
<A+> Alliance
Led by Women at the Table (based in Switzerland) and Code for Africa (from Kenya), the alliance supports technologies being developed around the world, such as SafeHer (an AI transport-safety initiative from the Philippines), SOF+IA (an AI chatbot supporting victim-survivors of violence and digital harassment in Chile), and AymurAI (open-source software that captures and analyses gender-based violence data in Latin American courts).
Caroline Sinders’ Feminist Data Set
An ongoing project that aims to create a feminist-informed dataset for an AI chatbot, built with diverse community involvement.
Local researchers:
Dr Elise Stephenson & Isobel Barry
In their policy brief for the Australian Feminist Foreign Policy Coalition, Dr Elise Stephenson and Isobel Barry advocate for integrating feminist foreign policy principles into AI. They also co-authored an analysis of AI ethics and feminist theory, arguing that meta-blindness, where people engaging with content shaped by dominant perspectives disregard content that challenges those perspectives, further conceals the gendered risks and harms of generative AI.
Asher Flynn
Monash University researcher Asher Flynn has been vocal in the areas of gendered, sexual, and technology-facilitated violence policy and prevention. Together with other researchers, she explored criminal law responses to AI-facilitated abuse in The Palgrave Handbook of Gendered Violence and Technology. She also co-authored an article for 360info on the legal inconsistencies surrounding sexual deepfake abuse. She is currently working to identify the predictors, harms and consequences of AI-facilitated sexual abuse, along with interventions to address the problem.
Discussions around the impact of AI on women are also gaining traction in Australia, with figures like Tania Farha, CEO of Safe & Equal, highlighting generative AI as a risk for domestic abuse, and Dr Jessica Lake, senior lecturer at Melbourne Law School, raising concerns about the use of generative AI for counselling victims of domestic violence.
Tools to navigate digital and physical spaces
A growing number of feminist tools are emerging globally, including in Australia, to help individuals safely navigate the digital and physical world:
Chayn DIY Online Safety Guide
This comprehensive guide empowers individuals, especially those experiencing domestic abuse or stalking, to protect themselves online. It demystifies how digital footprints can be tracked and provides actionable steps to enhance online privacy.
Safecity
Started in India, this crowdmap is a visualisation of sexual and gender-based violence incidents submitted by people around the world. The experiences shared are used to identify patterns of sexual violence and to create safer spaces.
Sophia chatbot
Sophia is an AI-powered chatbot designed to support individuals affected by or seeking information on domestic violence and unhealthy relationships.
In this burgeoning age of AI, it is easy to be captivated by the convenience of automation while overlooking its real harms to marginalised groups. It would be remiss not to acknowledge how these technologies can perpetuate entrenched stereotypes and biases. Although several pioneers are working tirelessly to address the gendered impacts of AI, its adoption continues to accelerate at an unprecedented rate. Given that this growth is inevitable, embracing a feminist approach to AI development can offer a pathway to a digital world that is more just, equitable and inclusive.
Devini Raj Kumar is a Master of Public Health graduate. With a passion for women’s health and gender equality, Devini has dedicated her voluntary efforts to various non-profits and the Royal Women’s Hospital. When she’s not engaged in advocacy, you can find her escaping into the world of historical fiction.
Ally Oliver-Perham
When she’s not managing communications at VWT, you will find Ally cheerfully bouncing on her toddler’s trampoline (with or without said toddler) or sneaking in a few pages of a good book. With a passion for gender equality, Ally’s interested in meaningful ways we can work together for social good.