Ask Grok: AI Makes IBSA Easily Accessible & Often Unaccountable
Image-based sexual abuse is a critical human rights issue, especially with the rise of generative artificial intelligence.

Image-based sexual abuse (IBSA), the nonconsensual creation, sharing, or threat to share nude or intimate images (Henry et al., 2018), has been studied extensively over the last decade. Researchers have consistently shown that victims of IBSA suffer psychological, physiological, and even economic harm as a result of this digital abuse (Flynn et al., 2024; Powell et al., 2022; Spiker et al., 2025).
There is general agreement among stakeholders, survivors, and legislators that IBSA is a very real crime that should carry very real consequences.
With the passage of the TAKE IT DOWN Act in 2025 (S.146, 119th Congress), the United States federal government took a first baby step toward treating these cybercrimes seriously. However, public attitudes toward artificially generated or “deepfake” sexual images are less settled and vary considerably by social identity (Flynn et al., 2025).
At the same time, the rise of generative artificial intelligence (AI) has made the creation of deepfakes, sexual or otherwise, accessible to anyone with an internet connection and the desire to possess altered images. In the past, AI-generated images and videos were often low quality, obviously doctored, and frequently had a vaguely animated quality. More recent technological improvements have rendered modern deepfakes hyperrealistic and often indistinguishable from real photos or videos.
Even more concerning, the companies behind popular generative AI tools, including X, Character.AI, and OpenAI, have monetized sexual and erotic imagery and chat with insufficient protections for victims of nonconsensual imagery (O’Brien, 2025).
Deepfakes on X
Most recently, in January of 2026, X faced intense backlash when its popular Ask Grok feature was flooded with requests to create highly sexualized images, to the tune of 6,700 such requests per day in early January (D’Anastasio, 2026). Some of these images depicted minors, including a 14-year-old child actress (not named here for privacy). A great number, however, were created as retaliation against female users who posted in protest of the deepfake images.
Many women found the replies to their X posts inundated with highly sexualized AI-generated images of themselves, posted by a legion of male users and often accompanied by violent or threatening commentary. X did eventually reprogram Grok so that it could no longer produce true nudes of women or bikini-clad images of minors; however, the program remains capable of creating non-nude sexual imagery (Collier et al., 2026).
Likewise, users who requested these nonconsensual images faced no online sanctions: their X accounts were not removed, nor was their access to Grok’s generative AI features suspended. Victims were forced to take action on their own to get the images removed, up to and including legal recourse.
To be clear, this eruption of gender-based violence against women is not unusual in IBSA; rather, it is the norm (Flynn et al., 2022; Kromann & Flynn, 2025; Mainwaring et al., 2024). What was unique this time was the overwhelming number of X users who chose to exploit a clear failure in X’s user safety program to abuse women and children in so public a manner.
How We Can Collectively Address IBSA
The question arises: how are women to protect themselves and others from the predations of men who feel entitled to objectify strangers under the false shield of virtual anonymity? The answer is not to tell women to stop posting images of themselves.
Rather, as a society, we must address this violence in two ways.
Most critically, men must be held accountable for their virtual violence. As a community, men must hold each other to a higher standard of behavior and deny perpetrators the validation they seek from other men.
At the same time, tech companies such as X must be held accountable for the damages inflicted through their platforms and applications. Despite their best arguments, legal liability is not impossible to demand of these large corporations.
As of this writing, Meta faces extensive civil litigation over the deliberately dangerous and addictive nature of the programs it has designed, having refused the settlement offered to its competitors: Google, TikTok, and Snapchat (Goode, 2026). Public pressure works, but only if we come together as a community to demand corporate accountability.
References & Further Reading:
Collier, K., Goggin, B., Ingram, D., & Horvath, B. (2026, January 10). Elon Musk’s X limits some sexual deepfakes after backlash, but X’s Grok still makes them. NBCNews.com.
Flynn, A., Powell, A., Eaton, A., & Scott, A. J. (2025). Sexualized deepfake abuse: Perpetrator and victim perspectives on the motivations and forms of non-consensually created and shared sexualized deepfake imagery. Journal of Interpersonal Violence. Advance online publication.
Flynn, A., Powell, A., & Hindes, A. (2024). An intersectional analysis of technology-facilitated abuse: Prevalence, experiences and impacts of victimization. British Journal of Criminology, 64(3), 600–619.
Flynn, A., Cama, E., & Scott, A. J. (2022). Image-based abuse: Gender differences in bystander experiences and responses. Trends and Issues in Crime and Criminal Justice, 656, 1–16.
Goode, L. (2026, February 9). Meta Goes to Trial in a New Mexico Child Safety Case. Here’s What’s at Stake. WIRED.
Henry, N., Flynn, A., & Powell, A. (2018). Policing image-based sexual abuse: Stakeholder perspectives. Police Practice and Research: An International Journal, 19(6), 565–581.
Kromann, S. B., & Flynn, A. (2025). Bystander intervention in image-based sexual abuse: A scoping review. Trauma, Violence, & Abuse. Advance online publication.
Mainwaring, C., Scott, A. J., & Gabbert, F. (2024). Facilitators and barriers of bystander intervention intent in image-based sexual abuse contexts: A focus group study with a university sample. Journal of Interpersonal Violence, 39(11-12), 2655–2686.
O’Brien, M. (2025, October 17). OpenAI won’t be the first chatbot to try and profit from sexual content. AP News.
Powell, A., Scott, A. J., Flynn, A., & McCook, S. (2022). A multi-country study of image-based sexual abuse: Extent, relational nature and correlates of victimisation experiences. Journal of Sexual Aggression, 30(1), 25–40.
S.146 - 119th Congress (2025-2026): TAKE IT DOWN Act. (2025).
Spiker, R., Eaton, A. A., & Saunders, J. F. (2025). Victimization by nonconsensual distribution of intimate images is related to lower holistic well-being in a diverse sample of U.S. adults during the COVID-19 pandemic. Violence and Victims, 40(4), 630–660.
Mindbridge is the nation’s leading non-profit using brain and behavioral science to empower human rights defenders.
We conduct programming, support partnerships, and direct research at the intersection of psychological science and human rights. Through these efforts, Mindbridge is growing a science-driven community that gives human rights defenders access to the hearts and minds of those they serve.
To learn more about how we use brain and behavioral science to empower human rights defenders, find us at mindbridgecenter.org, read our blog on Psychology Today, and follow us on social media.
Instagram: Mindbridge_Center
Facebook: Mindbridge Center
LinkedIn: The Mindbridge Center
BlueSky: MindbridgeCenter