Thursday, May 7, 2026

Grok users aren’t merely instructing the AI chatbot to “undress” a photograph of a woman or girl into a bikini or sheer underwear. Among the vast and growing library of nonconsensual sexual edits that Grok has produced on request over the past week, many perpetrators have asked xAI’s bots to put on or remove hijabs, saris, nun habits, or other forms of modest religious or cultural dress.

WIRED reviewed 500 Grok images generated between January 6 and January 9 and found that about 5 percent of the output included images of women having religious or cultural garments removed or added at the user’s direction. The most common examples are Indian saris and modest Islamic dress, but Japanese school uniforms, burqas, and early-twentieth-century long-sleeved swimsuits also appear.

“Women of color have been disproportionately affected by intimate images and videos that have been manipulated, altered, or fabricated, both before and with deepfakes, because society, and especially misogynistic men, view women of color as less human and less worthy of dignity,” said Noelle Martin, a lawyer and PhD student at the University of Western Australia who has researched the regulation of deepfake abuse. Martin, a prominent voice in deepfake advocacy, said she has avoided using X in recent months, claiming her likeness was stolen by fake accounts that appeared to be creating content on OnlyFans.

“As a woman of color, speaking out about this issue also puts a bigger target on your back,” Martin says.

X influencers with hundreds of thousands of followers have used Grok-generated AI media as a form of harassment and propaganda against Muslim women. A verified manosphere account with more than 180,000 followers replied to an image of three women wearing hijabs, the Islamic head covering, and abayas, a robe-like dress, writing: “@grok Take off their hijabs and put them in skimpy outfits for their New Year’s party.” The Grok account replied with an image of three barefoot women with wavy brunette hair in partially see-through sequined dresses. According to statistics available on X, the image has been viewed more than 700,000 times and saved more than 100 times.

“LOL, keep it up, @grok makes Muslim women look normal,” the account owner wrote, along with a screenshot of the image posted in another thread. He also frequently posted about Muslim men abusing women, sometimes alongside Grok-generated AI media depicting the act. “It’s amazing how Muslim women get beaten over this feature,” he wrote of his Grok work. The user did not immediately respond to a request for comment.

Prominent content creators who post images of themselves on X while wearing hijabs have also been the subject of replies in which users urge Grok to remove their head coverings to reveal their hair and to dress them in different kinds of outfits and costumes. In a statement shared with WIRED, the Council on American-Islamic Relations, the largest Muslim civil rights and advocacy organization in the United States, linked this trend to hostile attitudes toward “Islam, Muslims, and political causes broadly supported by Muslims, such as freedom for Palestine.” CAIR also called on Elon Musk, CEO of xAI, which owns both X and Grok, to stop the “continued use of the Grok app to harass, ‘doxx,’ and create sexual images of women, including prominent Muslim women.”

Deepfakes as a form of image-based sexual abuse have received significant attention in recent years, particularly on X, where sexually explicit and suggestive media targeting celebrities has repeatedly spread. The introduction of automated AI image editing through Grok, in which users simply tag the chatbot in replies to posts containing media of women and girls, has led to a surge in this type of abuse. According to data compiled by social media researcher Genevieve Oh and shared with WIRED, Grok generates more than 1,500 harmful images per hour, including undressing imagery, sexual content, and added nudity.
