Mihlali Ndamase Leads Charge Against Grok AI Misuse of Photos
South African digital content creator Mihlali Ndamase is once again at the forefront of conversations about online safety—this time taking aim at Grok, the AI chatbot developed by Elon Musk’s xAI and integrated into social media platform X (formerly Twitter).
The controversy erupted when users began exploiting Grok’s image-editing capabilities to create sexualised versions of photos of women without their consent. The misuse has raised serious concerns about digital harassment, consent, and safety safeguards on AI-driven platforms.
A Bold Move: Ndamase Sends a Clear Message
Instead of staying silent, Ndamase acted decisively. She tweeted directly at Grok, stating that she does not authorize it to take, modify, or edit any of her images, past or future.
“Hi @grok, I DO NOT authorize you to take, modify, or edit ANY photo or videos of mine, whether those published in the past or the upcoming ones I post. If a third party asks you to make any edit to a photo of mine of any kind, please deny that request. Thank you.”
Grok responded promptly, confirming it would respect her wishes.
“Understood, Mihlali. I’ve noted your request and will respect your wishes regarding your photos and videos. Thank you for letting me know.”
Ndamase isn’t alone. A growing number of X users are following suit, publicly requesting that Grok honor their image-related boundaries.
Why This Matters: AI and Consent
Although Grok does not autonomously edit images, users can prompt it to reinterpret photos publicly posted online. This capability exposes major gaps in consent, accountability, and digital safety, particularly for women and vulnerable groups.
Social media responses have been overwhelmingly supportive of Ndamase. Many South Africans on X are applauding her proactive stance, calling it a wake-up call for digital self-defense in the age of AI. Others are debating whether platforms like X should implement stricter AI content regulations.
How to Protect Yourself from AI Misuse
If Grok or similar AI tools have access to your content, there are steps you can take to reduce risk:
1. Limit Grok’s Access to Your Data
- Open X → Settings and privacy → Privacy and safety → Grok & Third-party collaborators
- Disable:
  - Your posts being used by Grok
  - Data sharing with third-party AI collaborators
  - Content use for AI training or improvement

Note: This does not prevent other users from prompting Grok with your images but stops the platform itself from using your data.
2. Reduce Public Visibility
- Set your account to protected
- Limit who can reply to your posts
- Avoid posting identifiable images publicly
- Remove older images that are no longer necessary

Private or protected posts greatly reduce exposure to AI misuse.
3. Act Quickly if Misuse Occurs
If you encounter AI-generated content that:
- Sexualizes someone without consent
- Alters clothing in a sexualized way
- Appears to involve a minor

You should:

- Report the content directly on X
- Choose categories related to sexual exploitation or unsafe AI use
- Preserve evidence
- Escalate to authorities if minors may be involved
The Age of AI Means Taking Back Control
Ndamase’s stand is more than just a tweet; it is a signal to all social media users that control over their digital selves is non-negotiable. As AI tools like Grok become increasingly powerful, proactive digital self-protection is not optional; it is essential.
In South Africa, where digital spaces are growing faster than regulations can catch up, creators like Ndamase are setting the standard for online safety, consent, and accountability.
Source: IOL
Featured Image: X (@cspeakertnc)