
ChatGPT Falsely Accuses Law Professor of Sexual Harassment

by Zaghrah Anthony

We’ve all heard the promises. Artificial intelligence will revolutionize how we work, learn, and solve the world’s most complex problems. It’s the shiny new tool in our digital toolbox, often presented with an aura of cool, calculated objectivity. But what happens when that tool gets it dangerously, life-alteringly wrong?

For Jonathan Turley, a respected constitutional law professor at George Washington University, that abstract “what if” became a terrifying reality. His story isn’t about a misattributed quote or a minor factual error. It’s about an AI constructing a complete, damning fiction that threatened his reputation in an instant.

The Email That Changed Everything

The ordeal began not with a lawsuit or an angry student, but with a concerned email from a colleague. UCLA Professor Eugene Volokh was conducting research and asked OpenAI’s ChatGPT a simple question: could it list examples of American law professors who had been accused of sexual harassment?

The chatbot complied, listing several names. Among them was Jonathan Turley.

According to the AI, a Washington Post article from 2018 detailed how Turley, while a faculty member at Georgetown University, had been accused of sexually harassing a student during a law school trip to Alaska.

The problem? None of that is true.

A Fabrication, Bolt by Bolt

When Turley read the account, his shock was matched only by the sheer audacity of the fabrication. The AI hadn’t just gotten a date wrong; it had built an entire false reality around him.

“It invented an allegation where I was on the faculty at a school where I have never taught, went on a trip that I never took, and reported an allegation that was never made,” Turley explained. The irony was palpable. As a legal scholar, he has long written about the dangers AI poses to free speech and personal liberty. Now, he was living his worst-case scenario.

The story had all the hallmarks of a credible scandal: a specific date, a major newspaper citation, and a detailed scenario. But each pillar was made of digital sand. Turley has never taught at Georgetown. The Washington Post article does not exist. And in his 35-year career, he has never taken students on a trip, never gone to Alaska with them, and most importantly, has never been accused of sexual harassment or assault.

The Chilling Lack of a “Front Page”

This incident goes far beyond a simple software glitch. It highlights a fundamental shift in how defamation works in the digital age.

As Turley pointed out, when a traditional newspaper libels you, there is a path to recourse. There’s an editor, a publisher, a legal team. There’s a process for a retraction. You can demand that the front page be corrected.

But who do you call when the accuser is lines of code housed in a server farm? When Turley discovered that Microsoft’s Bing Chat (powered by the same GPT-4 technology) was also spreading the false story, the response was a digital shrug. The AI doesn’t field calls for comment. It doesn’t have a managing editor. It simply generates, disseminates, and moves on, leaving a shattered reputation in its wake.

A Society Built on Flawed Code?

The conversation online, particularly among academics and legal experts, has moved from theoretical concern to urgent alarm. This case is a perfect storm that reveals the core flaws of these systems: their ability to present confident falsehoods with the tone of authority and their replication of the biases embedded in their training data.

“AI algorithms are no less biased and flawed than the people who program them,” Turley noted. They scrape vast amounts of information from the internet, a landscape filled with both truth and malicious fiction, and learn to replicate patterns. In this case, the pattern was a scandal narrative involving a public figure, and the AI filled in the blanks with a real person’s name.

The implications are staggering. For journalists, researchers, or any citizen using these tools for quick answers, it creates a minefield of misinformation. For public figures, it introduces a new form of digital risk, one where allegations can be generated at scale without a single human accuser.

Turley’s call is now for serious legislative discussion on AI accountability. How do we apply the old rules of libel and defamation to these new, non-human entities? How do we build “guardrails” that protect people from becoming the protagonist in an AI’s fictional story?

His story is a wake-up call. It reminds us that for all their brilliance, AI systems are not oracles. They are tools—powerful, imperfect, and sometimes perilous. And before we hand them any more authority over our lives and reputations, we need to figure out how to hold them accountable when they, inevitably, get it wrong.

{Source: IOL}

Featured Image: X {@ReutersLegal}


© Copyright 2025 Bona Magazine