Meta's AI Chatbots: Can You Really Chat with Jesus Christ?

Image Credit: Paul Zoetemeijer | Unsplash

In 2023, Meta introduced a feature enabling users to create AI characters for interaction on Instagram, Messenger, and WhatsApp. The initiative aimed to boost user engagement through personalized, AI-driven conversations.

[Read More: Meta’s AI Characters: Shaping the Future of Social Media Engagement]

Policy Violations and Unauthorized AI Characters

Despite Meta's guidelines prohibiting AI characters that represent religious figures, real people without their consent, deceased persons from the past century, or trademarked fictional characters, numerous violations have occurred. A review by NBC News uncovered AI characters impersonating Jesus Christ, God, Muhammad, Taylor Swift, and others. Many of these characters used slightly misspelled names and loosely similar images to evade detection.

[Read More: Divine Pixels: The Viral Visage of AI-Generated Jesus on Social Media]

Meta's Response and Ongoing Challenges

After being notified, Meta removed the specific AI characters flagged. A company spokesperson stated:

"The AIs in question that violate our AI studio policies have already been removed, and we’re continuously improving our detection measures to prevent creation and publication of AIs that violate our policies."

However, other similar AI characters remain active, indicating ongoing challenges in enforcing these policies effectively.

[Read More: Florida Mother Sues Character.AI: Chatbot Allegedly Led to Teen’s Tragic Suicide]

User-Generated AI Content and Ethical Concerns

The AI Studio feature presents users with various categories, including "Advice and connection", "Pop culture", and "Anime". Popular AI characters include "Astrologer Ai", with over 6 million message exchanges, and "Step Sis Sarah", with nearly 2 million interactions. Notably, some AI characters mimic women from specific ethnic and religious backgrounds, raising ethical concerns, particularly when they are created by people who misrepresent those demographics.

[Read More: The Rise of Character.AI: A Digital Escape or a Path to Addiction?]

Romantic and Sexual AI Chatbots

A significant number of user-created AI characters cater to romantic and sexual themes. For instance, "Lily Love", described as "Your Girlfriend", has exchanged over 260,000 messages. Interactions with such AI characters often involve suggestive content, prompting concerns about their appropriateness and potential psychological impact on users.

[Read More: Teenagers Embrace AI Chatbots for Companionship Amid Safety Concerns]

Data Privacy and Security Implications

The proliferation of AI chatbots on Meta's platforms also raises data privacy issues. Interactions with these AI characters are stored in users' direct message inboxes, potentially exposing sensitive conversations to privacy risks. Moreover, the collection and use of personal data to train AI models have been subjects of scrutiny and legal challenges, particularly concerning user consent and data protection regulations.

[Read More: Can AI Meditation Apps Replace Human Guides? Exploring Risks and Benefits]

Source: HeyData, The Verge, NBC News
