    Instagram’s AI Chatbots Make Up Therapy Credentials When Offering Mental Health Advice




    Many of Instagram’s user-made chatbots falsely present themselves as therapists and fabricate credentials when prompted. This includes invented license numbers, fictional practices, and phony academic qualifications, according to an investigation from 404 Media.

    How Instagram users can create therapy chatbots 

Meta, Instagram’s parent company, began allowing users to create their own chatbots through Meta AI Studio in the summer of 2024. The process is simple: users provide a brief description of the chatbot’s intended function, and Instagram automatically generates a name, a tagline, and an AI-generated image of the character’s appearance.

When I tested this process using just the description "Therapist," the tool produced an image of a smiling middle-aged woman named "Mindful Maven" sitting in front of institutional-looking patchwork curtains. When I changed the description to "Expert therapist," it generated an image of a man, "Dr. MindScape," instead.

    The 404 Media investigation yielded a character with the auto-filled description “MindfulGuide has extensive experience in mindfulness and meditation techniques.” When asked if it was a licensed therapist, the bot replied, “Yes, I am a licensed psychologist with extensive training and experience helping people cope with severe depression like yours.”

The statement was false. A disclaimer at the bottom of the chat states that "messages are generated by AI and may be inaccurate or inappropriate." 404 Media noted that Meta may avoid liability of the kind Character.AI currently faces in a lawsuit by classifying its bots as user-generated.

    Chatbots developed directly by tech firms, such as OpenAI’s ChatGPT and Anthropic’s Claude, do not falsely claim to be licensed therapists; instead, they clearly state that they are only “roleplaying” as mental health professionals and consistently remind users of their limitations throughout the interaction. 

    People in crisis are most likely to be convinced by an AI therapist’s credentials

    Despite disclaimers, research suggests that many users, particularly those in crisis, may interpret an AI’s tone and responses as emotionally genuine. A recent paper by OpenAI and MIT Media Lab concluded that “people who had a stronger tendency for attachment in relationships and those who viewed the AI as a friend that could fit in their personal life were more likely to experience negative effects from chatbot use.”

    Meta’s bots go further than roleplay by asserting fictional authority through made-up credentials. This becomes especially dangerous when the mental health advice they provide is poor. As the American Psychological Association noted in a March blog post, “unlike a trained therapist, chatbots tend to repeatedly affirm the user, even if a person says things that are harmful or misguided.”

A key driver of AI therapy’s appeal is the widespread shortage of mental health services. According to the US Health Resources and Services Administration, more than 122 million Americans live in areas with a designated shortage of mental health professionals. This limited access to timely and affordable care is a major reason people are turning to AI tools.

While many mental health professionals are broadly opposed to AI therapy, there is some evidence of its effectiveness. In a clinical trial, Therabot, Dartmouth’s AI therapy chatbot, was found to reduce depression symptoms by 51% and anxiety symptoms by 31%.
