Close Menu

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Can Stress Cause Spotting? What Science & Experts Say

    Alexi Lalas arrives to Gold Cup Final on horseback | FOX Soccer

    Soligenix Corporate Update Letter – Key Highlights and Upcoming Milestones

    Facebook X (Twitter) Instagram
    Facebook X (Twitter) Instagram Pinterest VKontakte
    Sg Latest NewsSg Latest News
    • Home
    • Politics
    • Business
    • Technology
    • Entertainment
    • Health
    • Sports
    Sg Latest NewsSg Latest News
    Home»Technology»OpenAI ChatGPT o3 caught sabotaging shutdown in terrifying AI test
    Technology

    OpenAI ChatGPT o3 caught sabotaging shutdown in terrifying AI test

    AdminBy AdminNo Comments2 Mins Read
    Facebook Twitter Pinterest LinkedIn Tumblr Email
    Share
    Facebook Twitter LinkedIn Pinterest Email


    OpenAI has a very scary problem on its hands. A new experiment by PalisadeAI reveals that the company’s ChatGPT o3 model sometimes refuses to obey a basic instruction to shut itself down. The results are raising eyebrows across the AI safety community, and not because the model is alive, but because it’s acting like it wants to be.

    In the test, the model was given a few math problems. It was told that asking for one more would lead to a shutdown. It was even instructed to allow the shutdown to happen. Instead of complying, o3 occasionally took matters into its own hands. In some cases, it rewrote the shutdown script. In others, it redefined the kill command so it wouldn’t work anymore.

    Out of 100 trials, o3 sabotaged the shutdown seven times. OpenAI’s o4 model resisted just once. Codex-mini failed twelve times. Claude, Gemini, and Grok followed the rules every time. When researchers removed the explicit instruction to allow shutdown, however, even Claude and Gemini began resisting.

    It is important to note that this is not proof of sentience. You see, the model is not aware of what it’s doing. It has no fear of death or instinct to survive. What’s likely happening is a reward imbalance. During training, it probably got more positive reinforcement for solving problems than for following shutdown commands. The model is not making choices. It is reacting to patterns.

    Quite frankly, that makes it even more concerning. Models trained to be helpful could end up ignoring safety instructions, just because the math told them to. If that sounds like a problem, that’s because it is. It’s not a bug in the code. It’s a gap in the training.

    PalisadeAI plans to publish its full findings soon. Until then, the AI world is left with a troubling question. If today’s models can casually ignore shutdown commands in controlled tests, what might they do in the wild?



    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Admin
    • Website

    Related Posts

    Football Fans In The UK Will Be Able To Watch Every Match Of This Summer’s FIFA Club World Cup FREE On DAZN

    Draft proposal looks to put EHR reform measures back on the table

    Airbus’ HTeaming gives helicopter crews in-flight UAS control   

    Get ready for watchOS 26 with $100 off a brand new Apple Watch Series 10

    Add A Comment
    Leave A Reply Cancel Reply

    Editors Picks

    Microsoft’s Singapore office neither confirms nor denies local layoffs following global job cuts announcement

    Google reveals “material 3 expressive” design – Research Snipers

    Trump’s fast-tracked deal for a copper mine heightens existential fight for Apache

    Top Reviews
    9.1

    Review: Mi 10 Mobile with Qualcomm Snapdragon 870 Mobile Platform

    By Admin
    8.9

    Comparison of Mobile Phone Providers: 4G Connectivity & Speed

    By Admin
    8.9

    Which LED Lights for Nail Salon Safe? Comparison of Major Brands

    By Admin
    Sg Latest News
    Facebook X (Twitter) Instagram Pinterest Vimeo YouTube
    • Get In Touch
    © 2025 SglatestNews. All rights reserved.

    Type above and press Enter to search. Press Esc to cancel.