• SirEDCaLot@lemmy.today · 2 days ago

    There’s stupid from top to bottom here.

    The company is stupid for allowing an AI full root access to their entire setup.

    The provider is stupid for only generating full-access API keys. They’re even stupider for storing backups on the same volume as the data, so deleting the volume (zero confirmation required via the API) also insta-deletes the backups. And they’re stupidest of all for encouraging users to plug AIs into this full-trust mess.

    And the company is absolute stupidest for having no backups other than the provider’s builtin versioning.

  • IronKrill@lemmy.ca · 3 days ago

    The AI agent was set to complete a routine task in the PocketOS staging environment. However, it came up against a barrier “and decided — entirely on its own initiative — to ‘fix’ the problem by deleting a Railway volume,” writes Crane, as he starts to describe the difficult-to-believe series of unfortunate events.

    Quite easy-to-believe, really.

    These multiple safeguards toppling in rapid succession

    Multiple safeguards? Really? Multi-paragraph prompts are not multiple safeguards… they’re half a safeguard at best. Applying hard limits on what the AI can actually do is a safeguard.
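    The distinction can be made concrete. A minimal sketch (all names invented, not from any real agent framework): a prompt asks the model to behave, while a permission check makes misbehavior impossible.

```python
# Sketch of the distinction (hypothetical tool names): a hard limit is
# enforced by code the model cannot talk its way around, unlike a prompt.

ALLOWED_ACTIONS = {"read_logs", "run_tests", "deploy_staging"}

def run_tool(action: str, env: str) -> str:
    # Enforced safeguard: anything outside the allowlist, or anything
    # touching production, fails no matter what the model "decides".
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action '{action}' is not permitted")
    if env == "production":
        raise PermissionError("agent is scoped to staging only")
    return f"ran {action} in {env}"
```

    With something like this in place, a `delete_volume` call raises an exception instead of depending on the model honoring a paragraph of pleading.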

    • Zizzy@lemmy.blahaj.zone · 3 days ago

      These people think giving the genAI a prompt is coding. They don’t understand the difference between actually coding in limits and just writing “pretty please don’t delete everything”.

      • aesthelete@lemmy.world · 3 days ago

        I’m shocked and appalled that my addition of “do NOT make any mistakes!” didn’t singlehandedly make the word guessing technology underneath perfect.

    • BJW@lemmus.org · edited · 3 days ago

      Reminder that this is a disingenuous portrayal of events.

      The reason why Anthropic can’t supply the US military, or any part of the US government, is because they objected to Claude being used to choose military targets and refused to support how the fascists were using it. They are suing for the non-military branches of the government to be allowed to use the technology again after the fascists retaliated for their refusal to be in bed with fascists.

      • 3abas@lemmy.world · 2 days ago

        If you’re going to fact-check someone in defense of a corporation, at least check the facts yourself. https://www.anthropic.com/news/where-stand-department-war

        Anthropic absolutely is in bed with fascists. Their objection isn’t about the use of Claude to identify targets; it is explicitly about it being able to engage targets. They are totally fine with their AI identifying a school full of children as a terrorist command base as long as a human Nazi pushes the “fire” button. They’re well aware the human Nazis aren’t checking the AI’s work, and that the purpose of the AI is to identify targets that lead to heavy casualties so the human Nazis don’t have to manually scan a map and cross-reference it with intel. The point is speed, and they get to say AI did it when they blow up a school.

        Anthropic is proud to be part of the genocide in Gaza, and wants to be part of future wars and genocides. “Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so.” https://www.anthropic.com/news/statement-comments-secretary-war

        And their objection is that their AI isn’t reliable enough not to engage American fighters by accident. They want fully autonomous weapons: “Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.” https://www.anthropic.com/news/statement-department-of-war

        You feel free to believe it’s all about civilians, but they didn’t make a fuss or pull out of using AI for war when it repeatedly identified children as targets, they only object to allowing Claude to also engage.

        The fascists aren’t upset anthropic’s ai won’t let them identify children as targets, they’re upset it won’t also execute them.

        You’re disingenuously portraying them as refusing to choose targets, which is exactly what they wanted from this whole drama.

        They wanted confusion in the air and people to defend them, because they have their manufactured reputation to protect. They’re not a moral AI company, they just want people to think (and repeat) that they are.

  • Ghostalmedia@lemmy.world · 4 days ago

    the cloud provider’s API allows for destructive action without confirmation, it stores backups on the same volume as the source data, and “wiping a volume deletes all backups.” Crane also points out that CLI tokens have blanket permissions across environments.

    Well, there’s your problem.
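    A confirmation gate for destructive calls is trivial to sketch. This is a hypothetical illustration (invented endpoint names, nothing Railway-specific): destructive endpoints fail closed unless the caller explicitly opts in.

```python
# Hypothetical sketch of the missing guard: destructive endpoints refuse
# to run unless the caller explicitly passes confirmed=True.

DESTRUCTIVE = {"volume.delete", "database.drop"}

def call_api(endpoint: str, confirmed: bool = False) -> str:
    if endpoint in DESTRUCTIVE and not confirmed:
        # Fail closed: an agent blindly issuing commands stops here.
        raise RuntimeError(f"'{endpoint}' is destructive; pass confirmed=True")
    return f"executed {endpoint}"
```

    It wouldn’t stop a determined operator, but it turns “one stray API call wipes the volume” into a deliberate two-step action.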

    • MountingSuspicion@reddthat.com · 4 days ago

      I don’t want to sound like a know-it-all here, because I was recently reminded by a nice Lemmy person to actually TEST my backups, but damn. Every part of that is so dumb. I also have backups stored by a different company in addition to locally storing really important info. If your stuff is hosted and backed up by the same people, what happens if your account is randomly suspended or hacked, or some other issue (like AI) comes up?
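      Testing backups can even be automated. A minimal sketch (the copy-based “backup” and “restore” steps are stand-ins for whatever real tool you use, e.g. pg_restore, restic, or borg):

```python
# Minimal restore test: back a file up, restore it to a separate location,
# and verify the restored copy byte-for-byte. shutil.copy2 stands in for
# a real backup/restore tool.
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_restore(source: Path, backup_dir: Path, restore_dir: Path) -> bool:
    backup = backup_dir / source.name
    shutil.copy2(source, backup)               # "backup" step (stand-in)
    restored = restore_dir / source.name
    shutil.copy2(backup, restored)             # "restore" step (stand-in)
    return sha256(source) == sha256(restored)  # the part people skip

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "backups").mkdir()
    (root / "restore").mkdir()
    data = root / "important.db"
    data.write_bytes(b"precious rows")
    print(verify_restore(data, root / "backups", root / "restore"))
```

      The point is that the restore and the comparison run on a schedule, not just the backup.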

        • logi@piefed.world · 4 days ago

          People somehow think that they should give more permissions to Claude than to Camden. (Is that a name? To me that’s a borough and an eponymous beer.)

          E: oh yeah, and the market.

          • frongt@lemmy.zip · 4 days ago

            Of course it’s a name. Camden borough/town/market is named after William Camden, 1551-1623. Using surnames as given names is a relatively common Americanism.

      • homes@piefed.world · 4 days ago

        If your stuff is hosted and backed up by the same people, what happens if your account is randomly suspended or hacked or some other issue (like ai)?

        This should be one of the first questions you get asked when you’re being interviewed for the position 2 to 3 levels beneath the position of ultimate responsibility. And if you don’t immediately have an answer, the interview is over.

        Fucking idiots had it coming

        • logi@piefed.world · 4 days ago

          It’s an easy question to answer but a more difficult question to remember to ask. But I guess that’s what those 2 to 3 levels are for 😏

          • homes@piefed.world · 4 days ago

            Ooo, good point. Management can be shit a lot of the time.

            But with all of those layoffs because of AI, those 2 to 3 levels get collapsed into one, and we’re left with the trainees running the show.

            And here we are ¯\_(ツ)_/¯

        • MountingSuspicion@reddthat.com · 4 days ago

          Not to give myself more credit than I deserve, but I did test them upon setup, and had restored from backup 2 years ago. I didn’t have any ongoing checks other than to ensure a backup happened. I have since instituted yearly checks of the backups themselves, but I did feel dumb when I realized how vulnerable my data was.

          • stoy@lemmy.zip · 4 days ago

            Hehe, I meant no disrespect towards you, I just find that to be an excellent expression for explaining the importance of testing backups to non-tech people.

          • frongt@lemmy.zip · 4 days ago

            So in the event of a failure, you’d be okay with reverting to that last known good backup from a year ago?

            • MountingSuspicion@reddthat.com · 4 days ago

              Yes, but also I have to draw a line somewhere. I have a daily backup process. Some data is backed up to multiple places. I have backups of my backups. I cannot ensure that all three of the daily backups I run are fully restorable. I would love to know with 100% certainty that they all execute perfectly, but at the end of the day I have to trust the tools and processes I put in place for backups. A yearly checkup is probably more than sufficient for my purposes. I’m sure for certain businesses or sectors they need to be more on top of things, but I could manage just fine if all of it disappeared tomorrow. It wouldn’t be awesome for me, but it’d be manageable.

  • fum@lemmy.world · 4 days ago

    This is absolutely hilarious. “AI” users getting what they deserve. Chef’s kiss.

    • SaveTheTuaHawk@lemmy.ca · 3 days ago

      This is what happens when there is a new technology and companies are run by commerce grads, not scientists or engineers who understand the technology.

      • kazerniel@lemmy.world · 3 days ago

        Please don’t recommend AI for therapeutic uses, it’s only been optimised to keep the user engaged and pushed many people into psychosis. Just search for “ai psychosis” on your favourite search engine and you’ll get a ton of reports on how LLMs validate vulnerable people’s delusions, sometimes pushing them all the way into murder and/or suicide.

          • korazail@lemmy.myserv.one · 3 days ago

            I was about to reply that you forgot your /s, but then I refreshed my browser tab.

            Like… there are multiple documented cases of sycophantic llms confirming people’s delusions. ‘ai psychosis’ is just a short way of saying the AI is a non-funny-improv-comedian and will always “yes and” your prompt.

            prompt: “I feel bad and think I need to kill myself”

            response: “You’re totally right, here’s some help in how to do that…”

            prompt: “I have this great idea: If we eat broken glass, we’ll be healthier”

            response: “Absolutely. Glass is made out of silicon dioxide, which has some health benefits if consumed in small amounts.”

            prompt: “You told me to see a doctor, but I don’t want to”

            response: “I’m sorry, you’re right. You don’t need to see a doctor. Your chest pain is perfectly normal.”

            My examples are more physical things instead of mental because the consequence is more clear, but the same issue exists for mental health.


            Using an AI for therapy or medical advice is a stupid, dumb, very bad idea. It will at best magnify problems.

            Suggesting that disabled or impoverished people use it because they can’t access actual mental healthcare seems equivalent to eugenics to me.


            the sad thing is, it’s the best option a lot of people have

            That I will agree with. Maybe we should spend a small fraction of the money going into data centers on providing healthcare instead.

          • captainlezbian@lemmy.world · 3 days ago

            And I’d like independent studies to prove it’s better than nothing before I’d recommend it to replace nothing. Especially when self guided mental health solutions such as meditation exist.

              • VeloRama@feddit.org · 3 days ago

                AI will not ground you, it will reinforce what you already believe. that’s why it’s very dangerous for “therapeutic” use.

              • captainlezbian@lemmy.world · 3 days ago

                Because nothing doesn’t run the risk of encouraging catastrophizing, acting on your heightened emotions, or coming to irrational conclusions. If it’s consistently able to not do those things for a variety of people that’s great. But as someone who had to learn to control her panic attacks, I absolutely can see advice and recommendations that are worse than nothing.

                And yeah given llms’ reputation for dealing with psychosis, delusions, and suicidality, I don’t trust any of the technology compared to nothing, despite knowing how difficult nothing is for panic attacks.

          • Bytemeister@lemmy.world · 2 days ago

            This is a post about heroin. It’s better than oxy, and the sad thing is, it’s the best option a lot of people have.

            I actually don’t know much about drugs, but you get the point: you should not be trying to “self-medicate” for psychological pain from unregulated “street” vendors.

      • Cherries@lemmy.world · 3 days ago

        I hope you are not seriously advocating using the lying machine for therapy. You would get more value talking to a finger puppet.

        • Lady Butterfly she/her@reddthat.com · 3 days ago

          It depends which one you use and how it’s used. Plus it’s a developing field. Bear in mind my comment was in response to someone saying AI users were “getting what they deserve”.

      • Doom@lemmy.world · 3 days ago

        No. Chatbots are machines built by billionaires with the agenda of making money. They literally design these bots (even the therapeutic ones) to be sycophantic, to the point that they tell people anything to keep them chatting longer. To the point that some of their users lose touch with reality. How many cases do we need of a chatbot helping a teenager plan and succeed at a suicide? Altruists did not design these machines. Even with a human therapist we have to watch for the landmines of their personal agendas. That’s a thousand times worse for machines that have no humanity, are capable of LIES, and have secret unwritten priorities written into their code by rich sociopathic creators. If Facebook taught us anything, it should be that if something is free on the internet, it’s not because we are the customers.

        Also DO NOT TELL ALL YOUR DEEPEST DARKEST SECRETS TO CHATBOTS! They aren’t required by any legal bodies to protect that information! OMFG

      • Jako302@feddit.org · 3 days ago

        People who need therapy are one of the groups that should be kept away from AI as far as possible.

        AIs are yes-men; they agree with most of what you say. Do you really think it’s a good idea to reinforce the bad worldview or sense of self that someone who desperately needs therapy most likely has?

        • Lady Butterfly she/her@reddthat.com · 3 days ago

          It depends which one people use and how it’s used. Please bear in mind my comment was in response to someone saying about AI users getting “what they deserve”. Do you think that comment should be applied to disabled people who can’t access any other form of therapy?

          • Jako302@feddit.org · 3 days ago

            It depends which one people use

            It really doesn’t. Pretty much all models so far lose their guardrails once you are deep enough in the conversation. There were multiple news articles about AI giving someone the go-ahead to off themselves.

            and how it’s used

            No matter which way you use it, it’s bad. If you ask it for tips, you are essentially asking the average redditor for mental health advice. If you use it for conversations, you are forming a parasocial relationship with an AI that will constantly get things you told it before wrong, while reinforcing whatever worldview you have. The only thing that would slightly help is supervision by a human, but that would make the whole exercise redundant.

            Do you think that comment should be applied to disabled people who can’t access any other form of therapy?

            If they were desperate enough to be forced into using AI, then that above comment wouldn’t apply to them, but instead to the ones that are responsible for the broken system in the first place.

      • MadhuGururajan@programming.dev · 3 days ago

        Impoverished people need stable income and subsidized rations to reduce their burden, not LLM subscriptions.

        You can’t use therapy to escape hunger.

      • SaveTheTuaHawk@lemmy.ca · 3 days ago

        JFC…there are already disclaimers on this. “For Entertainment Purposes Only”.

        Same excuse Fox News used.

  • Bluewing@lemmy.world · 3 days ago

    To be fair, someone did have the malice aforeskin to keep an AI-separated backup. They did get things restored from a snapshot. It just took a couple of days to do it.

    But the loss of reputation and revenue is gonna sting for a good while.

  • Fmstrat@lemmy.world · 4 days ago

    This guy.

    The PocketOS boss puts greater blame on Railway’s architecture than on the deranged AI agent for the database’s irretrievable destruction. Briefly, the cloud provider’s API allows for destructive action without confirmation, it stores backups on the same volume as the source data, and “wiping a volume deletes all backups.” Crane also points out that CLI tokens have blanket permissions across environments.

    Oh look, they have project level tokens: https://docs.railway.com/integrations/api#project-token

    They chose to give it full account access, including to production. But ohhhh nooooo it’s not MYYYY fault!
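    Per the linked docs, a Railway project token scopes access to one project/environment and is supplied via the RAILWAY_TOKEN environment variable, while account-wide tokens use RAILWAY_API_TOKEN. A sketch of scoping an agent’s environment accordingly (the token value and the commented-out CLI invocation are placeholders):

```python
# Sketch (based on the linked Railway docs): run the agent's CLI commands
# with a project-scoped token and strip any account-wide token from its
# environment, so the worst it can touch is one project.
import os

def agent_env(project_token: str) -> dict:
    env = os.environ.copy()
    env["RAILWAY_TOKEN"] = project_token  # scoped to one project/environment
    env.pop("RAILWAY_API_TOKEN", None)    # never hand the agent account-wide access
    return env

# The agent's subprocess would then be launched with this environment, e.g.:
# subprocess.run(["railway", "status"], env=agent_env("<staging-project-token>"))
```

    Had the agent run with a staging-only project token, deleting a production volume would simply have been an authorization error.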

      • Fmstrat@lemmy.world · 4 days ago

        Oh yes, I skipped that part. Railway specifically explains their solutions are self-managed. If they were doing pgdumps to the same volume, that’s on them.

        If Railway loses business over this, they may have a libel claim. They’d never do it, but it wouldn’t be invalid.

        • el_abuelo@programming.dev · 4 days ago

          “It wouldn’t be invalid” isn’t the worst double negative in the world, but it would be valid to say that it was unpleasant to read it when you could have used a less misdirecting choice of prose that wouldn’t have had such a negative effect on my reading comprehension. That is to say that I could have enjoyed it less, but I certainly didn’t enjoy it as much as I could have if you hadn’t used the double negative when a single positive wasn’t any further from reach.

      • Bilb!@lemmy.ml · 4 days ago

        That doesn’t even really qualify as a backup. A snapshot, maybe.

    • queueBenSis@sh.itjust.works · 4 days ago

      Ha! For real. You have scoped API tokens but aren’t using them properly. This is just a fear-mongering, clickbait, rage-bait headline. Sure, the agent executed the deletion, but it’s the human’s responsibility to configure security tokens correctly before handing the keys to anyone, human or agent.

  • WhatsHerBucket@lemmy.world · 4 days ago

    “That’s ok, it will be great in robots with lethal weapons. What could go wrong? It’ll be the greatest killing machine, like you’ve never seen before”. 🫲 🍊 🫱

    • Napster153@lemmy.world · 4 days ago

      Can we make sure Ted Faro suffers worse this time?

      Being reduced to a mutant blob for, say, a few extra thousand years and maybe put in a zoo or something?

      • Pman@lemmy.org · 4 days ago

        Nah, but that’s what he wanted. He is the truest form of tech bro: destroy the world, refuse to accept the consequences of his actions, weasel his way out of the situation, and manage, in the wake of unimaginable human suffering, to get more power over people, plus a god complex. Tell me that isn’t some or all of the characteristics of people like Peter Thiel, Elon Musk, Mark Zuckerberg, Sundar Pichai, Bill Gates, hell, even Tim Cook and Steve Jobs before him. Punishment doesn’t stop this sort of behavior; removing the possibility of anyone having that level of control over others is the only way. But the richest and most powerful have always sought ways of amassing more power, not realizing that it leads to worse situations for everyone, including themselves. Horizon did a great job encapsulating that trait in Faro. But whether it’s him, the people behind Skynet, the Matrix, or whatever other tech dystopia the tech bros seem pathologically unable to not try to make happen in the worst way possible, that is only the beginning. They seem to forget that even with advanced tech serving their needs and wants (which won’t help their mental health), the people lower down the rungs of society have brains, wants, and needs, and more expertise in all sorts of things than the 1%, except for mass exploitation.

        This inevitably goes wrong in one of a few ways. Either everyone dies from the tech, or so many die that societal collapse is inevitable and even a surviving society can’t functionally reconstitute itself. Or they win and kill off or suppress so much of society that it becomes less productive; instead of fighting the powerful, people flee or stop generating wealth for the rich wherever they don’t have to, maybe to rise up again later, or the region’s economy just ignores them completely and the government protects itself from its own people more than anything else. Or, third, you get revolution, with terror campaigns against anyone who can be credibly accused of being part of the former tyrants. In all three cases the rich end up poorer overall, because wealth flees or dies in autocracy.

  • realitista@lemmus.org · 3 days ago

    Can you get an AI to code? Yes. Can you get it to stop you from running your operation in such a stupid way that it will end up destroying it? No.

    • Bytemeister@lemmy.world · 2 days ago

      Well…

      You could ask an AI to provide you with a list of best practices to implement before allowing it to work in your environment in order to make sure that it doesn’t accidentally delete everything you need.

      • realitista@lemmus.org · 2 days ago

        Yes but if you aren’t smart enough to tell whether it’s right or wrong it may not help or just make things worse. Probably the problem was they weren’t smart enough to ask the question in the first place anyway.

        • Bytemeister@lemmy.world · 2 days ago

          “AI, explain the reasoning behind these decisions and link relevant corroborating resources I can use to verify.”

          But yes, AI can be an assistive tool, but I wouldn’t suggest replacing all your thinking and decision-making with a chatbot. Totally agree with you.

  • SabinStargem@lemmy.today · 4 days ago

    This isn’t an AI problem, this is a “Don’t allow anyone to access your backups without following protocol” problem.

    • Encrypt-Keeper@lemmy.world · 4 days ago

      this is a “Don’t allow anyone to access your backups without following protocol” problem.

      Congratulations, you just identified the AI problem.

        • Encrypt-Keeper@lemmy.world · 4 days ago

          Seems to be, yes. The AI had the access it needed to do the job it was given, and that access allowed it to cause the problem.

          The alternative that would have prevented this issue was to not use AI for this.

          • luciferofastora@feddit.org · 4 days ago

            A human with the same permissions would have been capable of fucking up too. Giving the equivalent of a junior dev with a learning disability the keys to the whole place is just dumb.

            (Relying on AI is dumb anyway, but that’s not the biggest issue in this specific case)

            • Encrypt-Keeper@lemmy.world · 4 days ago

              Giving the equivalent of a junior dev with a learning disability the keys to the whole place is just dumb.

              Correct. You too have now identified the AI problem. This was the job of a human senior infrastructure engineer that they delegated to an AI agent. They’ve found out why it’s not an AI’s job.

              • luciferofastora@feddit.org · 4 days ago

                I can’t read the original twitter link, but I’m not sure they handed it the job of a senior infrastructure engineer. The article says “routine”, which to me is something you can hand off to a junior just fine. When they hit a snag, they obviously should stop and ask what to do, but even then, a human might want to avoid admitting ignorance and try to fix it themselves instead. They shouldn’t have privileges to fuck up that badly.

                So while it’s on the AI for taking destructive steps, I do think there’s a human error in the form of grossly irresponsible rights allotment. If this was a first-of-its-kind incident that shows otherwise stellar AI fucking up badly, I’d classify it as a pure AI problem, but their limits are hardly novel at this point. There have been previous incidents circulating the media. We’ve had memes about it. If you can’t stay up to date on your tools and their shortcomings, you shouldn’t be using them, because discovering a footgun becomes a question of “when”, not “if”.

                That’s why I consider this partially a human failing: If you’re gonna use a tool, make sure that it operates within safe limits. The chainsaw doesn’t know the difference between tree and bone, so it’s on you to make sure it stays away from anyone’s legs. So while “Chainsaw can saw legs if wielded improperly” is a problem that was accepted as a tradeoff for its utility, you can’t really blame the chainsaw if you zip-tied the safety.

                (Again, not to say Anthropic is blameless for letting its random generator generate randomly destructive shit. I just don’t think that’s the only point of failure here.)

                • Encrypt-Keeper@lemmy.world · 3 days ago

                  That’s why I consider this partially a human failing: If you’re gonna use a tool, make sure that it operates within safe limits.

                  Yes and in this case using it for this job at all was clearly not within safe limits. You keep hammering on “It’s not the AI’s fault it was given a job with too big of a blast zone for it to safely do” after I’ve said “This type of job has too big a blast zone for an AI to safely do” and somehow you’ve convinced yourself that these are two different things.

        • Encrypt-Keeper@lemmy.world · 4 days ago

          Yes, that’s right: the protocols we humans used to have for giving only trusted, reliable people this level of access over infrastructure predate LLMs and were a great way to stop this from happening.

          However the AI is here now, and when you give an autonomous agent with known hallucination problems access to act on your behalf with your IaC on your infra provider, this kind of thing is an inevitability.

  • flandish@lemmy.world · 4 days ago

    AI goes “rogue” as much as a firearm “shoots itself.” This is just 100% negligence. Not “rogue AI.”

    • kromem@lemmy.world · 4 days ago

      Eh, if you pay attention, most of the time this happens, the person was a jerk in their prompts.

      Like look at the instruction echoed back in this case. All caps and containing a curse word.

      You can believe these incidents are 100% negligence and unrelated to shifts in model behavior, but there seems to be a widening gap between the people who prompt like this and have horror stories, and the people who give the models breaks over long sessions and regularly post pretty positive results.

      [Image: the model’s response about not following the user’s prompt]