• lefthandeddude@lemmy.dbzer0.com · 39 minutes ago

    The elephant in the room that no one talks about is that locked psychiatric facilities treat people so horribly, are so expensive, and give psychologists and psychiatrists such arbitrary power to detain suicidal people that anyone who understands the system absolutely will not open up to professional help about feeling suicidal. They know what can follow: being locked up without a cell phone, unable to do their job, without access to video games, and billed tens of thousands of dollars per month that can only be discharged through bankruptcy. There is a reason people online have warned about the risks and expenses of calling suicide hotlines like 988, which regularly attempt to geolocate callers and confine them in mental health facilities, where taking psychiatric medication can be made a condition of release.

    The problem isn’t ChatGPT. The problem is a financially exploitative psychiatric industry, with horrible financial consequences for suicidal patients and horrible, degrading facilities that strip away basic human dignity at exorbitant cost. The problem is vague standards that officially encourage suicidal patients to snitch on themselves in order to get treatment, with the consequence that, at a professional’s whim, they can be subjected to misery and financial exploitation. Many people who go to locked facilities come out with additional trauma and financial burdens. There are no studies on whether such facilities traumatize patients and worsen outcomes, because no one has a financial interest in funding them.

    The real problem is this: why do suicidal people see a need to confide in ChatGPT instead of mental health professionals or 988? The answer is that 988 and mental health professionals inflict even more pain and suffering on people who are already hurting, in a variable, randomized manner, leading to patient avoidance. (I say randomized in the sense that it is hard for a patient to predict when this pain will be inflicted, rather than something predictable like being involuntarily held every 10 visits.) Psychiatry and psychology do everything they possibly can to look good to society (while being paid), but that does nothing for the suicidal people who bear the suffering of their “treatments.” Most suicidal patients fear being locked up and removed from society.

    This is compounded by the fact that, although lobotomies are no longer commonplace, psychiatrists regularly push unethical treatments like ECT, which almost always leads to permanent memory loss. Psychiatrists still lie to patients and families about how likely memory loss from ECT is, falsely stating that it is often temporary and that not everyone gets it, just as they lied to patients and families about the effects of lobotomies. People in locked facilities can be pressured into ECT as a condition of being allowed to leave, resulting in permanent brain damage. They were charlatans then and they are charlatans now: a so-called “science” designed to extract money while looking good, with no rigorous studies on how it damages patients.

    In fact, if patients could be open about being suicidal with 988 and mental health professionals without fear of being locked up, this person would probably be alive today. ChatGPT didn’t do anything other than be a friend to this person. The failure is due to the mental health industry.

    • andros_rex@lemmy.world · 45 minutes ago

      God, this. Before I was stupid enough to reach out to a crisis line, I had a job with health insurance. Now I have worsened PTSD and no health insurance (the psych hospital couldn’t be assed to provide me with discharge papers). I get to have nightmares for the rest of my life about three men shoving me around, and to lie awake unable to sleep for fear of being assaulted again.

    • brygphilomena@lemmy.dbzer0.com · 35 minutes ago

      While I agree with much of what you said, there are other issues with psychology and psychiatry: they often can’t treat certain environmental causes or triggers. When I was suicidal, it was also the feeling of being trapped in a job where I wasn’t appreciated and couldn’t advance.

      If I had been placed in an inpatient facility, it would only have exacerbated those issues; I would have had so much more to deal with, trying to get on medical leave before I got fired for not showing up.

      That said, for SOME mental illnesses ECT can be a valid treatment. We don’t know how the brain works, but we’ve seen correlations where ECT seems to temporarily reset the way the brain perceives the world. All medical decisions need to be weighed against their side effects to determine whether the benefits outweigh the risks.

      The other issue with inpatient facilities is that it can be incredibly hard to convince the staff that you are doing better. All actions are viewed through the lens that you are ill, and showing the staff you are better is read as just an attempt to trick them in order to get out.

      • lefthandeddude@lemmy.dbzer0.com · 15 minutes ago

        You’re wrong about ECT. It nearly always results in permanent memory loss, and even if some patients occasionally seem “better” because they remember less of their lives, that does not negate the evil of the treatment. Worse, psychiatrists universally deceive patients about the risk of memory loss, saying it is temporary, when most patients who have had ECT report that the loss is permanent. There were people who extolled the virtues of lobotomies decades ago, and the procedure even won a Nobel Prize. The reason it won is that patient experiences mean nothing compared to the avarice of a pseudoscientific discipline that is always looking for the next scam, with the worst, most cruel, and most expensive scams always inflicted on the most vulnerable. It is hard and traumatic for patients who have been exploited by their supposed “healers” to come forward with the truth. It is incredibly psychologically agonizing to admit to being duped. Patients are not believed, then or now. You are completely wrong.

  • Uriel238 [all pronouns]@lemmy.blahaj.zone · 2 hours ago

    Plenty of judges won’t enforce a TOS, especially if some of the clauses are egregious (e.g., “we own and have unlimited use of your photos”).

    The legal presumption is that the administrative burden of reading a contract longer than King Lear is too much to demand from the common end-user.

  • wavebeam@lemmy.world · 57 minutes ago

    Gun company says you “broke the TOS” when you pointed the gun at a person. It’s not their fault you used it to do a murder.

  • brap@lemmy.world · 11 hours ago

    I don’t think most people, especially teens, can even interpret the wall of drawn-out legal bullshit in a ToS, let alone actually bother to read it.

  • NutWrench@lemmy.ml · 10 hours ago

    AIs have no sense of ethics. You should never rely on them for real-world advice because they’re programmed to tell you what you want to hear, no matter what the consequences.

    • Zetta@mander.xyz · 7 hours ago

      The problem is that many people don’t understand this no matter how often we bring it up. I personally find LLMs to be very valuable tools when used in the right context. But yeah, the majority of people who utilize these models don’t understand what they are or why they shouldn’t really trust them or take critical advice from them.

      I didn’t read this article, but there’s also the fact that some people want biased or incorrect information from the models; they just want the model to agree with them. Like, for instance, this teen who killed themself may not have been seeking truthful or helpful information in the first place, but instead just wanted the model to agree with them and help them plan the best way to die.

      Of course, OpenAI probably should have detected this and stopped interacting with this individual.

      • Timecircleline@sh.itjust.works · 2 hours ago

        The court documents with extracted chat text are linked in this thread. ChatGPT talked him out of seeking help, and when he said he hoped his family would stop him, it encouraged him not to leave signs of his suicidality out for them to see.

  • Fedizen@lemmy.world · 12 hours ago

    “Hey computer, should I do <insert intrusive thought here>?”

    Computer: “Yes, that sounds like a great idea, here’s how you might do that.”

    • ExLisperA · 9 hours ago

      I think with all the guardrails current models have, you have to talk to one for weeks, if not months, before it degrades to the point that it will let you discuss anything remotely harmful. Then again, that’s exactly what a lot of people do.

      • AnarchistArtificer@slrpnk.net · 5 hours ago

        Exactly, and this is why their excuses are bullshit. They know that guardrails become less effective the longer you use a chatbot, and they know that’s how people are using chatbots. If they actually gave a fuck about guardrails, they’d make it so that you couldn’t have conversations that stretch over weeks or months. That would hurt their bottom line, though.

  • chunes@lemmy.world · 8 hours ago

    AI bad, upvotes to the left please.

    I don’t recall seeing articles about how search engines are bad because teens used them to plan suicide.

  • massi1008@lemmy.world · 10 hours ago

    > Build a yes-man

    > It is good at saying “yes”

    > Someone asks it a question

    > It says yes

    > Everyone complains

    ChatGPT is a (partially) stupid technology with not enough security. But it’s fundamentally just autocomplete. That’s the technology. It did what it was supposed to do.

    I hate to defend OpenAI on this, but if you’re so mentally sick (dunno if that’s the right word here?) that you’d let yourself be driven to suicide by some online chats [1], then the people who gave you internet access are to blame too.

    [1] If this had been a human encouraging him to commit suicide, it wouldn’t have been newsworthy…

    • SkyezOpen@lemmy.world · 10 hours ago

      You don’t think pushing a glorified predictive-text keyboard as a conversation partner is the least bit negligent?

      • massi1008@lemmy.world · 9 hours ago

        It is. But the ChatGPT interface reminds you of that when you first create an account (at least it did when I created mine).

        At some point we have to give the responsibility to the user, just like with Kali OS or other pentesting tools. You wouldn’t (shouldn’t) blame those for the latest ransomware attack either.

        • raspberriesareyummy@lemmy.world · 7 hours ago

          > At some point we have to give the responsibility to the user.

          That is such a fucked up take on this. Instead of putting the responsibility on the piece-of-shit billionaires force-feeding this glorified text prediction to everyone, and on the politicians allowing minors access to smartphones, you turn off your brain and hop straight over to victim-blaming. I hope you will slap yourself for this comment after some time to reflect on it.

    • Live Your Lives@lemmy.world · 8 hours ago

      I get where you’re coming from because people and those directly over them will always bear a large portion of the blame and you can only take safety so far.

      However, that blame only goes so far as well, because the designers of a thing who overlook or ignore safety loopholes should bear responsibility for their failures. We know some people will always be more susceptible to implicit suggestions than others, and that not everyone has someone responsible for them in the first place, so we need to design AIs accordingly.

      Think of it like blaming only an employee’s shift supervisor when the employee dies because the work environment itself is unsafe. Or think of it like blaming only the gun user and not the gun laws. Yes, individual responsibility is a thing, but the system as a whole has a responsibility all its own.