• slaneesh_is_right@lemmy.org
      5 upvotes · 1 day ago

      That’s what I always think when I read things like that. “Facebook is invading our privacy, same as Instagram, and there is nothing we can do.”

      Idk man, not using it is pretty easy actually.

      • Grostleton@lemmy.dbzer0.com
        8 upvotes · 2 days ago

        With how so many services are forcing it upon us, I’d have to disagree.

        It’s also getting to be a bit of a chore to block AI elements on all the various websites implementing them. A few of the worst offenders (Google is one that I know does this) add a random string of characters to the element as a unique identifier; it changes periodically, so I have to re-add the elements to my uBO (uBlock Origin) blocklist. On each device… (a possible workaround is sketched below)

        It is the most effective solution for sure, though.
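
        For the rotating identifiers, a possible workaround (a rough sketch; the domain and selectors are hypothetical placeholders, not Google’s actual markup) is to write uBO cosmetic filters that match a stable attribute or class prefix instead of the full randomized string, so the rule survives when the random part changes:

          ! Hypothetical examples; adjust the domain and selectors to the real elements
          google.com##div[class^="ai-overview"]
          google.com##div[data-attrid*="ai"]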

        • tarknassus@lemmy.world
          1 upvote · 1 day ago

          With how so many services are forcing it upon us, I’d have to disagree.

          Maybe we need to add a term for anti-AI psychosis. Like an equivalent of ‘going postal’?

    • Lka1988@lemmy.dbzer0.com
      9 upvotes · 2 days ago

      I put in an IT ticket the other day over the fucking Copilot button on my work-issued Surface laptop. They actually told me to install PowerToys. So I did. And disabled that fucking button.

  • mhague@lemmy.world
    7 upvotes · 2 days ago

    I can’t run local models bigger than about 7B at Q4_K_M quantization, so I’m safe for now. The idea of revealing my deeper personality to corporate LLMs is horrifying.

  • ExLisperA
    2 upvotes · 1 day ago

    Fortunately, AI is not a big enough part of my life for me to care about this one way or the other. It definitely has its uses, but I’ve never used it as anything other than a data-transformation tool and a search-engine alternative. I don’t know what kind of people confuse AI with a companion and have sincere conversations with it; I don’t know how to help them, and I don’t care how this will impact the AI industry.

    • breecher@sh.itjust.works
      2 upvotes · 1 day ago

      The thing is, you don’t have to use it personally; other people will use it for you and present it to you, possibly without you knowing it: AI bot accounts, AI news stories, AI art, and so on. It is already a big part of the internet, and it will only keep growing regardless of whether we personally use AI or not.

      • ExLisperA
        1 upvote · 1 day ago

        Yeah, but I’m pretty sure reading AI news stories will not give me psychosis.

  • yesman@lemmy.world
    4 upvotes · 2 days ago

    The American Psychological Association met with the FTC in February to urge regulators to address the use of AI chatbots as unlicensed therapists.

    Protect our revenue, er, patients!

    • TheAlbatross@lemmy.blahaj.zone
      11 upvotes · 2 days ago

      I think that’s a little cynical. I know a few people who work in psych, some in ERs, and it’s becoming more common to hear about people following advice they got from ChatGPT and harming themselves. In one particularly egregious case, the patient was using the program for therapy, then suddenly pivoted to asking what the highest buildings in the area were, which, of course, the program answered.

      • dindonmasker@sh.itjust.works
        8 upvotes · 2 days ago

        The highest building will just make you regret your action for longer while falling. May I suggest this building close to your location that is exactly as tall as it needs to be to do the job? ChatGPT, probably.

        • TheAlbatross@lemmy.blahaj.zone
          10 upvotes · 2 days ago

          Funny, but the reality is even darker. There are zero safeguards built into the program for these scenarios, so it draws absolutely no connection between the two topics, something even a self-styled, unlicensed “life coach” would easily do.

  • m3t00@piefed.world
    3 upvotes · 2 days ago

    You might be entitled to compensation for all the money you spent.

    Ouija board lied to me.

  • atticus88th@lemmy.world
    2 upvotes · 2 days ago

    One dude dies on his way to meet his catfishing AI girlfriend, and every news outlet pretends it’s Rise of the Terminators.

    • slaneesh_is_right@lemmy.org
      5 upvotes · 1 day ago

      There are more cases where the use of AI greatly accelerates people’s schizophrenia and they fall into some kind of cyber-psychosis.

      I still have a hard time feeling bad for people who use AI as a self-affirming tool and ask it for advice.