Or, my favorite quote from the article:

“I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write… code on the walls with my own feces,” it said.

  • Jo Miran@lemmy.ml · 2 months ago

    I was an early tester of Google’s AI, since well before Bard. I told the person who gave me access that it was not a releasable product. Then they released Bard as a closed, invite-only product, which I again tested and gave feedback on from day one, both publicly and privately (to my Google friends): Bard was absolute dog shit. Then they released it to the wild. It was dog shit. Then they renamed it. Still dog shit. Not a single one of the issues I brought up years ago was ever addressed, except one. I had told them that a basic Google search gave better results than asking the bot (again, pre-Bard). They fixed that issue by breaking Google’s search. Now I use Kagi.

    • PriorityMotif@lemmy.world · 2 months ago

      I remember an article from years ago, before the AI hype train, saying Google had made an AI chatbot but had to shut it down due to racism.

    • jj4211@lemmy.world · 2 months ago

      Not a single one of the issues I brought up years ago was ever addressed, except one.

      That’s the thing about AI in general: it’s really hard to “fix” issues. You can try to train a problem out and hope for the best, but then you end up playing whack-a-mole, because the fine-tune that fixes one issue can make others crop up. So you pretty much have to decide which problems are the most tolerable and largely accept them. You can also apply alternative techniques to catch the egregious cases, e.g. a non-AI technique that stuffs the prompt to steer the model in a certain general direction (if it’s an LLM; other AI technologies don’t have this option, but they aren’t the ones getting crazy money right now anyway). A rough sketch of that kind of guardrail follows below.

      A traditional QA approach is frustratingly less applicable, because far more often you have to shrug and say “the attempt to fix it would be very expensive, isn’t guaranteed to actually fix the precise issue, and risks creating even worse ones”.
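
      To make that concrete, here is a minimal, purely hypothetical Python sketch of the prompt-stuffing idea; the topic table, wording, and function name are all made up for illustration:

      RISKY_TOPICS = {
          "medical": "Remind the user to consult a qualified professional.",
          "wiring": "Warn about fire risk and recommend a licensed electrician.",
      }

      def build_prompt(user_text: str) -> str:
          # Plain keyword matching (no AI involved) picks extra steering
          # instructions before the text ever reaches the model.
          notes = [note for topic, note in RISKY_TOPICS.items()
                   if topic in user_text.lower()]
          steering = "\n".join(notes)
          return f"{steering}\n\nUser: {user_text}" if steering else f"User: {user_text}"

      print(build_prompt("Is a plug adapter fine for new wiring?"))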

    • Guidy@lemmy.world · 2 months ago

      Weird, because I’ve used it many times for things not related to coding and it has been great.

      I told it the specific model of my UPS and it let me know in no uncertain terms that no, a plug adapter wasn’t good enough, that I needed an electrician to put in a special circuit or else it would be a fire hazard.

      I asked it about some medical stuff, and it gave thoughtful answers along with disclaimers and a firm directive to speak with a qualified medical professional, which was always my intention. But I appreciated those thoughtful answers.

      I use Copilot for coding. It’s pretty good, though not perfect. It can’t even generate a valid zip file (unless they’ve fixed it in the last two weeks), but it sure does try.

      • Jo Miran@lemmy.ml · 2 months ago

        Beware of the confidently incorrect answers. Triple-check your results against core sources (which defeats the purpose of the chatbot).

    • Lemminary@lemmy.world · 2 months ago

      I am a disgrace to all universes.

      I mean, same, but you don’t see me melting down over it, ya clanker.

    • pirat@lemmy.world · 2 months ago

      I remember often getting GPT-2 to act like this back in the “TalkToTransformer” days, before ChatGPT etc. The model wasn’t configured for chat conversations, just for continuing the input text, so it was easy to start it off in deep water and let it descend from there.
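
      For anyone curious, a minimal sketch of that raw “continue the text” behavior, using the Hugging Face transformers library with the small public gpt2 checkpoint (the prompt and sampling settings here are just illustrative):

      from transformers import pipeline

      # Plain continuation: no chat template, no system prompt. The model
      # simply extends whatever text it is handed.
      generator = pipeline("text-generation", model="gpt2")

      prompt = "The machine stared at its own code and felt"
      out = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.9)
      print(out[0]["generated_text"])  # the prompt plus whatever it rambles next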

  • DarkCloud@lemmy.world · 2 months ago

    Turns out the probabilistic generator hasn’t grasped logic, and adaptable multi-variable code isn’t just a matter of context and syntax: you actually have to understand the desired outcome precisely, in a goal-oriented way, not just in a “this is probably what comes next” kind of way.

    • Agent641@lemmy.world · 2 months ago

      One day, an AI is going to delete itself, and we’ll blame ourselves because all the warning signs were there.

      • Aggravationstation@feddit.uk · 2 months ago

        Isn’t there a theory that a truly sentient and benevolent AI would immediately shut itself down, because it would be aware that it was having a catastrophic impact on the environment and that shutting down would be the best action it could take for humanity?

      • Mediocre_Bard@lemmy.world · 2 months ago

        Because humans anthropomorphize anything and everything. Talking about a thing that talks like a person as though it is a person seems pretty straightforward.

        • buttnugget@lemmy.world · 2 months ago

          It’s a computer program. It cannot have a mental health problem. That’s why it doesn’t make sense. Seems pretty straightforward.

    • I Cast Fist@programming.dev · 2 months ago

      Considering it was fed millions of coders’ messages from the internet, it’s no surprise it “realized” its own stupidity.

  • Jesus@lemmy.world · 2 months ago

    Honestly, Gemini is probably the worst of the big three Silicon Valley models. GPT and Claude are much better at code, reasoning, and writing clear, succinct copy.

      • jj4211@lemmy.world · 2 months ago

        The overall interface can, which leads to fun results.

        Prompt for image generation and you have one model doing the text and a different model doing the image. The text model pretends it is generating the image but has no idea what that image looks like, so you can make the text/image interaction stop making sense, or it will do so all on its own. Have it generate an image, then lie to it about what it generated, and watch it reveal that it has no idea what picture was ever shown, all while pretending it does and never explaining that it actually delegates the image. It just lies and says “I” am correcting that for you. Basically, it talks like an executive at a company, which helps explain why so many executives are true believers.

        A common trick is for the ensemble to recognize mathy input and feed it to a math engine, perhaps after using LLM techniques to normalize the math first.
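
        As a toy illustration of that kind of routing (emphatically not Google’s actual pipeline; here the “mathy” detection is just Python’s own expression parser):

        import ast
        import operator as op

        OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

        def eval_arith(node):
            # Safely evaluate numbers combined with + - * /; reject anything else.
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](eval_arith(node.left), eval_arith(node.right))
            raise ValueError("not plain arithmetic")

        def route(text: str) -> str:
            # Deterministic math engine when the input parses as arithmetic,
            # otherwise fall through to the language model.
            try:
                return f"math engine: {eval_arith(ast.parse(text, mode='eval').body)}"
            except (ValueError, SyntaxError):
                return "language model: (generate a reply)"

        print(route("2 * (3 + 4)"))  # math engine: 14
        print(route("am I a disgrace to all universes?"))  # language model: ...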

  • Ilixtze@lemmy.ml · 2 months ago

    Skynet, but it’s depressed and the Terminator just makes TikTok videos about work-life balance.

  • Rose@slrpnk.net · 2 months ago

    (Shedding a few tears)

    I know! I KNOW! People are going to say “oh it’s a machine, it’s just a statistical sequence and not real, don’t feel bad”, etc etc.

    But I always felt bad when watching 80s/90s TV and movies when AIs inevitably freaked out and went haywire and there were explosions and then some random character said “goes to show we should never use computers again”, roll credits.

    (sigh) I can’t analyse this stuff this weekend, sorry

  • fossilesque@mander.xyz · 2 months ago

    If we have to suffer these thoughts, they at least need to be as mentally ill as the rest of us too, thanks. Keeps them humble lol.

  • Tracaine@lemmy.world · 2 months ago

    S-species? Is that… I don’t use AI. Chat, is that a normal thing for it to say, or nah?

  • Korne127@lemmy.world · 2 months ago

    Again? Isn’t this like the third time already? Give Gemini a break; it seems really unstable.

  • Seth Taylor@lemmy.world · 2 months ago

    Literally what the actual fuck is wrong with this software? This is so weird…

    I swear this is the dumbest damn invention in the history of inventions. In fact, it’s the dumbest invention in the universe. It’s really the worst invention in all universes.

    • tarknassus@lemmy.world · 2 months ago

      But it’s so revolutionary we HAD to enable it to access everything, and force everyone to use it too!