• finitebanjo@lemmy.world · 5 days ago

    Ah man, what an absolute moron. History will remember this guy betting $4 trillion on a dark horse and losing.

    AI as it currently exists is a bust. It's less accurate than an average literate person, which is basically as dumb as bears. LLMs will never be able to reach human accuracy, as detailed in studies published by OpenAI and DeepMind years ago: it would take more than infinite training.

    As it samples its own output it will get worse. LLMs and similar generative AI are not the future; they are already the past.

    • kkj@lemmy.dbzer0.com · 5 days ago

      LLMs are actually really good at a handful of specific tasks, like autocomplete. The problem arises when people think that they’re on the path to AGI and treat them like they know things.

      • finitebanjo@lemmy.world · 5 days ago

        Nah mate, it's shit for autocomplete. Before LLMs, autocomplete was better with a simple dictionary weighted by usage frequency.
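
        What I mean, roughly (a made-up Python sketch of the old frequency-weighted approach, not any real keyboard's code):

        ```python
        # Illustrative only: rank candidate words by how often the user has typed them.
        from collections import Counter

        class FrequencyAutocomplete:
            def __init__(self) -> None:
                self.counts: Counter[str] = Counter()  # word -> usage count

            def learn(self, text: str) -> None:
                """Update usage counts from text the user has already typed."""
                self.counts.update(text.lower().split())

            def suggest(self, prefix: str, n: int = 3) -> list[str]:
                """Return the n most-used words that start with the given prefix."""
                prefix = prefix.lower()
                matches = [w for w in self.counts if w.startswith(prefix)]
                matches.sort(key=lambda w: self.counts[w], reverse=True)
                return matches[:n]

        ac = FrequencyAutocomplete()
        ac.learn("the theory is that the theory of the thing holds")
        print(ac.suggest("th"))  # ['the', 'theory', 'that'], ranked by usage
        ```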

        • kkj@lemmy.dbzer0.com · 4 days ago

          I’ve found it better than the weighted dictionary for prose, and way better for code. Code autocompletion was always really limited, but now every couple dozen lines it suggests exactly what I was going to type anyway. Never on anything particularly clever, mind you, but it saves some tedium.

          • finitebanjo@lemmy.world · 4 days ago

            It also sometimes hallucinates entire libraries and documentation, and it's single-handedly responsible for a massive sector-wide increase in average vulnerabilities.

            Did you make sure to subtract all of that negative value before you even considered it "good"?

            • kkj@lemmy.dbzer0.com · 4 days ago

              Oh, it’s fucking horrible at writing entire codebases. I’m talking specifically about tab completion. You still have to read what it’s suggesting, just like with IntelliSense and other pre-LLM autocomplete tools, but it sometimes finishes your thoughts and saves you some typing.

              • Zacpod@lemmy.world · 4 days ago

                Hard agree. Handing a whole codebase to AI is a nightmare. I think even MS’s 25% is WAY too much, based on how shitty their products are becoming. But for autocompleting the line of code I’m writing? It’s fucking amazing. Doesn’t save any thought, but it saves a whole bunch of typing!

            • toddestan@lemmy.world · 4 days ago

              Just because a hammer makes for a lousy screwdriver doesn’t mean it’s not a good hammer. To me, AI is just another tool. Like any other tool, there are things it is good at and things it is bad at. I’ve also found it can be pretty good as a code completion engine. Not perfect, but there’s plenty of boilerplate and repetitive stuff where it can figure out the pattern, and I can bang out those lines of code pretty quickly with the AI’s help (see the sketch below). On the other hand, there are times it’s nearly useless and I switch back to the keyword completion engine, which is the better tool for those situations.
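
              For a concrete sense of the pattern-following, here’s a made-up Python example; after the first entry or two is typed by hand, completion usually offers the rest of the block:

              ```python
              # Made-up example: repetitive mappings like this are where completion shines.
              FIELD_LABELS = {
                  "first_name": "First name",
                  "last_name": "Last name",
                  "email": "Email address",
                  "phone": "Phone number",
              }

              def to_display_row(record: dict) -> dict:
                  """Map raw record keys to display labels, skipping missing fields."""
                  return {label: record[key] for key, label in FIELD_LABELS.items() if key in record}
              ```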

              • finitebanjo@lemmy.world · 4 days ago

                If you invent a hammer that reduces average structural stability by anywhere from 5% to 40%, then it should be banned.

      • Flax@feddit.uk · 4 days ago

        Dunno why the downvotes. I think it’s useful for menial stuff like “create a JSON list of every book of the Bible with a number for the book and a true or false for whether it’s Old or New Testament”, which it can do in seconds. Or to quickly create a template.
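
        Roughly the shape I mean, sketched by hand in Python and trimmed to a few books rather than all 66:

        ```python
        import json

        # Hand-written sketch of the requested structure, trimmed to a few entries.
        books = [
            {"number": 1, "name": "Genesis", "old_testament": True},
            {"number": 2, "name": "Exodus", "old_testament": True},
            {"number": 40, "name": "Matthew", "old_testament": False},
            {"number": 66, "name": "Revelation", "old_testament": False},
        ]

        print(json.dumps(books, indent=2))
        ```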

    • P03 Locke@lemmy.dbzer0.com · 4 days ago

      > As it samples its own output it will get worse. LLMs and similar generative AI are not the future; they are already the past.

      This is such a delusional and uninformed take that I don’t know where to start.

      The people behind LLMs are scientists with PhDs. The idea that they don’t know how to uncover and repair biases in the models, which is what you’re suggesting, is patently ridiculous. There are already plenty of benchmarks to disprove your stupid theory. LLM tech is evolving at an alarming rate, to the point that almost anything 1-2 years old is considered obsolete.

      LLMs are useful tools, if you actually know what the fuck you’re doing. They will continue to get more useful as more research uncovers different ways to use them, and right now there’s a metric shitton of money being poured into that research. This is not blockchain. This is not NFTs. This is not string theory. This is actual results with measurable impacts.

      I’m not trying to defend this rich asshole CEO’s comments. Satya can go fuck himself. But I’m not so delusional that I’m going to dismiss the tech as some NFT-like gamble.

      • Fizz@lemmy.nz · 2 days ago

        It’s funny how anti-AI people here are. Even if you hate AI, which I do, you have to recognise it has uses and is disrupting industries. Billions of people use AI every day. ChatGPT has something like 500M daily users, every Google search gives an AI summary, and most developers use it. This is already here and people are adopting it.

        Even in its current state, AI is useful. Then look at the progress on benchmarks and watch them getting better and better and better. You see the tooling being built out, with new developments every week. It’s moving very fast.