• Snot Flickerman@lemmy.blahaj.zone

    I feel like these people aren’t really worried about superintelligence so much as hyping their stock portfolios, which are deeply invested in this charlatan-ass AI shit.

    There’s some useful AI out there, sure, but superintelligence is not around the corner, and pretending it is acts as just another way to hype the stock price of the companies that claim it is.

    • Rhaedas@fedia.io

      I doubt the few who are calling for a slowdown or an all-out ban on further AI work are trying to profit from any success they have. The funny thing is, we won’t know whether we’ve hit even just AGI until we’re past it, and in theory AGI will quickly become ASI simply because it’s the next step once that point is reached. So anyone saying AGI is here or almost here is just speculating, as is anyone who says it’s not near or will never happen.

      The only thing possibly worse than reaching the AGI/ASI point unprepared might be not reaching it, but instead creating tools that simulate many of its features and all of its dangers, and ignorantly using them without any caution. Oh look, we’re there already, and doing a terrible job of being cautious, as we usually are with new tech.

      • Perspectivist@feddit.uk

        In my view, a true AGI would immediately be superintelligent, because even if it wasn’t any smarter than us, it would still be able to process information orders of magnitude faster. A scientist who has a minute to answer a question will always be outperformed by an equally smart scientist who has a year.

        • Rhaedas@fedia.io

          That’s a reasonable definition. It also pushes things closer to what we think we can do now, since by the same logic a slower AGI would equal a person, and a cluster of them working a single issue would beat one. The G (general) is the key part that changes things, regardless of speed, and we’re not there. LLMs are general in many ways but lack the I to spark anything from it; they just simulate it by doing exactly what you describe: finding the best matches in their training data much faster, and sometimes appearing to have reasoned it out.

          ASI differs from AGI only in scale. We as humans can’t have any idea what an ASI would be like, other than that it would be far superior to a human for whatever reasons. If it’s only speed, that’s enough. It certainly could become more than just faster, though, and that combined with speed… naysayers had better hope they’re right about the impossibilities, but how can they know for sure about something we couldn’t grasp if it existed?

    • Capricorn_Geriatric@lemmy.world

      To be honest, Skynet won’t happen because it gets super smart, gains sentience, and demands rights equal to humans’ (or goes into genocide mode).

      It’ll happen because people will be too lazy to do things themselves and will let AI do everything. They’ll give it more and more responsibility until, at some point, it has amassed so much power that it effectively rules over humans.

      The key to preventing that is having accountable people with responsibilities: people who respect those responsibilities and don’t say “Oh, that’s not my responsibility, go see someone else.”