• MysteriousSophon21@lemmy.world · 4 days ago

    AI has some legit uses, but the hype around it is mostly VCs throwing money at buzzwords while the actual tech is nowhere near the “AGI revolution” they keep promising us lol.

    • Broken@lemmy.ml · 3 days ago

      But can we at least be thankful that it shifted focus away from augmented reality? Before AI, the buzz was around things like the metaverse and digital avatars in your Teams meetings.

      Even crap AI is more useful than avatars in Teams.

      • SpookyBogMonster@lemmy.ml · 2 days ago

        Digital avatars in Teams aren’t actively destructive to the internet, the environment, and people’s grasp on reality.

        I think you’re universalising a personal grievance without fully accounting for the impacts of the metaverse bullshit, which was never practical or feasible to begin with, and of the AI apocalypse sweeping the internet.

        • Broken@lemmy.ml · 2 days ago

          Well, I was trying to bring a little humor to the conversation by pointing out the silver lining: at least this other stupid crap is gone now.

          If the AI “revolution” had never come, I bet a thread just like this one would exist for the metaverse or whatever, saying how it’s destroying the internet. And think about it: entering an entire world just to hold this conversation, where all users are known and conversations recorded… kind of like AI scraping.

          You can see how it could get just as bad or worse. Hint: it’s not the technology that’s the problem, it’s the companies behind it - those wouldn’t be any different.

          I’m not trying to downplay AI; I’m just being realistic about the world we live in and trying not to be so doom and gloom every second of the day.

        • lightnegative@lemmy.world · 2 days ago (edited)

          What’s currently being marketed as AI reinforces that there’s always someone who can do your job worse for cheaper

          I’m just waiting for the “cheaper” part to change. Surely these VCs will want to see some ROI on the stupid amounts of money these hosted models cost. There’s no way the subscription fees being charged cover the actual cost of running the models, so something will have to give eventually.

    • CosmoNova@lemmy.world · 4 days ago

      The crap they’re promoting it for also showcases the direction they’re developing it in, which is an utterly depressing, unsustainable and impractical one. It’s frustrating to see how much money is invested (and ultimately burned) to actively destroy the economy and create problems rather than fix any.

  • fuckyoukeith@lemmy.world · 4 days ago

    Every tech buzzword is a grift to try to rationalize endless exponential growth in a world where that’s just impossible

  • bobbyguy@lemmy.world · 3 days ago

    Chatbots like GPT and Gemini learn from conversations with users, so what we need is a virus that will pretend to be a user and flood their chats with pro-racism arguments and sexist remarks, which will rub off on the chatbots, making them unacceptable for public use.

  • bacon_pdp@lemmy.world · 4 days ago

    Well, not exactly, but it is completely misunderstood.

    Everyone who actually knows about AI is familiar with the alignment and takeoff problems.

    (Play this if you need a quick summary: https://www.decisionproblem.com/paperclips/index2.html)

    So whenever someone says “we are making AI,” the response should be “oh fuck no” (using bullets and fire if required).

    New tagging and auto-completion is fine (there is probably a whole space of new tools that can come out of the AI research field that doesn’t risk human extinction).

    • wizardbeard@lemmy.dbzer0.com · 4 days ago (edited)

      We are so far away from a paperclip maximizer scenario that I can’t take anyone concerned about that seriously.

      We have nothing even approaching true reasoning, despite all the misuse going on that would indicate otherwise.

      Alignment? Takeoff? None of our current technologies under the AI moniker come anywhere remotely close to any reason for concern, and most signs point to us rapidly approaching a wall with our current approaches.

      Each new version from the top companies in the space right now shows less and less advancement in capability compared to the last, with costs growing at a pace for which “exponential” doesn’t feel like an adequate descriptor.

      There’s probably lateral improvements to be made, but outside of taping multiple tools together there’s not much evidence for any more large breakthroughs in capability.

      • bacon_pdp@lemmy.world · 4 days ago

        I agree current technology is extremely unlikely to achieve general intelligence, but my point was that we should never try to achieve AGI; it is not worth the risk until after we solve the alignment problem.

        • chobeat@lemmy.ml (OP) · 3 days ago

          “Alignment problem” is what CEOs use as a distraction to deflect responsibility from their grift and frame the issue as a technical problem. That’s another term that makes you lose any credibility.

          • bacon_pdp@lemmy.world · 3 days ago

            I think we are talking past each other. Alignment with human values is important; otherwise we end up with a paperclip optimizer that wants humans only as a feedstock of atoms, or one that pulls a “With Folded Hands” situation.

            None of the “AI” companies are even remotely interested in or working on this legitimate concern.

        • Balder@lemmy.world · 3 days ago

          Unfortunately game theory says we’re gonna do it whenever it’s technologically possible.

    • ominouslemon@sh.itjust.works · 3 days ago

      The worry about “Alignment” and such is mostly a TESCREAL talking point (look it up if you don’t know what that is, I promise you’ll understand a lot of things about the AI industry).

      It’s ridiculous at best, and a harmful and delirious distraction at worst.

      • bacon_pdp@lemmy.world · 2 days ago

        It is also a task all good parents take on: making sure the lives they created don’t grow up to be murderers or rapists or racists, and that they treat others with kindness and consideration.

        • ominouslemon@sh.itjust.works · 2 days ago

          See? You’re still seeing AI as if it were human, or comparable to a human, and that’s the issue. Would you make the same statement about… Idk, cloud computing or photo editing tools? AI is just a technology: it does not “grow” into anything by itself, and it’s neither well- nor ill-intentioned, because it does not have any intentions.

          • bacon_pdp@lemmy.world · 2 days ago

            If it can’t grow by itself, it is not general-purpose artificial intelligence. It would be an overly complicated elevator control system, and making its behavior deterministic and simple to reason about would let it be used to solve problems in industrial processes safely.

            Think SHRDLU.
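            The “simple to reason about” point can be made concrete: if a controller’s next action is a pure function of its state, you can enumerate every reachable state and check every behavior it will ever exhibit. A minimal sketch (hypothetical code, purely illustrative, not anyone’s real elevator firmware):

```python
# A deliberately boring, deterministic elevator controller:
# the next action is a pure function of (current_floor, pending_requests),
# so its entire behavior space can be enumerated and verified exhaustively.
from itertools import combinations

FLOORS = range(4)  # assumption: a tiny 4-floor building for illustration

def next_move(current: int, requests: frozenset) -> int:
    """Return the next floor to visit: nearest pending request, ties go down."""
    if not requests:
        return current  # idle: stay put
    return min(requests, key=lambda f: (abs(f - current), f))

# Exhaustive verification: check every (floor, request-set) state there is.
checked = 0
for current in FLOORS:
    for r in range(len(FLOORS) + 1):
        for combo in combinations(FLOORS, r):
            target = next_move(current, frozenset(combo))
            assert target in FLOORS              # never leaves the shaft
            assert not combo or target in combo  # only serves real requests
            assert combo or target == current    # idles when nothing is pending
            checked += 1

print(f"all {checked} states verified")  # 4 floors x 2^4 request sets = 64
```

            That exhaustive loop is exactly what you cannot write for a system whose behavior emerges from billions of opaque learned weights, which is the contrast being drawn with SHRDLU-era symbolic systems.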