Toxic template but funny meme nonetheless

  • zeezee@slrpnk.net · 14 days ago

    as someone in close proximity to AI doomers and with a reasonable knowledge of computer science - I don’t think AGI is as silly as this meme presents it - however climate change is a much more pressing and encompassing issue that I feel takes precedence over any superintelligence fears.

    like the “best” argument I’ve heard is that climate change isn’t as big a deal since superintelligence will solve it, but then we’ll get subjugated by it - ok, but how about it solves our most existential problem first and then we worry about the aftermath?

    they all seem to just throw up their hands and say we shouldn’t let it save us (if it even could) and just live out the rest of our lives, since there’s nothing we can do about either… weird though how this argument only ever seems to come from comfortable westerners… if they were really serious they’d be out there destroying chip fabs in Taiwan…

    idk, I just think it’s the same old climate-inaction narrative, but in tech-doomerist clothing.

    • TotallynotJessica@lemmy.blahaj.zone · 13 days ago

      The goofy part of that argument is assuming there’s some miracle solution we just haven’t thought of. There have always been solutions; they just weren’t what power-hungry imperialists and capitalists wanted to hear. The surest way some superintelligence could save us is by somehow getting our short-sighted asses to set aside selfish advantage in favor of the common good.

      Another flaw is assuming an AI could simply do science infinitely faster than we can. Science relies on data gathering and on double-checking work with different methodologies, which would still bottleneck even an AI that could instantly analyze data and see insights no one else could.

      Such a superintelligence could also be prone to the same logical pitfalls and biases we are, even if it had millions of times the processing power. We already see machine learning producing “errors” similar to those of human minds, as such networks have many of the same limitations and strengths as our own. It could become convinced it’s correct and miss insights that contradict its preconceptions, stalling progress instead of helping it.

      edit: grammar

    • Tar_Alcaran@sh.itjust.works · 13 days ago

      AGI is terrifying in the same way that a time machine is terrifying. We should be scared shitless… if it existed. If we had some inkling of how such a thing could even be built, we’d need to start worrying.

      Good thing we’re about as close to AGI as we are to a time machine. Of course, the LLM pushers, who are hundreds of billions in the hole and have somehow made only a tiny bit of it back, have a collective half-trillion-dollar motivation to get people to at least consider the AI bubble useful, and going “oooooh, AGI scaaaary” makes people think it’s capable of things.

    • ToastedPlanet@lemmy.blahaj.zone · 13 days ago

      The way to solve climate change is to stop carbon emissions. The trick is to convince everyone to do that. No one has written anything down that accomplishes that, on its own or in aggregate, so an LLM cannot regurgitate that answer.

      Solving the climate crisis will inevitably involve the working class overthrowing the owner class, because we need power to change the systems of government and business responsible for pollution. The working class is the only class incentivized to fix the climate crisis, because the workers can’t all fit in apocalypse bunkers the way the owner class can. And again, no one has written down the thing that will give all workers class consciousness, so LLMs can’t regurgitate that either.

      The LLMs aren’t AGI. So the theoretical abilities of AGIs and the consequences of creating AGIs aren’t relevant. edit: typos