The University of Rhode Island’s AI lab estimates that GPT-5 averages just over 18 Wh per query, so putting all of ChatGPT’s reported 2.5 billion daily requests through the model could push energy usage as high as 45 GWh per day.

A daily energy use of 45 GWh is enormous: averaged over 24 hours, it works out to a continuous draw of about 1.9 GW. A typical modern nuclear reactor produces between 1 and 1.6 GW of electrical power, so data centers running OpenAI’s GPT-5 at 18 Wh per query could require the output of roughly two nuclear reactors, an amount that could be enough to power a small country.
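The arithmetic behind those two paragraphs, spelled out as a quick sanity check (my own calculation, using only the figures quoted above):

```python
# Sanity check of the headline figures, using only numbers quoted above.
WH_PER_QUERY = 18           # URI's per-query estimate for GPT-5, in Wh
QUERIES_PER_DAY = 2.5e9     # ChatGPT's reported daily request count

daily_gwh = WH_PER_QUERY * QUERIES_PER_DAY / 1e9   # 1 GWh = 1e9 Wh
avg_power_gw = daily_gwh / 24                      # average draw over 24 h

print(f"{daily_gwh:.0f} GWh/day, {avg_power_gw:.1f} GW average")  # 45 GWh/day, 1.9 GW average
```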

  • TheGrandNagus@lemmy.world · 79 points · 16 days ago

    I have an extreme dislike for OpenAI, Altman, and people like him, but the reasoning behind this article is just stuff some guy has pulled from his backside. There are no facts here, it’s just “I believe XYZ” with nothing to back it up.

    We don’t need to make up nonsense about the LLM bubble. There’s plenty of valid enough criticisms as is.

    By circulating a dumb figure like this, all you’re doing is granting OpenAI the power to come out and say “actually, it only uses X amount of power. We’re so great!”, where X is a figure that on its own would seem bad, but compared to this inflated figure sounds great. Don’t hand these shitty companies a marketing win.

  • yesman@lemmy.world · 31 points · 16 days ago

    I think AI power usage has an upside. No amount of hype can pay the light bill.

    AI is either going to be the most valuable tech in history, or it’s going to be a giant pile of ash that used to be VC money.

    • themurphy@lemmy.ml · 13 points · 16 days ago

      It will not go away at this point. Too many people already use it daily for study, work, chatting, and looking things up.

      If not OpenAI, it will be another service.

      • krashmo@lemmy.world · 10 points · 16 days ago

        Those same things were said about hundreds of other technologies that no longer exist in any meaningful sense. Current usage of a technology, which in this specific case I would argue is largely frivolous anyway, is not an accurate indicator of future usage.

        • rigatti@lemmy.world · 2 points · 16 days ago

          Can you give some examples of those technologies? I’d be interested in how many weren’t replaced with something more efficient or convenient.

          • kautau@lemmy.world · 8 points · 16 days ago

            https://en.wikipedia.org/wiki/Dot-com_bubble

            There were certainly companies that survived, because yes, the idea of websites being interactive rather than informational was huge, but everyone jumped on that bandwagon to build useless shit.

            As an example, this is today’s ProductHunt

            And yesterday’s was AI, and the day before that it was AI, but most of them are demonstrating little value with high valuations.

            LLMs will survive, and will likely improve into coordinator models that request data from SLMs and connect through MCP, but the investment bubble can’t sustain itself.

          • themurphy@lemmy.ml · 4 points · 16 days ago

            Technologies come and go, but often when a worldwide popular one vanishes, it’s because it got replaced with something else.

            So let’s say we need LLMs to go away. What should replace them? Impossible to answer, I know, but that’s what it would take.

            We can’t even get rid of Facebook and Twitter.

            BUT that being said. LLMs will be 100x more efficient at some point - like any other new technology. We are just not there yet.

      • devfuuu@lemmy.world · 4 points · 16 days ago

        And most importantly, the Pandora’s box has been opened for near-perfect deepfake scams and illegal usage. Nobody will put it back in the box, because even if everyone agreed to make it illegal everywhere, it’s already too late.

      • queermunist she/her@lemmy.ml · 3 points · 15 days ago

        Those users are not paying a sustainable price, they’re using chatbots because they’re kept artificially cheap to increase use rates.

        Force them to pay enough to make these bots profitable and I guarantee they’ll stop.

        • themurphy@lemmy.ml · 2 points · 15 days ago

          Or it will gatekeep poor people out of it. That will matter a lot if the capabilities keep improving.

          That being said, open source models will always be a thing, and with that in mind, I think it will not go away unless it’s replaced with something better.

          • queermunist she/her@lemmy.ml · 1 point · 15 days ago

            I don’t think they can survive if they gatekeep and make it unaffordable to most people. There’s just not enough demand or revenue that can be generated from rich people asking for chatGPT to do their homework or pretend to be their friend. They need mass adoption to survive, which is why they’re trying to keep it artificially cheap in the first place.

            Why do you think they haven’t raised prices yet? They’re trying to make everyone use it and become reliant on it.

            And it’s not happening. The technology won’t “go away” per se, but these expensive AI companies will fail.

            • themurphy@lemmy.ml · 1 point · 15 days ago

              Well, if they succeed, it’s because of efficiency and lowering costs. Second is how much the data and control are really worth.

              The big companies aren’t just developing LLMs, so they might justify it with other kinds of AI that actually make them a lot of money, either through the market or government contracts.

              But who knows. This is a very new technology. If they actually make a personal assistant so good that it’s inconvenient not to have it, it might work.

              • queermunist she/her@lemmy.ml · 1 point · 15 days ago

                I can see government contracts making a lot of money regardless of how functional their technology actually is.

                It’s more about who you know than what you can actually do when it comes to getting money from the government.

    • Optional@lemmy.world (OP) · 3 points · 16 days ago

      That capital was ash earlier this year. The latest $40 Billion-with-a-B financing round is just a temporary holdover until they can raise more fuel. And they already burned through Microsoft, who apparently got what they wanted and are all “see ya”.

  • Eager Eagle@lemmy.world · 23 points · 16 days ago

    Bit of a clickbait. We can’t really say it without more info.

    But it’s important to point out that the lab’s test methodology is far from ideal.

    The team measured GPT-5’s power consumption by combining two key factors: how long the model took to respond to a given request, and the estimated average power draw of the hardware running it.

    What we do know is that the price went down. So this could be a strong indication the model is, in fact, more energy efficient. At least a stronger indicator than response time.

    • morrowind@lemmy.ml · 7 points · 16 days ago

      That’s a terrible metric. By it, providers that maximize hardware (and energy) utilization by keeping a queue of requests would appear to use more energy, since queued requests inflate response time.

  • sp3ctr4l@lemmy.dbzer0.com · 20 points · 16 days ago

    Fucking Doc Brown could power a goddamn time machine with this many jiggawatts, fuck I hate being stuck in this timeline.

        • Encrypt-Keeper@lemmy.world · 3 points · 15 days ago

          AI models require a LOT of VRAM to run. Failing that, they need some serious CPU power, but it’ll be dog slow.

          A consumer model with only a small fraction of the capability of the latest ChatGPT model would require at least a $2,000+ graphics card, if not more than one.

          Like, I run a local LLM on an RTX 5070 Ti, and the best model I can run with that thing is good for ingesting some text to generate tags and such, but not a whole lot else.

            • Evono@lemmy.dbzer0.com · 3 points · 15 days ago

              Basically I can run 9B models on my 16 GB GPU mostly fine, getting responses of, let’s say, 10 lines in a few seconds.

              Bigger models, if they don’t outright crash, take 5x or 10x longer for the same task, so long it isn’t even useful anymore.

              So, much worse.
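A rough rule of thumb consistent with that experience (my own back-of-envelope, not from the thread): a model’s weights alone need roughly params × bits ÷ 8 bytes of VRAM, before counting KV cache and runtime overhead.

```python
def weight_gb(params_billion, bits):
    """Approximate VRAM (GB) for model weights alone; ignores KV cache/overhead."""
    return params_billion * 1e9 * bits / 8 / 1e9

print(weight_gb(9, 4))    # 4.5  -> a 4-bit 9B model fits a 16 GB GPU easily
print(weight_gb(9, 16))   # 18.0 -> at fp16 the same model spills past 16 GB
```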

            • Encrypt-Keeper@lemmy.world · 2 points · 15 days ago

              Like make a query and then go make yourself a sandwich while it spits out a word every other second slow.

              There are very small models that can run on mid range graphics cards and all, but it’s not something you’d look at and say “Yeah this does most of what chatGPT does”

              I have a model running on a GTX 1660, and I use it with Hoarder to parse articles and create a handful of tags for them, and it’s not… great at that.

            • gerryflap@feddit.nl · 1 point · 14 days ago

              It’s horrendously slow, unusable IMO. With the larger DeepSeek distilled models I tried that didn’t fit into VRAM, you could easily wait 5 minutes until it was done writing its essay, compared to just a few seconds when it does fit. But that’s with an RTX 3070 Ti, not something the average ChatGPT user probably has lying around.

    • ckmnstr@lemmy.world · 6 points · 15 days ago

      Probably not a flash drive, but you can get decent mileage out of 7B models that run on any old laptop for tasks like text generation, shortening, or summarizing.

  • jsomae@lemmy.ml · 13 points · 15 days ago

    For reference, this is roughly equivalent to playing a PS5 game for 4 minutes (based on their estimate) to 11 minutes (their upper bound).

    Calculation:

    Source: https://www.ecoenergygeek.com/ps5-power-consumption/

    Typical PS5 usage: 200 W

    TV: 27 W - 134 W → call it 60 W

    URI’s estimate: 18 Wh / 260 W → 4 minutes

    URI’s upper bound: 48 Wh / 260 W → about 11 minutes
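Spelling out the division above with the same figures:

```python
# Energy-to-playtime conversion used in the comment above.
PS5_W = 200             # typical PS5 draw, W
TV_W = 60               # mid-range TV draw, W
TOTAL_W = PS5_W + TV_W  # 260 W combined

def minutes_of_play(wh):
    """Minutes of PS5 + TV use that consume `wh` watt-hours."""
    return wh / TOTAL_W * 60

print(round(minutes_of_play(18)))  # URI's estimate: ~4 minutes
print(round(minutes_of_play(48)))  # upper bound: ~11 minutes
```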

    • bier@feddit.nl · 2 points · 14 days ago

      It is also the equivalent of letting an LED light bulb run for an entire day (depending on how bright it is; some LED bulbs use under 2 watts of power).
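That checks out: at 18 Wh per query, a bulb drawing 0.75 W (well under the 2 W mentioned) runs for a full day:

```python
QUERY_WH = 18     # URI's per-query estimate, Wh
BULB_W = 0.75     # a very dim LED bulb, under the 2 W mentioned above

hours = QUERY_WH / BULB_W
print(hours)  # 24.0 hours, i.e. a full day on one query's worth of energy
```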

  • Nightwatch Admin@feddit.nl · 10 points · 16 days ago

    Of course there are comments doubting the accuracy, which by itself is valid, but they are merely doing it to defend AI. IMHO, even at a fifth of the estimates, we’re talking humongous amounts of power, all for a so-so search engine, half-arsed chatbots, and dubious NSFW images, mostly. And let’s not forget: it may be inaccurate because the estimates are TOO LOW. Now wouldn’t that be fun?

  • brucethemoose@lemmy.world · 10 points · 16 days ago

    I don’t buy the research paper at all. Of course we have no idea what OpenAI does because they aren’t open at all, but DeepSeek’s published papers suggest it’s much more complex than 1 model per node… I think they recommended something like a 576-GPU cluster, with a scheme to split experts.

    That, and going by the really small active parameter count of gpt-oss, I bet the model is sparse as heck.

    There’s no way the effective batch size is 8, it has to be waaay higher than that.

    • FaceDeer@fedia.io · 9 points · 16 days ago

      And perhaps even more importantly, the per-token cost of GPT-5’s API is less than GPT-4’s. That’s why OpenAI was so eager to move everyone onto it, it means more profit for them.

  • AgentOrangesicle@lemmy.world · 9 points · 15 days ago

    Isn’t this the back plot of the game, Rain World? With the slug cats and the depressed robots stuck on a decaying world when the sapient, organic species all left?

  • kescusay@lemmy.world · 9 points · 16 days ago

    How the hell are they going to sustain the expense to power that? Setting aside the environmental catastrophe that this kind of “AI” entails, they’re just not very profitable.

    • gdog05@lemmy.world · 8 points · 16 days ago

      Look at all the layoffs they’ve been able to implement with the mere threat that AI has taken their jobs. It’s very profitable, just not in a sustainable way. But sustainability isn’t the goal. Feudal state mindset in the populace is.

  • pHr34kY@lemmy.world · 8 points · 16 days ago

    Tech hasn’t improved that much in the last decade. All that’s happened is that more cores have been added. The single-thread speed of a CPU is stagnant.

    My home PC consumes more power than my Pentium 3 consumed 25 years ago. All efficiency gains are lost to scaling for more processing power. All improvements in processing power are lost to shitty, bloated code.

    We don’t have the tech for AI. We’re just scaling up to the electrical demand of a small country and pretending we have the tech for AI.

      • Dasus@lemmy.world · 4 points · 16 days ago

        i don’t judge you for that. honestly it matters fuck all at this point

    • Opisek@lemmy.world · 2 points · 14 days ago

      This is my weekly time to tell lemmings about Kagi, the search engine that does not shove LLM in your face (but still lets you use it when you explicitly want it) and that you pay for with your money, not your data.

    • macaw_dean_settle@lemmy.world · 2 points · 15 days ago

      Or just use any other better search like Bing or duckduckgo. googol sucks and was never any good. Quit pushing ignorant garbage.