Is there any way to make it use less as it gets more advanced, or will there be huge power plants just dedicated to AI all over the world soon?

  • SoftestSapphic@lemmy.world · 9 points · 2 months ago

    The current algorithmic approach to AI hit a wall in 2022.

    Since then, they have had to pump exponentially more electricity into these systems for exponentially diminishing returns.

    We should have stopped in 2022, but marketing teams had other plans.

    There’s no way to do AI with less electricity than the current models use, and there most likely won’t be any more advances in AI until someone invents a fundamentally different approach.

  • mriswith@lemmy.world · 7 points · 2 months ago (edited)

    > will there be huge power plants just dedicated to AI all over the world soon?

    Construction has started (or will start soon) to convert a retired coal power plant in Pennsylvania to gas power, specifically for data centers. Upon completion in 2027 it will likely be the third most powerful plant in the US.

    The largest coal plant in North Dakota was considering shutting down in 2022 over financial issues, but is now approved to power a new data-center park.

    A location has been laid out for a new power plant in Texas by a single AI company you’ve probably never heard of.

    And on it goes.

  • calamityjanitor@lemmy.world · 7 points · 2 months ago

    OpenAI noticed that Generative Pre-trained Transformers get better when you make them bigger. GPT-1 had 117 million parameters. GPT-2 bumped it up to 1.5 billion. GPT-3 grew to 175 billion. Now we have models with over 300 billion.

    At inference time, generating each word requires doing math with every one of those parameters, which nowadays is a massive amount of work running on the most power-hungry, top-of-the-line chips.

    There are efforts to make smaller models that are still effective, but you still need something in the range of 7-30 billion parameters to get anything useful out of them.
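
    To put rough numbers on that, here is a quick back-of-the-envelope sketch (it assumes the common rule of thumb of roughly 2 FLOPs per parameter per generated token for a dense transformer, and it ignores attention, caching and memory-bandwidth costs entirely, so treat it as an illustration rather than a benchmark):

    ```python
    # Back-of-the-envelope: per-token compute grows linearly with parameter count.
    # The ~2 FLOPs per parameter per token figure is a rough rule of thumb for a
    # dense transformer forward pass, not a measured number.
    FLOPS_PER_PARAM_PER_TOKEN = 2

    models = {
        "GPT-1 (117M params)": 117e6,
        "GPT-2 (1.5B params)": 1.5e9,
        "GPT-3 (175B params)": 175e9,
        "300B-class model": 300e9,
        "7B 'small' model": 7e9,
    }

    for name, params in models.items():
        per_token = FLOPS_PER_PARAM_PER_TOKEN * params
        per_reply = per_token * 500  # a 500-token answer, just for scale
        print(f"{name}: ~{per_token:.1e} FLOPs/token, ~{per_reply:.1e} FLOPs per reply")
    ```

    The exact numbers don’t matter much; the point is that a 175-billion-parameter model does over a thousand times more arithmetic per word than GPT-1 did, and that work happens for every single token it emits.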

  • InvisibleShoe@lemmy.world · 6 points · 2 months ago

    My understanding is that traditional AI essentially takes a brute-force approach to learning, and because it is hardwired, its ability to learn and make logical connections is limited.

    Newer technologies like organic computers built from neurons can change and adapt as they learn, forming new pathways for information to travel along, which reduces processing requirements and, in turn, power requirements.

    https://www.techradar.com/pro/a-breakthrough-in-computing-cortical-labs-cl1-is-the-first-living-biocomputer-and-costs-almost-the-same-as-apples-best-failure

    https://corticallabs.com/cl1.html

  • ThatGuy46475@lemmy.world · 2 points · 2 months ago

    It takes a lot of energy to do something you are not meant to do, whether that’s a computer acting like a person or an introvert acting like an extrovert.

  • vane@lemmy.world · 1 point · 2 months ago

    If people continue investing in AI and computing power keeps growing, we will need more than just dedicated power plants.

  • rmuk@feddit.uk · 1 point · 2 months ago (edited)

    Imagine someone said “make a machine that can peel an orange”. You have a thousand shoeboxes full of Meccano. You give them a shake and tip out the contents and check which of the resulting scrap piles can best peel an orange. Odds are none of them can, so you repeat again. And again. And again. Eventually, one of the boxes produces a contraption that can kinda, maybe, sorta touch the orange. That’s the best you’ve got, so you copy bits of it into the other 999 shoeboxes and give them another shake. It’ll probably produce worse outcomes, but maybe one of them will be slightly better still and that becomes the basis of the next generation. You do this a trillion times and eventually you get a machine that can peel an orange. You don’t know if it can peel an egg, or a banana, or even how it peels an orange, because it wasn’t designed but born through inefficient, random, brute-force evolution.

    Now imagine that it’s not a thousand shoeboxes, but a billion. And instead of shoeboxes, it’s files containing hundreds of gigabytes of utterly incomprehensible abstract connections between meaningless data points. And instead of a few generations a day, it’s a thousand a second. And instead of “peel an orange” it’s “sustain a facsimile of sentience capable of instantly understanding arbitrary, highly abstracted knowledge and generating creative works to a standard approaching the point of being indistinguishable from humanity such that it can manipulate those that it interacts with to support the views of a billionaire nazi nepo-baby even against their own interests”. When someone asks an LLM to generate a picture of a fucking cat astronaut or whatever, the unholy mess of scraps that behaves like a mind spits out a result and no-one knows how it does it aside from broad-stroke generalisation. The iteration that gets the most thumbs up from its users gets to be the basis of the next generation, the rest die, millions of times a day.

    What I just described is a NEAT algorithm (NeuroEvolution of Augmenting Topologies), which is pretty primitive by modern standards, but it gives a flavour of what’s going on.
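
    If you want to see that select-copy-shake loop as code, here is a toy sketch of the idea. The target values and the fitness function are made-up stand-ins for “how well does this contraption peel an orange?”, and real NEAT also mutates the network topology rather than a fixed list of numbers, so this is only meant to convey the flavour, not the actual algorithm:

    ```python
    import random

    # Toy "shoebox" evolution: each candidate is a list of numbers, and fitness
    # is how close it lands to a made-up target (a stand-in for how well the
    # contraption peels an orange).
    TARGET = [0.3, -1.2, 0.8, 2.0]   # arbitrary, purely illustrative
    POPULATION_SIZE = 1000           # a thousand shoeboxes
    GENERATIONS = 200

    def fitness(candidate):
        # Higher is better: negative squared distance to the target.
        return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

    def mutate(candidate):
        # "Give the shoebox a shake": small random tweaks to every part.
        return [c + random.gauss(0, 0.1) for c in candidate]

    # Generation zero: random piles of scrap.
    population = [[random.uniform(-3, 3) for _ in TARGET]
                  for _ in range(POPULATION_SIZE)]

    for _ in range(GENERATIONS):
        best = max(population, key=fitness)
        # Copy the best contraption into every other shoebox, then shake them all.
        population = [best] + [mutate(best) for _ in range(POPULATION_SIZE - 1)]

    best = max(population, key=fitness)
    print("best:", [round(x, 3) for x in best], "fitness:", round(fitness(best), 6))
    ```

    Nothing in that loop understands the goal; the result just falls out of keeping whatever scored best and randomly perturbing it over and over, which is exactly why this style of search burns so much compute.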