• krooklochurm@lemmy.ca
    link
    fedilink
    arrow-up
    35
    ·
    4 days ago

    See, the thing is, I watch piss porn. Hear me out. I told my friend that the thing is, to do piss porn, you kind of have to be into it. You could try and fake it, but it wouldn’t be very convincing. So, my contention is, piss porn is more genuine than other types of porn, because the people partaking are statistically more likely to enjoy doing that type of porn. Which is great, I think, because then they really get into it, which is hot. It’s that enjoyment that gets me off. Their enjoyment.

    She said, “Krooklochurm, you’re an idiot. Anyone can fake liking getting pissed in the face.”

    So I said, “Well, if you’re so adamant, get in the tub and I’ll piss in your mouth, and let’s see if it’s as easy as you claim.”

    So she said, “All right. If I can fist you in the ass afterwards.”

    Which I felt was a fair deal, so I took it.

    My (formal) position was strengthened significantly by the aforementioned events. And I can also attest that I could not have convincingly faked enjoying being ass-fisted.

    What does that have to do with anything, you ask? Genuinity. The real deal. That’s what.

  • 0nt0p0fth3w0rld@feddit.org
    link
    fedilink
    English
    arrow-up
    23
    ·
    4 days ago

    And some of the most intelligent people are cast out of society because they don’t fit the culture of arrogance.

  • Uriel238 [all pronouns]@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    3
    ·
    3 days ago

    Our brains use about 20% of our caloric intake (at rest, not even doing math) to run our biological intelligence. Having superpowers of social organization is expensive and power-hungry.

    So it’s really no surprise that the machines that run AI require tens of megawatts to think.
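
    Back-of-the-envelope, that works out to a brain running on about the power of a dim light bulb. A minimal sketch, assuming a round 2,000 kcal/day intake and the oft-cited 20% share for the brain (round figures, not measurements):

    ```python
    # Back-of-the-envelope: human brain power vs. an AI data center.
    # Assumptions (round figures): 2,000 kcal/day intake, 20% of it for the brain.
    KCAL_TO_JOULES = 4184
    SECONDS_PER_DAY = 86_400

    brain_watts = 2_000 * KCAL_TO_JOULES * 0.20 / SECONDS_PER_DAY
    print(f"Brain: ~{brain_watts:.0f} W")  # ~19 W

    datacenter_watts = 20e6  # "tens of megawatts"
    print(f"Data center vs. brain: ~{datacenter_watts / brain_watts:,.0f}x")  # ~1,000,000x
    ```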

  • But_my_mom_says_im_cool@lemmy.world
    link
    fedilink
    arrow-up
    9
    ·
    4 days ago

    I think the entire idea of AI, and the Internet in general, taking up power and water needs to be fleshed out and explained to everyone. Even to me it’s a vague notion: I heard about it a few years back, but I couldn’t explain it to someone like my parents, who would have no idea the Internet requires water to run.

    • asmoranomar@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      4 days ago

      It’s not too hard. AI requires a LOT of work. Work requires energy, some of that energy is inevitably wasted, and the byproduct is heat. The heat has to be removed for many reasons, and water is very good at doing that.

      It’s like sweating: it cools you down, but you need water to sweat.
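
      To put rough numbers on the sweating analogy, here is a minimal sketch, assuming (purely for illustration) that all of a site’s waste heat is carried away by evaporating water:

      ```python
      # Order-of-magnitude estimate of evaporative-cooling water use.
      # Assumption: ALL waste heat leaves via evaporation (real sites mix
      # cooling methods, so treat this as an upper-bound illustration).
      LATENT_HEAT = 2.26e6  # J per kg of water evaporated (approximate)

      power_watts = 20e6  # a hypothetical 20 MW facility
      heat_per_hour_joules = power_watts * 3600

      water_kg_per_hour = heat_per_hour_joules / LATENT_HEAT
      print(f"~{water_kg_per_hour / 1000:.0f} tonnes (~m^3) of water per hour")  # ~32
      ```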

  • TigerAce@lemmy.dbzer0.com
    link
    fedilink
    arrow-up
    1
    ·
    2 days ago

    1 gram of cocaine equals roughly 150 grams of CO2 emissions due to production, shipping, etc., plus the effect wears off very quickly. Cocaine also destroys your nostrils; it’s really, really bad. I would advise amphetamine instead. It can also be taken orally, for instance in the medicinal form of dexamphetamine. Another side effect is that you aren’t hungry anymore, so you don’t need the Twix. Just dexamphetamine, and you are able to achieve your goals better, like becoming a dictator (like Hitler, who got daily shots of amphetamine) or invading France if you want (the German army had amphetamine pills which helped them advance into France day and night; the French assumed they would stop during the night to rest, but since they didn’t, the French greatly miscalculated and were completely overrun. That’s why you should use amphetamine, kids). It also really helps with ADHD, making it easier to focus on things and think clearly.

  • DonEladio@feddit.org
    link
    fedilink
    arrow-up
    9
    ·
    4 days ago

    What’s with all the AI hate? I use it for work and it significantly decreases my workload. I’m getting stuff done in minutes instead of hours. AI slop aside.

    • [deleted]@piefed.world
      link
      fedilink
      English
      arrow-up
      29
      ·
      4 days ago

      The massive corporate AI (LLMs for the most part) are driving up electricity and water usage, negatively impacting communities. They are creating a stock market bubble that will eventually burst. They are sucking up all the hardware, from GPUs to memory, to hard drives and SSDs.

      On top of all of that, they are in such a rush to expand that a lot of them are installing fossil-fuel generation on top of running the local grid ragged. So they pollute and drive up costs, and all of it for a 45% average rate of incorrect results.

      There are a lot of ethical problems too, but those are the direct negatives to tons of people.

    • 0nt0p0fth3w0rld@feddit.org
      link
      fedilink
      English
      arrow-up
      5
      ·
      edit-2
      4 days ago

      The effect on the environment, and the fact that we know it will inevitably be ruined, like TV/cable, the Internet, and every honest, useful invention that has been corrupted by the dark side of human culture throughout history.

      Within the structure of the ego-driven society we live in, I don’t think we are capable of being a good species.

      It would be cool if things were different, but I’ve never seen it not turn out bad.

    • wischi@programming.dev
      link
      fedilink
      arrow-up
      3
      ·
      4 days ago

      Try to play tic tac toe against ChatGPT for example 🤣 (just ask for “let’s play ASCII tic tac toe”)

    It loses practically every game to my 4yo child, if it even manages to play according to the rules.

      AI: Trained on the entire internet using billions of dollars. 4yo: Just told her the rules of the game twice.

    Currently the best LLMs are certainly very “knowledgeable” (as in, they “know” much more than I, or practically any person, do about most topics), but they are certainly far from intelligent.

    You should only use them if you are able to verify the correctness of the output yourself.

      • fonix232@fedia.io
        link
        fedilink
        arrow-up
        7
        ·
        4 days ago

        “See, no matter how much I’m trying to force this sewing machine to be a racecar, it just can’t do it, it’s a piece of shit”

        The similarities are superficial: if you misuse LLMs, they won’t perform well. You have to treat an LLM as a tool with a specific purpose. In the case of LLMs, that purpose is to take a bunch of input tokens, analyse them, and output the tokens that are statistically the most likely “best response”. The intelligence is in putting that together, not in “understanding tic-tac-toe”. Mind you, you can tie in other ML frameworks for specific tasks that are better suited to them - e.g. you can hook up a chess engine (or a tic-tac-toe engine), and that will beat you every single time.

        Or, an even better example: instead of asking the LLM to play tic-tac-toe with you, ask it to write a Bash/Python/JavaScript tic-tac-toe game, and try playing against that. You’ll be surprised.
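
        For illustration, here’s a minimal sketch of the kind of program you might get back, with a simple minimax engine bolted on so it never loses (my own sketch, not the output of any particular model):

        ```python
        # Minimal tic-tac-toe: human is X, a minimax engine plays O.
        WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

        def winner(board):
            for a, b, c in WIN_LINES:
                if board[a] != " " and board[a] == board[b] == board[c]:
                    return board[a]
            return None

        def minimax(board, player):
            """Return (score, move); X maximizes the score, O minimizes it."""
            w = winner(board)
            if w is not None:
                return (1 if w == "X" else -1), None
            if " " not in board:
                return 0, None  # draw
            best_score, best_move = None, None
            for move in (i for i, cell in enumerate(board) if cell == " "):
                board[move] = player
                score, _ = minimax(board, "O" if player == "X" else "X")
                board[move] = " "
                if best_score is None or (score > best_score if player == "X" else score < best_score):
                    best_score, best_move = score, move
            return best_score, best_move

        board, turn = [" "] * 9, "X"
        while winner(board) is None and " " in board:
            if turn == "X":
                move = int(input("Your move (0-8): "))
                if not (0 <= move <= 8) or board[move] != " ":
                    continue  # ignore illegal moves
            else:
                _, move = minimax(board, "O")
            board[move] = turn
            print("\n".join("|".join(board[r:r+3]) for r in (0, 3, 6)))
            turn = "O" if turn == "X" else "X"
        print("Result:", winner(board) or "draw")
        ```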

        • wischi@programming.dev
          link
          fedilink
          arrow-up
          1
          ·
          3 days ago

          Nobody claimed that any sewing machine has PhD level intelligence in almost all topics.

          LLMs are marketed as “replaces jobs”, “PhD level intelligence”, “Reasoning models”, “Deep think”.

          And yet all that “PhD level intelligence” consistently gets the simplest things wrong.

          But prove me wrong: pick a game, prompt any LLM you like, and share it here (the whole conversation, not only a code snippet).

    • affenlehrer@feddit.org
      link
      fedilink
      arrow-up
      2
      ·
      4 days ago

      I hope analog hardware or some other trick will help us in the future to make at least local inference fast and low power.

      • fonix232@fedia.io
        link
        fedilink
        arrow-up
        2
        ·
        4 days ago

        Local inference isn’t really the issue. Relatively low-power hardware can already do a passable tokens-per-second rate on medium-to-large models (40B to 270B). Of course it won’t compare to an AWS Bedrock instance, but it is passable.

        The reason why you won’t get local AI systems - at least not completely - is the restrictive nature of the best models. Most of the actually good models are not open source. At best you’ll get a locally runnable GGUF, but not open weights, meaning re-training potential is lost. Not to mention that most of the good, usable solutions tend to be complex interconnected systems, so you’re not just talking to an LLM but to a series of models chained together.

        But that doesn’t mean that local (not hyperlocal, aka “always on your device”, but local to your LAN) inference is impossible or hard. I have a £400 node running 3-4B models at lightning speed, at sub-100W (really sub-60W) power usage. For around £1500-2000 you can get a node that gets similar performance with 32-40B models. For about £4000, you can get a node that does the same with 120B models. Mind you, I’m talking about lightning-fast performance here, not merely passable.
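
        For anyone curious what talking to a local model looks like in practice, here’s a minimal sketch using the llama-cpp-python bindings. The model filename is a placeholder; substitute any small (~3-4B) GGUF you have on hand:

        ```python
        # Minimal local-inference sketch with llama-cpp-python.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./models/small-3b-instruct.Q4_K_M.gguf",  # placeholder path
            n_ctx=4096,       # context window
            n_gpu_layers=-1,  # offload all layers to a GPU if one is available
        )

        out = llm.create_chat_completion(
            messages=[{"role": "user", "content": "Why do data centers need cooling?"}],
            max_tokens=200,
        )
        print(out["choices"][0]["message"]["content"])
        ```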

        • affenlehrer@feddit.org
          link
          fedilink
          arrow-up
          1
          ·
          edit-2
          3 days ago

          At least for me, the small 4-8B models turned out to be pretty useless: extremely prone to hallucinations, not good at multiple languages, and, worst of all, still pretty slow on my machine.

          I tried to create a simple note-taking agent with just file I/O tools available. Without reasoning, they fucked up even the simplest tasks in very creative ways, and with reasoning it thought about it for 7 before finally doing it.

          The larger ones require pretty power-hungry and/or expensive hardware.

          I hope for analog hardware to change this.

  • LifeInMultipleChoice@lemmy.world
    link
    fedilink
    arrow-up
    8
    ·
    4 days ago

    Wasn’t there an article posted yesterday about a group trying to create a biological computer out of living cells, due to their efficient use of very little power? (They are far from close; they basically took skin cells, ionized them, and had no idea yet how they were going to keep them alive long term.)

    • fonix232@fedia.io
      link
      fedilink
      arrow-up
      8
      ·
      4 days ago

      Even that won’t be anywhere close to the efficiency of neurons.

      And actual neurons are not comparable to transistors at all. For starters, their behaviour is completely different: it’s closer to that of complex logic gates built from many transistors, they’re multi-pathway, AND they don’t behave in the binary way transistors do.

      Which is why AI technology needs so much power. We’re basically virtualising a badly understood version of our own brains. Think of it like, say, PlayStation 4 emulation: it kinda works, but most details are unknown and therefore don’t work well, or at best reach a “close enough” approximation of the behaviour, at the cost of more resource usage. And virtualisation will always be costly.

      Or, I guess, a better example would be one of the many currently trending translation layers (e.g. SteamOS’s Proton, macOS’s Rosetta, whatever Microsoft was cooking up for Windows for the same purpose, and kinda FEX and Box86/Box64) versus full virtual machines. The latter is an approximation of how AI relates to our brains (and by AI here I mean neural-network-based AI applications, not just LLMs).

      • applebusch@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        1
        ·
        3 days ago

        There’s already been some work on direct neural-network hardware to bypass the whole virtualization issue. Some people are working on basically an analog, FPGA-style, silicon-based neural-network component you can just put on a SOM and integrate into existing PCB electronics. Rather than being built from traditional logic gates, these parts implement the neural-network functions directly in analog, making them much faster and more efficient. I forget what the technology is called, but things like that seem like the future to me.

        • fonix232@fedia.io
          link
          fedilink
          arrow-up
          1
          ·
          3 days ago

          I’m very much aware of FPGA-style attempts; however, I do feel the need to point out that FPGAs (and FPGA-style computing) are even more hardware-strained than emulation.

          For example, the current mainstream emulation FPGA, the DE10-Nano, has up to 110k LEs/LUTs, and that gets you just barely passable PS1 emulation (primarily it’s great for GBA emulation and for mid-to-late-80s and early-90s console hardware). In fact, it’s not even as performant as GBA emulation on ARM: it uses more power, costs more, and the only benefit is true-to-original-hardware execution (which isn’t always the case with software emulation).

          Simply put, while FPGAs provide versatility, they’re also much less performant than similarly priced SoCs emulating the specific architecture in software.

  • Drun@lemmy.world
    link
    fedilink
    arrow-up
    4
    ·
    edit-2
    3 days ago

    You need a nuclear power plant not for a single AI, but for several million instances of it.

    Don’t forget that you can run the full open-source ChatGPT model on a single Mac Mini.

    AI is sick, and only silly people can deny that. Yes, the future is a little scary, but scary things should not stop us from progress.

    Yes, we do have a lot of AI slop, but don’t forget that AI can be an exceptionally good tool in the right hands, and its spheres of usage grow every month. I can’t wait for generative tools for Gaussian splatting, for example, because that could take the game development process to another level.

  • wabafee@lemmy.world
    link
    fedilink
    arrow-up
    5
    ·
    edit-2
    3 days ago

    I think we’re at a point where the hardware does not fit the algorithm being used: it takes so much power because our computers are digital. A transistor that can only hold one of two states (0V or 5V, usually) is inefficient, and the heat adds up as you multiply transistors, especially with LLMs. There seems to be potential in analog, where a transistor acts over a continuous 0-5V range, which in theory could store more information, or directly represent what LLMs run on (floating point). For context, one float is typically 32 bits, and storing one bit costs at least one transistor (in practice several), so a float costs dozens of transistors, while in principle a single analog transistor could represent one float by itself.
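
    The 32-bit layout of a float is easy to see directly. A quick sketch (the value is chosen because it’s exactly representable in binary):

    ```python
    # Show the 32 bits behind a single float32 value.
    import struct

    value = 0.15625  # = 1.25 * 2**-3, exactly representable
    (bits,) = struct.unpack(">I", struct.pack(">f", value))
    print(f"{bits:032b}")
    # -> 00111110001000000000000000000000
    #    1 sign bit | 8 exponent bits | 23 mantissa bits
    ```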

  • Melvin_Ferd@lemmy.world
    link
    fedilink
    arrow-up
    2
    ·
    3 days ago

    Isn’t it more like they’re comparing all the hamburgers and everything else you have eaten since you were born?

    That’s what they’re doing with AI energy usage, isn’t it? I thought it included the training, which is where the greatest costs come from, versus just the daily running.

    • lovely_reader@lemmy.world
      link
      fedilink
      arrow-up
      4
      ·
      edit-2
      3 days ago

      No. “In practice, inference [which is to say, queries, not training] can account for up to 90% of the total energy consumed over a model’s lifecycle.” Source.