• 0 Posts
  • 14 Comments
Joined 3 years ago
Cake day: September 11th, 2023

  • If you prompted an LLM to review all of its database entries, generate a new response based on that data, then save that output to the database and repeat at regular intervals, I could see calling that a kind of thinking.

    That’s kind of what the current agentic AI products like Claude Code do. The problem is context rot. When the context window fills up, the model loses the ability to distinguish between what information is important and what’s not, and it inevitably starts to hallucinate.

    The current fixes are to prune irrelevant information from the context window, use sub-agents with their own context windows, or just occasionally start over from scratch. They’ve also developed the convention of AGENTS.md and CLAUDE.md files, where you can store long-term context and, basically, “advice” for the model, which is automatically read into the context window.

    However, I think an AGI inherently would need to be able to store that state internally, to have memory circuits, and “consciousness” circuits that are connected in a loop so it can work on its own internally encoded context. And ideally it would be able to modify its own weights and connections to “learn” in real time.

    The problem is that this wouldn’t scale to current usage: you’d need to store all that internal state, potentially including a unique copy of the model, for every user. And the companies wouldn’t want that, because they’d be giving up control over the model’s outputs; they’d have no feasible way to supervise the learning process.
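
    The “review, generate, save, repeat” loop and the pruning fix described above can be sketched roughly like this (a toy illustration only, not any real product’s implementation; `call_llm` stands in for whatever text-generation API you have):

```python
def reflection_loop(call_llm, memory, rounds=3, max_entries=10):
    """Toy sketch: repeatedly re-read stored entries, synthesize a new one,
    save it back, and prune the oldest entries to keep the store bounded."""
    for _ in range(rounds):
        prompt = "Review these notes and write a new synthesis:\n" + "\n".join(memory)
        memory.append(call_llm(prompt))  # persist the output back to the "database"
        del memory[:-max_entries]        # naive pruning: drop the oldest entries
    return memory
```

    The pruning step here is the crudest possible strategy (drop oldest first); real tools try to rank entries by relevance instead.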


  • I only have a rather high-level understanding of current AI models, but I don’t see any way for the current generation of LLMs to actually be intelligent or conscious.

    They’re entirely stateless, once-through models: any activity in the model that could be remotely considered “thought” is completely lost the moment the model outputs a token. Then it starts over fresh for the next token with nothing but the previous inputs and outputs (the context window) to work with.

    That’s why it’s so stupid to ask an LLM “what were you thinking”, because even it doesn’t know! All it’s going to do is look at what it spat out last and hallucinate a reasonable-sounding answer.
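
    The once-through, stateless generation described above looks schematically like this (`model` is a stand-in for any function from a token list to the next token):

```python
def generate(model, prompt_tokens, n_tokens):
    # Schematic autoregressive loop: the model is re-run from scratch for every
    # token. Nothing internal survives between steps; the only "memory" is the
    # growing context list itself.
    context = list(prompt_tokens)
    for _ in range(n_tokens):
        next_token = model(context)  # stateless call: all it sees is `context`
        context.append(next_token)
    return context
```

    Anything the model “thought” while picking a token is discarded as soon as the call returns; only the chosen token makes it into the next step’s input.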




  • “The nonce reuse issue seems to be a valid security issue, but it is by no means a critical vulnerability: it only affects applications that do more than four billion encryptions with a single HPKE setup,” said Valsorda. “The average application does one.”

    No implementation should be using the same asymmetric keypair for a key exchange more than once. This is such a non-issue that it’s kind of hilarious. Sounds like the reporter was trying so desperately to get credit for anything they could put on their portfolio, and just wouldn’t take “no” for an answer.
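
    For the arithmetic behind that “four billion”: if an implementation derives its per-message nonce from a sequence counter that is effectively 32 bits wide (an assumption for illustration; the details of the reported bug may differ), the counter wraps after 2**32 messages and nonces start repeating:

```python
def nonce_from_seq(seq, width_bits=32):
    # Illustrative only: a counter truncated to `width_bits` bits wraps around,
    # so message number 2**width_bits reuses the nonce of message 0.
    return seq % (1 << width_bits)

assert (1 << 32) == 4_294_967_296                  # ~ "four billion encryptions"
assert nonce_from_seq(2**32) == nonce_from_seq(0)  # first reused nonce
```

    In other words, a single HPKE setup has to send on the order of four billion messages before reuse begins, which is why it’s a non-issue for typical applications.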



  • I realized the fundamental limitation of the current generation of AI: it’s not afraid of fucking up. The fear of losing your job is a powerful source of motivation to actually get things right the first time.

    And this isn’t meant to glorify toxic working environments or anything like that; even in the most open and collaborative team that never tries to place blame on anyone, in general, no one likes fucking up.

    So you double check your work, you try to be reasonably confident in your answers, and you make sure your code actually does what it’s supposed to do. You take responsibility for your work, maybe even take pride in it.

    Even now we’re still having to lean on that, but we’re putting all the responsibility and blame on the shoulders of the gatekeeper, not the creator. We’re shooting a gun at a bulletproof vest and going “look, it’s completely safe!”



  • I say this in the most loving and accepting way possible, but I 100% think autism was somehow involved in the creation and spread of the butt trumpets.

    I’m convinced of this because I know with absolute certainty in my heart that if any of my friends on the spectrum were alive in the 13th century, they’d be sent to the seminary for being weird and would spend their days doodling butt trumpets in the margins of manuscripts.


  • What happens to the guy that was driving it? Does he just blink out of existence when the car shuts off? That’s my question. You might argue that there is no such thing, but my own conscious experience proves to myself that there’s something else there. I want to know what happens to that part.

    Hell, for all I know, you might just be a soulless meatbag automaton, and there really is no one in the driver’s seat for you. Or I could just be the only actual human talking in a thread full of bots. With 90% of the training data going into LLMs being vapid contrarian debates on social media, I could easily see that being the case here.


  • I don’t agree that the cessation of brain activity necessarily means the end of the subjective experience. That doesn’t mean I purport to know what actually happens at that point. I hope it’s some sort of reincarnation but that’s just because there’s more I want to experience in this universe than I possibly could in a single lifetime.

    “You only have one life, live it the best you can” is a nice motivational mantra, but however well I live my life, it’s highly unlikely I will live long enough to experience interstellar travel, for example, or first contact with alien life. I think that really fucking sucks, and I really hope I’ll have a chance on the next go-around. But if it’s something completely different, I’m cool with that, too.



  • I’m an atheist but I don’t actually preclude the existence of an afterlife. “There is no heaven or hell, it just all goes black and that’s it,” is just as patently unfalsifiable as any claim made by any religion.

    It’s just as likely to be something completely different and alien from anything conceivable in our limited world view. In an infinite space of probabilities, the likelihood of it being “literally nothing” actually seems pretty low.

    That kind of uncertainty is exactly what scares most people, but not me. I’m looking forward to finding out one day.