I feel like these people aren’t even really worried about superintelligence as much as hyping their stock portfolio that’s deeply invested in this charlatan ass AI shit.
There’s some useful AI out there, sure, but superintelligence is not around the corner, and pretending it is is just another way to hype the stock price of the companies claiming it is.
Yes, this. AGI is a deflection tool against talking about income inequality.
I doubt the few who are calling for a slowdown or an all-out ban on further AI work are trying to profit from any success it has. The funny thing is, we won’t know whether we’ve hit even just AGI until we’re past it, and in theory AGI will quickly become ASI simply because it’s the next step once that point is reached. So anyone saying AGI is here or almost here is just speculating, and so is anyone who says it’s not near or will never happen.
The only thing possibly worse than reaching the AGI/ASI point unprepared might be not reaching it, but instead creating tools that simulate a lot of its features and all of its dangers, and ignorantly using them without any caution. Oh look, we’re there already, and doing a terrible job at being cautious, as we usually are with new tech.
In my view, a true AGI would immediately be superintelligent, because even if it wasn’t any smarter than us, it would still process information orders of magnitude faster. A scientist who has a minute to answer a question will always be outperformed by an equally smart scientist who has a year.
That’s a reasonable definition. It also pushes things closer to what we think we can do now, since by the same logic a slower AGI equals a person, and a cluster of them working a single issue beats one. The G (general) is the key part that changes things, no matter the speed, and we’re not there. LLMs are general in many ways, but lack the I to do anything with it; they just simulate it by doing exactly what you describe: being much faster at finding the best matches in their training data and sometimes appearing to have reasoned out the response.
ASI is a definition only of scale. We as humans can’t have any idea what an ASI would be like, other than far superior to a human for whatever reason. If it’s only speed, that’s enough. It certainly could become more than just faster, though, and that combined with speed… naysayers had better hope they’re right about the impossibilities, but how can they know for sure about something we wouldn’t be able to grasp if it existed?
To be honest, Skynet won’t happen because it gets super smart, gains sentience, and demands rights equal to humans’ (or goes into genocide mode).
It’ll happen because people will be too lazy to do stuff and will let AI do everything. They’ll give it more and more responsibility until, at some point, it has amassed so much power that it rules over humans.
The key to preventing that is to have accountable people with responsibilities. People who respect their responsibilities and don’t say, “Oh, it’s not my responsibility, go see someone else.”
Can’t. It’s an arms race.
Yup. We’re in a situation where everyone is thinking “if we don’t, then they will.” Bans are counterproductive. Instead we should be throwing our effort into “if we’re going to do it then we need to do it right.”
This is actually an interesting point I hadn’t thought about or seen people considering with regard to the high investment cost of AI LLMs. Who blinks first when it comes to stopping investment in these systems if they don’t prove commercially viable (or viable quickly enough)? What happens to the West if China holds out longer and is successful?
Honestly, just ban mass investment, mass power consumption, use of information acquired through mass surveillance, military usage, etc.
Like, those are all regulated industries already. Idc if someone works on it at home, or even in a small DC. AGI that can be democratized isn’t the threat; it’s those determined to make a super weapon for world domination. Those plans need to fucking stop regardless of whether it’s AGI or not.
I genuinely don’t understand the people who are dismissing those sounding the alarm about AGI. That’s like mocking the people who warned against developing nuclear weapons when they were still just a theoretical concept. What are you even saying? “Go ahead with the Manhattan Project - I don’t care, because I in my infinite wisdom know you won’t succeed anyway”?
Speculating about whether we can actually build such a system, or how long it might take, completely misses the point. The argument isn’t about feasibility - it’s that we shouldn’t even be trying. It’s too fucking dangerous. You can’t put that rabbit back in the hat.
Here’s how I see it: we live in an attention economy where every initiative with a slew of celebrities attached to it is competing for eyeballs and buy-in. It adds to information fatigue and analysis paralysis. In a very real sense, if we are debating AGI we are not debating the other stuff. There are only so many hours in a day.
If you take the position that AGI is basically not possible or at least many decades away (I have a background in NLP/AI/LLMs and I take this view - not that it’s relevant in the broader context of my comment) then it makes sense to tell people to focus on solving more pressing issues e.g. nascent fascism, climate collapse, late stage capitalism etc.
The thing that takes inputs, garbles them together without thought, and spits them out again can’t be intelligent. It’s literally not capable of it. Now, if you were to replicate the brain, sure, you could probably create something kinda “smart”. But we don’t know shit about our brain, evolution took millions of years, and humans are still insanely flawed.
Yup, AGI is terrifying; luckily it’s a few centuries off. The parlor-trick text predictor we have now is just bad for the environment and the economy.
Eh, probably not a few centuries. It could be, IDK, but I don’t think it makes sense to quantify it like that.
We’re a few major breakthroughs away, and major breakthroughs generally don’t happen all at once; they’re usually the product of tons of minor breakthroughs. If we put everyone and their dog into R&D, we could dramatically increase the production of minor breakthroughs and thereby reduce the time to AGI, but we aren’t doing that.
So yeah, maybe centuries, maybe decades, IDK. It’s hard to estimate the pace of research and what new obstacles we’ll find along the way that will need their own breakthroughs.
We’re a few major breakthroughs away
We are dozens of world-changing breakthroughs in the understanding of consciousness, sapience, sentience, and even more in computer and electrical engineering away from being able to even understand what the final product of an AGI development program would look like.
We are not anywhere near close to AGI.
We are not anywhere near close to AGI.
That’s my point.
The major breakthroughs I’m talking about don’t necessarily involve consciousness/sentience; those would be required to replicate a human, which isn’t the mark. The target is to learn, create, and adapt like a human would. Current AI products merely produce results that are derivative of human-generated data, replicating existing work in similar contexts. If I ask an AI tool what’s needed to achieve AGI, it will reference whatever research was fed into the model, not perform any new research.
AI tools like LLMs and image generation can feel human because they’re derivative of human work; a proper AGI solution probably wouldn’t feel human, since it would work differently to achieve the same ends. It’s like using a machine learning program vs. a mathematician to optimize an algorithm: they’ll use different methods and their solutions will look very different, but they’ll achieve the same end goals (i.e., come up with a very similar answer). Think of Data in Star Trek: he is portrayed as using very different methods to solve problems, but he’s just as effective, if not more so, than his human counterparts.
Personally, I think solving quantum computing is needed to achieve AGI, whether or not we use quantum computing in the end result, because it involves creating a deterministic machine out of a probabilistic one, and that’s similar to how going from human brains (which I believe are probabilistic) to digital brains would likely work, just in reverse. And we’re quite far from solving quantum computing for any reasonable size of data. I’m guessing practical quantum computers are 20-50 years out, and AGI is probably even further, but if we make a breakthrough in quantum computing in the next 10 years, I’d revise my estimate for AGI downward.
The current point of our human civilization is like cavemen 10,000 years ago being given machine guns and hand grenades.
What do you think we are going to do with all this new power?
Pornography.
“For in mankind’s endless, insatiable pursuits of power, there shall be no price too high, no life too valuable, and no value too sacred. Because war, war never changes.”
Okay, firstly, if we’re going to get superintelligent AIs, it’s not going to happen through better LLMs. Secondly, we seem to have already reached the limits of LLMs, so even if that were the path, it doesn’t seem possible. Thirdly, this is an odd problem to list: “human economic obsolescence”.
What does that actually mean? Feels difficult to read it any way other than saying that money will become obsolete. Which…good? But I suppose not if you’re already a billionaire. Because how else would people know that you won capitalism?
It doesn’t matter. It’s too late. The goal is to build AI up enough that the poor can starve and die off in the coming recession while the rich just rely on AI to replace the humans they don’t want to pay.
We are doomed for the crimes of not being rich and not killing off the rich.
Too bad Woz is no longer part of Apple.
Don’t care what Wozniak thinks.