Taking “will it run doom” one step too far.
We need to take it back a step and see if it’ll run Linux.
Asking someone to compile the kernel with their mind sure is one way to give someone a headache
I wanna see Linus’ reaction to that pull request.
I can imagine a new AI hellscape where LLMs are run on human brain cells in a test tube. So you’re never quite sure if you’re talking to a mere algorithm … or an enslaved proto-human who might be conscious and whose entire existence revolves around answering your inane online queries.
oooo nice another method I hadn’t considered of It Getting Worse!
A closer example would be if it could use the console in linux since it’s only playing dom not running it, I wonder if wetware would be any better as a coding assistant than ai
playing dom
I do not think we should train a bioengineered intelligence with guns to do that.
Don’t kink shame
Hey human doms I’m 100% okay with, it’s these things I’m worried about
https://static.wikia.nocookie.net/doom/images/2/27/Meet_spider.png/revision/latest
Is anyone else really skeeved out by the term “wetware”, or is that just me
Shadowrun used the term “bioware” instead, do you like that better?
Nah I prefer wetware. This should heebie your jeebies
Yeah now it’s “will it speedrun DOOM?”
Can they run console commands?
It’s having more fun than I am.
Maybe the real hell is the life we’ve lived along the way.
Maybe this experience is just a higher being’s analog of Doom and we’re just cells in a petri dish running an MMORPG version of it. Stupidest simulation theory ever, but hey man, we’re sorta doing it so how stupid can it be?
Having worked with human neurons harvested from dead people, there are worse ways to extend your life. At least these ones get to play games instead of getting poked and zapped by me.
Could you poke and zap me?
Poke, yes. Zap is too weak to do much in live people but if I break out the old electrolytic brain lesion maker you’ll feel it.
OwO
You know this isn’t even the weirdest flirting I’ve seen online today. It’s a notable thread, but not too bad.
Enjoy the shocks
I have neurons, can you zap me?
I mean, I was thinking more like TENS …
Sure you were, you fucking freak.
Maybe I misunderstood Lemmy constituents.
… Am I missing something, or is this not like, the practical, if not lore accurate first step toward actually creating a:

Next step, give it spider legs and a Gatling gun!
I mean, Boston Dynamics figured out how to build essentially robot mules and cats like a decade ago, and they’re actually currently building and improving on humanoid designs.
They got basically acquired by/folded into Hyundai, you know, an actual manufacturing company, unlike Elon’s ongoing fraudulent shitshows.
the only missing components are a minigun, robotic spider legs and a positive reinforcement cocktail whenever it kills a person.
“We decided to leave those out of our first test; staring down the barrels of a minigun during neural training was putting our scientists off”
Raises uncomfortable questions about consciousness. The only difference between these neurons and your own are the number of them and the structures they form. Of course it doesn’t know what it’s doing, but… Neither do our own neurons
Science and Ethics — the age old enmity between “I wanna know” and “I’m not allowed to find out”
Science and Ethics — the age old enmity between “I wanna know” and “I’m not allowed to find out”
“Am I able to find out without doing something monstrously inhumane” FTFY
I guess my point is that sometimes even if it’s illegal you can get away with it if done correctly, with ruling party aligned stated goals…or you have access to a shit tonne of money and powerful friends.
I simplified for comedic effect. You’re absolutely right that the “compromise” would be finding some humane and ethical solution, but “The most effective and direct way of finding out is cruel and callous” isn’t quite as snappy.
I guess my point is that sometimes even if it’s illegal you can get away with it if done correctly, with ruling party aligned stated goals…or you have access to a shit tonne of money and powerful friends.
That kinda dodges the conflict by not engaging with ethical concerns at all. I feel like calling it a solution would be morbid, but it does make the problem stop being a problem…
That kinda dodges the conflict by not engaging with ethical concerns at all.
I guess I…kinda lost the plot a bit when I wrote the second part, eh?
There’s ethics…and then there’s what the government in the country a scientist operates in views as “morally and ethically acceptable”.
Stem cell research was banned in most places for a long time. The US is banning CRISPR, if I remember right. The OG Nazis, Soviets and Empire of Japan (and honestly basically everyone else too, just those are the three that were highlighted when I was in school) rubber-stamped and funded research that should warrant execution by vivisection…die by your own methods and all that. You’re right it’s not really a solution. However the realities of modern society mean that there’s room within what is morally and ethically acceptable in any country to operate in both a humane and inhumane fashion. And if it doesn’t, then money and connections to those in power allow further leeway to be an example of humanity at its best…or a monster in a human suit…
I guess I…kinda lost the plot a bit when I wrote the second part, eh?
I think I got where you were going, I was just saying that someone trying to find a way around the legal restrictions indicates they’re not actually concerned about ethics, just about not getting in trouble for it. In that context, the problem “How do I do this in an ethically acceptable manner?” is “solved” with the answer “I don’t care”.
Generally, laws are the standard solution to ambiguities. Ethics are a murky and often subjective topic, so it makes sense to form some sort of common agreement on what is okay and what isn’t. And where there are laws, there are gonna be cunts proving exactly why we had to write it down in the first place…
Neuralink did pretty much the same thing to monkeys, which are actually conscious. So is this different only because those are human neurons? Is human consciousness different than animal consciousness?
I’m not sure this is quite analogous to Neuralink’s monkey experiments. That said,
So is this different only because those are human neurons?
To my mind, a neuron is a neuron. The only difference between your brain and a monkey brain is, again, the number of neurons and the structures they form. I don’t see this as any different from monkey or rat or ant or entirely digital neurons.
I’m not sure this is quite analogous to Neuralink’s monkey experiments.
Why not? It’s a chip reading inputs from neurons. This meme doesn’t make it clear if the chip was also stimulating neurons, but Neuralink has plans for neural stimulation and it’s possible this was also tested on monkeys. So what’s the difference?
You seem to be arguing against a point that no one has made.
You seem not to understand what is being discussed here.
Correct. That was basically my point – I don’t think anything is being discussed, people are talking past each other.
Yes. Because it’s us. Anything not us is always going to be less valuable. You’d kill 100 lions if it means saving 1 human.
Lions are not conscious. And I’m not asking about value. Of course we value human consciousness more than monkey consciousness. We don’t grant monkeys any rights. Hell, we assign more value to unconscious (brain dead) humans than to conscious monkeys. But how exactly is human consciousness different?
What leads you to assume that lions lack consciousness exactly?
Shit, turns out lions are conscious! They are just stupid. Stephen Hawking said it in 2012. I honestly didn’t know that.
That was just to try and make the equipment work at all, it wasn’t about doing anything with software. It’s the opposite where you’re only worried about the physical damage and infection.
I was focusing more on the “hooking up conscious brain to computer” part than about the damage and infection part.
Thought experiment: let’s say we have a brain-dead patient. You have verified that there is no neural activity in the brain beyond the cerebellum. There’s no consciousness in the brain. Legally they’re still considered a person. You can’t, for example, shoot them.
We also have a 5kg blob of lab grown human brain tissue. We have verified there is neural activity in the entire blob but we don’t know what it’s doing and we can’t communicate with it.
Which one is more conscious? Which one should be considered more human and should have more rights?
Hooking up to a computer is just installing a software keyboard in your brain; that doesn’t really mean or do anything. It’s what software you load after that’s relevant.
Do those neurons interact with hormones like mine do?
And now bring artificial neural networks, i.e., AI, into the picture to make it even more spicy.

Finally, I knew when I saved this to my phone there would be a perfect moment. (Humanity is too predictable)
Attribution: https://lemmy.world/post/43077529
OK but hear me out here, I think I have the beginnings of a business plan:
- Create the Torment Nexus
- ?
- Profit
Some components of the plan are still under development, but let’s not lose momentum. We can advance with the initial phase while brainstorming to refine the plan in real time as we progress. It’s an exciting opportunity and we mustn’t forfeit our first-to-market advantage.
Wait is it a real cover? Was it made before or after squid game? It uses the same font
Cortical Labs are the ones who pulled this off. They already have biological computers running on 800,000 lab-grown neurons available for ~$35,000 (just going on what a quick Google search told me) and are planning to open up a cloud computing service with its own API soon.
This makes me feel uneasy. Imagine if reincarnation were a thing and you get brought back into this world, and your purpose is to learn how to play DOOM.
Personally my worry really isn’t reincarnation, there’s no reason to believe that that’s true. But if these are fundamentally the same neurons that make up our brains, then how much do you need to put together before they acquire some form of “sentience”? Does a clump of 800,000 human neurons experience pain, sadness, a sense of self? Where is the line between an emotionless biocomputer and torturing a living organism for its entire lifespan?
Despite the fact that I really hate “AI”, that question was of course already sort of relevant for the latest AI models, even though we can generally conclude that they’re not there yet at all. But real neurons are different, we know what they’re capable of. How many do you need before a clump of neurons has rights?
Large language models are not intelligent. They are predictive text applications with massive dictionaries of circumstantial sentence structures to choose from. Nothing more. They do not feel and do not think for themselves. The only time they do anything is when the API calls them to produce more text with an updated context string.
It has to be a full fetus with a heartbeat to have rights. /s In all seriousness, the human brain is estimated to have 86 billion neurons.
“Do lab grown neurons have a soul?”
I would say consciousness is required for that, so no.
aw sweet, man made horrors beyond my comprehension 😍
There’s another bunch of guys who are trying to do the same thing with rat neurons on the cheap using Gatorade as a growth medium.
Ha, I understand why it would make someone uneasy, but personally that sounds like heaven to me. Seriously, take a slice of my neurons and hook them up to play Doom forever; that’s what I want done with my remains. (I guess the rest of me cremate or something, idc)
Computer Scientists: We can make Doom run on any device!
Bioscientists: Watch this!
Cosmologists: *cracks knuckles* Check this shit out.
We are old gods who punish life for fun.

Ok, so maybe 200K brain cells would be sufficient to run for public office, but you can’t really call that a complete brain, containing approximately 100 billion cells.
Public office might still be borderline, but we have a living proof that POTUS is within reach.

Ah I see, so we’re adding the matrix to our dystopian horror show reality then.
So, uh… is it any good at it?
IIRC, it doesn’t actually play the game itself. We prod the cells, they fire in a certain way, and that response is read and converted into an input for the game. The cells aren’t a rudimentary Doom bot; they’re the controller.
So no, then.
Iirc it’s slightly better than using a coin toss to fire the inputs. Fantastic for fundraising for this company tho
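For anyone curious what “the cells are the controller” means in practice, here’s a toy sketch of that closed loop, decode the firing pattern into a game input, then stimulate the culture as feedback. This is not the actual Cortical Labs setup; every name here (`read_spikes`, `decode_action`, `stimulate`) is made up for illustration, and the readout is just random numbers standing in for electrode data.

```python
import random

def read_spikes():
    """Hypothetical stand-in for the electrode-array readout; a real
    rig would sample firing activity from the cultured neurons."""
    return [random.random() for _ in range(8)]

def decode_action(spikes):
    """Map the most active pair of channels to a game input. The cells
    never 'see' Doom; their firing is just interpreted as a controller."""
    actions = ["turn_left", "turn_right", "forward", "shoot"]
    region = max(range(4), key=lambda i: spikes[2 * i] + spikes[2 * i + 1])
    return actions[region]

def stimulate(outcome_good):
    """Hypothetical feedback stimulus: the low-voltage 'yes/no' prodding
    that closes the loop after the game processes the input."""
    pass

# One pass of the closed loop: read firing, decode an input, feed back.
spikes = read_spikes()
action = decode_action(spikes)
stimulate(outcome_good=True)
```

Which is also why “slightly better than a coin toss” is the honest benchmark: you’re measuring whether the feedback shifts the firing statistics at all, not whether anything in the dish understands the game.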
So we’re just going full head of steam into a combo of the torment nexus and AM
Torment Nexus but with a win condition
Am I the only one who wonders why, in a world where there are already concerns about machine rebellion, when we train rats, robots and a bunch of neurons to play a game, it HAS to be Doom? Can’t we think of another, non-violent, or let’s be bold: non-destructive game?
They trained a tiny patch of neurons to respond to low-voltage electric impulses. The cells don’t know they’re playing Doom. They don’t have any kind of social context or even video feedback.
Imagine if I stuck you in a sensory deprivation chamber, handed you an NES controller, and asked you to hit the buttons. Then, periodically, I said “Yes” or “No” based on the buttons you pressed. And when I pulled you out of the tube at the end of an hour, I told you “the yes and no messages were intended to encourage you to correctly navigate Mario through the first level of the original game.” What if, instead of Mario, I’d been telling you how to play Street Fighter?
It doesn’t matter that it’s Doom. They likely picked Doom because the I/O is so rudimentary that you can install the game on practically anything. The cellular matter has no idea what it’s doing beyond the “Yes/No” signaling.
I know there is no real association between the game and real life. It’s more a question on the mindset of the researchers. I’m sure there are other games that would fit their needs.
Tetris.
Pong.
You can do the exact same thing with a cockroach. Organoids are not brains.