“The new device is built from arrays of resistive random-access memory (RRAM) cells… The team was able to combine the speed of analog computation with the accuracy normally associated with digital processing. Crucially, the chip was manufactured using a commercial production process, meaning it could potentially be mass-produced.”
Article is based on this paper: https://www.nature.com/articles/s41928-025-01477-0
It uses 1% of the energy but is still 1000x faster than our current fastest cards? Yea, I’m calling bullshit. It’s either a one-off, bullshit, or the next industrial revolution.
EDIT: Also, why do articles insist on using ##x less? You can just say it uses 1% of the energy. It’s so much easier to understand.
I mean, it’s like the 10th time I’m reading about THE breakthrough in Chinese chip production on Lemmy, so let’s just say I’m not holding my breath, lol.
https://www.nature.com/articles/s41928-025-01477-0
Here’s the paper published in Nature.
However, it’s worth noting that Nature has had to retract studies before:
https://en.wikipedia.org/wiki/Nature_(journal)#Retractions
From 2000 to 2001, a series of five fraudulent papers by Jan Hendrik Schön was published in Nature. The papers, about semiconductors, were revealed to contain falsified data and other scientific fraud. In 2003, Nature retracted the papers. The Schön scandal was not limited to Nature; other prominent journals, such as Science and Physical Review, also retracted papers by Schön.
Not saying that we shouldn’t trust anything published in scientific journals, but yes, we should wait until more studies that replicate these results exist before jumping to conclusions.
They’re real, but they aren’t general purpose and lack precision. It’s just analog.
Look, it’s one of those articles again. The bi-monthly “China invents earth-shattering technology breakthrough that we never hear about again.”
“1000x faster?” Learn to lie better. Real technological improvements are almost always incremental, like “10-20% faster, bigger, stronger.” Not 1000 freaking times faster. You lie like a child. Or like Trump.
Here’s a Veritasium video from 3 years ago about an American company making analog chips, explaining why they are so much more efficient in certain tasks. https://youtu.be/GVsUOuSjvcg
It is not an incremental improvement because it’s a radically different approach. This is not like making a new CPU architecture or adding more IPC; it’s doing computation in a whole different way, one that is closer to a physical model using springs/gravity/gears/whatever to model something, like the Antikythera mechanism or those water-based financial models, than to any digital computer.
Also, uncritically dismissing anything coming from China as a scam is not being resistant to Chinese propaganda, it’s just falling for the US’.
Yep! It’s a modal difference. Analogous to dismissing SSDs as a replacement for HDDs. HDDs get incrementally better as they improve their density capabilities. SSDs, meanwhile, came along and provided a “1000x” gain in speed. Let me tell folks here: that was MAGICAL. The future had arrived, at tremendous initial cost mind you, but it’s now the mainstream standard.
(Funny thing about HDDs — they’re serving a new niche in modern times. Ultra high densities have unlocked tremendously cheap bulk storage. Need to store an exabyte somewhere? Or need to read some data but don’t mind waiting a couple minutes/hours? SMR drives got ya covered. That’s the backbone of the cloud in 2025 with data storage exploding year over year.)
Analog is literally computing on the fabric of the universe.
You mean like all computers
You are dumb.
You have gotten close to understanding something profound and failed to understand that the thing in your hand also represents that profundity.
Dumby.
Computation is arranging the structure of the universe to reflect itself; it is not a mind, but a necessary component part of one. All computing is literally computing on the structure of the universe. A well-expressed thought, to which you reply what? “Dumby”? Not even a word anyone uses. It’s not merely that you are incapable of expressing an alternative position; you are not even capable of calling someone stupid correctly.
Why are you so dumby???
For the love of Christ this thumbnail is triggering, lol
Why? It’s a standard socket in SMOBO design (sandwich Motherboard).
The CPU is upside down you dork.
Let me guess, you think The Onion is a real newspaper, right?
I mean, in 2025 it’s basically a preview. It’s gone from satire to prophecy.
This was bound to happen. Neural networks are inherently analog processes, simulating them digitally is massively expensive in terms of hardware and power.
Digital domain is good for exact computation, analog is better for approximate computation, as required by neural networks.
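To see why “approximate is fine” can hold for a neural-net layer, here’s a minimal Python sketch (all numbers are made up for illustration): the same matrix-vector product done exactly in float64 and again with coarsely quantized, slightly noisy “analog-style” weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "layer": 256 inputs and outputs, random weights and a random input vector.
W = rng.standard_normal((256, 256)) / 16
x = rng.standard_normal(256)

# Exact digital result: a float64 matrix-vector product.
y_digital = W @ x

# Crude stand-in for an analog crossbar: weights snapped to ~8-bit conductance
# levels, plus a little read noise on the outputs.
levels = 256
W_analog = np.round(W * levels) / levels
y_analog = W_analog @ x + rng.normal(0.0, 1e-3, size=y_digital.shape)

# The result is inexact but close in relative terms.
rel_err = np.linalg.norm(y_analog - y_digital) / np.linalg.norm(y_digital)
print(f"relative error: {rel_err:.2%}")
```

Whether an error of a percent or two is acceptable depends entirely on the workload, which is exactly the digital-for-exact / analog-for-approximate split described above.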
That’s a good point. The model weights could be voltage levels instead of digital representations. Lots of audio tech uses analog for better fidelity. I also read that there’s a startup using particle beams for lithography. Exciting times.
(x) Doubt.
Same here. I’ll wait to see real-life calculations done by such circuits. They won’t be able to do, e.g., a simple float addition without losing/mangling a bunch of digits.
But maybe the analog precision is sufficient for AI, which is an imprecise matter from the start.
Wouldn’t analog be a lot more precise?
Accurate, though, that’s a different story…
No, it wouldn’t, because you cannot make it reproducible at that scale.
Normal analog hardware, e.g. audio, tops out at about 16 bits of precision. If you go individually tuned, high-end and expensive (studio equipment), you get maybe 24 bits. That is eons from the 52-bit mantissa precision of a double float.
Analog audio hardware has no resolution or bit depth. An analog signal (voltage on a wire/trace) is something physical, so its exact value is only limited by the precision of the instrument you’re using to measure it. In a microphone-amp-speaker chain there are no bits, only waves. It’s when you sample it into a digital system that it gains those properties. You have this the wrong way around. Digital audio (sampling of any analog/“real” signal) will always be an approximation of the real thing, by nature, no matter how many bits you throw at it.
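If it helps, here’s a small Python sketch of that sampling point (the sample rate and signal are arbitrary): quantizing an idealized sine to a given bit depth always leaves a residual error, and the bit depth only controls how big it is.

```python
import numpy as np

fs = 48_000                             # sample rate in Hz
t = np.arange(fs) / fs                  # one second of sample times
signal = np.sin(2 * np.pi * 1000 * t)   # idealized "analog" 1 kHz sine

def quantize(x, bits):
    """Round a signal in [-1, 1] to roughly 2**bits levels, like an ideal ADC."""
    steps = 2 ** (bits - 1)
    return np.round(x * steps) / steps

for bits in (8, 16, 24):
    err = signal - quantize(signal, bits)
    snr_db = 10 * np.log10(np.mean(signal ** 2) / np.mean(err ** 2))
    print(f"{bits:2d} bits -> quantization SNR ≈ {snr_db:5.1f} dB")
```

Each extra bit buys roughly 6 dB of quantization SNR, which is where figures like “16-bit ≈ 96 dB” come from.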
The problem is that both the generation as well as the sampling is imprecise. So there are losses at every conversion from the digital to the analog domain. On top of that are the analog losses through the on chip circuits themselves.
All in all this might be sufficient for some LLMs, but they are worthless junk producers anyway, so imprecision does not matter that much.
Not in a completely analog system, because there’s no conversion between the analog and digital domains. Sure, a big advantage of digital is that it’s much much less sensitive to signal degradation.
What you’re referring to as “analog audio hardware” seems to be just digital audio hardware, which will always have analog components because that’s what sound is. But again, amplifiers, microphones, analog mixers, speakers, etc have no bit depth or sampling rate. They have gains, resistances, SNR and power ratings that digital doesn’t have, which of course pose their own challenges
The maximum theoretical precision of an analog computer is limited by the charge of an electron, about 10^-19 coulombs. A normal analog computer runs at a few milliamps, for a second at most. That gives a maximum theoretical precision of about 1 part in 10^16, or roughly 53 bits, the same as the mantissa of a double-precision (64-bit) float. I believe 80-bit floats are standard in desktop computers.
In practice, just getting a good 24-bit ADC is expensive, and 12-bit or 16-bit ADCs are way more common. Analog computers aren’t solving anything that can’t be done faster by digitally simulating an analog computer.
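For anyone who wants to check the arithmetic behind that “53 bits” figure, a quick back-of-the-envelope in Python (1 mA for 1 s is my stand-in for “a few milliamps, a second max”):

```python
import math

e = 1.602e-19          # elementary charge in coulombs
charge = 1e-3 * 1.0    # 1 mA for 1 s ~ one millicoulomb of charge moved

electrons = charge / e         # number of distinguishable "steps"
bits = math.log2(electrons)    # equivalent binary precision

print(f"{electrons:.2e} electrons ≈ {bits:.1f} bits")
# ≈ 6.24e15 electrons ≈ 52.5 bits, i.e. double-float mantissa territory
```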
What does this mean, in practice? In what application does that precision show its benefit? Crazy math?
Every operation your computer does. From displaying images on a screen to securely connecting to your bank.
It’s an interesting advancement and it will be neat if something comes of it down the line. The chances of it having a meaningful product in the next decade is close to zero.
They used to use analog computers to solve differential equations, back when every transistor was expensive (relays and tubes even more so) and clock rates were measured in kilohertz. There’s no practical purpose for them now.
For number theory and RSA cryptography, you need even more precision; multiple machine integers are combined to get 4096-bit precision.
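A rough sketch of what that looks like in practice, using Python’s built-in arbitrary-precision integers (toy values, not a real key setup):

```python
import secrets

# Python ints are arbitrary precision: "combining multiple integers" into one
# big number is handled under the hood, one machine word at a time.
n = secrets.randbits(4096) | 1   # a 4096-bit odd number standing in for a modulus
m = secrets.randbits(2048)       # a "message" comfortably smaller than the modulus
e = 65537                        # the usual RSA public exponent

c = pow(m, e, n)                 # exact modular exponentiation, no rounding anywhere
print(n.bit_length(), c.bit_length())
```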
If you’re asking about the 24-bit ADC, I think that’s usually high-end audio recording.
You don’t need to simulate float addition. You can sum two currents by just joining two wires - and that’s real-number addition.
I know. My point was that this is horribly imprecise, even if their circuits are exceptionally good.
There is a reason why all other chips run digital…
How is it imprecise? It’s the same thing as taking two containers of water and pouring them into a third one. It will contain the sum of the previous two exactly. Or if you use gears to simulate orbits. Rounding errors are a digital thing.
Analog has its own set of issues (e.g. noise, losses, repeatability), but precision is not one of them. Arguably, the main reason digital took over is because it’s programmable and it’s good for general computing. Turing completeness means you can do anything if you throw enough memory and time at it, while analog circuits are purpose-made
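To illustrate the rounding-error point from a couple of comments up: the addition itself isn’t where digital loses digits, the finite binary representation is. A quick Python check:

```python
from decimal import Decimal
from fractions import Fraction

# Binary floats can't represent 0.1 or 0.2 exactly, so the sum drifts:
print(0.1 + 0.2)               # 0.30000000000000004
print(0.1 + 0.2 == 0.3)        # False

# Represent the same values exactly and the addition itself is exact:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))      # True
```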
So you list the reasons for imprecision: noise, losses, repeatability problems (there are more), but still consider it precise?
Adding two containers of water is only as precise as you can get leakage under control, and can rely on a repeatable shape of the containers. Both is something chip level logic simply does not deliver.
1000x!
Is this like medical articles about major cancer discoveries?
1000x yes!
yes, except the bullshit cancer discoveries are always in Israel, and the bullshit chip designs are in china.
Yes. Please remember to downvote this post and all others that are based on overblown articles from nobody science blogs.
This seems like promising technology, but the figures they are providing are almost certainly fiction.
This has all the hallmarks of a team of researchers looking to score an R&D budget.
sounds like bullshit.
read the paper
The problem is with the clickbait headline (on livescience.com), not the paper itself.
> See article preview image
> AI crap CPU
> Leaves immediately
Which is worse - AI slop, or people decrying everything they see as AI slop, even when it isn’t?
And it’ll be on sale through Temu and Wish.com
Edit: I removed a ChatGPT-generated summary that I thought could have been useful.
Anyway, just have a good day.
This comment violates rule 8 of the community. Please get your AI-generated garbage out of here.
In that case I’m editing it. I’m sorry for my mistake, I thought it would be useful to a point. That’s why I said it was AI.
I appreciate that you wanted to help people even if it didn’t land how you intended. :)
It was a decent summary, I was replying when you pulled it. Analog has its strengths (the first computers were analog, but electronics was much cruder 70 years ago) and it is def. a better fit for neural nets. Bound to happen.
Nice thorough commentary. The LiveScience article did a better job of describing it for people with no background in this stuff.
The original computers were analog. They were fast, but electronics was so crude at the time, it had to evolve a lot … and has in the last half-century.
This is already a thing; there’s a US lab doing this.
Ya but they just deported all the employees, probably
cool