For anyone unsure: Jevons paradox is that when there's more of a resource to consume, humans will consume more of it rather than take the gains and use the resource better.
Case in point: AI models could be written to be more efficient in token use (see DeepSeek), but instead AI companies just buy up all the GPUs and shove more compute in.
As for the expansive bloat - same goes for phones. Our phones are orders of magnitude better than they were 10 years ago, and now they're loaded with bloat because the manufacturer thinks "Well, there's more compute and memory. Let's shove more bloat in there!"
Case in point: AI models could be written to be more efficient in token use
They are being written to be more efficient in inference, but the gains are being offset by trying to wring more capabilities out of the models by ballooning token use.
Which is indeed a form of Jevons paradox.
Costs have been dropping by a factor of 3 per year, but token use increased 40x over the same period. So while the efficiency is contributing a bit to the use, the use is exploding even faster.
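To put rough numbers on that (just an illustration; I'm reading "the same period" as a single year, which is an assumption, not something stated above):

```python
# Rough sanity check of the claim above (assumption: the 40x usage growth
# and the 3x cost drop are taken over the same one-year window).
cost_drop = 3        # cost per token falls 3x (figure from the comment)
usage_growth = 40    # tokens consumed grow 40x (figure from the comment)

spend_multiplier = usage_growth / cost_drop
print(f"Total spend changes by roughly {spend_multiplier:.1f}x")   # ~13.3x
# However you slice the window, usage growing faster than efficiency means
# total resource consumption still rises -- which is the paradox.
```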
I think we’re meaning the same thing.
Yes, but have you considered if I just rephrase what you just said but from a slightly different perspective?
Jevons paradox is that when there's more of a resource to consume, humans will consume more of it rather than take the gains and use the resource better.
More specifically, it's when an improvement in efficiency causes the underlying resource to be used more, because the efficiency reduces cost, and using that resource then becomes even more economically attractive.
So when factories got more efficient at using coal in the 19th century, England saw a huge increase in coal demand, despite using less coal for any given task.
Also Eli Whitney inventing the cotton gin to make processing cotton less of a tedious and backbreaking process, which led to a massive expansion of slave plantations in the American South due to the increased output and profitability of the crop.
This doesn't happen only with efficiency gains. There is also risk compensation, which feels kind of the same: safer cars encourage more reckless driving, which in turn makes accidents happen more often, eating into the safety gains.
I always felt American car companies were a really good example of that back in the 60s-70s when enormously long vehicles with giant engines were the order of the day. Why not bigger? Why not stronger? It also acted as a symbol of American strength, which was being measured by raw power just like today lol.
This also reminds me of the way video game programmers in the late 70s/early 80s had such tight limitations to work within that you had to get creative if you wanted to make something stand out. Some very interesting stories from that era.
I also love to think about the tricks the programmer of Prince of Persia had employed to get the “shadow prince” to work…
The tech debt problem will keep getting worse as product teams keep promising more in less time. Keep making developers move faster. I’m sure nothing bad will come of it.
Capitalism truly ruins everything good and pure. I used to love writing clean code and now it’s just “prompt this AI to spit out sloppy code that mostly works so you can focus on what really matters… meetings!”
What really matters isn’t meetings, it’s profits.
so you can focus on what really matters…
~~meetings!~~ collecting unemployment!
My PC is 15 times faster than the one I had 10 years ago. It’s the same old PC but I got rid of Windows.
On Linux it really is noticeable
Two possible and opposite interpretations of your comment:
- Modern Linux feels exactly as responsive on modern hardware as old Linux used to feel on old hardware, or worse.
- Linux feels much more responsive and fast on modern hardware than it does on old hardware, unlike other OSes.
The modern web is an insult to the idea of efficiency at practically every level.
You cannot convince me that isolation and sandboxing require a fat 4 GB slice of RAM for a measly 4 tabs.
It is crazy that I can have a core 2 duo with 8 gig of RAM that struggles loading web pages
Can’t wait for the new evidence that Epstein is behind that too.
Actshually, it's bandwidth censorship: if you make something too heavy to use, then it won't get used. It's one of the things China is doing to separate their internet from the rest of the world's, by having a domestic internet so blazingly fast that going out to the world wide web feels unbearable.
So yeah, the Epstein class are making the news too slow for typical users to access. /maybe some sarcasm, maybe not, I'm not sure yet
EDIT: I have decided I was not being sarcastic. https://ioda.inetintel.cc.gatech.edu/reports/shining-a-light-on-the-slowdown-ioda-to-track-internet-bandwidth-throttling/
Episodes of network throttling have been reported in countries like Russia, Iran, Egypt, and Zimbabwe, and many more, especially during politically sensitive periods such as elections and protests. In some cases, entire regions such as Iran's Khuzestan province have experienced indiscriminate throttling, regardless of the protocol or specific services in use. Throttling is particularly effective and appealing to authoritarian governments for several reasons: Throttling is simple to implement, difficult to detect or attribute and hard to circumvent.
I’m pretty sure the “unused RAM is wasted RAM” thing has caused its share of damage from shit developers who took it to mean use memory with reckless abandon.
Would be nice if I could force programs to use more RAM though. I actually have 100GB of DDR4 in my desktop. I bought it over a year ago when DDR4 was unloved and cheap. But I've tried to force programs not to offload so much. Like Firefox: I hate that I have the RAM but it's still unloading webpages in the background and won't ever use more than 6GB.
I actually have 100GB of DDR4
They’ve got RAM! Get’em!
Will disabling the swap file fix that?
If not, just mount your swap file in RAM lmao
Don’t fully disable swap on Windows, it can break things :-/
I didn’t know that, that used to not be the case.
Maybe it has changed again, but in the past I gave it a try. When 16 GB was a lot. Then when 32 GB was a lot. I always thought “Not filling up the RAM anyway, might as well disable it!”
Yeah, no, Windows is not a fan. Like you get random “running out of memory” errors, even though with 16 GB I still had 3-4 GB free RAM available.
Some apps require the page file, same as crash dumps. So I just set it to a fixed value (like 32 GB min + max) on my 64 GB machine.
Programs that care about memory optimization will typically adapt to your setup, up to a point. More RAM isn't going to make a program run any better if it has no use for it.
Set swappiness to 5 or something similar, or disable swap altogether unless you’re regularly getting close to max usage
RAM disk is your friend.
In most cases, you either optimize the memory, or you optimize the speed of execution.
Having more memory means we can optimize the speed of execution.
Now, the side effect is that we can also afford to be slower to gain other benefits: ease of development (cue JavaScript everywhere, or Python) at the cost of speed, maintainability at the cost of speed, etc…
So, even though you don't always see performance gains as the years go by, that doesn't mean shit devs; it means the priority is somewhere else. We have more complex software today than 20 years ago because we can afford not to focus on RAM and speed optimization, and instead focus on maintainable, unoptimized code that does complex stuff.
Optimization is not everything.
unoptimized code that does complex stuff.
You can still have complex code that is optimized for performance. You can spend more resources to do more complex computations and still be optimized so long as you’re not wasting processing power on pointless stuff.
For example, in some of my code I have to get a physics model within 0.001°. I don’t use that step size every loop, because that’d be stupid and wasteful. I start iterating with 1° until it overshoots the target, back off, reduce the step to 1/10, and loop through that logic until I get my result with the desired accuracy.
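A minimal sketch of that loop in Python (the model, target and names here are placeholders, not my actual code):

```python
def find_angle(model, target, coarse=1.0, fine=0.001):
    """Coarse-to-fine search: march forward in `coarse`-degree steps until
    model(angle) overshoots `target`, back off one step, shrink the step
    tenfold, and repeat until the step size reaches `fine` degrees.
    Assumes model() is monotonically increasing and model(0) < target."""
    angle = 0.0
    step = coarse
    while step >= fine * 0.999:       # small tolerance for float error in step /= 10
        while model(angle) < target:  # march until we overshoot the target
            angle += step
        angle -= step                 # back off to just below the target
        step /= 10                    # refine: 1 -> 0.1 -> 0.01 -> 0.001
    return angle                      # within `fine` degrees of the crossing

# Toy usage: solve a*a = 200 to within 0.001 degrees (roughly 14.142)
print(round(find_angle(lambda a: a * a, 200.0), 3))
```

Four coarse-to-fine passes instead of brute-forcing 0.001° steps from zero, which is the whole point.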
Of course! But sometimes, most often even, the optimization is not worth the development effort to get it. We're particularly talking about memory optimization here, and memory is so cheap (or at least it was… ha) that it is not worth optimizing like we used to 25 years ago. Instead you use higher-level languages with garbage collection or equivalents that are easier to maintain and faster to implement new stuff in. You use algorithms that consume a fuck ton of memory for speed improvements (see the sketch below). And as long as it is fast enough, you shouldn't over-optimize.
Proper optimization these days is more of a hobby.
Now obviously some fields require a lot more optimization - embedded systems, for instance. Or simulations, which get a lot of value from being optimized as much as possible.
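To make the "fuck ton of memory for speed improvements" point concrete, here's the sketch mentioned above: plain memoization, with a toy function rather than anything from a real project.

```python
# Trading memory for speed via memoization (illustrative toy example).
from functools import lru_cache
import time

def fib_slow(n):
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)          # cache every result: more RAM, far less CPU
def fib_fast(n):
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

for f in (fib_slow, fib_fast):
    start = time.perf_counter()
    f(30)
    print(f.__name__, f"{time.perf_counter() - start:.4f}s")
# fib_slow recomputes the same subproblems over a million times; fib_fast
# keeps them all in memory and finishes almost instantly.
```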
Unfortunately, a lot of dev studios tend to just build their games on the highest-end systems they can and don't bother checking lower-end hardware. On a lot of systems, there are plenty of programs that don't run "good enough". And sometimes I'll even have issues with M$ applications on decent workstation hardware. Notes and Teams are frustratingly slow to work with sometimes.
With 32 and 64 GB systems I’ve never run out of RAM, so the RAM isn’t the issue at all.
Optimization just sucks.
Have you ever tried running a decent sized LLM locally?
Decent sized for what?
Creative writing and roleplay? Plenty, but I try to fit it into my 16 GB VRAM as otherwise it’s too slow for my liking.
Coding/complex tasks? No, that would need 128GB and upwards and it would still be awfully slow, unless you use a Mac with unified memory.
For image and video generation you’d want to fit it into GPU VRAM again, system RAM would be way too slow.
I still remember playing StarCraft 2 shortly after release on a $300 laptop and it running perfectly well on medium settings.
Looked amazing. Felt incredibly responsive. Polished. Optimized.
Nowadays it's RTX this, framegen that, need an SSD or loading times are abysmal, oh and don't forget that you need 40 GB of storage and 32 GB of RAM for a 3-hour-long walking simulator. How about you optimize your goddamn game instead? Don't even get me started on price tags for these things.
Software and game development is definitely a spectrum, but holy shit, the ratio of sloppy releases is so disproportionate that it's hard to see it at times.
StarCraft 2 was released in 2010, and a quick search indicates the most common screen resolution was 1024x768 that year. That feels about right, anyway. A bit under a million pixels to render.
A modern 4K monitor has a bit over eight million pixels, slightly more than ten times as many. So you'd expect the textures and models to be about ten times the size. But modern games don't just have 'colour textures', they're likely to have specular, normal and parallax ones too, so that's another three times. The voice acting isn't likely to be in a single language any more either, so there'll be several copies of all the sound files.
A clean Starcraft 2 install is a bit over 20 GB. ‘Biggest’ game I have is Baldur’s Gate 3, which is about 140 GB, so really just about seven times as big. That’s quite good, considering how much game that is!
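Quick check of that arithmetic (using the figures above; resolutions assumed to be 1024x768 and 3840x2160):

```python
# Pixel and install-size ratios from the comment above.
old_pixels = 1024 * 768           # ~0.79 million pixels
new_pixels = 3840 * 2160          # ~8.3 million pixels (4K)
print(new_pixels / old_pixels)    # ~10.5x more pixels to render

sc2_gb, bg3_gb = 20, 140          # install sizes quoted above
print(bg3_gb / sc2_gb)            # 7x bigger on disk
```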
I do agree with you. I can't think of a single useful feature that's been added to e.g. MS Office since Office 97, say, and that version is so tiny and fast compared to the modern abomination. (In fact, in a lot of ways the modern one is worse: it's had some functionality removed and not replaced.) And modern AAA games do focus too much on shiny and not enough on gameplay, but the fact that they take a lot more resources is more to do with our computers being expected to do a lot more.
Why are you comparing the most common screen resolution in 2007 to a 4k monitor today? 4k isn’t the most common today. This isn’t a fair comparison.
1080p is still the most common, though 1440p is catching up very fast.
BTW the demand for bigger screens and higher resolutions is something I don't easily understand. I notice some difference between 1366x768 and 1920x1080 on a desktop, but any further increase is of so little use to me that I'd classify it as a form of bloat. If anything, I now habitually download 480p and 720p instead of higher definitions by default because it saves traffic and battery power, and much more fits on a single disk that's easy to back up.
Pixel density is more important than resolution. Higher resolution is only useful outside of design work if the screen size matches
IMO the ideal resolutions for computer monitors are 24" @ 1080p, 27" @ 2k, and 32"+ at 4k+. For TVs it's heavily dependent on viewer distance. I can't tell the difference between 2k and 4k on my 55" TV from the couch.
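For reference, pixel density is easy to work out from the diagonal and the resolution. A quick sketch for the pairings above (assuming "2k" means 2560x1440 and "4k" means 3840x2160):

```python
# Pixels per inch (PPI) = diagonal pixel count / diagonal size in inches.
from math import hypot

pairs = [(24, 1920, 1080), (27, 2560, 1440), (32, 3840, 2160)]
for diagonal_in, w, h in pairs:
    ppi = hypot(w, h) / diagonal_in
    print(f'{diagonal_in}" at {w}x{h}: {ppi:.0f} PPI')
# Roughly 92, 109 and 138 PPI -- density, not raw resolution, is what you
# actually perceive at a given viewing distance.
```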
The main thing I noticed with a 768p monitor was GNOME being unusable thanks to their poor UI density.
Excel is sooo much better than it used to be in Office 97. And it's way better than any other spreadsheet software I've tried.
Speaking of, anyone know of any alternative that handles named tables the same as Excel? Built-in filtering/sorting and formulas that can address the table itself instead of a cell range?? Please?
SQL?
Seriously. If you are talking about querying tables, Excel is the wrong tool to use. You need to be looking at SQL.
I’ve been hosting grist for a while and it is quite nice. Wasn’t able to move all the stuff from classic spreadsheets though
I’ll check that out, thanks!
Then the Factorio dev blog comes in and they spend months optimizing the tick of one broken gear on a conveyor belt to slightly improve efficiency.
Tbf, there are saves where that efficiency increase means a lot.
It’s the only game I have that will actually recover from when it hangs and freezes and then go back to working fine.
Absolutely. Every time I play a game from before 2016 or so it runs butter smooth and looks even better than modern games in many cases. I don’t know what we’re doing nowadays.
Comparing a 20 year old game with FMV sequences at 1080p is certainly a take 🤣.
The same? Try worse. Most devices have seen input latency going up. Most applications have a higher latency post input as well.
Switching from an old system with old UI to a new system sometimes feels like molasses.
Except for KDE. At least compared to cinnamon, I find KDE much more responsive.
AI-generated code will make things worse. The models are good at providing solutions that generally give the correct output, but the code they generate tends to be shit by final-product standards.
Though perhaps performance will improve since at least the AI isn’t limited by only knowing JavaScript.
I still have no idea what it is, but over time my computer, which has KDE on it, gets super slow and I HAVE to restart. Even if I close all applications it’s still slow.
It's one reason I've been considering upgrading from 6 cores and 32 GB to 16 and 64.
Have you tried disabling the file indexing service? I think it’s called Baloo?
Usually it doesn’t have too much overhead, but in combination with certain workflows it could be a bottleneck.
An upgrade isn't likely to help. If KDE is struggling on 6@32, you have something going on, and 16@64 is only going to make it last twice as long before choking.
Wait till it's slow, then:
Check your RAM/CPU in top and the disk in iotop; hammering the disk/CPU (or a bad disk/SSD) can make KDE feel slow.
`plasmashell --replace` # this just dumps and reloads plasmashell's widgets/panels
See if you got a lot of RAM/CPU back or it's running well again; if so, it might be a bad widget or panel.
If it's still slow:
`kwin_x11 --replace`
or
`kwin_wayland --replace &`
This dumps everything and refreshes the graphics driver/compositor/window manager.
If that makes it better, you're likely looking at a graphics driver issue.
I’ve seen some stuff where going to sleep and coming out degrades perf
I work in support for a SaaS product and every single click on the platform takes a noticeable amount of time. I don't understand why anyone is paying any amount of money for this product. I have the FOSS equivalent of our software in a test VM and it's far more responsive.
I want to avoid building react native apps.
Websites are probably a better example, as the complexity and bloat have increased faster than the tech.
deleted by creator
I love it
Well yeah, why would I learn html when I can learn React?!?
(/s but I actually did learn React before I had a grasp of semantic HTML because my company needed React devs and only paid for React-specific education)
Thought leaders spent the last couple of decades propagandizing that features-per-week is the only metric to optimize, and that if your software has any bit of efficiency or quality in it, that's a clear indicator of a lost opportunity to sacrifice it on the altar of code churning.
The result is not “amazing”. I’d be more amazed had it turned out differently.
Fucking “features”. Can’t software just be finished? I bought App. App does exactly what I need it to do. Leave. It. Alone.
No, never! Tech corps (both devs and app stores) brainwashed people into thinking “no updates = bad”.
Recently, I have seen people complain about lack of updates for: OS for a handheld emulation device (not the emulator, the OS, which does not have any glaring issues), and Gemini protocol browser (gemini protocol is simple and has not changed since 2019 or so).
Maybe these people don’t use the calculator app because arithmetic was not updated in a few thousand years.
A big part of this issue is mobile OS APIs. You can’t just finish an android app and be done. It gets bit rot so fast. You get maybe 1-2 years with no updates before “this app was built for an older version of android” then “this app is not compatible with your device”.
arithmetic was not updated in a few thousand years.
Oh boy, don’t let a mathematician hear this.
“More AI features”? Of course we can implement more AI features for you.
It’s kind of funny how eagerly we programmers criticize “premature optimization”, when often optimization is not premature at all but truly necessary. A related problem is that programmers often have top-of-the-line gear, so code that works acceptably well on their equipment is hideously slow when running on normal people’s machines. When I was managing my team, I would encourage people to develop on out-of-date devices (or at least test their code out on them once in a while).
It’s kind of funny how eagerly we programmers criticize “premature optimization”, when often optimization is not premature at all but truly necessary.
I will forever be salty about that one time I was accused of premature optimization for pushing to optimize code that was allocating memory faster than the GC could free it, which was causing one of the production servers to keep getting OOM crashes.
If urgent emails from one of the big clients, putting the entire company into emergency mode during a holiday, are still considered "premature", then no optimization is ever going to be mature.
Premature optimisation often makes things slower rather than faster. E.g. if something's written to have the theoretically optimal Big O complexity class, that might only break even around a million elements, and be significantly slower for a hundred elements where everything fits in L1 and the simplest implementation possible is fine. If you don't know the kind of situations the implementation will be used in yet, you can't know whether the optimisation is really an optimisation. If it's only used a few times on a few elements, then it doesn't matter either way, but if it's used loads but only ever on a small dataset, it can make things much worse.
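To make the break-even point concrete, here's a small micro-benchmark sketch (my own toy example, exact numbers will vary by machine): a dumb linear scan against a theoretically better binary search.

```python
# Linear O(n) scan vs. binary O(log n) search: the "worse" algorithm can win
# on tiny inputs because its constant factors are smaller.
import bisect
import timeit

def linear_contains(xs, x):
    return x in xs                      # simple scan, tiny per-call overhead

def binary_contains(xs, x):             # asymptotically better, more overhead
    i = bisect.bisect_left(xs, x)
    return i < len(xs) and xs[i] == x

for n in (10, 100_000):
    xs = list(range(n))
    missing = n                         # worst case: element not present
    t_lin = timeit.timeit(lambda: linear_contains(xs, missing), number=200)
    t_bin = timeit.timeit(lambda: binary_contains(xs, missing), number=200)
    print(f"n={n:>7}: linear {t_lin:.4f}s  binary {t_bin:.4f}s")
# On the tiny list the linear scan is as fast or faster; on the big one it
# loses by orders of magnitude.
```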
Also, it’s common that the things that end up being slow in software are things the developer didn’t expect to be slow (otherwise they’d have been careful to avoid them). Premature optimisation will only ever affect the things a developer expects to be slow.
Optomisation often has a cost, weather it’s code complexity, maintenance or even just salary. So it has to be worth it, and there are many areas where it isn’t enough unfortunately.
And that lazy mentality just passes the cost to the consumer.
How is that mindset lazy? Unhappy customers also have a cost! At my last job the customer just always bought hardware specifically for the software as a matter of process, partly because the price of the hardware compared to the price of the software was negligible. You literally couldn’t make a customer care.
How is that mindset lazy?
Are you really asking how it’s lazy to pass unoptimized code to a customer and make their hardware do all the work for you because optimization was too costly?? Like I get that you are in an Enterprise space, but this mentality is very prevalent and is why computers from today don’t feel that much faster software wise than they did 10 years ago. The faster hardware gets, the lazier devs can be because why optimize when they’ve got all those cycles and RAM available?
And this isn't a dig at you; that's software development in general, and I don't see it getting any better.
It’s not just software development, it’s everywhere. Devices are cheap, people are expensive. So it’s not lazy, he’s being asked to put his expensive time into efforts the customer actually wants to pay for. If having him optimize the code further costs way more than buying a better computer, it doesn’t make sense economically for him to waste his time on that.
Is that yet another example of how the economy has strange incentives? For sure, but that doesn’t make him lazy.
I never called them lazy, I stated that the mentality is lazy, which it is. Whether or not that laziness is profit driven, it still comes down to not wanting to put forth the effort to make a product that runs better.
Systemic laziness as profit generation is still laziness. We’re just excusing it with cost and shit, and if everyone is lazy, then no one is.
If cost is a justification for this kind of laziness, it also justifies slop code development. After all, it’s cheaper that way, right?
Your spelling is terrible
Bro just denied bro’s lemmy comment pull request
Oops, forgot the AI step
why do anime girls have to be right all the time?
To prevent fictionalist comments in replies.
I hate that our expectations have been lowered.
2016: “oh, that app crashed?? Pick a different one!”
2026: “oh, that app crashed again? They all crash, just start it again and cross your toes.”
I’m starting to develop a conspiracy theory that MS is trying to make the desktop experience so terrible that everyone switches to mobile devices, such that they can be more easily spied on.
That would be incredibly ironic given that they completely fucking gave up on mobile devices when the iPhone came out.
Windows Phone was around from 2010 into the mid-2010s, years after the iPhone's release. But it was never hyped enough: companies did not care to develop apps for it, and customers didn't want a smartphone without X, Y, Z apps (the same argument I see now about mobile Linux or even custom ROMs). The phones had a nice and fast UI though, and some had very good cameras.
Windows Phone was great. I’d done Windows Mobile since 2005 and it was nice to be able to continue developing with C#/.NET and Visual Studio (back when it was still good) in a more modern OS. One thing that really spoiled me permanently was being able to compile, build and deploy the app I was working on to my test device effectively instantaneously – like, by the time I’d moved my hand over to the device, the app was already up and running. Then I switched to iOS where the same process could take minutes, also Blackberry where it might take half an hour or never happen at all.
Funny thing: RIM was going around circa 2010/2011 offering companies cash bounties of $10K to $20K to develop apps for Blackberry, since they were dying a rapid death but were still flush with cash. Nobody that I know of took them up on the offers. I tried to get my company to make a Windows Phone version of our software but I was laughed at (and deservedly so).
What Intel giveth, Microsoft taketh away.
I feel like this is Windows specific. Linux is rapid on PCs and my MacBook is absurdly quick.
PC games are software.
Unfortunately many PC games are also like this: astoundingly poorly optimized, and they just assume everyone has a $750 GPU.
Proton can only do so much.
… and Metal basically can’t do that that much.
Look at Metal Gear Solid 5 or TitanFall 2, and tell me realtime video game graphics have dramatically increased in visual fidelity in the last decade.
They haven’t really.
They shifted to a poorly optimized, more expensive paradigm for literally everyone involved; publisher, developer, player.
Everything relating to realtime raytracing and temporal antialiasing is essentially a scam, in the vast majority of actual implementations of it.
I guess the counter argument for games is load times have dramatically improved, though that’s less about software development than hardware improvements.
If we put consoles in the same bracket as computers, the literally instant quick-resume feature on an Xbox (for example) feels like sci-fi.
Yeah, you kinda defeated your own argument there, but you do seem to recognize that.
You can instant resume on a Steam Deck, basically.
You can alt tab on a PC, at least with a stable game that is well made and not memory leaking.
Yeah, better RAM / SSDs does mean lower loading times, higher streaming speeds/bus bandwidths, but literally, at what cost?
You could just actually take the time to optimize things, find non insanely computationally expensive ways to do things that are more clever, instead of just saying throw more/faster ram at it.
RAM and SSD costs per gig are going up now.
Moore’s Law is not only dead, it has inverted.
Constantly cheaper memory going forward turned out not to be the best assumption to make.
With respect to OP’s post, they say “you can’t even tell the computers we are on are 15x faster…”, and I reckon that quick resume etc, is an example of “you absolutely can tell that we now have extremely fast hardware” when compared to what came before, irrespective of the quality of the software.
I’m not disagreeing with you, I’m just picking apart the blanket “computers feel the same as they did a decade ago”. Some computers might feel the same, and a lot of software might be unoptimised, but there’s a good selection of examples where that’s not the case.
My 12? 13? Year old Dell laptop does just fine running Ubuntu. It’ll probably be fine for my needs for another 3 or 4 years at least.
App launch time can be annoyingly slow on mac if you’re not offline or blocking the server it phones home to
it can be the difference between one bounce or seven bounces of the icon on my end
What apps out of interest? I’m a new Mac owner, so limited experience, but everything seems insanely quick so far. Even something like Xcode is a one-bounce on this M4 Air.
All of them. The device has to phone home to apple to ask permission to run them.
To test: close the app (really quit it, make sure the dot under the icon isn't glowing), then open it and measure the time.
Then close the app, disconnect from the internet, and launch it again.
The speed difference depends on how overloaded Apple's servers are.
I’ve not come across this but I’ll check it out. Is that App Store apps only?
I think probably 90% of the apps I’ve installed have been through the homebrew package manager which likely means they don’t do any phoning home, but I’ll check out the pre-installed stuff and see if I can replicate.
Not sure about Homebrew, honestly, though I was under the impression it applies to every executable.
You do really feel this when you’re using old hardware.
I have an iPad that’s maybe a decade old at this point. I’m using it for the exact same things I was a decade ago, except that I can barely use the web browser. I don’t know if it’s the browser or the pages or both, but most web sites are unbearably slow, and some simply don’t work, javascript hangs and some elements simply never load. The device is too old to get OS updates, which means I can’t update some of the apps. But, that’s a good thing because those old apps are still very responsive. The apps I can update are getting slower and slower all the time.
It’s the pages. It’s all the JavaScript. And especially the HTML5 stuff. The amount of code that is executed in a webpage these days is staggering. And JS isn’t exactly a computationally modest language.
Of the 200 kB loaded on a typical Wikipedia page, about 85 kB of it is JS and CSS.
Another 45 kB is for a single SVG, which in complex cases is a computationally nontrivial image format.