cross-posted from: https://lemmy.world/post/31184706
C is one of the top languages in terms of speed, memory use and energy efficiency
https://www.threads.com/@engineerscodex/post/C9_R-uhvGbv?hl=en
True but it’s also a cock to write in
What if we make a new language that extends it and makes it fun to write? What if we call it c+=1?
Ah, this ancient nonsense. TypeScript and JavaScript get different results!
It’s all based on
https://en.wikipedia.org/wiki/The_Computer_Language_Benchmarks_Game
Microbenchmarks which are heavily gamed. Though in fairness the overall results are fairly reasonable.
Still, I don’t think this “energy efficiency” result is worth talking about. Faster languages are more energy efficient. Who knew?
Edit: this also has some hilarious visualisation WTFs: using dendrograms for performance figures (figures 4-6)! Why on earth do figures 7-12 include line graphs?
Microbenchmarks which are heavily gamed
Which benchmarks aren’t?
Private or obscure ones I guess.
Real-world (macro) benchmarks are at least harder to game, e.g. how long does it take to launch Chrome and open Gmail? That’s actually a useful task, so if you speed it up, great!
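To make that concrete, here’s a rough sketch of what such a macro benchmark could look like (my own illustration, assuming Puppeteer is installed; Gmail needs a login, so a public page stands in for it):

```ts
// Rough sketch of a macro benchmark: launch a real Chrome and time a full page load.
// Assumes `npm install puppeteer`; the target URL is a stand-in, not Gmail itself.
import puppeteer from "puppeteer";

async function timeColdLoad(url: string): Promise<number> {
  const start = performance.now();
  const browser = await puppeteer.launch();             // start a bundled Chrome
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });  // wait until the page settles
  const elapsed = performance.now() - start;
  await browser.close();
  return elapsed;
}

timeColdLoad("https://example.com").then((ms) =>
  console.log(`cold launch + load: ${ms.toFixed(0)} ms`)
);
```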
Also, these benchmarks are particularly easy to game because it’s the actual benchmark itself that gets gamed (i.e. the code for each language), not the thing you are trying to measure with the benchmark (the compilers). Usually the benchmark is fixed and it’s the targets that contort themselves to it, which is at least a little harder.
For example, some of the benchmarks for language X literally just call into C libraries to do the work.
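To illustrate the pattern (this is a made-up TypeScript/Node example, not an actual entry from the suite), the “JS” side of such a benchmark can spend almost all of its time in native code:

```ts
// Hypothetical benchmark body: it looks like JavaScript/TypeScript, but nearly all
// of the time is spent inside Node's native crypto binding (OpenSSL, i.e. C code).
import { createHash } from "node:crypto";

function hashChain(rounds: number): string {
  let digest = "seed";
  for (let i = 0; i < rounds; i++) {
    // The SHA-256 work happens in native code; the host language only loops.
    digest = createHash("sha256").update(digest).digest("hex");
  }
  return digest;
}

console.log(hashChain(1_000_000));
```

A result like that says more about OpenSSL than about the language hosting the loop.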
Private or obscure ones I guess.
Private and obscure benchmarks are very often gamed by the benchmarkers. It’s very difficult to design a fair benchmark (e.g. Chrome can be optimized to load Gmail for obvious reasons; maybe we should choose a fairer website when comparing browsers? But which? How can we know that neither browser has optimizations specific to page X?). Obscure benchmarks are useless because we don’t know if they measure the same thing. Private benchmarks are definitely fun, but only useful to the author.
If a benchmark is well established you can be sure everyone is trying to game it.
TypeScript and JavaScript get different results!
It does make sense if you skim through the research paper (page 11). They aren’t using performance.now() or whatever the state-of-the-art in JS currently is. Their measurements include invocation of the interpreter. And parsing TS involves bigger overhead than parsing JS. I assume (didn’t read the whole paper, honestly DGAF) they don’t do that with compiled languages, because there’s no way the gap between C and Rust or C++ would be that small if compilation were included.
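To make the distinction concrete, here’s a minimal sketch (my own, not the paper’s harness) of what in-process timing with performance.now() measures, versus timing the whole interpreter invocation; the workload is just a placeholder:

```ts
// In-process timing: only the workload is measured, not interpreter startup
// or any TS-to-JS compilation. The naive Fibonacci is a placeholder workload.
function fib(n: number): number {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

const start = performance.now();
const result = fib(32);
const elapsed = performance.now() - start;
console.log(`workload only: ${elapsed.toFixed(1)} ms (result ${result})`);

// By contrast, timing the whole invocation from the shell, e.g.
//   time node fib.js          (or: time npx ts-node fib.ts)
// also counts process startup, module loading and, in the ts-node case,
// type-checking and transpilation.
```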
Their measurements include invocation of the interpreter. And parsing TS involves bigger overhead than parsing JS.
But TS is compiled to JS, so it’s the same interpreter in both cases. If they’re including the time for tsc in their benchmark then that’s an even bigger WTF.
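For what it’s worth, a minimal illustration (mine, not from the paper) of why the runtime cost should be identical:

```ts
// sum.ts -- toy example. With a modern --target, `tsc sum.ts` emits essentially
// the same code with the type annotations erased:
//
//   function sum(xs) {
//       return xs.reduce((acc, x) => acc + x, 0);
//   }
//
// Both files end up running on the same JS engine, so the only place a TS
// "penalty" can come from is the tsc step itself (or on-the-fly transpilation
// via something like ts-node).
function sum(xs: number[]): number {
  return xs.reduce((acc, x) => acc + x, 0);
}

console.log(sum([1, 2, 3]));
```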
and in most cases that’s not good enough to justify choosing C
I just learned about Zig, an effort to make a better C-compatible language. It’s been really good so far; I definitely recommend checking it out! The community is still in its early days, but the core language is pretty developed and is a breath of fresh air compared to C.
Your link points to Facebook, which links to https://haslab.github.io/SAFER/scp21.pdf
Written in 2021 and not including Julia is weird imo. I’m not saying it’s faster, but one should include it in a comparison.
And they used bit.ly on page 5 for references.
Haven’t read it yet, but it already seems very non-serious to me.
I also didn’t read it. There are lots of good comparisons already.