I know JavaScript is a very special boi but c’mon, you’re embarrassing me in front of the wizards.
If only we had built the web on top of a language that did not have such insane handling of its numbers in the first place…
Could you recommend a language with a sane handling of 64b-NaN-to-32b-int conversion?
There are tons of languages that let you represent integers directly as integers rather than having everything be a float, so you don’t have to worry about this problem at all.
So by insanity you mean having just one number type?
Yes. Types are good. Numeric operations have specific hardware behavior that depends on whether you’re using floating-point or not. Having exclusively floating-point semantics is wildly wrong for a programming language.
Types are good
Opinions vary on this topic, apparently. There’s a proliferation of untyped languages.
There’s a proliferation of dynamically and/or softly typed languages. There are very few, if any, truly untyped languages. (POSIX shells come close, though internally they have at least two types, strings and string-arrays, even if the array type isn’t directly usable without non-POSIX features.)
Forth is arguably an example of a truly untyped language.
TCL & CMake are fully stringly typed. Both are pretty terrible languages (though TCL can at least claim to be a clever hack that was taken far too seriously).
Yeah. I think the smallest number of number types you can reasonably have is two: f64 and arbitrary-precision integers. One of the few good decisions Python made.
Well, I think I’m happy to never have to choose a number type in JS. I also think that the real insanity is how C and Intel handle NaN conversions.
What does it mean to access the element at index π of an array?
What does it mean to access the 0th element of an array?
Try interacting with anything that uses u64 and you’ll be a lot less happy!
Anyway JavaScript does have BigInt so technically you are choosing.
the real insanity is how C and Intel handle NaN conversions.
It’s not actually quite as bad as the article says. While it’s UB for C, and it can return garbage, the actual x86 conversion instruction will never return garbage. Unfortunately the value it returns is 0x8000… whereas JS apparently wants 0. And it sets a floating-point exception flag, so you still need extra instructions to handle it. Probably not many though.
Also in practice on a modern JS engine it won’t actually need to do this operation very often anyway.
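For illustration, here’s a tiny C sketch of that gap. The cast itself is UB per the C standard, so what you actually observe depends on the compiler and flags; the comments describe the typical x86-64 outcome.

    #include <inttypes.h>
    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* volatile so the compiler actually emits the conversion
         * instead of constant-folding the UB away. */
        volatile double d = NAN;

        /* Undefined behaviour as far as the C standard is concerned.
         * On x86-64, compilers typically emit CVTTSD2SI, which for NaN
         * produces the "integer indefinite" value INT32_MIN (0x80000000)
         * and raises the invalid-operation flag. That is deterministic,
         * but it is not the 0 that JavaScript semantics require, so a JS
         * engine needs a few extra instructions to patch the result up. */
        int32_t n = (int32_t)d;

        printf("(int32_t)NaN = %" PRId32 " (0x%08" PRIx32 ")\n",
               n, (uint32_t)n);
        return 0;
    }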
Try interacting with anything that uses u64 and you’ll be a lot less happy!
I’m sorry you had to experience this, but in all my years of development I haven’t.
…not actually quite as bad… While it’s UB for C, and it can return garbage. … the value it returns is 0x8000
0x8000 is garbage. Insane.
The whole reason an entire instruction was added to ARM to facilitate conversion to integers is that people need integer semantics from their numbers, so the language has to support this efficiently. Thus, in practice there are already two number types; it’s just that they have been merged together in this incredibly messy way, so you get the worst of both worlds.
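For concreteness, the integer semantics in question are ECMAScript’s ToInt32 coercion (what x | 0 performs), which, as far as I know, is what ARMv8.3’s FJCVTZS instruction implements in hardware. Here’s a rough C sketch of it, not lifted from any particular engine:

    #include <math.h>
    #include <stdint.h>

    /* Rough sketch of ECMAScript's ToInt32 coercion: NaN and the
     * infinities become 0, everything else is truncated toward zero
     * and wrapped modulo 2^32 into the int32_t range. */
    static int32_t to_int32(double x)
    {
        if (isnan(x) || isinf(x))
            return 0;                     /* NaN, +Inf, -Inf -> 0 */

        double t = trunc(x);              /* truncate toward zero */
        double m = fmod(t, 4294967296.0); /* |m| < 2^32, same sign as t */
        if (m < 0)
            m += 4294967296.0;            /* bring into [0, 2^32) */

        int64_t v = (int64_t)m;           /* exact: m is an integer < 2^53 */
        if (v >= 2147483648LL)
            v -= 4294967296LL;            /* map [2^31, 2^32) to [-2^31, 0) */
        return (int32_t)v;
    }

So to_int32(NAN) is 0 and to_int32(4294967299.0) is 3, the same answers JS gives for NaN | 0 and (2**32 + 3) | 0.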
JS just implicitly does what you, the typed-language developer, would have to do explicitly
JS just implicitly does what
…it wants, and sometimes it’s far from what you want or could even expect
I wonder how it is with NaN etc. in other languages.
I’d definitely read a blog post about this, so if you decide to look into it you should write something up and post it. Maybe it’s standards based?
I don’t think I’ll dive deeper than quoting Wikipedia:
Most fixed-size integer formats cannot explicitly indicate invalid data. In such a case, when converting NaN to an integer type, the IEEE 754 standard requires that the invalid-operation exception be signaled.
For example in Java, such operations throw instances of java.lang.ArithmeticException.
In C, they lead to undefined behavior, but if annex F is supported, the operation yields an “invalid” floating-point exception (as required by the IEEE standard) and an unspecified value.
In the R language, the minimal signed value (i.e. 0x80000000) of integers is reserved for NA (Not available). Conversions from NaN (or double NA) to integers then yield a NA integer.
Perl’s Math::BigInt package uses “NaN” for the result of strings that do not represent valid integers.
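The C/Annex F item in that list is easy to poke at with <fenv.h>. A small sketch, complementing the snippet further up; what the “unspecified value” ends up being is still left to the implementation:

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON   /* we read FP exception flags below */

    int main(void)
    {
        feclearexcept(FE_ALL_EXCEPT);

        volatile double d = NAN;
        volatile int n = (int)d;  /* UB in general; with Annex F it raises
                                     FE_INVALID and yields an unspecified
                                     value */

        printf("value: %d, FE_INVALID raised: %s\n",
               n, fetestexcept(FE_INVALID) ? "yes" : "no");
        return 0;
    }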

