So by insanity you mean having just one number type?
Yes. Types are good. Numeric operations have specific hardware behavior that depends on whether you’re using floating-point or not. Having exclusively floating-point semantics is wildly wrong for a programming language.
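A concrete taste of what exclusively-float semantics costs you (plain JS, nothing hypothetical here):

```js
// Every plain JS number is an IEEE-754 double, so "integer" code
// silently inherits float rounding:
0.1 + 0.2;            // 0.30000000000000004
0.1 + 0.2 === 0.3;    // false
9007199254740992 + 1; // 9007199254740992 -- above 2^53, not every integer exists
```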
Opinions vary on this topic, apparently. There’s a proliferation of untyped languages.
There’s a proliferation of dynamically and/or softly typed languages. There are very few, if any, truly untyped languages. (POSIX shells come close, though internally they have at least two types, strings and string-arrays, even if the array type isn’t directly usable without non-POSIX features.)
TCL & CMake are fully stringly typed. Both pretty terrible languages (though TCL can at least claim to be a clever hack that was taken far too seriously).
Oof, yeah, those count. The fact that CMake was best-in-class when I wrote C++ professionally was…awful.
Forth is arguably an example of a truly untyped language.
Yeah. I think the smallest number of number types you can reasonably have is two: f64 and arbitrary-precision integers. One of the few good decisions Python made.
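For what it's worth, JS has since grown exactly this pair, Number (an f64) plus BigInt, so the two-type minimum can be sketched without leaving the language:

```js
let f = 2 ** 53;   // Number: an f64, exact integers only up to 2^53
f + 1 === f;       // true -- precision silently runs out
let i = 2n ** 53n; // BigInt: arbitrary-precision integer
i + 1n === i;      // false -- always exact, at any magnitude
```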
Well, I think I'm happy to never have to choose a number type in JS. I also think the real insanity is how C and Intel handle NaN conversions.
What does it mean to access the element at index π of an array?
What does it mean to access the 0th element of an array?
It's the element zero positions after the start of the array: the index is an offset from the start, not an ordinal. 0-based indexing is very common in both mathematics and computer science.
Well, you tried to appeal to a common logic, and I appealed to even more common logic. If you arrange 3 apples on a table in an array, and ask anyone to take the 0th apple, they will be confused.
0-based is just a convention, not a law of the universe. Using only integer-typed numbers to address array elements is likewise merely a convention of some programming languages. And note that no one is suggesting non-integer index values here, only numbers of a non-integer type.
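As for the earlier question about index π: in JS at least, the answer is concrete, because array indices go through property-key conversion:

```js
const a = ['a', 'b', 'c'];
a[0];          // 'a' -- 0-based: the index is the offset from the start
a[2.0];        // 'c' -- 2.0 is the same value as 2, so this is fine
a[Math.PI];    // undefined -- a lookup of the property "3.141592653589793"
a[1.5] = 'x';  // silently creates a property; a.length stays 3
```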
Try interacting with anything that uses u64 and you’ll be a lot less happy!
Anyway JavaScript does have BigInt so technically you are choosing.
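A sketch of the u64 pain point, assuming the value arrives as, say, a 64-bit ID:

```js
const id = 18446744073709551615n; // 2^64 - 1, exact as a BigInt
Number(id) === 2 ** 64;           // true -- rounded up, the low bits are gone
id + 1n;                          // 18446744073709551616n -- exact
1n + 1;                           // TypeError -- so you really are choosing a type
```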
It's not actually quite as bad as the article says. While it's UB in C, so the compiler can return garbage, the actual x86 conversion instruction will never return garbage. Unfortunately the value it returns is 0x8000… whereas JS apparently wants 0. And it sets a floating-point exception flag, so you still need extra instructions to handle it. Probably not many, though.
Also, in practice a modern JS engine won't actually need to do this operation very often anyway.
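For reference, the semantics JS specifies for float→int32 (what the generated code has to produce, whatever the hardware instruction hands back):

```js
NaN | 0;            // 0 -- NaN, ±0 and ±Infinity all map to 0
Infinity | 0;       // 0
(2 ** 32 + 5) | 0;  // 5 -- everything else wraps modulo 2^32
(2 ** 31) | 0;      // -2147483648 -- then reinterprets as signed
```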
I'm sorry you had to experience this, but in all my years of development I never have.
0x8000… is garbage. Insane.
It is INT_MIN. Seems like a much more sensible value than 0 IMO.
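Spelled out (assuming the 64-bit form of the conversion, since the comment above elides the digits):

```js
// x86's "integer indefinite" result is the most negative representable
// integer, i.e. INT64_MIN for the 64-bit conversion:
const indefinite = -(2n ** 63n);
indefinite === -0x8000000000000000n; // true
indefinite;                          // -9223372036854775808n
```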
Obviously, opinions vary here as well.
The whole reason an entire instruction was added to ARM to facilitate conversion to integers is that people need integer semantics from their numbers, so the language has to support this efficiently. Thus, in practice there are already two number types; they've just been merged together in this incredibly messy way, so you get the worst of both worlds.
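The merge is visible in the classic |0 idiom (the asm.js convention), effectively an integer type annotation that engines recognize; on ARMv8.3+ the double→int32 step can use the dedicated instruction presumably meant above, FJCVTZS:

```js
// |0 is the de-facto int32 annotation in JS (the asm.js idiom):
function add(x, y) {
  x = x | 0;          // coerce: x is treated as int32
  y = y | 0;          // coerce: y is treated as int32
  return (x + y) | 0; // wrap the sum back into int32
}
add(2147483647, 1);   // -2147483648 -- two's-complement wraparound
```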
JS just implicitly does what you, the typed-language developer, would have to do explicitly
…it wants. Also, sometimes that's far from what you want, or could even expect.
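A few of the implicit choices it makes, for the record:

```js
"2" * "3";            // 6 -- strings quietly become numbers
"2" + "3";            // "23" -- except when + decides to concatenate
[] + 1;               // "1" -- the array becomes "" first
parseInt(0.0000005);  // 5 -- the number becomes the string "5e-7" first
```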