

This is not uncommon for high-profile CVEs. For example, brokenwire.fail, heartbleed.com, spectreattack.com, etc…




A solar flare is just one example of many possible causes, and you didn’t touch on any of the others, so let me explain: NASA reports on small-satellite missions show that about 40% of satellites experience at least partial mission failure within their lifetime, and studies have shown the leading cause of satellite failure is propulsion systems, responsible for about half of all failures. This is not uncommon at all.
Most altitude ranges in LEO still hold debris from decades ago; the exception is below 300 km, which is basically still in the atmosphere. Unfortunately, collisions regularly fling fragments into higher orbits, so even collisions between satellites in this low range are dangerous.
Edit: I also forgot to mention, the five-day estimate (now three days, actually) wasn’t for a close call, it was for a debris-generating event.


Collisions aren’t theoretical. Near misses are so common that there’s an entire department at NASA dedicated to detecting them and warning satellite owners to adjust course; I know because we were contacted about a possible collision involving our CubeSat. Before megaconstellations were deployed, if humanity had stopped adjusting satellite orbits there would have been a collision within a month; now there would be one within five days. It’s only a matter of time until both satellites on a collision course lack the ability to adjust (engine failure, or no propulsion/fuel/comms). In the event of a Carrington-style solar flare there’s a good chance a decent percentage of satellites would be knocked out, turning this hypothetical into reality. Further, we can currently only track objects down to about 10 cm, but NASA estimates suggest about 500,000 objects between 1 and 10 cm in size exist in LEO.


It seems to be even higher; several studies suggest it’s closer to 50%:
https://pubs.acs.org/doi/10.1021/acs.est.3c05002
Three different studies predicted emitted tire wear proportions (TWP and TRWP) of total emitted MP [microplastic] loads in the environment (both aquatic and terrestrial) for around 45%. (6,7,52) These calculations were mainly based on global, annual production data and matched the TWP proportions of around 40% in this study. However, since C-PVC was excluded here, a comparison of the percentages is not trivial.
Whenever you make a String you’re saying that you need a string that can grow or shrink, and all the extra code required to make that work. Because it can grow or shrink it can’t be stored on the (fast and efficient) stack, instead we need to ask the OS for some space on the heap, which is slow but usually has extra space around it so it can grow if needed. The OS gives back some heap space, which we can then use as a buffer to store the contents of the string.
If you just need to use the contents of a string you can accept a &str instead. A &str is really just a reference to an existing buffer (which can live in static memory, on the stack, or on the heap), so if the buffer the user passes in isn’t on the heap then we avoid that whole ‘asking the OS for heap space’ part, and if it is on the heap then we can use the existing buffer at no extra cost.
Compare this to taking a &String which is basically saying ‘this string must be on the heap in case it grows or shrinks, but because it’s an immutable reference I promise I won’t grow or shrink it’ which is a bit silly.
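To make that concrete, here’s a minimal sketch (the function names are mine, just for illustration) of why &str is the more flexible parameter type:

```rust
// Borrows string data wherever it lives: a String's heap buffer,
// or a string literal baked into the binary's static memory.
fn count_chars(s: &str) -> usize {
    s.chars().count()
}

// Takes ownership of a heap-allocated String so it can grow it.
fn shout(mut s: String) -> String {
    s.push('!');
    s
}

fn main() {
    let heap_string = String::from("hello"); // buffer lives on the heap
    let literal = "world";                   // &str, no heap allocation

    // count_chars accepts both: a &String auto-derefs to &str.
    assert_eq!(count_chars(&heap_string), 5);
    assert_eq!(count_chars(literal), 5);

    // Only an owned String can be grown.
    assert_eq!(shout(heap_string), "hello!");
}
```

If count_chars took &String instead, the literal would have to be copied into a fresh heap allocation just to call it.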


There’s nothing inherently wrong with digital ID, provided it’s implemented in a zero-knowledge-style manner. In fact, it would be much better than what we have now: Uploading your physical photo ID to every company that is legally required to ask for it.


Not really. While working at the OS level often requires ‘unsafe’ operations, a core tenet of writing Rust is building safe abstractions around unsafe ones. Rust’s ‘unsafe’ mode doesn’t disable all safety checks either: there are still many invariants the Rust compiler enforces that a C compiler won’t, even inside an ‘unsafe’ block.
And even ignoring all of that, if 10% of the code needs to be written in Rust’s ‘unsafe’ mode that means the other 90% is automatically error-checked for you, compared with 0% if you’re writing C.
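A generic sketch of that pattern (not from any particular project): a safe function whose bounds check upholds the exact invariant its ‘unsafe’ block relies on, so callers can never misuse it.

```rust
// Safe public API wrapping an unsafe operation. The emptiness check
// guarantees the unchecked access below is in bounds.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        None
    } else {
        // SAFETY: we just verified that index 0 is in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(first_byte(&[42, 7]), Some(42));
    assert_eq!(first_byte(&[]), None);
    // Note: even inside `unsafe`, the borrow checker, type checker,
    // and lifetime rules still apply; only a few extra powers
    // (raw pointer derefs, unchecked calls, etc.) are unlocked.
}
```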


Sorry dude, I’m not paying $100 USD for a controller. The last controller I bought cost me ~$30 USD (GameSir T4 Kaleid) and has everything I need, including Hall effect sticks. The Steam Controller having touchpads, a gyro, and whatever else is cool, I guess, but I don’t need or want any of that, and no amount of it is going to make me spend that much on a controller.
So yes, the Steam Controller’s price raised my eyebrows, in the same way basically all first-party controllers’ prices do. They’re all crazily overpriced imo.