

Reliance wouldn’t be my primary concern, but rather the privacy implication. It seems like Google has to step up its surveillance game /s. Fun project though


So a Mastodon ripoff, but with all instances hosted by a single entity (effectively centralized), ensuring they all reside within European jurisdiction (allowing for full control over them). I don’t see how they genuinely believe humans can do the photo validation when competing at the scale of X, especially when you run all the instances yourself. Perhaps they could recruit volunteers to socialize the losses while the platform privatizes the profits. Nothing but a privacy-centric approach, however: said the privacy expert…
Zeiter emphasized that systemic disinformation is eroding public trust and weakening democratic decision-making … W will be legally the subsidiary of “We Don’t Have Time,” a media platform for climate action … A group of 54 members of the European Parliament [primarily Greens/EFA, Renew, The Left] called for European alternatives
If that doesn’t sound like a recipe for swinging the pendulum to the other extreme (once more), I don’t know what does… Because can you imagine a modern social media platform not being a political echo chamber: not promoting extremism through filter bubbles, but instead allowing for de-escalation through counter-argumentation. One would almost start to think it’s all intentional, as a deeply divided population will never stand united against their common oppressor.


Great, more hoops to jump thr… I mean… an “advanced flow” for gaining the privilege of installing apps of your choosing


THIS is how you do it. Looking at you, Brave: requiring me to (re)type my queries in the URL bar (appending ‘&summary=0’ to them) so I’m not forced to store a persistent cookie just to keep the damn setting off…
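For anyone wanting to skip the retyping: a throwaway sketch of the workaround, assuming Brave Search’s usual https://search.brave.com/search?q= endpoint, that bakes the ‘&summary=0’ flag into the query URL (e.g. as a custom search-engine template) instead of relying on a cookie:

```python
# Rough sketch (assumes Brave Search's https://search.brave.com/search?q= endpoint):
# build a query URL with '&summary=0' appended, so no persistent cookie is needed
# just to keep the AI summaries off.
from urllib.parse import quote_plus

def brave_query_url(query: str) -> str:
    """Return a Brave Search URL with AI summaries disabled via query parameter."""
    return f"https://search.brave.com/search?q={quote_plus(query)}&summary=0"

if __name__ == "__main__":
    print(brave_query_url("how to keep a setting off without cookies"))
```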


No worries! :)


The main paradox here seems to be that the 70% boilerplate head-start is perceived as faster, while the remaining 30% of fixing the AI-introduced mess negates the marketed time savings, or even leads to outright counterproductivity. At least in more demanding environments not cherry-picked by the industry shoveling the tools.


I understand you’ve read the comment as a single thing, mainly because it is. However, the BLE part is an additional piece of critique that isn’t directly related to this specific exploit; neither is the tangent on the headphone jack “substitution”. It is indeed this fast pairing feature that’s the subject of the discussed exploit, so you understood that correctly (or I misunderstood it too…).
I’m, however, of the opinion that BLE is a major attack vector by design. These are IoT devices that, especially when “find my device” is enabled (which in many cases isn’t even optional: “turned off” iPhones, for example), periodically announce themselves to the surrounding mesh, allowing for the precise location of these devices and therefore also the persons carrying them. If bad actors gain access to, for example, Google’s Sensorvault (legally, in the case of state actors), or find ways of building such databases themselves, then I’d argue you’re in serious waters. Is it a convenient feature to help one relocate lost devices? Yes. But this nice-to-have also comes with a serious downside, which I believe doesn’t come near to justifying the means. Rob Braxman has a decent video about the subject if you’re interested.
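To illustrate how passively observable these announcements are, here’s a minimal sketch (assuming the third-party bleak library) that merely listens for nearby BLE advertisements; nothing is paired with or connected to, the devices volunteer this on their own:

```python
# Minimal sketch (assumes the 'bleak' BLE library is installed): passively collect
# BLE advertisements for a few seconds and print what nearby devices broadcast.
import asyncio
from bleak import BleakScanner

async def main() -> None:
    # discover(return_adv=True) yields {address: (BLEDevice, AdvertisementData)}
    found = await BleakScanner.discover(timeout=5.0, return_adv=True)
    for address, (device, adv) in found.items():
        print(f"{address}  RSSI={adv.rssi:>4} dBm  name={device.name}")

if __name__ == "__main__":
    asyncio.run(main())
```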
It’s not even a case of kids not wanting to switch; most devices don’t even come with 3.5mm jack connectors anymore…


If the devices weren’t previously linked to a Google account … then a hacker could … also link it to their Google account.
This already severely limits the pool of potential victims, but it’s indeed a more practical exploit. It’s almost as if this BLE tracking is a feature rather than an exploit. And if you want to be notified of a device following you around, you have to perpetually keep BLE enabled on your smartphone. But of course, headphone jacks are a thing of the past, and wireless is clearly the future. :)


But you need to be in close proximity (~15m max) to stalk a victim? You might as well just follow them around physically then. Perhaps eavesdropping on the victim’s conversation, or pinpointing their position when they’re in a private location, might be a possibility. But ear-raping them would, of course, constitute the most significant danger of all. Also: WhisperPair, not WhisPair?


AI reviews don’t replace maintainer code review, nor do they relieve maintainers from their due diligence.
I can’t help but always be a bit skeptical when reading something like this. To me it’s akin to having to do calculations manually while there’s a calculator right beside you. For now the technology might not yet be considered sufficiently trustworthy, but what if the clanker starts spitting out conclusions that equal a maintainer’s, like, 99% of the time? Wouldn’t (partial) automation of the process become extremely tempting, especially when the stack of pull requests starts piling up (because of vibecoding)?
Such a policy would be near-impossible to enforce anyway. In fact, we’d rather have them transparently disclose the use of AI than hide it and submit the code against our terms. According to our policy, any significant use of AI in a pull request must be disclosed and labelled.
And how exactly do you enforce that? It seems like you’re just shifting the problem.
Certain more esoteric concerns about AI code being somehow inherently inferior to “real code” are not based in reality.
I mean, there are hallucination concerns and there are licensing conflicts. Sure, people can also copy code from other projects with incompatible licenses, but someone without programming experience is less likely to do so than when vibecoding with a tool directly trained on such material.
Malicious and deceptive LLMs are absolutely conceivable, but that would bring us back to the saboteur.
If Microsoft itself were the saboteur, you’d be fucked. They know the maintainers, because GitHub is Microsoft property, and so is the proprietary AI model directly implemented into the toolchain. A malicious version of Copilot could, hypothetically, be supplied to maintainers, specifically targeting this exploit. Microsoft is NOT your friend; it works closely with government organizations, which are increasingly interested in compromising consumer privacy.
For now, I do believe this to be a sane approach to AI usage, and I believe developers should have the freedom to choose their preferred environment. But the active usage of such tools does warrant a (healthy) dose of critique, especially with regard to privacy-oriented pieces of software, a field where AI has generally been rather invasive.


Of course, make an anti-feature part of an integral component, one which coincidentally also happens to handle personal files…


Ah, the good ol’ revolving door politics


More eyes in the sky. It seems like even pigeons aren’t safe from being replaced by technology…


Honestly, the tech seems quite impressive. But I wouldn’t touch Amazon-backed smart glasses, which “could also provide health insights, such as detecting dry eyes or monitoring posture”, with a ten-foot pole; especially when there are also entirely passive bifocals and progressives.


India proposes requiring smartphone makers to share source code with the government and make several software changes as part of a raft of security measures.
How does that sound promising at all? Especially when initiated by a government that previously attempted to mandate government spyware be installed on all consumer smartphones. The following excerpts are from India’s proposed phone security rules that are worrying tech firms:
Devices must store security audit logs, including app installations and login attempts, for 12 months.
Phones must periodically scan for malware and identify potentially harmful applications.
Defined as potentially harmful by whom? Right.
Phone makers must notify a government organisation before releasing any major updates or security patches.
We cannot approve of the security patch just yet, as we must first extensively exploit the vulnerability…
Devices must detect if phones have been rooted or “jailbroken”, where users bypass built-in security restrictions, and display continuous warning banners to recommend corrective measures.
Phones must permanently block installation of older software versions, even if officially signed by the manufacturer, to prevent security downgrades.


It becomes more apparent to me every day that we might be headed towards a society dynamically managed by digital systems; a “smart society”, or rather a Society-as-a-Service. This seems to be the logical conclusion if you continue the line of “smart buildings” being part of “smart cities”. With the use of IoT sensors and unified digital platforms, data is continuously gathered on the population, analyzed, and its extractions stored indefinitely (in pseudonymized form) by the many data centers currently being constructed. This data is then used to dynamically adapt the system, to replace the “inefficient” democratic process and public services as a whole. Of course the open-source (too optimistic?) model used is free of any bias; however, nobody has access to the resources required to verify the claim. But given that big tech has historically never shown any signs of tyranny, a utopian outcome can safely be assumed… Or I might simply be a nut, with a brain making nonsensical connections that have no basis in reality.


Interesting concept. If Microsoft were to implement this, one would have to perform a thousand ‘heavy block’ finger push-ups whenever they dared to delete persistent bloatware; just for the data to slip out of their hands, forcing them to do it all over again.


“Draft One”? More like “First Draft”


Optimus will ultimately be better than the best human surgeon with a level of precision that is impossible — that is beyond human…
Ever heard of robot-assisted surgery?
People always talked about eliminating poverty, but actually Optimus will actually eliminate poverty.
The wealth gap wouldn’t contribute to the problem, of course; buy my solution instead to widen it some more… actually
I’ve reduced it from 9,000 heads to about 5,000 because I need less heads
I’ll be damned, a man wanting less head…
They work hard 24 by 7, you don’t have to pay ’em, and they don’t need any lunch, and they don’t have any healthcare benefits, so they’re very affordable and that really complements our workforce
Why did we ever do away with slaves, am I right?
There’ll be ups and down. This is a revolution. Some people can get their heads cut off.
Friendly reminder: you might want to switch teams when talking about cutting heads off in relation to a revolution…
And regardless, these are all really just moments in time. The last week is a moment in time, too
Tech CEOs in the shower when they see bubbles popping…


So multiple nickel-titanium alloy tubes are stretched and released within the refrigerator, causing a temperature change in the alloy; the heat it pulls from the interior is transferred to a calcium chloride fluid being pumped around the tubes, and then carried to the outdoor climate by an exterior heat exchanger. Something along those lines?
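If that reading is roughly right, a toy model of the two phases might look something like this (all numbers are illustrative placeholders, not taken from the article):

```python
# Toy model of the elastocaloric cycle as described above (placeholder numbers,
# purely illustrative): NiTi tubes heat up when stretched and drop below ambient
# when released; a calcium chloride brine loop moves the heat between the cabinet
# interior and an exterior heat exchanger.

DELTA_T = 20.0  # K, assumed adiabatic temperature swing of the NiTi tubes

def cycle(interior_c: float, exterior_c: float) -> None:
    alloy_c = exterior_c

    # Phase 1: stretch the tubes -> the alloy heats up; the brine loop dumps this
    # heat outdoors through the exterior heat exchanger.
    alloy_c += DELTA_T
    print(f"stretched: alloy at {alloy_c:.1f} °C, rejecting heat to the {exterior_c:.1f} °C outdoors")
    alloy_c = exterior_c  # back to ambient after heat rejection

    # Phase 2: release the tubes -> the alloy drops below cabinet temperature; the
    # brine loop now pulls heat out of the interior and carries it to the alloy.
    alloy_c -= DELTA_T
    print(f"released:  alloy at {alloy_c:.1f} °C, absorbing heat from the {interior_c:.1f} °C interior")

cycle(interior_c=4.0, exterior_c=22.0)
```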