Well that’s not terrifying at all.
Our names, numbers, and home addresses used to be in a book delivered to everyone’s door or found stacked in a phone booth on the street. That was normal for generations.
It’s funny how much a handful of fuckwits can change the course of society, and how we can’t have nice things.
Still are. I got a phone book delivered a week ago, I shit thee not. Granted I’m on a small island and the book is small too. But like, you can pay to have your number removed from the book. Can you have it removed from this? Not to mention all the 2FA stuff that can be tied to a phone number. Someone clones your number or ports it out (SIM swapping) and suddenly they’ve got access to a whole lot of your login stuff.
My phone book is smaller than a novel and only has yellow pages these days.
$5,000
This is like 1/10th of what a good blackhat hacker would have gotten out of it.
I always wonder what’s stopping security researchers from selling these exploits on blackhat marketplaces, taking the money, waiting a bit, then telling the original company so it ends up getting patched.
It would probably break some contractual agreements, but if you’re doing this as a career, surely you’d know how to hide your identity properly.
The chances that such an old exploit gets found at the same time by a whitehat and a blackhat are very small. It would be hard not to be suspicious.
Yes, but I was saying the blackhat marketplaces wouldn’t really have much recourse if the person selling the exploit knew how to cover their tracks, i.e. they wouldn’t have anyone to sue or go after.
I’m saying blackhat hackers can make far more money off the exploit by itself. I’ve seen far worse techniques being used to sell services for hundreds of dollars, and the people behind those are making thousands. An example is slowly brute-forcing the blocked-words list on a YouTube channel, since creators often block their own name, phone number, or address.
What you’re talking about is playing both sides, and that is just not worth doing for multiple reasons. It’s very obvious when somebody is doing that. People don’t just independently find the same exploit at the same time in years-old software.
Google, Apple, and the rest of big tech are pregnable despite their access to vast amounts of capital and labor.
I used to be a big supporter of using their “social sign-on” (or more generally, single sign-on) as a federated authentication mechanism. They have access to brilliant engineers, so I naively thought, “well, these companies are well funded and security focused. What could go wrong with having them handle a critical entry point for services?”
Well, as that position continues to age poorly: plenty of fucking things can go wrong!
- These authentication services owned by big tech are much more attractive to attack. Finding that one vulnerability in their massive attack surface is difficult but not impossible.
- If you use big tech to authenticate to services, you are now subject to the vague terms of service of big tech. Oh, you forgot to pay your Google Store bill because the card on file expired? Now your Google account is locked and you lose access to hundreds of services that have no direct relation to Google/Apple.
- Using third-party auth mechanisms like Google often complicates the relationship between service provider and consumer. Support costs increase: when an 80-year-old forgets the password or 2FA method for their Google account, they go to the service provider instead of Google to fix it. You then spend inordinate amounts of time and resources trying to resolve the issue, and those costs eventually get passed on to customers in some form or another.
Which is why my new position is federated authentication protocols: similar to how Lemmy and the fediverse work, but for authentication and authorization.
Having your own IdP won’t fix the third issue, but at least it alleviates the first and second concerns.
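For the curious, here’s a minimal sketch of what pointing an app at your own OpenID Connect IdP (instead of Google/Apple) can look like. The issuer URL, client ID, and redirect URI are made-up placeholders, and the discovery and authorization-code steps are just standard OIDC, not any particular provider’s API.

```python
# Minimal sketch of "bring your own IdP" with OpenID Connect.
# ISSUER, CLIENT_ID, and REDIRECT_URI are hypothetical placeholders for a
# self-hosted IdP (e.g. something like Keycloak or Authentik).
import json
import secrets
import urllib.parse
import urllib.request

ISSUER = "https://idp.example.org"                  # your own identity provider (hypothetical)
CLIENT_ID = "my-app"                                # client registered at that IdP (hypothetical)
REDIRECT_URI = "https://app.example.org/callback"   # hypothetical callback URL

def discover(issuer: str) -> dict:
    """Fetch the standard OIDC discovery document to find the IdP's endpoints."""
    with urllib.request.urlopen(f"{issuer}/.well-known/openid-configuration") as resp:
        return json.load(resp)

def build_login_url(config: dict) -> str:
    """Build the authorization-code-flow URL the app redirects users to for login."""
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile email",
        "state": secrets.token_urlsafe(16),  # CSRF protection, checked again on the callback
    }
    return config["authorization_endpoint"] + "?" + urllib.parse.urlencode(params)

if __name__ == "__main__":
    cfg = discover(ISSUER)
    print(build_login_url(cfg))
```

Because the app only depends on the standard discovery document, you can swap the issuer for any compliant IdP, self-hosted or not, without touching the rest of the login flow.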
God, I hate security “researchers”. If I posted an article about how to poison everyone in my neighborhood, I’d be getting a knock on the door. This kind of shit doesn’t help anyone. “Oh, but the state-funded attackers, remember Stuxnet.” Fuck off.
This disclosure was from last year and the exploit was patched before the researcher published the findings to the public.
I think the method of researching and then informing the affected companies confidentially (responsible disclosure) is a good way to do it, but companies often ignore these findings. It has to be publicized somehow to pressure them into fixing the problem.
Indeed, but then it becomes a market and incentivises more research in that area, which I don’t think is helpful for anyone. It’s like having “professional pessimist” as your job description. We could be putting that effort into building more secure software to begin with.