3bitswalkintoabarandoneflips
So, is the account actually read-only?
Winamp Collaborative License
A “source-available” license that says you can’t fork the project. That makes it illegal to click the fork button on GitHub (which is a violation of the GitHub ToS) and also makes it impossible to create any pull requests without push access to the original repo. Source
Oh yeah, I remember that whole fiasco from a few months back.
let us resurrect the ancient art of Bittorrent
haha
Very interesting. A trove of experience and practical knowledge.
They were able to anticipate most of the loss scenarios in advance because they saw that logical arguments were not prevailing; when that happens, “there’s only one explanation and that’s an alternative motive”. His “number one recommendation” is to ensure, even before the project gets started, that it has the right champion and backing inside the agency or organization; that is the real determiner for whether a project will succeed or fail.
Not very surprising, but still tragic and sad.
I love Nushell in Windows Terminal with Starship; it feels like an evolutionary leap for shells. Structured data, native format transformations, strong querying capabilities, expressive state information.
I was surprised that the linked article went an entirely different direction. It seems mainly driven by mouse interactions, but I think it has interesting suggestions or ideas even if you disregard mouse control or make it optional.
I don’t know. Can you?
- Fixed a compilation failure caused by the inclusion of the unused and obsolete header <sys/file.h>. (Reported by Michael Mikonos).
- Ed now reads the initial window size for the z command from the environment variable LINES. (Suggested by Artyom Bologov).
Man this was hard to find for some simple release notes when you’re already looking at the announcement of it…
ROCm is an implementation/superset of OpenCL.
ROCm ships its installable client driver (ICD) loader bundled with an OpenCL implementation. As of January 2022, ROCm 4.5.2 ships OpenCL 2.2.
Shaders are computational visual [post-]processing - think pixel-position-based adjustments to rendering.
OpenCL and CUDA are computation frameworks that let you use the GPU for processing other than rendering, i.e. for more general-purpose computing.
nVidia has always focused on proprietary technology: introduce a technology, then try to turn it into a closed market where people are forced to buy and use nVidia for it. AMD has always supported and developed open standards as a counterplay to that.
Arguably, the openness is in that the EU OS can switch from one to another at some point if it becomes necessary.
Supporting multiple alternatives within the same platform and OS is costly. Not only the integration, but also user training and troubleshooting, specifically about the many, big and small subtle differences. Focusing on one, for now anyway, makes sense.
I would separate concerns. For the scraping, I would dump data as JSON onto disk. I would consider the folder structure I put it into: whether as individual files, or as one JSON document per line in bigger files for grouping. If the website has a good URL structure, the path could yield readable author and/or ID identifiers for folders or files.
Storing JSON as text is simple. Depending on the amount, storing plain text is wasteful, and simple text compression could significantly reduce storage size. For text-only stories it’s unlikely to become significant, though, and not compressing keeps the scraping process (and potentially validating the completeness of scraped data) simpler.
I would then keep this raw data separate from any prototyping around modifying or extending the data, and from presentation/interfacing.
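A minimal Python sketch of that separation (field names like author and story_id are placeholder assumptions, not from the original site): the scraper only appends raw records as gzip-compressed JSON lines grouped per author, and any later processing reads from that dump instead of mutating it.

```python
import gzip
import json
from pathlib import Path

def dump_record(root: Path, record: dict) -> Path:
    """Append one scraped record as a JSON line to a per-author .jsonl.gz file."""
    out = root / record["author"] / "stories.jsonl.gz"
    out.parent.mkdir(parents=True, exist_ok=True)
    # gzip in text-append mode: each record is one compressed JSON line
    with gzip.open(out, "at", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return out

def load_records(path: Path) -> list[dict]:
    """Read the raw dump back; processing works on this copy, never on the dump."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return [json.loads(line) for line in f]
```

Skipping the gzip layer (plain .jsonl) would make eyeballing and validating the scrape easier, per the trade-off above.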
I think I need an AI to parse these confusing graphs and images for me.
you evil AI you! /s
Unless you continuously change your IP, I don’t see how locking DNS resolution behind a signup would solve it. You only need to resolve once, and then you know the mapping of domain to IP and can use it elsewhere. That mapping doesn’t change often for hosted services.
Any wall you build up will also apply to regular users you want to reach.
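To illustrate the point: a resolve-once cache is trivial to write, so a gated resolver only costs the attacker a single lookup. A hedged sketch using only the stdlib (the cache-dict shape is my own choice):

```python
import socket

def resolve_once(hostname: str, cache: dict) -> str:
    """Resolve a hostname only on first sight; reuse the cached IP afterwards."""
    if hostname not in cache:
        # one real DNS lookup; every later call hits the cache
        cache[hostname] = socket.gethostbyname(hostname)
    return cache[hostname]
```

After the first call, no further DNS traffic happens for that name, signup wall or not.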
They’re using svn for sources :( mirrored to GitHub at least.
Damn, for a thief they’re really stomping and dragging (if that’s the right English term) their feet in the test video. Such loud and sandy footsteps.
and include expensive endpoints like git blame, every page of every git log, and every commit in your repository. They do so using random User-Agents from tens of thousands of IP addresses, each one making no more than one HTTP request, trying to blend in with user traffic.
That’s insane. They also mention crawling happening every 6 hours instead of only once. And the vast majority of traffic coming from a few AI companies.
It’s a shame. The US won’t regulate - and certainly not under the current administration. China is unlikely to.
So what can be done? Is this how the internet splits into authorized and not? Or into largely blocked areas? Maybe responses could include errors that humans could identify and ignore but LLMs would not, to poison them?
When you think about the economic and environmental cost of this, it’s insane. I knew AI was expensive to train and run. But now I have to consider where they leech from for training and live queries, too.
| sh
stands for shake head at bad practices
How am I supposed to remember those?
On word boundaries? But that would be way too predictable!