• 0 Posts
  • 42 Comments
Joined 3 years ago
Cake day: June 21st, 2023

  • This wouldn’t be “illegal”, but if that’s the case Anna’s Archive should be “fine”… (I know that they are distributing, and this is the fight)

    I don’t know much about European law, but redistribution changes things a lot here in the US. At least here, it then gets into copyright law, and you’d be reproducing copyrighted works without authorization (the Internet Archive attempted to get around this with books by getting legitimate copies of the books, digitizing them, then “lending” the digital copies of those books).

    So if I prefer to download Anna’s dataset instead of scraping it myself, would this be illegal?

    No idea in Europe. In the US, it might be, depending on what the contents of the work are. I believe Anna’s Archive would count as piracy in this case, though scraping directly from Spotify might not be because they are redistributing the music with authorization from the copyright holder. It gets pretty confusing, honestly.

    Regardless, if you aren’t doing things at large scale, even if you are breaking a law by downloading pirated content, it’s unlikely anyone will care. People usually only really start caring if you start redistributing stuff, so as long as you aren’t hosting what you’re scraping, you’re unlikely to run into any trouble.


  • There’s no obvious answer to your question without more information (for example, where are you?), but I’m not aware of scraping itself being illegal anywhere, with some exceptions. For example, in the US (where I am), as long as you’re not doing “illegal hacking” to scrape your data, you’re probably fine.

    Websites also like to impose terms of service (TOS). If you have to agree to one to access any data, you should follow it. Breaking a TOS isn’t really “illegal” in a criminal sense (in the US), but you may expose yourself to anything from being blocked from the site to a lawsuit. Bypassing blocks might also be illegal, though you’d have to speak to a lawyer to know more about that.




  • Skills Are the New CLI

    4th paragraph:

    Skills don’t replace CLIs.

    Great start.

    Anyway, skills are basically an alternative to tools. I believe Anthropic made a big deal about them. They come with all the downsides of using an LLM at all, which means they’re fallible, nondeterministic, and possibly even an attack vector. But hey, it saves you from remembering a few flags for git, so whatever, I guess.





  • You could put it into the archinstall script and just never finish the installation if there is no age set. You could also prevent a user from logging into an account that has no age set; this could be achieved by modifying core packages in the base package.

    My (rather limited) understanding is that Arch can be installed both without the archinstall script and without a user. Also, the rest of your comment covers how stupid it is to require a value anyway since people can put whatever they want.

    Outside of that, it’s all open source. It’s possible to fork and remove the field entirely from an install script, distro, or even systemd itself.

    Nobody can enforce this in the open source world. This is honestly the strongest argument for an open source exemption in these laws. It cannot be enforced on open source OSs.







  • The success rate of main branch builds compounds this further. It has fallen to 70.8%, its lowest in over five years – 30% of attempts to merge code for production are now failing.

    The integration bottleneck finding is credible. If you’re generating code faster than your team can review and integrate it, that’s a genuine problem, and this data is consistent with it.

    I disagree here. If more attempts are failing, then more attempts are needed to merge each branch. If the pipeline is running more often while fewer branches are merging, it’s also possible that people need to go through more revisions to merge their code than they did before.

    People using AI to write their entire PR will find that fixing issues with it takes more work, because they often don’t know how the PR works. I wouldn’t be surprised if this resulted in PRs taking longer to merge, which would contradict CircleCI’s claims of teams benefiting from AI.

    I believe the report has insufficient data to draw any meaningful conclusions. The data is interesting, at least.


  • This decoupling of commands from effects is interesting, but I don’t think I’d use it in most places. In this specific example, passing in an interface for an API client (or whatever other thing you want to call) lets you create a mock client and pass that in during testing, and different environments should be configured differently anyway.
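    To illustrate what I mean by passing in an interface for an API client, here’s a minimal sketch (every name in it is hypothetical, not from any real codebase):

```python
from typing import Protocol


class ApiClient(Protocol):
    """Hypothetical interface for whatever external service you call."""

    def fetch_user(self, user_id: int) -> dict: ...


def greeting(client: ApiClient, user_id: int) -> str:
    # Production code depends only on the interface, not a concrete client,
    # so tests can swap in a fake without touching the network.
    user = client.fetch_user(user_id)
    return f"Hello, {user['name']}!"


class FakeClient:
    """Test double: returns canned data instead of making a real request."""

    def fetch_user(self, user_id: int) -> dict:
        return {"name": "Test User"}


print(greeting(FakeClient(), 1))  # prints "Hello, Test User!"
```

    In production you’d pass a real client with the same interface; the function under test never knows the difference.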

    There is one place I’d consider this though, and it’s incredibly specific: an MTG rules engine. Because of replacement effects, triggered abilities, and so on, being able to intercept everything from starting turns to taking damage means you can apply the various game effects when they come up rather than scattering that logic all over the codebase. I’m tempted to try this and see if it works, actually.
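    A rough sketch of the interception idea, with entirely made-up names (this is a guess at how it could look, not a real engine):

```python
# Minimal sketch: game events pass through a chain of replacement effects,
# each of which may rewrite the event before it resolves.


class Engine:
    def __init__(self):
        # Callables of the form event -> event (possibly modified).
        self.replacements = []

    def emit(self, event: dict) -> dict:
        # Give every registered replacement effect a chance to rewrite
        # the event; whatever comes out the end is what actually resolves.
        for replace in self.replacements:
            event = replace(event)
        return event


def prevent_damage(event: dict) -> dict:
    # Example replacement effect: "If a source would deal damage, it deals none."
    if event.get("type") == "damage":
        return {**event, "amount": 0}
    return event


engine = Engine()
engine.replacements.append(prevent_damage)
result = engine.emit({"type": "damage", "target": "player", "amount": 3})
# result["amount"] is 0: the effect replaced the damage before it resolved
```

    The appeal is that the damage-dealing code never needs to know which effects exist; it just emits an event and resolves whatever comes back.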



  • What I usually push for is that every CI task either sets up the environment or executes that one command™ for that task. For example, that command can be uv run ruff check or cargo fmt --all -- --check or whatever.

    Where the CI-runs-one-script-only (or no-CI) approach falls apart for me is when you want to have a deployment pipeline. It’s usually best not to have deployment secrets stored on any dev machine, so a good place to keep them is in your CI configs (and all major platforms support secrets stored with an environment, variable groups, etc.). Of course, I’m referring here to work on a larger team, where permission to deploy needs to be transferable, but you don’t really want to be rotating deployment secrets all the time either. This means you’re running code in the pipeline that you can’t run locally in order to deploy it.

    It also doesn’t work well when you build for multiple platforms. For example, I have Rust projects that build and test on Windows, macOS, and Linux, which is only possible by running those jobs on multiple runners (each on a different OS and, in macOS’s case, CPU architecture).

    The compromise of one-script-per-task can usually work even in these situations, in my experience. You still get to use things like GitHub’s matrix, for example, to run multiple runners in parallel. It just means you have different commands for different things now.
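    One way to sketch the one-command-per-task idea is a small dispatch table that CI and developers invoke identically (the task names and commands below are placeholders; use whatever your project actually runs):

```python
#!/usr/bin/env python3
"""Hypothetical per-task entry point: CI runs `tasks.py lint`, and so does a dev."""
import subprocess
import sys

# Placeholder task table; each task maps to exactly one command.
TASKS = {
    "lint": ["uv", "run", "ruff", "check"],
    "fmt": ["cargo", "fmt", "--all", "--", "--check"],
}


def run_task(name: str) -> int:
    """Run a named task and return its exit code (2 for unknown tasks)."""
    if name not in TASKS:
        print(f"unknown task: {name}", file=sys.stderr)
        return 2
    return subprocess.run(TASKS[name]).returncode


if __name__ == "__main__":
    sys.exit(run_task(sys.argv[1] if len(sys.argv) > 1 else ""))
```

    Each CI job then just picks a task name, which plays nicely with a matrix: the per-OS runners all execute the same entry point.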