• communism@lemmy.ml · 13 hours ago

    The issue is that it’s easy for AI-generated code to be subtly wrong in ways that are not immediately obvious to a human. The Linux kernel is written in C, a language that lets you do nearly anything, and is also inherently privileged software, making kernel bugs more serious to begin with.
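
    For illustration, here’s a minimal sketch (hypothetical, not real kernel code) of the kind of subtle bug that reads as correct in review: the bounds check below looks plausible, but a negative length slips past it and wraps to a huge size in memcpy.

    ```c
    /* Hypothetical buffer handler -- illustrative only, not kernel code. */
    #include <string.h>

    #define BUF_SIZE 64

    static char buf[BUF_SIZE];

    int handle_request(const char *src, int len)
    {
            if (len > BUF_SIZE)     /* subtle: never checks len < 0 */
                    return -1;
            /* a negative len converts to a huge size_t here and
             * overruns buf, yet the check above looks complete */
            memcpy(buf, src, (size_t)len);
            return 0;
    }
    ```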

    The other problem is, of course, that you can block someone submitting AI slop, but there are a lot of people in the world. If there’s a barrage of AI slop patches from lots of different people, it’s going to be a real problem for the maintainers.

    • ExLisperA · 13 hours ago

      The issue is that it’s easy for AI-generated code to be subtly wrong in ways that are not immediately obvious to a human.

      Same with human-generated code. AI bugs are not magically more creative than human bugs. If the code is not readable or doesn’t follow conventions, you reject it regardless of what generated it.

      The other problem is, of course, that you can block someone submitting AI slop, but there are a lot of people in the world. If there’s a barrage of AI slop patches from lots of different people, it’s going to be a real problem for the maintainers.

      You don’t need an official policy to reject a barrage of AI slop patches. If you receive too many patches to process, you change the submission process. It doesn’t matter whether the patches are AI slop or not.

      Spamming maintainers is obviously bad, but saying that anything AI-generated in the kernel is a problem in itself is bullshit.

      • communism@lemmy.ml · 10 hours ago

        saying that anything AI-generated in the kernel is a problem in itself is bullshit.

        I never said that.

        Same with human-generated code. AI bugs are not magically more creative than human bugs. If the code is not readable or doesn’t follow conventions, you reject it regardless of what generated it.

        You may think that, but preliminary controlled studies do show that more security vulns appear in code written by a programmer who used an AI assistant: https://dl.acm.org/doi/10.1145/3576915.3623157

        More research is needed, of course, but I imagine that because humans are capable of more sophisticated reasoning than LLMs, an implementation derived from a human mind ends up, on average, more robust.

        I’m not categorically opposed to the use of LLMs in the kernel, but it is obviously an area where caution needs to be exercised, given that millions of people rely on the kernel.

    • jaykrown@lemmy.world · 13 hours ago

      It’s about the people. If the AI-generated code is subtly wrong, then it’s on the community to test it and spot it. That’s why it’s important to have protocols and testing. The funny thing is that you can also use AI to highlight bad code.
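
      For instance, a tiny regression test (hypothetical, reusing the handle_request sketch from the comment above) is enough to surface the sign bug there: feeding a negative length either trips the assert or crashes in memcpy, and either way the patch fails CI.

      ```c
      /* Hypothetical test for the handle_request sketch above. */
      #include <assert.h>

      int handle_request(const char *src, int len);

      int main(void)
      {
              char payload[4] = {0};

              /* a negative length must be rejected before memcpy;
               * on the buggy version this asserts or crashes */
              assert(handle_request(payload, -1) == -1);
              return 0;
      }
      ```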