• ExLisperA · 11 hours ago

    Ok, so you’re suggesting that people are submitting kernel patches that somehow modify the architecture of the kernel or its components, that the new architecture is very complex and hard to analyze, that those architectural changes are part of the roadmap and are not rejected right away, and that those big, complex, architectural-level patches are submitted at high frequency. Somehow I doubt all of it.

    I think the slop patches are small fixes suggested by some AI code analysis tools, that architectural and complex changes are part of a well-defined roadmap and don’t come out of nowhere, and that code that doesn’t follow conventions is easily spotted and rejected. The linked article talks only about marking code as AI-generated (IMHO useless but harmless) and about the increasing volume of AI slop patches. The idea that maintainers spend time analyzing complex LLM-generated code submitted by random amateurs, looking for possible architectural bugs, sounds like a fantasy to me.

    • Senal@programming.dev · 10 hours ago

      TL;DR:

      You asked why it mattered whether code is LLM-generated or not; I provided examples where it does matter. Nothing you’ve said in your reply seems to refute that, so I’ll just assume we’ve agreed on this point.

      The rest of this reply just addresses your additional arguments.


      > Ok, so you’re suggesting that people are submitting kernel patches that somehow modify the architecture of the kernel or its components, that the new architecture is very complex and hard to analyze, that those architectural changes are part of the roadmap and are not rejected right away, and that those big, complex, architectural-level patches are submitted at high frequency. Somehow I doubt all of it.

      I mean, I didn’t say any of that, but feel free to doubt a position you just made up.

      > I think the slop patches are small fixes suggested by some AI code analysis tools.

      There’s no reason to believe that LLM usage is limited to small patches.

      > …that architectural and complex changes are part of a well-defined roadmap and don’t come out of nowhere, and that code that doesn’t follow conventions is easily spotted and rejected.

      In a well-maintained project, sure, ish. But let’s just say you’re right about the plan/roadmap phase.

      The spotting and rejection you mention are now significantly more time- and resource-consuming, for the reasons I stated in my previous reply.

      Also, when I used the word ‘architecturally’ I was referring to the logical domain of the patch and the things it interacts with; I wasn’t implying that LLMs would get a chance at re-architecting an entire project as large as the Linux kernel.

      At least I’d hope not.

      > The linked article talks only about marking code as AI-generated (IMHO useless but harmless) and about the increasing volume of AI slop patches.

      I’m not sure how useful this kind of marking is in practice, but I can tell you one way in which it might be.

      The way you need to go about evaluating LLM-generated code can differ from the way you evaluate human-written code.

      And before you get on your high horse: I’m not saying we shouldn’t be doing a good job of reviewing in general. Of course we should.

      Review and testing resources are limited in most practical settings, so we should focus on allocating them as efficiently as possible.

      There are tools specifically geared towards checking LLM-generated code for its characteristic mistakes; marking patches would let reviewers apply those checks, on top of the baseline code-quality tests, only where they’re needed.
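
      To make that concrete, here’s a minimal sketch of what such triage could look like (Python; the “Assisted-by:” trailer name and the queue names are invented for illustration, not an actual standard):

      ```python
      # Hypothetical triage: route patches whose commit message declares
      # AI assistance into a stricter review lane. The trailer name and
      # the queue names are invented for illustration.

      AI_TRAILER = "Assisted-by:"

      def review_lane(commit_message: str) -> str:
          """Pick a review queue based on a declared-AI trailer."""
          lines = (line.strip() for line in commit_message.splitlines())
          if any(line.startswith(AI_TRAILER) for line in lines):
              # Declared AI involvement: add the LLM-specific checks
              # (hallucinated APIs, plausible-but-wrong error handling)
              # on top of the normal review.
              return "strict-review"
          return "standard-review"

      msg = """net: fix refcount leak in foo_open()

      Drop the extra reference taken on the error path.

      Assisted-by: some-llm-tool
      Signed-off-by: A Contributor <a@example.org>
      """

      print(review_lane(msg))  # -> strict-review
      ```

      The point isn’t the ten lines of Python; it’s that the tag is a cheap signal for spending scarce reviewer attention where the failure modes are different.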

      > The idea that maintainers spend time analyzing complex LLM-generated code submitted by random amateurs, looking for possible architectural bugs, sounds like a fantasy to me.

      Which is clear from your answers. If you don’t understand how pull-request review works in practice, you’re going to struggle to make a coherent argument that depends on that understanding.

      To answer the statement directly: there’s sometimes no efficient way to tell which patches are from amateurs, even without LLMs in the picture.

      The issue isn’t even limited to amateurs. I’d like to assume a competent dev of any skill level wouldn’t submit patches they don’t understand, but that’s just not always the case.

      And again: think architecture with a ‘little a’ rather than a ‘big A’.

      Logical flow and domain understanding in a relatively limited scope, rather than system-wide structural change.

      The difference between tactics and strategy.
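
      As a toy illustration of what I mean (Python rather than kernel C, and the Buffer class is invented): the patched method below reads fine line by line, but it deadlocks because it breaks the module’s locking convention, which is exactly the kind of thing you only catch by knowing the surrounding domain:

      ```python
      import threading

      class Buffer:
          def __init__(self):
              self._lock = threading.Lock()  # non-reentrant by design
              self._items = []

          def flush(self):
              # Existing convention in this module: flush() takes the
              # lock itself, so callers must NOT already hold it.
              with self._lock:
                  self._items.clear()

          def add(self, item):
              # A plausible-looking submitted patch: append, then flush
              # when full. Every line is reasonable in isolation...
              with self._lock:
                  self._items.append(item)
                  if len(self._items) > 10:
                      # ...but flush() re-acquires self._lock, and
                      # threading.Lock is not reentrant: this deadlocks.
                      self.flush()
      ```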

        • Senal@programming.dev · 10 hours ago

          No.

          You?

          edit: If any of my answers made it seem like I was, let me know and I’ll adjust them; that was not my intention.

          • ExLisperA · 10 hours ago

            No. Let’s wait for someone who knows what they are talking about.

            • Senal@programming.dev · 9 hours ago

              You mean like a software developer who has to deal with PRs from sources that may or may not include LLM-generated code?

              If that’s the case, I might know someone…

              Wait… unless your original assertion was specifically about Linux kernel development only, and not about the principles that apply to software PR review and LLMs as a whole?

              In that case, I don’t have anyone to hand, and you should probably mark it “Active Linux Kernel Contributors Only”.

              It’s clearer that way.