• zieg989@programming.dev · 155 points · 8 days ago

    I would not be surprised if Anthropic actually hired a real developer to make these PRs as a marketing stunt.

    • BestBouclettes@jlai.lu · 175 points · 8 days ago

      Well, if the model detected an issue, and a human tested it to make sure it was real and then fixed it, I think that’s an acceptable use of AI tools.

    • In 2021, when Amazon launched its first “just walk out” grocery store in the UK in Ealing, west London, this newspaper reported on the cutting-edge technologies that Amazon said made it all possible: facial-recognition cameras, sensors on the shelves and, of course, “artificial intelligence”.
      An employee who worked on the technology said that actual humans – albeit distant and invisible ones, based in India – reviewed about 70% of sales made in the “cashier-less” shops as of mid-2022

      Source: The Guardian

      UK AI company builder.ai has been tricking customers and investors for eight years – selling an advanced code-writing AI that, it turns out, is actually an Indian software farm employing 700 human developers.

      Source: ACS Information Age

  • General_Effort@lemmy.world (OP) · 88 points · 8 days ago

    (In case someone has been living under a rock for the last 48 hours: Anthropic’s new model “Mythos” has been finding a lot of new vulnerabilities. This is about patching one.)

  • CannonFodder@lemmy.world · 80 points · 8 days ago

    AI tools can detect potential vulnerabilities and suggest fixes. You can still go in by hand, verify the problem, and carefully apply a fix.
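
    Something like this, say (a contrived snippet I just made up, nothing to do with the actual ffmpeg patch): the tool flags an unchecked, attacker-controlled length, a human confirms the overflow is real, and the bounds check gets applied by hand.

    ```c
    #include <stdint.h>
    #include <string.h>

    #define PAYLOAD_MAX 64

    /* Hypothetical parser: copies a length-prefixed field out of a packet.
     * A scanner might flag that `len` comes straight from untrusted input
     * and is never checked against the destination buffer. */
    static int parse_field(const uint8_t *pkt, size_t pkt_len, uint8_t out[PAYLOAD_MAX])
    {
        if (pkt_len < 1)
            return -1;

        size_t len = pkt[0];               /* attacker-controlled length byte */

        /* The fix applied by hand after verifying the bug is real:
         * reject anything that doesn't fit in `out` or in the packet. */
        if (len > PAYLOAD_MAX || len > pkt_len - 1)
            return -1;

        memcpy(out, pkt + 1, len);         /* previously an unchecked copy */
        return (int)len;
    }

    int main(void)
    {
        uint8_t pkt[] = { 200, 'A', 'B', 'C' };   /* claims 200 bytes, carries 3 */
        uint8_t out[PAYLOAD_MAX];
        return parse_field(pkt, sizeof pkt, out) < 0 ? 0 : 1;   /* expect rejection */
    }
    ```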

    • shirasho@feddit.online · 31 points · 8 days ago

      AI is actually SUPER good at this, and it’s one of the few places I think AI should be used (as one of many tools, ignoring the awful environmental impacts of AI and assuming an on-prem model). AI is also good at detecting code performance issues.

      With that said, all of the recommended fixes should still be applied by hand.
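
      A made-up illustration of the performance side (hypothetical code, not from any real project): the classic accidental O(n²) that a tool can flag and a human can fix in a minute.

      ```c
      #include <stdio.h>
      #include <string.h>

      /* Before: strlen() re-scans the whole string on every iteration,
       * turning a linear pass into O(n^2) on long inputs. */
      size_t count_spaces_slow(const char *s)
      {
          size_t count = 0;
          for (size_t i = 0; i < strlen(s); i++)
              if (s[i] == ' ')
                  count++;
          return count;
      }

      /* After (reviewed and applied by hand): just walk the string
       * until the terminator, with no repeated length scans. */
      size_t count_spaces_fast(const char *s)
      {
          size_t count = 0;
          for (; *s; s++)
              if (*s == ' ')
                  count++;
          return count;
      }

      int main(void)
      {
          const char *s = "a b c d";
          printf("%zu %zu\n", count_spaces_slow(s), count_spaces_fast(s));   /* both print 3 */
          return 0;
      }
      ```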

      • _hovi_@lemmy.world · 10 points · 8 days ago

        Yeah, I would also add ignoring how the training data is usually sourced. I agree AI can be useful, but it just feels so unethical that I find it hard to justify.

        I’m a big LLM hater atm, but once we’re using models that are efficient, local, and trained on ethically sourced data, I think I could finally feel more comfortable with it all. Can’t be writing code for me though - why would I want the bot to do the fun part?

        • shirasho@feddit.online · 5 points · 8 days ago

          Exactly my thought. I got into software development because designing and writing good code is fun. It is almost a game to see how well you can optimize it while keeping it maintainable. Why would I let something else do that for me? I am a software engineer, not a prompt writer.

  • spectrums_coherence@piefed.social · 73 points · edited · 8 days ago

    LLMs are very good at programming when there are a huge number of guardrails around them. For example, exploit testing is a great use case because getting a shell is getting a shell.

    They kind of act as a smarter version of the infinite monkeys, one that can try and iterate much more efficiently than a human does.

    On the other hand, in tasks that require creativity or architecture, and in projects without guardrails, they tend to do a terrible job, often yielding solutions that are more convoluted than they need to be or just plain old incorrect.

    I find it is yet another replacement for “pure labor”, where the most unintelligent part of programming, i.e. writing the code, is automated away. While I will still write code from scratch when I am trying to learn, I likely will be able to automate some code writing if I know exactly how to implement it in my head and I also have access to plenty of testing to guarantee correctness.
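
    To illustrate what I mean by testing as the guardrail (a toy, made-up example): I pin down the behaviour I already have in my head as tests, and a generated implementation only gets kept if it passes them.

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* The spec I already have in my head: clamp a sample to the 10-bit range. */
    uint16_t clamp10(int32_t v);

    /* The guardrail: reference behaviour pinned down as tests before any
     * generated code is accepted. */
    static void test_clamp10(void)
    {
        assert(clamp10(-5)   == 0);      /* below range clamps to 0   */
        assert(clamp10(0)    == 0);
        assert(clamp10(512)  == 512);    /* in-range values unchanged */
        assert(clamp10(1023) == 1023);
        assert(clamp10(4096) == 1023);   /* above range clamps to max */
    }

    /* A candidate implementation (hand-written here, but it could just as
     * well be generated; it only stays if the tests above pass). */
    uint16_t clamp10(int32_t v)
    {
        if (v < 0)    return 0;
        if (v > 1023) return 1023;
        return (uint16_t)v;
    }

    int main(void)
    {
        test_clamp10();
        return 0;
    }
    ```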

    • Serinus@lemmy.world · 39 points · 8 days ago

      People have trouble with the middle ground. AI is useful in coding, but it’s not a full replacement. That should be fine, except you’ve got the AI techbros and CEOs on one end thinking it will replace all labor, and then you’ve got the backlash to that on the other end, people who want to constantly talk about how useless it is.

        • MinnesotaGoddam@lemmy.world · 5 points · 8 days ago

          the times i trust LLMs: when i am using them to look up stuff i have already learned but can’t remember and just need to refresh my memory. there’s no point memorizing shit i can look up and am not going to use regularly, and i’m the effective guardrail against the LLMs being wrong when i’m using them.

          the times i don’t trust the LLMs: all the other times. if i can’t effectively verify the information myself, why am i going to an unreliable source?

          having to explain that nuance over and over means it’s just shorter and easier to say the llm is an unreliable source. which it is. when i’m not being lazy, my output doesn’t need testing (it still gets at least 2 reviews, but the last time those reviews caught anything was years ago). the llm’s output always needs testing.

      • brianpeiris@lemmy.ca · 4 points · edited · 8 days ago

        I suspect the problem is that there are many developers nowadays who don’t care about code quality, actual engineering, and maintenance. So the people who are complaining are right to be concerned that there is going to be a ton of slop code produced by AI-bro developers, and the developers who actually care will be left to deal with the aftermath. I’d be very happy if lead developers were prepared to try things with AI and, importantly, to throw the output away if it doesn’t meet coding standards. Instead I think even lead developers and CTOs are chasing “productivity” metrics, which just translates to a ton of sloppy code.

        • Serinus@lemmy.world · 1 point · 8 days ago

          Yeah, I don’t plan to leave in two years, so I’m motivated to not say “oh fuck” when I have to maintain the thing I built later.

          Plus, you know, I don’t want people to groan when they have to work on my code.

    • lonesomeCat@lemmy.ml · 4 points · 8 days ago

      The thing is, you know how it is in your head, but you still need to lay out that entire context.

      And after that you MUST review the code, because you never know otherwise. I wouldn’t call it automation if I have to double-check EVERY TIME.

      • definitemaybe@lemmy.ca · 2 points · 7 days ago

        It’s great for coding things that you don’t care if it gets it wrong, though. Like, I vibe coded a JavaScript injection to add a client-side accessibility feature to a website running a fairly complex tech stack. I don’t know JavaScript, but I know how to code, and I know enough HTML and CSS to do simple things.

        It failed quite a few times, but each time I just needed to refresh the page for a clean slate, tell the LLM how it fucked up, and try again. In about an hour, I had a functional script I could inject in the site to bolt on a new feature.

        I was reading the code along the way, so I know what it’s doing for the most part (not some of the JavaScript things, like why there are extra brackets in places I wouldn’t expect, but whatever.) It wasn’t doing anything dangerous.

        Not mission critical. A small block of code to do one simple thing. There was no real downside or cost of failure, aside from wasted time. And it’s small enough that it’s easy to understand from scratch; it’ll be fairly easy to update and maintain.

        On the other hand, it sounds like Microslop and NVidia (and many others) are using AI slop in complex, mission-critical projects. I’d be nervous for their future, if I cared about them.

  • SkunkWorkz@lemmy.world · 46 points · 7 days ago

    The ffmpeg team was mad at Google when it reported a bug that had been found automatically by an AI. Google reported the bug without providing a fix and also gave an ultimatum: it would publicize the bug report after 60 days. That’s what pissed off the ffmpeg devs. Not to mention that it was a very obscure bug, like ffmpeg not decoding a video file from a 90s video game correctly.

    Anthropic, on the other hand, found a bug and provided a fix. So why would they be mad if the fix is properly written and fixes the bug?

      • General_Effort@lemmy.world (OP) · 2 points · 7 days ago

        It’s really only a minority, or else the world would not work. Think of how the theory of evolution gained mainstream acceptance despite resistance from fanatics who had the support of society.

  • sun_is_ra@sh.itjust.works · 26 points · 8 days ago

    Maybe he meant the code quality was so good it’s like a human wrote it.

    After all, if the code is good and follows all the best practices of the project, why reject it just because it was an AI who wrote it? That’s racism against machines.

  • Owl@mander.xyz · 22 points · 7 days ago

    So they read them, and the patches were good (according to this message).

    Why hate then?

  • Onno (VK6FLAB)@lemmy.radio · 15 points · 8 days ago

    Hold on, wasn’t one of the “features” of the “leaked” Assumed Intelligence source code the “human”-like version?