• Solumbran@lemmy.world
    3 months ago

    How many more years are going to be wasted with this crap?

    Everyone knows that both in theory and in practice, AIs are shit at producing code; the only ones who don’t are those who are themselves unable to produce decent code and refuse to see the problem.

    But yes, let’s keep pushing more and more until everything is drowned in worthless crap. As if we didn’t already have enough issues with humans producing crap like web technologies, now they’ll be riddled with even more crap.

    • entwine@programming.dev
      3 months ago

      I think people are too polite to call shitty programmers out on being shitty. It’s probably not a fair assumption, but whenever I see someone admit they use some AI coding tool, I immediately assume they’re either a junior, or one of those people who were just never intelligent enough to be a good developer and ended up getting filtered into some low-skill web dev job. Those are the kinds of people who probably feel threatened by AI, and who I feel are more likely to use it.

      We need to make elitism and public shaming cool again.

    • cassandrafatigue@lemmy.dbzer0.com
      3 months ago

      how many more years

      As many as possible running down the clock on climate change and putting our whole economy towards less-than-useless bullshit.

      Killing truth, sloppifying everyone is the point.

  • codeinabox@programming.devOP
    3 months ago

    This quote from the article very much sums up my own experience of Claude:

    In my recent experience at least, these improvements mean you can generate good quality code, with the right guardrails in place. However without them (or when it ignores them, which is another matter) the output still trends towards the same issues: long functions, heavy nesting of conditional logic, unnecessary comments, repeated logic – code that is far more complex than it needs to be.
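    A hypothetical sketch (mine, not from the article; Python, with invented names and numbers) of the smells the quote lists, next to the flatter version a reviewer would ask for:

```python
# Hypothetical illustration (names invented): the deep nesting, noise
# comments, and repeated return paths typical of unguarded LLM output.

def classify_discount_ai_style(user):
    # Check if the user is not None   <- comment restates the code
    if user is not None:
        # Check if the user is active
        if user.get("active"):
            # Check if the user is a member
            if user.get("member"):
                return 0.20
            else:
                return 0.05
        else:
            return 0.0
    else:
        return 0.0


def classify_discount(user):
    """Same behaviour with guard clauses: flatter, no noise comments."""
    if not user or not user.get("active"):
        return 0.0
    return 0.20 if user.get("member") else 0.05
```

    Both functions behave identically; the second is simply the shape the guardrails are supposed to push the model towards.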

    AI coding tools are definitely helpful with boilerplate code, but they still require a lot of supervision. I am interested to see whether these tools can be used to tackle tech debt, as the argument for not addressing tech debt is often a lack of time, or whether they would just contribute to it, even with thorough instructions and guardrails.

    • Thorry@feddit.org
      3 months ago

      Omg the comments are so out of hand. I regularly review code from colleagues who use AI to write it (some under protest, but still). The comments are usually the worst part.

      The thing writes entire novels in the summary that do nothing but confuse and add cognitive load. It adds comments to super obvious things, describing what the code does instead of why. Yes AI, I can read code; I know assigning a value to a variable is how shit works. I’ve still got PTSD from those kinds of comments from a legacy system I worked on for years that did the exact same thing, except the comments and the code didn’t match up, so it was a continuous guess which one was intended.

      It also likes to put responses to the prompt in the comments. For example, when it assigned A to a variable that was supposed to be B, and you point this out, it adds a comment saying something like: this is supposed to be B, not A. Read after the fact, that makes zero sense. Of course it should be B, why would it ever be A?
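      A hypothetical Python snippet (names and numbers invented, not from any real codebase) contrasting the “what” comments complained about above with a comment that actually records intent:

```python
def parse_port(raw):
    # AI-style "what" comment: convert the string to an integer  <- noise
    port = int(raw)
    # Useful "why" comment: ports below 1024 need root on Linux,
    # so reject them early instead of failing later at bind() time.
    if port < 1024:
        raise ValueError("unprivileged port required (>= 1024)")
    return port
```

      The first comment survives any edit to the line below it and silently goes stale; the second explains a decision the code alone cannot.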

      And it often generates a bunch of markdown docs which are plain drivel, luckily most devs just delete those before I see them.

      My personal experience: in 30% of cases the AI is just plain wrong and the result is nonsense, delete that shit and try again. In the 70% that does produce some kind of answer, there is ALWAYS at least one big issue and usually multiple. It’s 50/50 whether the code is workable with some kinks to sort out, or seriously flawed and in need of a lot of work.

      For experienced devs it can be helpful if they have writer’s block, giving them something to be angry about and showing them how they can do better. But for inexperienced devs it’s just plain terrible: the code is shit and the dev doesn’t even know. Worse still, the dev doesn’t learn. I try to sit down with them, explain the shortcomings and how to do better. But they don’t learn, they just figure out what to write in the prompt so I won’t get on their case. Or they’ll say stuff like: but it works, right? Facepalm

      That company I do work for also tried getting their sysadmins and devops people to use AI. Till one day there was a permissions issue, admittedly a pretty complicated one, which they ended up solving with AI. The team was happy, upper management was happy, high fives all around. Till the grumpy old sysadmin with 40 years of experience took a look and hit the big ol’ red alarm button of doom. Full investigation later, the AI had fucked up and created a huge security hole. There was zero evidence it had been exploited, but that doesn’t matter. All the work still needed to be done, all the paperwork filed, the proper agencies informed, because the security issue was there. Management eased up on AI usage for those people real fast.

      It’s so weird how people in charge want to use AI, but aren’t even really sure of what it is and what it isn’t. And they don’t listen to what the people with actual knowledge have to say. In their minds we are probably all just covering our asses to not be out of a job.

      But for real if anyone in management is listening, take it from an old asshole who has done this job since the 80s: AI fucking sucks!

      • sup@lemmy.ca
        3 months ago

        Very well said. This is 100% my experience and could have been written by me. This is exactly what it is. We’re going to be seeing a lot of low-quality code after 2024/5, sadly :(

    • jasory@programming.dev
      3 months ago

      These might be of interest to software developers, but it’s all just style; nothing here actually affects the computation. The problem I encounter with LLMs is that they are incapable of doing anything but rehashing the same algorithms you get off of blogs. I can’t even successfully force them to implement a novel algorithm; they will simply deny that it is valid and revert back to citing their training data.

      I don’t see LLMs actually furthering the field in any real way (even by accident, since they can’t actually perform deductive reasoning).

    • Sentient Loom@sh.itjust.works
      3 months ago

      I am interested to see if these tools can be used to tackle tech debt

      Having it rewrite existing functioning code seems like a terrible idea. QA would at least have to re-test all functionality.

    • Michal@programming.dev
      3 months ago

      From what I’ve seen, I wouldn’t trust it to tackle technical debt. Quite the opposite: I’d use an LLM to build an MVP, then consider its output to be technical debt that will have to be cleaned up over time as the product matures, by someone who knows what they’re doing.

      An LLM right now is more like a junior developer.

  • idriss@lemmy.ml
    3 months ago

    My brief experience is similar: even if conditions are perfect (you treat it like an editor, tell it exactly the change it should make, to follow exactly this naming and testing style, and to run the tests so it’s clear it didn’t screw anything up), it will still screw things up here and there.