• floofloof@lemmy.ca · 10 days ago (edited)

    Yeah, the places to use it are (1) boilerplate code that is so predictable a machine can do it, and (2) as a source of advice, taken with a big pinch of salt, when a web search didn’t give you what you need. In the second case, expect at best a half-right answer that’s enough to get you thinking. You can’t use it for anything sophisticated or critical. But you now have a bit more time to think that stuff through, because the LLM cranked out some of the more tedious code.

    • Corngood@lemmy.ml · 10 days ago

      > (1) boilerplate code that is so predictable a machine can do it

      The thing I hate most about it is that we should be putting effort into removing the need for boilerplate. Generating it with a non-deterministic third-party black box is insane.

      • Pennomi@lemmy.world · 10 days ago

        Hard disagree. There is a certain level of boilerplate that is necessary for an app to do everything it needs. Django, for example, requires you to specify model files, admin files, view files, form files, etc. that all look quite similar but are dependent on your specific use case. You can easily have an AI write this boilerplate for you because the files are strongly related to one another, but they can’t easily be distilled down to something simpler because there are key decisions that need to be specified.
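
        For instance, a minimal sketch of what that looks like in Django (hypothetical Article model; the real files depend entirely on your app):

        ```python
        # models.py -- the core decisions: which fields, which types, which constraints
        from django.db import models

        class Article(models.Model):
            title = models.CharField(max_length=200)
            body = models.TextField()
            published = models.DateTimeField(null=True, blank=True)

        # admin.py -- mostly mechanical, but which columns to show is a judgment call
        from django.contrib import admin
        from .models import Article

        @admin.register(Article)
        class ArticleAdmin(admin.ModelAdmin):
            list_display = ("title", "published")

        # forms.py -- repeats the field list yet again, minus what users shouldn't edit
        from django import forms
        from .models import Article

        class ArticleForm(forms.ModelForm):
            class Meta:
                model = Article
                fields = ("title", "body")
        ```

        Each file is nearly derivable from the model, but not quite: the admin columns and the editable form fields are exactly the kind of key decisions that keep it from being pure generation.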

          • Pennomi@lemmy.world · 10 days ago

            Because it’s not worth inventing a whole tool for a one-time use. Maybe you’re the kind of person who has to spin up 20 similar Django projects a year and it would be valuable to you.

            But for the average person, it’s far more efficient to just have an LLM kick out the first 90% of the boilerplate and code up the last 10% themselves.

            • Feyd@programming.dev · 10 days ago

              I’d rather use some tool bundled with the framework (Django’s own startapp scaffolding, for example) that outputs code that is up to the current standards and patterns than a tool that will pull defunct patterns from its training data, make shit up, and make mistakes that are easily missed by a reviewer glossing over it.

              • Pennomi@lemmy.world · 10 days ago

                I honestly don’t think such a generic tool is possible, at least in a Django context. The boilerplate is about as minimal as is possible while still maintaining the flexibility to build anything.

                • Feyd@programming.dev · 9 days ago

                  If it’s as minimal as possible, then the responsible play is to write it thoughtfully and intentionally rather than use something that can introduce subtle errors that slip through review.

            • Feyd@programming.dev · 10 days ago

              Easier and quicker, maybe, but finding subtle errors in code that looks like it should be extremely hard to fuck up, just because someone used an LLM for it, is getting really fucking old already, and I shudder at all the similar things that are surely being missed. “It will be reviewed” is obviously not sufficient.

        • expr@programming.dev · 9 days ago

          All of that can be automated with tools built for the task. None of this is actually that hard to solve at all. We should automate away pain points instead of boiling the world in the hopes that a linguistic, stochastic model will just so happen to predictively generate the tokens you want, in order to save a few fucking hours.
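
          To make that concrete, here’s a sketch of the kind of throwaway, deterministic generator I mean (hypothetical field list, Django-flavored output; the point is that the same input always yields the same code):

          ```python
          # gen_admin.py -- deterministic boilerplate generator (illustrative sketch)
          # Given model names and their display fields, emit admin registrations.
          # Same input -> same output, every time. No GPUs, no hallucinations.

          FIELDS = {"Article": ["title", "published"], "Comment": ["author", "created"]}

          def render_admin(model: str, fields: list[str]) -> str:
              cols = ", ".join(f'"{f}"' for f in fields)
              return (
                  f"@admin.register({model})\n"
                  f"class {model}Admin(admin.ModelAdmin):\n"
                  f"    list_display = ({cols},)\n"
              )

          if __name__ == "__main__":
              print("from django.contrib import admin")
              print("from .models import " + ", ".join(FIELDS))
              print()
              for model, fields in FIELDS.items():
                  print(render_admin(model, fields))
          ```

          An hour to write, zero marginal cost to run, and the output is identical on every invocation.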

          The hubris around this whole topic is astounding to me.

          • Pennomi@lemmy.world · 9 days ago

            I think you underestimate the amount of business logic contained in boilerplate. (Or maybe we’re just talking about different definitions of what boilerplate is.) LLMs can understand that business need, while most code generators cannot.

            • expr@programming.dev · 9 days ago

              LLMs do not understand anything. There is no semantic understanding whatsoever. It is merely stochastic generation of tokens according to a probability distribution derived from linguistic correlations in its training data.
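
              A toy sketch of what that means mechanically (made-up vocabulary and probabilities; real models just do this at enormous scale):

              ```python
              import random

              # Pretend next-token distribution after the prompt "def add(a, b): return".
              # The numbers are invented, but structurally this is all generation is.
              next_token_probs = {"a": 0.55, "b": 0.25, "0": 0.12, "None": 0.08}

              def sample(probs: dict[str, float]) -> str:
                  r = random.random()
                  cumulative = 0.0
                  for token, p in probs.items():
                      cumulative += p
                      if r < cumulative:
                          return token
                  return token  # fallback for floating-point rounding

              print(sample(next_token_probs))  # no semantics consulted, just probability
              ```

              Nothing in that loop knows what addition is; it only knows which token tends to follow which.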

              Also, it is incredibly common for businesses to have their engineers write code to automate away boilerplate and otherwise inefficient processes. Nowhere did I say that automation must always be done via open-source tooling (though that is certainly preferable when possible, of course).

              What do you think people and businesses were doing before all of this LLM insanity? Exactly what I’m describing. It’s hardly novel or even interesting.

              • Pennomi@lemmy.world · 9 days ago

                OK, sure, if you want to be pedantic. The point is that LLMs can do things traditional code generators can’t.

                You don’t have to like it or use it. I myself am very vocal about the weaknesses and existential dangers of AI code. It’s going to cause the worst security nightmares in humanity’s recorded history. I recommend to companies that they DON’T trust LLMs for their coding because it creates unmaintainable nightmares of spaghetti code.

                But pretending that they have NO advantages over traditional code generators is utter silliness perpetuated by people who refuse to argue in good faith.

          • Pennomi@lemmy.world · 10 days ago

            Sure, but it’s a lot less flexible. As much hate as they get, LLMs are the best natural language processors we have. By FAR.

        • yes_this_time@lemmy.world · 9 days ago

          I would agree that the interest will wane in some domains where they aren’t aiding productivity.

          But LLMs for coding are productive right now in other domains and people aren’t going to want to give that up.

          Inference is already financially viable.

          Now, I think what could crush the SOTA models is if they get sued into bankruptcy for copyright violations. Which is a related but separate thread.

      • expr@programming.dev · 9 days ago

        …regular coding, again. We’ve been doing this for decades now, and this LLM bullshit is wholly unnecessary and extremely detrimental.

        The AI bubble will pop. Shit will get even more expensive or nonexistent (as these companies go bust, because they are ludicrously unprofitable), because the endless supply of speculative and circular investments will dry up, much like the dotcom crash.

        It’s such an incredibly stupid thing to not only bet on, but to become dependent on to function. Absolute lunacy.

        • yes_this_time@lemmy.world · 9 days ago

          I would bet on LLMs being around and continuing to be useful for some subset of coding in 10 years.

          I would not bet my retirement funds on current AI related companies.

          • expr@programming.dev · 9 days ago

            They aren’t useful now, but even assuming they were, the fundamental issue is that it’s extremely expensive to train and run them, and there is no current inkling of a business model where they actually make sense, financially. You would need to charge far more than what people could actually afford to pay to make them anywhere near profitable. Every AI company is burning through cash at an insane rate. When the bubble pops and the money runs out, no one will want to train and host them anymore for commercial purposes.

            • yes_this_time@lemmy.world · 9 days ago

              They may not be useful to you… but you can’t speak for everyone.

              You are incorrect about inference costs. But yes, training models is expensive, and the economics are concerning.

  • chicken@lemmy.dbzer0.com · 10 days ago

    > We’re replacing that journey and all the learning, with a dialogue with an inconsistent idiot.

    I like this about it, because it gets me to write down and organize my thoughts on what I’m trying to do and how. Otherwise I would just be writing code and trying to maintain the higher-level outline in my head, which usually has big gaps I don’t notice until I’ve spent way too long spinning my wheels, or which otherwise fails to hold together. Sometimes an LLM will do things better than you would have, in which case you can just use that code. When it gives you code that is wrong, you don’t have to use it; you can write it yourself at that point, after having thought about what’s wrong with the AI’s approach and how what you requested should be done instead.

      • chicken@lemmy.dbzer0.com · 8 days ago

        I use local models, and it barely doubles the electricity use of my computer while it’s actively generating, which is a very small proportion of the time I’m doing work; the environmental impact is negligible.
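
        Back-of-envelope, with made-up but plausible numbers (the GPU wattage and active time are assumptions, not measurements):

        ```python
        # Rough energy estimate for local inference -- all numbers hypothetical.
        gpu_extra_watts = 250        # assumed extra draw while generating
        active_hours_per_day = 0.5   # assumed time actually spent generating
        kwh_per_day = gpu_extra_watts * active_hours_per_day / 1000
        print(f"{kwh_per_day:.3f} kWh/day")  # 0.125 kWh, i.e. pennies of electricity
        ```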

  • Riskable@programming.dev · 10 days ago

    I’m having the opposite experience: it’s been super fun! It can be frustrating, though, when the AI can’t figure things out, but overall I’ve found it quite pleasant using Claude Code (and ollama gpt-oss:120b for when I run out of credits, haha). The codex extension and the entire range of OpenAI gpt5 models don’t provide the same level of “wow, that just worked!” or “wow, this code is actually well-documented and readable.”

    Seriously: if you haven’t tried Claude Code (in VS Code, via the extension of the same name), you’re missing out. It’s really a full generation or two ahead of the other coding assistant models. It’s that good.

    Spend $20 and give it a try. Then join the rest of us bitching that $20 doesn’t give you enough credits and the gap between $20/month and $100/month is too large 😁

      • Riskable@programming.dev · 10 days ago

        A pet project… a web novel publishing platform. It’s very fancy: it uses yjs (CRDTs) for collaborative editing, GSAP for special effects (that authors can use in their novels), and it’s built on Vue 3 (with VueUse and PrimeVue) and Python 3.13 on the backend using FastAPI.

        The editor is TipTap with a handful of custom extensions that the AI helped me write. I used AI for two reasons: I don’t know TipTap all that well, and I really wanted to see what AI code assist tools are capable of.

        I’ve evaluated Claude Code (Sonnet 4.5), gpt5, gpt5-codex, gpt5-mini, Gemini 2.5 (it’s such shit; don’t even bother), qwen3-coder:480b, glm-4.6, gpt-oss:120b, and gpt-oss:20b (running locally on my 4060 Ti 16GB). My findings thus far:

        • Claude Code: Fantastic and fast. It makes mistakes but it can correct its own mistakes really fast if you tell it that it made a mistake. When it cleans up after itself like that it does a pretty good job too.
        • gpt5-codex (medium) is OK. Marginally better than gpt5 when it comes to frontend stuff (vite + TypeScript + oh-god-what-else-now, haha). All the gpt5 models (including mini) are fantastic with Python, but they just love to hallucinate and randomly delete huge swaths of code for no f’ing reason. They’ll randomly change your variables around too, so you really have to keep an eye on them. It’s hard to describe the types of abominations they’ll create if you let them, but here’s an example: in a bash script I had something like `SOMEVAR="$BASE_PATH/etc/somepath/somefile"` and it changed it to `SOMEVAR="/etc/somepath/somefile"` for no fucking reason. That change had nothing at all to do with the prompt! So when I say “you have to be careful,” I mean it!
        • gpt-oss:120b (running via Ollama cloud): Absolutely fantastic. So fast! Also, I haven’t found it to make random hallucinations/total bullshit changes the way gpt5 does.
        • gpt-oss:20b: Surprisingly good! Also, faster than you’d think it’d be, even when giving it a huge refactor. This model has led me to believe that the future of AI-assisted coding is local. It’s like 90% of the way there. A few generations of PC hardware/GPUs and we won’t need the cloud anymore.
        • glm-4.6 and qwen3-coder:480b-cloud: About the same as gpt5-mini. Not as fast as gpt-oss:120b so why bother? They’re all about the same (for my use cases).

        For reference, ALL the models are great with Python. For whatever reason, that language is king when it comes to AI code assist.

  • forrcaho@lemmy.world · 9 days ago

    I recently asked ChatGPT to generate some boilerplate code in C to use libsndfile to write out a WAV file with samples from a function I would fill in. The code it generated cast the double samples from the placeholder function it wrote to floats, in order to use sf_writef_float to write to the file. Having coded with libsndfile over a decade ago, I knew that sf_writef_double existed and would write my calculated sample values with no loss of precision. It probably wouldn’t have made any audible difference to my finished result, but it was still obviously stupidly inferior code for no reason.
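
    For illustration, here’s the same distinction via Python’s soundfile wrapper around libsndfile (not the C API I was using, and the sample data here is hypothetical, but the precision point is identical):

    ```python
    import numpy as np
    import soundfile as sf

    # Hypothetical samples computed in double precision.
    samples = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)  # float64

    # What the LLM did, in effect: downcast to 32-bit float on the way out.
    sf.write("tone_float.wav", samples.astype(np.float32), 48000, subtype="FLOAT")

    # What loses nothing: write the doubles as doubles (sf_writef_double in C).
    sf.write("tone_double.wav", samples, 48000, subtype="DOUBLE")
    ```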

    This is the kind of stupid shit LLMs do all the time. I know I’ve also realized months later that some LLM-generated code I used was doing something in a stupid way, but I can’t remember the details now.

    LLMs can get you started and generate boilerplate, but if you’re asking it to write code in a domain you’re not familiar with, you have to understand that — if the code even works — it’s highly likely that it’s doing something in a boneheaded way.

  • Evotech@lemmy.world · 9 days ago (edited)

    I use AI for my docker compose services. I basically just point it at a repo and ask it to start the service for me. It creates docker compose files, tries to run them, reads logs, and troubleshoots without intervention.

    When I need to update an image, I just ask it to do so.

    AI also controls my git workflow. I tell it to create a branch and push, or revert, or do whatever. Super nice.

    AI isn’t perfect, but it’s hella nice for those of us who used to work closely with tech a decade ago but have since moved to more architect/resale roles, have kids, and just don’t have the time and resources.

    I know I’ll get hate for this on lemmy though

    But yeah, I think it’s pretty great. As long as you have a basic understanding of whatever it’s doing, you can get pretty far and do a lot of fun stuff.