• markovs_gun@lemmy.world · ↑1 · 8 hours ago

    A lot of hate in the comments, but IMO this is one of the few things LLMs are actually really good for: a shit job nobody wants to do that they happen to excel at. Notice that they said 70%, not 100%. Yeah, that probably means 30 people doing the work 100 people used to do, but people are still in the picture overseeing things. Automation isn’t bad by itself. The bad part is that our whole society is built on the idea that your entire value as a person comes from being able to work and make money, so job loss hurts way more than it should.

  • termaxima@slrpnk.net · ↑10 · 19 hours ago

    Be prepared for Square Enix games to fail even EA’s QA standards in the near future 😅

  • Grass@sh.itjust.works · ↑10 · 1 day ago

    It’s a bit late in the game to be making idiotic claims, but I guess the default state for corpos is being out of touch.

    • UnderpantsWeevil@lemmy.world · ↑14 · 3 days ago

      I would initially tap the brakes on this, if for no other reason than “AI doing QA” reads more like corporate buzzwords than material policy. Big software developers should already have much of their QA automated, at least at the base layer. Further automating QA is generally good business practice, as it helps catch more bugs earlier in the Dev/Test cycle.
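      To illustrate what “base layer” automated QA usually means in practice (all names below are hypothetical, not anything Square Enix actually runs): a scripted regression test over game logic that runs on every build, long before any human or AI playtesting.

```python
# Minimal sketch of a base-layer automated QA check: a scripted
# regression test that runs in the Dev/Test cycle on every build.
# The function names here are hypothetical examples.

def apply_damage(hp: int, damage: int) -> int:
    """Toy game-logic function under test."""
    return max(0, hp - damage)

def test_apply_damage_never_negative():
    # Regression guard: HP must clamp at zero, never go negative.
    assert apply_damage(10, 25) == 0

def test_apply_damage_normal_case():
    assert apply_damage(100, 30) == 70

if __name__ == "__main__":
    test_apply_damage_never_negative()
    test_apply_damage_normal_case()
    print("all checks passed")
```

      In a real studio this kind of suite would run under a test runner in CI, catching breakage the moment a change lands rather than days later in manual playtesting.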

      Then consider that QA work by end users is historically a miserable and soul-sucking job. Converting those roles to debuggers and active devs does a lot for both the business and the workforce. Compared to “AI is doing the art,” this is night and day: the very definition of the “getting rid of the jobs people hate so they can do the work they love” that AI was supposed to deliver.

      Finally, I’m forced to drag out the old “95% of AI implementations fail” statistic. I’m far more worried that they’ll implement a model that costs a fortune and delivers mediocre results than that they’ll implement an AI-driven round of end-user testing.

      Turning QA over to the Roomba AI to find corners of the setting that snag the user would be Gud Aktuly.

      • Nate Cox@programming.dev · ↑13 · 3 days ago

        Converting those roles to debuggers and active devs does a lot for both the business and the workforce.

        Hahahahaha… oh wait, you’re serious. Let me laugh even harder.

        They’re just gonna lay them off.

        • UnderpantsWeevil@lemmy.world · ↑2 · 3 days ago

          They’re just gonna lay them off.

          And hire other people with the excess budget. Hell, depending on how badly these systems are implemented, you can end up with more staff supporting the testing system than you had doing the testing.

        • pixxelkick@lemmy.world · ↑1 · 3 days ago

          The thing about QA is the work is truly endless.

          If they can do their work more efficiently, they don’t get laid off.

          It just means a better percentage of edge cases gets covered. Even if you made QAs operate at 100x efficiency, they’d still have edge cases left uncovered.

      • binarytobis@lemmy.world · ↑5 · 3 days ago

        I was going to say, this is one job that actually makes sense to automate. I don’t know any QA testers personally, but I’ve heard plenty of accounts of them absolutely hating their jobs and getting laid off after the time crunch anyway.

      • Mikina@programming.dev · ↑2 · 2 days ago

        They already have a really cool solution for that, which they talked about in their GDC talk. I don’t think there’s any need to slap a glorified chatbot into this; it already seems to work well and has just the right amount of human input to be reliable, while leaving the “testcase replay gruntwork” to a script instead of a human.
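        For readers unfamiliar with the idea, “testcase replay” boils down to recording a sequence of player inputs once and then re-running it deterministically against the game after every build. A minimal sketch (the `GameState`/`step` names are my own illustration, not anything from the talk):

```python
# Sketch of "testcase replay" automation: record a sequence of player
# inputs once, then replay it deterministically against the game loop
# so a script, not a human, re-runs the scenario after every build.
# GameState, step, and replay are hypothetical illustrative names.

from dataclasses import dataclass, field

@dataclass
class GameState:
    x: int = 0
    y: int = 0
    log: list = field(default_factory=list)

MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def step(state: GameState, action: str) -> GameState:
    """Advance the toy game loop by one input."""
    dx, dy = MOVES[action]
    state.x += dx
    state.y += dy
    state.log.append(action)
    return state

def replay(recording: list) -> GameState:
    """Re-run a recorded input sequence and return the final state."""
    state = GameState()
    for action in recording:
        state = step(state, action)
    return state

# A recorded test case: the replayed end position acts as the oracle.
recording = ["up", "up", "right"]
final = replay(recording)
assert (final.x, final.y) == (1, 2)
```

        The human still decides which scenarios are worth recording and what the expected outcome is; the script only does the gruntwork of re-running them.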

  • Taldan@lemmy.world · ↑43 · 2 days ago

    So Square Enix is demanding OpenAI stop using their content, but is 100% okay with using AI built off stolen content to make more money themselves.

    As a developer, it bothers me that my code is being used to train the AI Square Enix is using, while they try to deny everyone else the ability to use their work.

    I could go either way on whether or not AI should be able to train on available data, but no one should get to have it both ways