• octopus_ink@slrpnk.net · 68 points · 5 days ago

    I love that the only AI goal the oligarchy can focus on is making sure we can all use it to work more.

    • PrettyFlyForAFatGuy@feddit.uk · 41 points · 5 days ago

      If you can be in three meetings at once with AI then every single one of those meetings could have been an email

      Or a group chat

      • Frezik@lemmy.blahaj.zone · 17 points · 5 days ago

        There’s meetings other people need to have and I just need to know broadly what was said. Transcription and summarizing would be great for that.

        That is, if I could trust its accuracy. Which I don’t.

          • Lyrl@lemmy.dbzer0.com · 2 points · 4 days ago

            The skills of both writing useful minutes and prioritizing actually sending them out are frustratingly rare. An average meeting with five or six people has even odds of not including someone with both of those skills. I can see where reliably having a mediocre AI summary might be an advantage over sometimes having superb human-written minutes and sometimes having nothing.

      • mrgoosmoos@lemmy.ca · 6 points · 5 days ago

        that’s pretty much where we are now

        shit minimum wage, corporations owning housing, and monopolies in pretty much every market. it’s just slavery with the illusion of freedom because you can choose which shitty apartment building to live in for over half your income, and which franchise stores you shop at, while your essentials are getting price gouged and constantly worse quality for higher cost, yet the workers don’t make more

        that’s just slavery with extra steps

        • _g_be@lemmy.world · 1 point · 2 days ago

          Too true. The real steal of the century is convincing the commons that their lack of success is a personal failing rather than a system designed to keep them down.

  • Sam_Bass@lemmy.world · 10 points · 4 days ago

    Pretty sure its main function is to back up your data to a cloud that’s fully accessible by microsloth

    • ExcessShiv@lemmy.dbzer0.com · 27 points · 5 days ago

      According to the M365 Copilot monitoring dashboard made available in the trial, an average of 72 M365 Copilot actions were taken per user.

      “Based on there being 63 working days during the pilot, this is an average of 1.14 M365 Copilot actions taken per user per day,” the study says. Word, Teams, and Outlook were the most used, and Loop and OneNote usage rates were described as “very low,” less than 1 percent and 3 percent per day, respectively.

      Yeah, that probably won’t have the intended effect… this basically just shows that AI assistants provide no benefit when they’re not used, and nothing else.
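As a sanity check on the quoted numbers, the per-day figure does follow from the totals (a quick awk one-liner, purely illustrative):

```shell
# Sanity-check the study's arithmetic: 72 Copilot actions per user,
# spread over the 63 working days of the pilot.
awk 'BEGIN { printf "%.2f\n", 72 / 63 }'
```

That prints 1.14, matching the study’s “1.14 M365 Copilot actions taken per user per day.”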

      • Echo Dot@feddit.uk · 13 points · 5 days ago

        We have it on our system at work. When we asked what management expected it to be used for they didn’t have an answer.

        We have a shell script that ingests a list of user IDs, resets their Active Directory passwords, locks each account, and then sends the user an email telling them to contact the support desk to unlock it. It’s a cron job that runs every Monday morning.

        Why do we need an AI when we can just use that? A script that can be easily read, understood, and upgraded, with no concerns about it going off-piste and doing something random and unpredictable.
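A job like that really is only a few lines of shell. A minimal dry-run sketch of the idea (the samba-tool subcommands, mail invocation, and user list are assumptions; it echoes the commands instead of running them):

```shell
#!/bin/sh
# Dry-run sketch of the weekly job described above: reset each account's
# password, lock the account, and email the user. Echoes the commands it
# would run; swap echo for the real tools in production.
reset_and_lock() {
    uid="$1"
    newpass=$(head -c 12 /dev/urandom | base64)   # throwaway random password
    echo "samba-tool user setpassword $uid --newpassword=$newpass"
    echo "samba-tool user disable $uid"
    echo "mail -s 'Account locked - contact the support desk' $uid@example.com"
}

# Feed it the user list (inline here; the cron entry would read a file):
printf 'alice\nbob\n' | while IFS= read -r uid; do
    reset_and_lock "$uid"
done
```

Exactly the kind of thing that stays readable and auditable in a way a model’s output never is.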

        So yeah, they don’t use it, because it won’t work.

        • sugar_in_your_tea@sh.itjust.works · 1 point · 5 days ago

          Well yeah, AI shouldn’t replace existing, working solutions, it should be used in the research phase for new solutions as a companion to existing tools.

      • Jhex@lemmy.world · 10 points · 5 days ago

        …this basically just shows that AI assistants provide no benefit when they’re not used and nothing else.

        so you think they may be useful but people just like to work harder? or perhaps they tried it, saw no benefit at all, and moved on?

        • ExcessShiv@lemmy.dbzer0.com · 7 points · 5 days ago

          Having been part of multiple projects introducing new software tools (not AI) to departments before, people are usually just stubborn and don’t want to change their ways, even if the new tool enables a smoother workflow with minimal training/practice. So yeah, basically people are so set in their ways, it is often hard to convince them something new will actually make their job easier.

          • Jhex@lemmy.world · 4 points · 4 days ago

            The devil is in the details… what you describe screams to me what I call the “new boss syndrome”. New boss comes in and they feel the need to pee on everyone to mark their territory so they MUST bring in some genius change.

            99% of the time, they are bringing in forced change for the sake of change, or something that worked at their previous place, without taking the new context into consideration.

            I do not know anyone who prefers to work harder… either the changes proposed make no sense (or it’s too complex for people to understand the benefit) or the change is superfluous. That is usually where resistance to change comes from.

          • rebelsimile@sh.itjust.works · 2 points · 5 days ago

            In all your software deployments did you blame the users for not getting it or did you redesign the software because it sucked (according to your users)?

            • Lyrl@lemmy.dbzer0.com · 1 point · 4 days ago

              I’ve occasionally been part of training hourly workers on software new to them. Having really, really detailed work instructions and walking through all the steps with them the first time has helped me win over people who were initially really opposed to the products.

              My experience with salaried workers has been they are more likely to try new software on their own, but if they don’t have much flexible time they usually choose to keep doing the established less efficient routine over investing one-time learning curve and setup time to start a new more efficient routine. Myself included - I have for many years been aware of software my employer provides that would reduce the time spent on regular tasks, but I know the learning curve and setup is in the dozens of hours, and I haven’t carved out time to do that.

              So to answer the question, neither. The problem may be neither the software nor the users, but something else about the work environment.

              • rebelsimile@sh.itjust.works · 1 point · 5 days ago

                That’s not what I’m asking. You designed or built something for some users. They didn’t like it, or didn’t use it as you expected. Was your response to change the software or blame the users for not using it correctly?

                • sugar_in_your_tea@sh.itjust.works · 2 points · 5 days ago

                  That depends on the issue. Sometimes it’s a lack of training, sometimes it’s obtuse software. That’s a call the product owner needs to make.

                  For something like AI, it does take some practice to learn what it’s good at and what it’s not good at. So there’s always going to be some amount of training needed before user complaints should be taken at face value. That’s true for most tools, I wouldn’t expect someone to jump in to my workflow and be productive, because many of the tools I use require a fair amount of learning to use properly. That doesn’t mean the tools are bad, it just means they’re complex.

      • panda_abyss@lemmy.ca · 6 points · 5 days ago

        Worth noting the average includes the people who did use it a lot, too.

        So you can conclude the typical user basically did not use it at all.

  • cerebralhawks@lemmy.dbzer0.com · 45 points · 5 days ago

    Yeah, no shit. But they nearly doubled the price. I canceled my membership, but I doubt enough did to actually matter.

    I was fine paying $60 a year for Office. I was never gonna use the AI stuff. When they said it was $100, I bailed. So now they don’t get the $60. But enough people will go on paying that they will actually make more money on Office in the next year, not less.

    Not enough people are willing to vote with their wallets or even their feet to effect any meaningful change. At least not when it comes to their tech toys.

    • FlashMobOfOne@lemmy.world · 11 points · 5 days ago

      Not enough people are willing to vote with their wallets

      That and most governments are wrapped up in Windows, and therefore kinda just captive to the insane pricing. I get everything I need out of LibreOffice, personally.

    • jubilationtcornpone@sh.itjust.works · 8 points · 5 days ago

      The sole reason I still pay the Microsoft tax is Excel. Other office suite components are generally good enough to fill in for their Microsoft counterparts. But, spreadsheet programs are one area where open source competitors need to get their shit together.

      Most of them can do the basics but Excel is still in a class by itself for power users and advanced functionality. That’s a real bummer because I would love to stop paying the Microsoft tax.

      • Zexks@lemmy.world · 1 point · 4 days ago

        It’s the VBA. It’s proprietary, not for sale anymore, and there’s no good free replacement. I’ve been writing a reporting system that needs scripting and have had to use JavaScript and heavily comment things for end users to even understand what is happening.

  • tekato@lemmy.world · 19 points · 5 days ago

    I don’t see where a government would need a chatbot. Anyways, chances are that half the staff was already using some form of LLM before this trial.

      • Treczoks@lemmy.world · 16 points · 5 days ago

        The point is that this is all happening in a cloud. One that is probably located in the US. Not a good thing for a non-US government to send potentially confidential or even secret data to.

        • sugar_in_your_tea@sh.itjust.works · 5 points · 5 days ago

          It doesn’t have to, you can run LLMs locally. We do at my org, and we only have a few dozen people using it, and it’s running on relatively modest hardware (Mac Mini for smaller models, Mac Studio for larger models).
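In practice, “running locally” usually just means pointing clients at an OpenAI-compatible HTTP endpoint on your own hardware, so nothing leaves your network. A hedged sketch (the port is the common Ollama default; the model name is an assumption):

```shell
# Build a chat-completion request body for a locally hosted model.
# MODEL defaults to "llama3" if not set in the environment.
build_payload() {
    printf '{"model":"%s","messages":[{"role":"user","content":"%s"}]}' \
        "${MODEL:-llama3}" "$1"
}

# Send it to the local server (uncomment; URL is an assumption):
# curl -s http://localhost:11434/v1/chat/completions \
#      -H 'Content-Type: application/json' \
#      -d "$(build_payload 'Summarize the internal VPN docs')"
```

Since the endpoint speaks the same API shape as the hosted services, existing tooling can be repointed at it without code changes.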

          • squaresinger@lemmy.world · 5 points · 4 days ago

            Yeah, shitty toy ones. This here is about productivity, not about a hobby. And not even real state-of-the-art models were able to actually give a productivity advantage.

            • sugar_in_your_tea@sh.itjust.works · 1 point · 3 days ago

              Our self-hosted ones are quite good and get the job done. We use them a lot for research, and it seems to do a better job than most search engines. We also link it to internal docs and it works pretty well for that too.

              If you run a smaller model at home because you have limited RAM, yeah, you’ll have less effective models. We can’t run the top models on our hardware, but we can run much larger models than most hobbyists. We’ve compared against the larger commercial models, and ours work well, if a little slowly.

  • Katana314@lemmy.world · 15 points · 4 days ago

    Ugh, thought this could’ve referred to a Trial as in “All rise for the judge”, not Trial as in “Your free trial has expired”.

    We’re way overdue to put AIs on former trials.

    • Animated_beans@lemmy.world · 6 points · 4 days ago

      I use Copilot for generating images of concepts for presentations at work. It helps me get my point across and no accuracy is needed because it is taking the place of clip art and Google image searches. There is absolutely a place for Generative AI in the workplace. Whether it is worth the cost and whether people are trusting it too much is another question.

    • Lemminary@lemmy.world · 2 points · 4 days ago

      It helps me get there more often than not, anywhere from programming I’m unfamiliar with to brainstorming in graphic design. I see a lot of anti-AI folks diss it without considering how it’s actually used. It’s a tool like any other, and you get what you make of it.

  • AceBonobo@lemmy.world · 16 points · 5 days ago

    From reading the study, it seems like the workers didn’t even use it. Less than 2 queries per day? A third of participants used it once per week?

    This is a study of resistance to change or of malicious compliance. Or maybe it’s a study of how people react when you’re obviously trying to take their jobs.

    • Echo Dot@feddit.uk · 26 points · 5 days ago

      I don’t think it’s people being resistant to change, I think it’s people understanding the technology isn’t useful. The tagline explains it best:

      AI tech shows promise writing emails or summarizing meetings. Don’t bother with anything more complex

      It’s a gimmick, not a fully fleshed-out productivity tool, so of course no one uses it. That’s like complaining that no one uses MS Paint to produce high-quality graphics.

    • thehatfox@lemmy.world · 11 points · 5 days ago

      The figures are the averages for the full trial period.

      So it’s possible they were making more queries at the start of the trial, but then mostly stopped when they found using Copilot was more of a hindrance than a help.

      • Elvith Ma'for@feddit.org · 11 points · 5 days ago

        I have a Copilot license at work. We also have an in-house “ChatGPT clone” - basically a private deployment of the model, so that (hopefully) no input data gets used to train the models.

        There are some use cases that are neat. E.g. we’re a multilingual team, so having it transcribe, translate, and summarize a meeting makes it easier to finalize and check the minutes. Coming back from a vacation and asking it to summarize everything you missed in a specific area of your work (to get on track before checking everything chronologically) can be nice, too.

        Also, we fine-tuned a model to assist us in writing and explaining code in a domain-specific language that we use for a tool - it has many strange quirks and poor support from off-the-shelf LLMs.

        But all of these cases have one thing in common: they do not replace the actual work, and the output is something that will be checked anyway (even the code one - we know there are still many flaws, and it’s usually great at explaining the code now, not so much at writing it). It’s just a convenient way to check your own work - and LLM hallucinations will usually be caught anyway.

  • Schlemmy@lemmy.ml · 6 points · 5 days ago

    Because they don’t know how to use it.

    I work for the government and we’re trialing Copilot too.

    Yesterday I gave Copilot several legal documents and our department’s long-term goals, and asked it to analyse those documents and find opportunities, legal complications, and a matrix of proposed actions.

    In less than 5 minutes I had a great overview to start talks with local politicians. This would have taken me at least a day before AI.

    • Kit@lemmy.blahaj.zone · 7 points · 5 days ago

      100%. I’m also trialing Copilot at a medium-sized corpo job and it saves me roughly 12-20 hours of work per week.

      I use it often in PowerShell scripting. It occasionally hallucinates and makes up commands, so sometimes it takes a bit of back and forth to get it to do what I want, but it’s still a hundred times easier than writing from scratch or tweaking+combining similar scripts I find online.
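One cheap guard against made-up commands is to verify that every command an AI-generated script calls actually exists before running it. Shown here in POSIX shell (PowerShell’s Get-Command plays the same role); the command names are just examples:

```shell
# Check that each command an AI-generated script relies on actually
# exists on this system before executing the script itself.
missing=0
for cmd in grep awk definitely-hallucinated-cmdlet; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
        echo "not found: $cmd"
        missing=1
    fi
done
# Refuse to continue if anything was missing:
[ "$missing" -eq 0 ] || echo "aborting: generated script references unknown commands"
```

It won’t catch wrong flags or bad logic, but it cheaply filters out the most blatant hallucinations before they run.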

      Probably my favorite part is being able to ask it “Where did I leave off with John on x issue last week?” and it will remind me that I’m supposed to do x and John is supposed to do y. Or even, “I helped a user with this specific issue six months ago. How did I fix it?” and it pulls the exact email and Teams chats outlining what we did, and I can click the link to open those messages and ensure it didn’t misinterpret. Way easier than digging by hand.

      Finally, I absolutely hate making PowerPoints so I’ve been having it make all of my rough drafts from transcription notes in meetings. Super nice time saver.

      Something I’m concerned about and playing with this week is pronoun usage in transcripts. I’m working with our LGBTQ ERG to ensure that we can make Copilot use preferred pronouns for everyone. If it can’t, we’ll need to pull back certain features.

      It’s far from perfect but it genuinely makes my job a lot easier and I’d hate to lose it. I think it will only get better from here.

  • jaykrown@lemmy.world · 1 point · 3 days ago

    “speeding up some tasks yet making others slower due to lower quality outputs”

    So use it for the tasks that were made more efficient, and stop using it for the ones that slowed down or were low quality.

  • Schlemmy@lemmy.ml · 3 points · 5 days ago

    I’ve shown my coworkers some practical implementations of Copilot and that was enough to kickstart its use.

    If you’re composing the same emails a lot, for example, you can ask Copilot to make a template text, and then when you have to compose the same email again you ask Copilot to compose and personalize it for you. That’s an awesome function.

    I’ve made an agent that answers HR-related questions from my team. This saves me and HR a lot of time, and they are assured their questions are handled discreetly.

  • Prior_Industry@lemmy.world · 2 points · 5 days ago

    Seeing a big uptake in the education sector. Teachers are paying for their own ChatGPT Pro licenses to do lesson planning, etc.

    Can’t comment at this point on whether that’s right or wrong; you’d hope the teachers using it would identify hallucinations, etc. But you can see there is already a change occurring.