• ItsMeForRealNow@lemmy.world · ↑ 1 · 1 day ago

    Counterpoint - I say it to mean ‘don’t trust this shit, but since we’re out of ideas, we can check this out’.

  • TryingSomethingNew@sopuli.xyz · ↑ 121 · 4 days ago

    I’m getting that more and more. “I asked ChatGPT and it said”. Dude, we work for the same company and I could have typed that in, and maybe I did. I wanted your experience with it, that’s why I asked you.

    Make sure they know they just lost input rights the next time. No, I don’t ask Harry; he just quoted GPT last time, and I’d already asked it this time, so there was no reason to involve him. Nothing is worse for a lead than people not wanting them to lead because they’ve abdicated the job to spicy autocorrect.

    • Zos_Kia@jlai.lu · ↑ 40 · 4 days ago

      Dude, we work for the same company and I could have typed that in, and maybe I did. I wanted your experience with it, that’s why I asked you.

      To me it’s like sending the “let me google that for you” link to answer a question. It’s just bad form. I don’t want your whole reasoning trace, man; I just want to know what you understand of it, and maybe you’ll catch some detail I’m missing, or whatever. It’s simple: I won’t read LLM output. My colleagues know it, and I get shit for it, but no, I am not digesting this material for you. Give me a three-bullet-point version in your own words; the point is not just the data exchange, it’s also to make sure you are aware of the answer and that we have a common truth.

      Or failing that, just give me the fucking prompt and at least I’ll know if you understand the question.

    • AliasAKA@lemmy.world · ↑ 19 · 4 days ago

      I think this is the way. After a certain number of rounds of “[coworker] wasn’t asked because they only respond with LLMs, so I just ask the LLMs directly; I’m not sure what [coworker]’s expertise is anymore, so I just don’t consult them,” I suspect the coworker may in fact stop responding with LLMs.

        • AliasAKA@lemmy.world · ↑ 3 · 3 days ago

          In my experience it is obvious. Calling people on it usually makes them feel embarrassed, too. I reply with something like “I could just ask an LLM myself if I wanted this output. Please provide your own commentary.” If I were a manager and I had an employee just copy-pasting that kind of output, I’d probably wonder whether that employee actually contributes anything.

  • NottaLottaOcelot@lemmy.ca · ↑ 25 · 3 days ago

    I’m flabbergasted that they admit that ChatGPT said it, rather than copy-pasting it and pretending it’s their own work and hoping you don’t read it closely.

    Even plagiarism has become lazy these days. At least do me the respect of concocting a lie.

    • Eranziel@lemmy.world · ↑ 22 · 3 days ago

      Some people seem to use it as an appeal to authority. This only works if you think ChatGPT is an authority on anything, though.

      • k0e3@lemmy.ca · ↑ 4 · 2 days ago

        I find some of my friends and family say it as sort of a caveat. It’s like saying, “here’s the bare minimum ‘research’ I did. Take it with a huge grain of salt…” At least, that’s how I interpret it from their tone of voice since they sound like they feel bad for admitting it.

      • NottaLottaOcelot@lemmy.ca · ↑ 8 · 3 days ago

        I suppose you’re right, which is odd to me, as the phrase “ChatGPT says…” automatically makes me question the validity of the information.

    • HereIAm@lemmy.world · ↑ 10 · 3 days ago

      I have a work colleague who does the copy-pasting. He asks me how I can tell he’s using AI to write git commit messages, when there’s a sudden spike in capitalised words, correct grammar, emojis, and bullet points (and, on top of that, the message sometimes has nothing to do with what’s in the changes). It’s infuriating when he uses it in a discussion. I thought his lack of skill in making himself understood was bad, but essentially arguing with a chatbot is so much worse.

  • neclimdul@lemmy.world · ↑ 49 · 4 days ago

    A lot of times I feel like it’s more than lazy, it’s rude.

    Either it’s something I’m supposed to know, and you think I’m dumber than ChatGPT or too dumb to look it up myself.

    Or it’s something you’re supposed to know, and you don’t think I’m worth the time it takes to give me your opinion.

    Either way, feels like a fuck you.

  • SharkAttak@kbin.melroy.org · ↑ 18 · 3 days ago

    A little while ago I overheard someone saying “…and not even the AI knew it!” They really have been convinced it’s like the AIs in the movies.

  • bthest@lemmy.world · ↑ 20 · edited · 3 days ago

    When you let AI do your talking for you, you are voluntarily making yourself redundant.

    BTW, your chatbot is no Cyrano de Bergerac. It does not fool others nearly as well as you think it does. And the more you use it, the more “smell-blind” you become to it, just like someone who has no idea they reek because their brain filtered the smell out long ago. Your use of AI becomes more and more obvious and cringe.

    • Jakeroxs@sh.itjust.works · ↑ 1 · 2 days ago

      I find the exact opposite: as I use AI more, I can more easily tell when others use it and try to hide it. People at work do it frequently.

  • d00ery@lemmy.world · ↑ 43 · 4 days ago

    Someone literally copied and pasted a whole ChatGPT comment into an email reply to some questions I’d asked them. I was somewhat insulted.

    • NekoKoneko@lemmy.world · ↑ 28 · 4 days ago

      You’re right to feel insulted. LLMs are verbose and unreliable often enough that you have to check any work that comes out (or be negligent).

      So what’s usually happening is someone is saving their time by spending yours. They saved the time normally needed to write a thoughtful reply by shifting the time and cognitive cost of reading and verifying to you, with AI as an excuse (often not without condescension, which is a type of “virtue signaling” driven by c-suite AI boosting). The slop output looks like “work product,” but is neither - it took no work and is a facade of a “product” because it’s unverified.

      They are being selfish, and it is objectively an insulting act.

    • Armok_the_bunny@lemmy.world · ↑ 5 · 4 days ago

      Put them on a list where any and every email they send you gets fed into GPT and replied to without you ever reading it; then, to make sure they know, explain what’s happening in your signature.
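
      For concreteness, here is a minimal sketch of that pipeline, assuming the official OpenAI Python SDK and a plain IMAP/SMTP mailbox; the blocklisted address, hostnames, and model choice are all made up:

          import imaplib
          import smtplib
          from email import message_from_bytes
          from email.message import EmailMessage
          from email.utils import parseaddr

          from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the environment

          # Hypothetical: the senders who only ever paste GPT output at you.
          BLOCKLIST = {"llm.enthusiast@example.com"}

          SIGNATURE = (
              "\n\n--\nThis reply was generated by GPT and sent unread, "
              "because you sent me unread GPT output first."
          )

          client = OpenAI()

          def auto_reply(imap_host: str, smtp_host: str, user: str, password: str) -> None:
              imap = imaplib.IMAP4_SSL(imap_host)
              imap.login(user, password)
              imap.select("INBOX")
              _, data = imap.search(None, "UNSEEN")
              for num in data[0].split():
                  _, parts = imap.fetch(num, "(RFC822)")
                  msg = message_from_bytes(parts[0][1])
                  sender = parseaddr(msg.get("From", ""))[1]
                  if sender not in BLOCKLIST:
                      continue  # everyone else still gets read by a human
                  payload = msg.get_payload(decode=True)
                  if payload is None:
                      continue  # multipart mail is skipped in this sketch
                  completion = client.chat.completions.create(
                      model="gpt-4o-mini",  # hypothetical model choice
                      messages=[{
                          "role": "user",
                          "content": "Write a brief reply to this email:\n\n"
                                     + payload.decode(errors="replace"),
                      }],
                  )
                  reply = EmailMessage()
                  reply["From"] = user
                  reply["To"] = sender
                  reply["Subject"] = "Re: " + (msg.get("Subject") or "")
                  reply.set_content(completion.choices[0].message.content + SIGNATURE)
                  with smtplib.SMTP_SSL(smtp_host) as smtp:
                      smtp.login(user, password)
                      smtp.send_message(reply)
              imap.logout()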

    • PhoenixDog@lemmy.world · ↑ 2 · 3 days ago

      I’ve had a Google Home Mini in my house for about 7 years now. I love it for quick answers when my partner and I are talking, especially about sports. I can ask a quick “Hey Google, how many goals does Alex Tuch have?” and it answers right away, and we continue our conversation without really stopping.

      But to actually get complex answers? Both my partner and I are highly intelligent people; we can find anything we need to. The last thing we’ll ever fucking do is trust AI to get it right, let alone make it the source of our information.

      Shit, even my Google Home fucks up sports stats, because AI is dumb as shit.

      • k0e3@lemmy.ca · ↑ 1 · 2 days ago

        I know sports scores aren’t all that critical even if they’re wrong, but aren’t you worried that it’s just making up random info?

        • PhoenixDog@lemmy.world · ↑ 2 · 2 days ago

          If it’s important enough, we’ll fact-check or search for it ourselves. There’ve been plenty of times when we’re having kitchen time together, involved in a rather complex conversation; if we ask it something and it comes back with an answer that doesn’t sound quite right, we’ll just look it up ourselves. It’s sometimes easier to ask it because we don’t need to break our conversation while it talks to us.

          We basically just use it as a very quick, simple resource for basic information. Mostly “What’s the weather tomorrow?” or something like that.

          It’s mostly used as a Bluetooth speaker for our kitchen.

  • RegularJoe@lemmy.world · ↑ 23 · 4 days ago

    ChatGPT isn’t on the team.

    Except that when someone pastes “ChatGPT thinks that {wall of AI-generated text}”, that person has put ChatGPT on the team. And if there was no human input, the competition is free to use it, and mock it, word for word. Use fear, uncertainty, and doubt to convince your team that anyone can use that output, including your competition, once it is published.

    The U.S. Copyright Office’s January 2025 report on AI and copyrightability reaffirms the longstanding principle that copyright protection is reserved for works of human authorship. Outputs created entirely by generative artificial intelligence (AI), with no human creative input, are not eligible for copyright protection.

    https://natlawreview.com/article/copyright-offices-latest-guidance-ai-and-copyrightability

  • leriotdelac@lemmy.zip · ↑ 9 · 3 days ago

    It’s the same as “Google said this”. Before AI, Google couldn’t “say” anything; it’s a search engine. Same with GPT: it’s a tool for accessing information from different sources.

    Just having information out on the Internet / in a search index / surfaced by an LLM doesn’t make it relevant or credible…

    And what baffles me: it’s pretty easy to set up GPT to cite sources and provide the links, filtering for sources the user trusts. Why do none of my friends do it? Why is “GPT said” even an argument in a discussion?
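
    The no-code version of this is just ChatGPT’s custom instructions (“always cite sources and give full URLs”). As a minimal sketch against the API, assuming the official OpenAI Python SDK and a made-up allowlist of trusted domains; this is prompt-level only, so the returned links still need a spot check:

        from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the environment

        client = OpenAI()

        # Hypothetical allowlist of domains the user trusts.
        TRUSTED = ["nature.com", "nist.gov", "docs.python.org"]

        SYSTEM_PROMPT = (
            "For every factual claim, name your source and give its full URL. "
            "Prefer these domains: " + ", ".join(TRUSTED) + ". "
            "If you cannot name a source, say so explicitly instead of guessing."
        )

        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical model choice
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": "How many moons does Jupiter have?"},
            ],
        )
        print(resp.choices[0].message.content)

    (Caveat: without a real search tool wired in, the model can still produce plausible-looking but wrong URLs, so the links themselves have to be checked. Which is rather the point of this thread.)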

    • Dozzi92@lemmy.world · ↑ 6 · 3 days ago

      Except people just straight-up copy-paste GPT output. At the very least, people used to say “I googled and got this result and that result.” We’ve taken what was minimal work and made it minimaler.

  • Encrypt-Keeper@lemmy.world · ↑ 9 · 4 days ago

    I got this response from a 70+ Catholic Priest. Quite literally nothing in this world is sacred or real anymore.

    • ulterno@programming.dev · ↑ 7 · 4 days ago

      Considering that despite going past lvl 70 he stuck with Catholic Priest instead of Saint, Warlock, or Archmage, that should already make you question his decision-making ability.

  • GaMEChld@lemmy.world · ↑ 2 · 3 days ago

    A simulation is only as accurate as the person’s ability to rationalize. It should only be used by people who can already outthink it, because you need to be able to challenge and correct it.