You’re right to feel insulted. LLMs are verbose and unreliable often enough that you have to check any work that comes out (or be negligent).
So what’s usually happening is someone saving their time by spending yours. They skipped the work of writing a thoughtful reply and shifted the cost of reading and verifying onto you, with AI as the excuse (often with a dose of condescension, a kind of “virtue signaling” driven by c-suite AI boosting). The slop output looks like “work product,” but it is neither: it took no work, and it’s only a facade of a “product” because nobody verified it.
They are being selfish, and it is objectively an insulting act.
Put them on a list where every email they send you gets fed into GPT and replied to without you ever reading it. Then, to make sure they know, explain what’s happening in your signature.
Someone literally copy and pasted a whole ChatGPT comment in an email reply to some questions I’d asked them. I was somewhat insulted.