As someone who works on integrating AI: it's failing badly.
At best, it's good for transcription, at least until it hallucinates and adds things to your medical record that don't exist. Which it does. And when providers don't check for errors (which few do regularly), congrats: you now have a medical record of whatever it hallucinated today.
And for customer service, they're no better than answering machines. Sure, they can answer basic questions, but so can the automated phone systems.
They can't consistently handle anything more complex without making errors, and most people are frankly too dumb or lazy to properly verify the outputs. That's why this bubble is so huge.
It is going to pop, messily.
This is what drives me nuts the most about it. We had so many incredibly efficient, purpose-built tools using the same technologies (machine learning and neural networks), and we threw them away in favor of wildly inefficient, general-purpose LLMs that can't do a single thing right. All because marketing hype convinced billionaires they won't need to pay people anymore.
I tried having it identify an unknown integrated circuit. It hallucinated a chip, and it kept giving me non-existent datasheets and 404 links to DigiKey, Mouser, etc.
As someone who actually develops AI tools (I just use existing models): it's absolutely NOT failing.
Lemmy is ironically incredibly tech illiterate.
It can be working and good and still be a bubble, you know that, right? A lot of AI is overvalued, but saying it's "failing badly" is absurd and helps absolutely no one.
insurance companies, oh no, insurance companies !!! AArrrggghhh !!!
If you define "failing" as being unable to do everything correctly, then sure, I'd concur.
However, if you define "failing" as failing to replace people in their jobs, I'd disagree: it is replacing them, even though it doesn't meet the first criterion.