It’s not just far. LLMs inherently make stuff up (aka hallucinate). There is no cure for that.
There are some (non-LLM, but still neural-network-based) tools that can be somewhat useful, but a real doctor needs to do the job anyway, because all of them have some chance of being wrong.
Not only is there a cure, it’s already available: most models now provide sources for their claims. Of course, that requires the gargantuan effort of clicking a link, so most users don’t bother and complain instead.