“AI is nowhere near to being ready to replace you at your job. It is, however, ready enough to convince your boss that it’s ready to replace you at your job.”
More likely, the bosses are trying to convince the AI that it’s ready.
This is nothing new, though. For decades, managers have fallen for “solution in a box” sales pitches even when front-line workers could tell the product was doomed to fail the moment they set eyes on it. This time the solution just happens to be “AI.”
Unpopular opinion incoming:
I don’t think we should ignore AI diagnoses just because they are sometimes wrong. The whole point of an AI diagnosis is to catch things physicians don’t. No AI diagnosis reaches a patient without a physician double-checking it anyway.
For that reason, I don’t think it’s necessarily a bad thing that an AI got it wrong. The suspicion was still raised, and the physicians double-checked. To me, that means the tool is working as intended.
If the patient had been insistent enough that something was wrong, they would have pushed the physicians to double-check or sought a second opinion anyway.
Flaming the AI for not being correct is missing the point of using it in the first place.
“I don’t think it’s necessarily a bad thing that an AI got it wrong.”
I think the bigger issue is why the AI model got it wrong. It got the diagnosis wrong because it is a language model and is fundamentally not fit for use as a diagnostic tool, not even as a screening or aid tool for physicians.
There are AI tools designed specifically for medical diagnosis, and those are indeed a major value-add for patients and physicians.
Fair enough
“AI convinced me of something I later learned was completely incorrect, isn’t that amazing!”
No. No, this is bad. Very bad.
That just sounds like a Magic 8 Ball with some statistics sprinkled on top.
The minute I see some tool praising the glory of AI, I block them. Engaging with them is a waste of time.