Welcome to the modern reality of generative AI, and its application for our health. There have always been health sites online, but this tool is qualitatively different, and bears scrutiny.
Many have already pivoted to this form of information, and it’s totally understandable. There are SO many avenues to attain health information, and it can be overwhelming. AI just gives you the answer — poof, easy peasy.
But of course there’s a downside: you can’t know whether the answers it produces are accurate, misleading, or flat-out incorrect.
And that uncertainty matters, because research shows that almost 40% of consumers use generative artificial intelligence (AI) for health-related purposes.
That may sound good, but these AI systems are prone to a phenomenon called ‘AI hallucination,’ in which they produce answers that are misleading, flat-out wrong, or simply made up.
Analyses of these hallucinations indicate that they can occur between 3% and 30% of the time, depending on the product.
The unfortunate, somewhat scary aspect is that AI hallucinations look perfectly normal and reasonable, even though the advice may turn out to be valid, a waste of time, or dangerous to your health.
So, the bottom line is that this technology holds great promise to help us live healthier lives, but we have a long way to go still before completely letting our guard down around it.
Do your own fact checking. If you ask it about a certain health condition, medication, or symptom, the system you’re using should cite references where it got the information from.
But just citing research isn’t enough to completely trust the advice you’re getting, because AI hallucinations can even fabricate completely fraudulent medical research citations.
So even when it does provide citations, open the links and scan them to confirm they actually support the claims. And for any medical changes or concerns, always, always check with your doctor, who can help you navigate this new territory of AI health.