Sharing Health Data with Chatbots: A Risky Proposition
Your health data is sacred, but should you trust a chatbot with it? With the rise of AI, millions are turning to chatbots like ChatGPT for health advice, a trend that has sparked both excitement and concern.
Every week, an astonishing 230 million people seek health and wellness guidance from ChatGPT, according to OpenAI. The company positions its chatbot as a trusted ally, helping users navigate the complex world of insurance and paperwork. But here's the catch: it wants you to share your most private medical details, from diagnoses to test results. It's like a virtual doctor's office, but without the legal obligations that bind actual healthcare providers. Experts warn that users should think twice before handing over their records.
The health and wellness sector is becoming a battleground for AI companies, testing users' willingness to embrace these technologies. This month, OpenAI and Anthropic made significant moves into healthcare. OpenAI introduced ChatGPT Health, a dedicated space for health queries, promising enhanced security and personalization. Anthropic launched Claude for Healthcare, a product compliant with HIPAA (Health Insurance Portability and Accountability Act) regulations. Interestingly, Google, with its widely used Gemini chatbot, has been relatively quiet on the healthcare front, despite recent updates to its MedGemma medical AI model.
OpenAI encourages users to share sensitive health data, including medical records and lab results, in exchange for personalized insights. The company assures users that this data will be kept confidential and won't be used to train its models. But OpenAI's track record and the ever-shifting nature of privacy policies leave room for doubt.
OpenAI simultaneously launched ChatGPT for Healthcare, a separate product with tighter security aimed at businesses and clinicians, and the near-identical names and launch dates make it easy to mistake the consumer-facing ChatGPT Health for its more secure counterpart. That confusion could lead users to believe their data is safer than it actually is.
And here's where it gets thorny: even if a company promises to protect your data, can you trust that promise? Experts argue that current privacy law offers little protection. Users must rely on terms of use and privacy policies, which companies can change at any time. As Sara Gerke, a law professor, points out, data protection for AI tools like ChatGPT Health rests largely on company promises, not legal safeguards.
OpenAI encrypts health data, but Hannah van Kolfschooten, a digital health law researcher, warns that ChatGPT's terms of use can change, leaving users exposed. Carmel Shachar, a law professor at Harvard, agrees, emphasizing that the protections are limited and privacy practices can shift over time.
Compliance with HIPAA, a key healthcare data protection law, may not provide much reassurance either. As Shachar notes, voluntary compliance doesn't carry the same weight as legal obligation. The true value of HIPAA lies in its enforcement mechanisms.
Why is medicine so heavily regulated? It's not just about privacy. The strict rules exist because mistakes can be deadly, and chatbots have a history of dispensing false or misleading health information. In one case, ChatGPT suggested replacing table salt with sodium bromide, a compound once used as a sedative; the person who followed that advice developed bromism, a rare form of bromide poisoning. Google's AI Overviews have likewise served dangerous advice to cancer patients.
OpenAI says ChatGPT is not meant for diagnosis or treatment, yet it has invested heavily in showcasing the chatbot's medical capabilities, even inviting patients on stage to describe how ChatGPT helped them. The company built HealthBench, a benchmark for assessing ChatGPT's medical skills, though critics argue it lacks transparency. Studies, some of them small or company-run, suggest ChatGPT can pass medical exams, improve patient communication, and even outperform doctors on certain diagnoses.
OpenAI's efforts to establish ChatGPT Health as a trusted health resource may drown out its medical disclaimers. Van Kolfschooten points out that once a personalized system projects an air of authority, disclaimers do little to dent users' trust.
OpenAI and Anthropic are vying for a prominent position in the healthcare AI market, and the numbers point to a growing reliance on chatbots for health. Given global health disparities and barriers to care, that reliance could do real good. But the question remains: is the trust warranted? Healthcare providers earn our trust through years of rigorous training and binding ethical obligations. Can AI companies, known for moving fast, earn the same level of trust with our most sensitive data?