
ChatGPT Health: The Shadow AI Tool Already Inside Your Project

  • Writer: Yoshi Soornack
  • Jan 10
  • 5 min read

Updated: Jan 13

230 million weekly users prove AI health tools need guidance, not prohibition


OpenAI's launch of ChatGPT Health on 7 January creates opportunities for forward-thinking project leaders who understand how consumer AI shapes professional contexts. The tool is designed for individuals and clinicians, but the real story is how organisations can guide AI adoption toward strategic advantage rather than letting it happen by accident.



Here's what OpenAI announced: a dedicated ChatGPT experience with purpose-built privacy architecture for health conversations. Users can connect medical records, Apple Health, MyFitnessPal and other wellness apps. Health data stays isolated, encrypted and separated from other chats. Over 230 million people globally ask ChatGPT health questions weekly. Over 40 million use it daily. Seventy per cent of health-related chats happen outside normal clinic hours.


Now consider the opportunity: how many of those 230 million weekly users work on your projects and could benefit from clear guidance on appropriate use?


Understanding the Adoption Pattern

OpenAI worked with 260 physicians across 60 countries who provided 600,000-plus feedback instances to develop ChatGPT Health. The product helps users decode medical jargon, spot billing errors, prepare for doctor visits and understand test results. It connects to actual medical records through a partnership with b.well, which provides health data connectivity infrastructure.


Fidji Simo, OpenAI's CEO of applications, shared a telling anecdote during the press preview. After being hospitalised for a kidney stone, she developed an infection. A resident prescribed a standard antibiotic, but Simo checked it against her medical history in ChatGPT. The AI flagged that the medication could reactivate a life-threatening infection she had suffered years before.


"The resident was relieved I spoke up, she told me she only has a few minutes per patient during rounds, and that health records aren't organized in a way that makes it easy to see," Simo said. Source: Fortune

That story demonstrates both capability and opportunity. An individual used a consumer AI tool to enhance clinical decision-making. The resident welcomed the intervention. The question for project leaders is how to create frameworks that enable this kind of beneficial use whilst managing organisational considerations around data, liability and governance.


Where Guidance Creates Value

OpenAI positions ChatGPT Health for personal wellness, not professional medical advice. The product explicitly states it is "not intended for diagnosis or treatment." But organisations can help their people understand where the tool adds value and where it doesn't.


Consider scenarios where clear guidance helps: a project team member uses ChatGPT Health to better understand occupational health assessments before discussions with HR, to prepare more informed questions for workplace health consultations, or to understand wellness programmes the organisation offers. The tool isn't replacing professional advice. It's helping people engage more effectively.


What enables this is policy that defines appropriate use clearly. Most organisations have generic IT acceptable use statements that predate consumer AI. The opportunity lies in creating guidance that helps people capture value whilst understanding boundaries.


We believe organisations that provide clear frameworks for AI tool use will capture an advantage over those that either prohibit use (ineffective) or ignore it (risky). Project leaders who build this guidance proactively will enable their teams whilst managing considerations around data protection, liability and assurance.


Building Effective Frameworks

OpenAI has built robust privacy controls into ChatGPT Health. Conversations are stored separately, not used to train foundation models, and protected by purpose-built encryption. Health information and memories never flow into non-health chats. Users can view or delete health memories at any time.


These controls address individual privacy. Organisations can build on them by adding guidance around professional use contexts.


Effective frameworks address practical questions:

  • Can team members use AI to better understand occupational health information shared with them? Yes, with appropriate boundaries.

  • Should AI tools influence formal project decisions about personnel health matters? No, those require professional expertise.

  • Where does personal use end and professional use begin? Clear examples help people navigate this.


Most project teams benefit from transparency about AI tool adoption. People are already using these tools because they're fast, accessible and genuinely helpful. The opportunity lies in helping them use tools effectively rather than creating policies that push use underground.


Practical Steps That Create Advantage

Forward-thinking organisations can capture advantage by building clear AI guidance now:


  • Acknowledge that ChatGPT Health is likely already in use. It's free, globally accessible (except EEA, Switzerland and UK initially), and addresses real needs. Clear guidance works better than prohibition or silence.


  • Create concrete guidance with practical examples. Team members can use AI to understand personal health information. They should not use AI for formal decisions about others' health matters or workplace accommodations.


  • Build awareness of how AI tools enhance professional effectiveness. ChatGPT Health can help someone prepare better questions for occupational health consultations, leading to more productive conversations and outcomes.


  • Establish proportionate review processes. Not every AI interaction requires oversight, but AI-informed inputs to significant decisions benefit from human verification before they shape major programme choices.


  • Stay current as capabilities evolve. OpenAI will expand features over time. Regular reviews of adoption patterns help organisations adapt guidance as technology advances.


The Strategic Opportunity

ChatGPT Health is one product from one company, but the pattern applies across AI adoption. Google, Microsoft and Amazon are all building AI interfaces for different sectors. Each is designed to enhance how people interact with information and services.


Forward-thinking organisations recognise this trend and build frameworks that enable effective use. The stated purpose of each tool is specific and beneficial. The actual use will be broader and more varied. Teams will adapt tools to their contexts because the tools are available, they work and they create value.


Our take on this: project leaders should embrace AI as a capability their people are already using and provide guidance that helps them use it well. The opportunity is substantial. Clear frameworks enable teams to capture value from AI tools whilst managing appropriate considerations around data, liability and assurance.


OpenAI's announcement signals a direction. The AI interface for consumer healthcare is opening. Other sectors are following similar paths. The organisations that build enabling frameworks early will capture advantage. The organisations that build restrictive frameworks or no frameworks will discover their people are using tools anyway, just without guidance or oversight.


The Path Forward for Project Leaders

This moment creates opportunity for project leaders who move decisively. Acknowledge reality: ChatGPT Health is likely already in use, and prohibition is neither viable nor valuable. Provide clear guidance with specific examples. Invest in awareness that turns AI tools into capability amplifiers. Establish proportionate review processes matching the stakes.


The alternative is people using tools without guidance, leaving the organisation at a competitive disadvantage versus those that enable effective use. The window for building enabling frameworks is open now.


The project leaders building tomorrow's capability are creating AI frameworks today. Subscribe to Project Flux for practical guidance on turning AI adoption into advantage. The organisations moving first are already ahead.



All content reflects our personal views and is not intended as professional advice or to represent any organisation.


 
 
 
