
ChatGPT Health: The Shadow AI Tool Already Inside Your Project

  • Writer: Yoshi Soornack
  • 1 day ago
  • 6 min read

230 million weekly users prove AI health tools need guidance, not prohibition


OpenAI's launch of ChatGPT Health on 7 January creates opportunities for forward-thinking project leaders who understand how consumer AI shapes professional contexts. The tool is designed for individuals and clinicians, but the real story is how organisations can guide AI adoption toward strategic advantage rather than letting it happen by accident.



Here's what OpenAI announced: a dedicated ChatGPT experience with purpose-built privacy architecture for health conversations. Users can connect medical records, Apple Health, MyFitnessPal and other wellness apps. Health data stays isolated, encrypted and separated from other chats. Over 230 million people globally ask ChatGPT health questions weekly. Over 40 million use it daily. Seventy per cent of health-related chats happen outside normal clinic hours.


Now consider the opportunity: how many of those 230 million weekly users work on your projects and could benefit from clear guidance on appropriate use?


Understanding the Adoption Pattern

OpenAI worked with 260 physicians across 60 countries who provided 600,000-plus feedback instances to develop ChatGPT Health. The product helps users decode medical jargon, spot billing errors, prepare for doctor visits and understand test results. It connects to actual medical records through a partnership with b.well, which provides health data connectivity infrastructure.


Fidji Simo, OpenAI's CEO of Applications, shared a telling anecdote during the press preview. After being hospitalised for a kidney stone, she developed an infection. A resident prescribed a standard antibiotic, but Simo checked it against her medical history in ChatGPT. The AI flagged that the medication could reactivate a life-threatening infection she had suffered years before.


"The resident was relieved I spoke up, she told me she only has a few minutes per patient during rounds, and that health records aren't organized in a way that makes it easy to see," Simo said. Source: Fortune

That story demonstrates both capability and opportunity. An individual used a consumer AI tool to enhance clinical decision-making. The resident welcomed the intervention. The question for project leaders is how to create frameworks that enable this kind of beneficial use whilst managing organisational considerations around data, liability and governance.


Where Guidance Creates Value

OpenAI positions ChatGPT Health for personal wellness, not professional medical advice. The product explicitly states it is "not intended for diagnosis or treatment." But organisations can help their people understand where the tool adds value and where it doesn't.


Consider scenarios where clear guidance helps: a project team member uses ChatGPT Health to better understand occupational health assessments before discussions with HR. Or to prepare more informed questions for workplace health consultations. Or to understand wellness programmes the organisation offers. The tool isn't replacing professional advice. It's helping people engage more effectively.


What enables this is policy that defines appropriate use clearly. Most organisations have generic IT acceptable use statements that predate consumer AI. The opportunity lies in creating guidance that helps people capture value whilst understanding boundaries.


We believe organisations that provide clear frameworks for AI tool use will capture an advantage over those that either prohibit use (ineffective) or ignore it (risky). Project leaders who build this guidance proactively will enable their teams whilst managing considerations around data protection, liability and assurance.


Building Effective Frameworks

OpenAI has built robust privacy controls into ChatGPT Health. Conversations are stored separately, not used to train foundation models, and protected by purpose-built encryption. Health information and memories never flow into non-health chats. Users can view or delete health memories at any time.


These controls address individual privacy. Organisations can build on them by adding guidance around professional use contexts.


Effective frameworks address practical questions: Can team members use AI to better understand occupational health information shared with them? Yes, with appropriate boundaries. Should AI tools influence formal project decisions about personnel health matters? No, those require professional expertise. Where does personal use end and professional use begin? Clear examples help people navigate this.


Most project teams benefit from transparency about AI tool adoption. People are already using these tools because they're fast, accessible and genuinely helpful. The opportunity lies in helping them use tools effectively rather than creating policies that push use underground.


Practical Steps That Create Advantage

First, acknowledge that ChatGPT Health is likely already in use by people on your programmes. It is free for basic users, accessible globally (except the European Economic Area, Switzerland and the UK initially), and addresses real needs. Clear guidance works better than prohibition or silence.


Second, create concrete guidance on appropriate use. Not vague statements about "professional judgement," but practical examples. Team members can use AI to better understand health information that applies to them personally. They should not use AI to make formal decisions about other people's health matters or workplace accommodations. Clarity helps people act confidently.


Third, build awareness of how AI tools can enhance professional effectiveness. ChatGPT Health might help someone prepare better questions for occupational health consultations. That preparation can lead to more productive conversations and better outcomes. The tool amplifies professional judgement when used appropriately.


Fourth, establish proportionate review processes. Not every AI interaction requires oversight, but significant decisions benefit from human verification. Create simple checkpoints where expertise validates AI-informed thinking before it shapes major programme choices.


Fifth, stay current as capabilities evolve. OpenAI will expand ChatGPT Health's features over time. What's appropriate use today might need adjustment tomorrow. Regular reviews of AI tool adoption patterns help organisations adapt guidance as technology advances.


The Strategic Opportunity

ChatGPT Health is one product from one company, but the pattern applies across AI adoption. Google, Microsoft and Amazon are all building AI interfaces for different sectors. Each is designed to enhance how people interact with information and services.


Forward-thinking organisations recognise this trend and build frameworks that enable effective use. The stated purpose of each tool is specific and beneficial. The actual use will be broader and more varied. Teams will adapt tools to their contexts because the tools are available, they work and they create value.


Our take on this: project leaders should embrace AI as a capability their people are already using and provide guidance that helps them use it well. The opportunity is substantial. Clear frameworks enable teams to capture value from AI tools whilst managing appropriate considerations around data, liability and assurance.


OpenAI's announcement signals a direction. The AI interface to consumer healthcare is taking shape, and other sectors are following similar paths. The organisations that build enabling frameworks early will capture advantage. Those that build restrictive frameworks, or none at all, will discover their people are using the tools anyway, just without guidance or oversight.


The Path Forward for Project Leaders

This moment creates opportunity for project leaders who move decisively. The organisations that define appropriate AI use, build practical guidance and help their people capture value will gain a competitive advantage that compounds over time.


Start by acknowledging reality: ChatGPT Health is likely already in use by people on your programmes, and prohibition is neither viable nor valuable. The opportunity lies in providing clear guidance that helps people use powerful tools effectively whilst managing what matters. Build concrete frameworks that address practical questions with specific examples rather than generic principles. When can team members use AI to better understand health information? Where does personal use end and professional use begin? Clarity enables confidence.


Invest in awareness that turns AI tools into capability amplifiers. Help people understand how ChatGPT Health can enhance their professional effectiveness when used appropriately. Someone who prepares better questions for occupational health consultations will have more productive conversations and better outcomes. The tool amplifies professional judgement when boundaries are clear.


Establish proportionate review processes that match the stakes. Not every AI interaction requires oversight, but significant decisions benefit from human verification. Create simple checkpoints where expertise validates AI-informed thinking before it shapes major programme choices. Stay current as capabilities evolve, because what's appropriate use today might need adjustment tomorrow.


The alternative is people using tools without guidance, organisations reacting to incidents rather than preventing them, and competitive disadvantage versus organisations that enable their people effectively. The window for building enabling frameworks is open. The organisations that move now will capture advantage whilst others are still debating policy.


The project leaders building tomorrow's delivery capability are creating AI frameworks today. Subscribe to Project Flux for practical guidance on turning AI adoption into strategic advantage. Every week, we translate AI developments into actionable frameworks that help you enable your teams while managing what matters. The organisations that move first on AI governance are already capturing advantage. Join them.




