Microsoft launches Copilot Health to store and analyze patient data, explicitly disclaims medical advice
Microsoft has launched Copilot Health, a separate space within Copilot designed to aggregate electronic health records and wearable data for personalized health insights. The service explicitly disclaims providing medical advice, diagnosis, or treatment—positioning itself as a wellness tool rather than clinical decision support.
Microsoft Launches Copilot Health to Aggregate Patient Data
Microsoft has announced Copilot Health, a new feature that stores and analyzes electronic health records (EHR) and wearable device data within a segregated space in Copilot. According to Microsoft, the service will "deliver personalized health insights" by combining hospital records, lab results, and activity data from devices like Apple Watch, Oura, and Fitbit.
The critical limitation: Copilot Health explicitly disclaims providing medical advice or diagnosing, treating, or preventing disease. The disclaimer is buried at the end of Microsoft's announcement, even though health analysis is the service's core function.
Market Context: AI Health Tools Proliferate
Microsoft is following established competitors into health-focused AI. OpenAI announced ChatGPT Health in January after finding that more than 40 million people worldwide ask ChatGPT for healthcare advice every day; Anthropic launched Claude for Healthcare days later. Microsoft's own research shows that nearly one in five Copilot conversations involves assessing personal symptoms or conditions.
Microsoft AI CEO Mustafa Suleyman framed the tool ambitiously: "enabling users to connect all their EHR records and wearable data in a secure, private health space that Copilot can analyze and reason about to provide personalized insights." He suggested Copilot Health would help users prepare focused questions for doctors, implying the service aims to serve populations with limited access to medical advice.
The Medical Advice Problem
The distinction between "wellness insights" and medical advice remains legally and practically murky. A recent UK study found that chatbots give poor medical advice, validating regulatory caution. Yet the FDA relaxed its rules on clinical decision support and wearables at the start of 2026. Law firm Arnold & Porter noted that the policy change likely permits "AI-enabled clinical decision support" tools to bypass FDA review when they are not classified as medical devices.
This creates regulatory ambiguity: tools can analyze health data and suggest interventions while disclaiming liability for medical advice.
Security and Data Controls
Microsoft claims Copilot Health conversations and data are isolated from general Copilot with "additional access, privacy, and safety controls." The company promises:
- Encryption at rest and in transit
- Strict access controls
- Instant disconnection from data sources (EHR systems, wearables)
- No data used for model training
- User ability to delete information
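Microsoft has not published implementation details for these controls. As a purely illustrative sketch of the promised behavior, the bullets above could be modeled as a per-user vault with revocable data-source connections, hard deletion, and a training opt-out that is always off. Every name here (`HealthVault`, `connect`, `disconnect`) is hypothetical and not part of any Microsoft API.

```python
from dataclasses import dataclass, field

@dataclass
class HealthVault:
    """Hypothetical model of the controls Microsoft describes:
    revocable source connections, user-initiated deletion, and a
    fixed 'no training' flag on stored records."""
    user_id: str
    sources: set[str] = field(default_factory=set)          # e.g. {"ehr", "fitbit"}
    records: dict[str, list[str]] = field(default_factory=dict)
    allow_training: bool = False  # promised to remain False for health data

    def connect(self, source: str) -> None:
        self.sources.add(source)
        self.records.setdefault(source, [])

    def disconnect(self, source: str) -> None:
        # "Instant disconnection from data sources": revoke the
        # connection and drop the data it supplied.
        self.sources.discard(source)
        self.records.pop(source, None)

    def delete_all(self) -> None:
        # "User ability to delete information."
        self.sources.clear()
        self.records.clear()

vault = HealthVault(user_id="u1")
vault.connect("fitbit")
vault.records["fitbit"].append("steps: 8412")
vault.disconnect("fitbit")
print(vault.records)  # {}
```

The sketch only captures the user-facing promises; it says nothing about encryption or access control, which are server-side properties that, as the article notes, are difficult to verify independently.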
Microsoft's security track record remains contested, and these specific architectural claims are difficult to verify independently.
What This Means
Microsoft is entering a crowded AI health market by offering data aggregation with analytical capability—while maintaining legal distance from medical practice through disclaimers. The service competes directly with OpenAI and Anthropic on convenience (centralized health data) rather than clinical authority.
The broader implication: companies are building health AI tools that function like medical advice systems while legally disclaiming medical advice status. Regulatory frameworks haven't caught up to this functional ambiguity. As these tools reach millions of users, the gap between what users believe the AI can do and what companies claim it can do will likely trigger regulatory and litigation responses.