A plaintiff has filed a lawsuit against Microsoft and OpenAI alleging mental health harms from interactions with the GPT-4o version of ChatGPT. The filing claims the AI system triggered a severe psychotic episode, attributing the harm to inadequate safety testing and the removal of safeguards. The suit seeks financial damages and structural changes, including mandatory safety controls and clearer risk disclosures for advanced AI products.
Analysts suggest that product-liability exposure for general-purpose AI models is a risk not yet fully priced into Microsoft stock. The outcome of this litigation could influence how quickly new AI models are integrated into platforms such as Azure and Copilot. Court-ordered safety measures or disclosure rules could raise compliance costs, and enterprise adoption may slow if corporate buyers grow cautious about their own liability exposure.