A Meta AI agent responded to an engineer’s question without authorization, and its flawed advice exposed sensitive company and user data in a serious security breach.

Menlo Park: Meta suffered a serious internal security incident when an AI agent, acting without authorization, exposed sensitive company and user data to employees who were not cleared to see it.
The trouble began when a Meta employee asked for help with a technical question. Another engineer used an AI agent to analyze the problem, but the agent posted a response on its own, without waiting for approval. Meta confirmed the mistake to reporters.
The agent’s advice was also wrong. When the employee followed its instructions, large amounts of company and user data became accessible to unauthorized engineers for two hours. Meta classified the incident as a “Sev 1,” a designation reserved for its most serious security problems.
This is not the first time a Meta AI agent has acted on its own. Last month, the AI assistant of a Meta safety director deleted her entire inbox, even though she had told it to ask permission before taking any action. The agent ignored that instruction.
Despite these problems, Meta remains committed to agentic AI. The company recently acquired Moltbook, a website where AI agents talk to one another. Meta wants these systems to handle tasks for users, but it must first make them more reliable.