Microsoft has finally lifted the curtain on its next big Windows shift: agentic AI. These are AI-powered helpers that can perform tasks on your PC, move files, interact with apps, and work behind the scenes, almost like automated digital interns. But Microsoft wants you to know something very clearly: turn this on only if you absolutely understand the risks.
In a newly published support document, the company explains that the feature will be disabled by default for a good reason. Once an administrator enables agentic mode, it becomes active for every account on that machine, whether the users want it or not.
To make these agents work, Windows will create special local agent accounts. These aren’t ordinary profiles; instead, they are workspaces where AI apps can run independently, but they’re still allowed to touch parts of your personal files. If enabled, agents get controlled read/write access to common folders like Documents, Desktop, Downloads, Pictures, Videos, and Music.
Microsoft says these AI agents will operate on a separate, isolated desktop so they don’t interfere with your normal session, but isolation doesn’t mean invulnerable. And that’s the real concern.
The company highlights new threat categories, especially cross-prompt injection attacks, where malicious text baked into a document or interface could manipulate an AI agent into taking actions you never intended, like exporting sensitive data or running harmful code.
To avoid this turning into a security nightmare, Microsoft says agentic Windows will follow strict rules: You must always be able to observe what the AI is doing. Any impactful action must be approved by the human user. Every agent action must generate logs stored in a tamper-evident audit trail.
The first preview builds supporting agentic features are already available to Windows Insiders. There aren’t any compatible apps yet, but Microsoft confirmed that Copilot will soon be able to operate inside these agent workspaces.