What Permissions Should An AI Agent Never Have?

Practical guidance, risks, a checklist and next steps for deciding which permissions an AI agent should never hold.

AI agent permissions decide whether an agent is a helpful assistant or an uncontrolled operator. If you need the wider context, start with AI agent security at work. This guide focuses on dangerous permissions, approval gates and least privilege for agents, with practical controls that a UK team can use before the next tool, supplier or incident forces the issue.

The most dangerous agent is not necessarily the most advanced one. It is the one with quiet access to delete, send, spend, export or change things without enough review. The answer is not panic and it is not blind adoption. The answer is a clear boundary: what is allowed, who owns it, what must be checked, and how the team will know if something goes wrong.

Why AI agent permissions matter now

As agents gain tool access, permission design becomes the practical centre of AI safety. This is why the topic should sit in normal business planning rather than being treated as a side project. Security works best when the control is built into the workflow, not added after staff have already found their own shortcuts.

The most useful external reference for AI agent permissions is OWASP Agentic AI Security guidance. Read it as a baseline, then compare it with the exact systems, data and decisions your team handles.

If you would not give the permission to a new employee on day one, do not give it to an AI agent by default.

The risk in plain English

The risk is excessive agency: the agent can do more than the business intended or more than the task requires. Most failures are not caused by one dramatic mistake. They are caused by small permissions, old assumptions and unclear review points connecting together. A safe process breaks that chain before one weak point becomes a business problem. By default, an agent should never hold permissions such as:

  • Production admin access.
  • Ability to delete files or backups.
  • Ability to send external emails.
  • Finance or payment workflow access.
  • Broad customer data exports.
  • Permission to create new users or tokens.

What good looks like

Good practice for AI agent permissions should be easy to recognise in daily work. People should know the rule, the owner should be able to show the setting or record, and the team should understand what to do if the control fails. The table below contrasts weak and safer setups, and a short sketch after it shows what draft-only email access can look like.

Area               | Weak setup                   | Safer setup
Delete permissions | Agent can remove data        | Human approval and backups
Email send         | Agent can contact customers  | Draft-only by default
Admin roles        | Agent can change access      | Never default; use break-glass review
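
To make the "draft-only by default" row concrete, here is a minimal sketch in Python. EmailClient, create_draft and AgentEmailTool are hypothetical names used for illustration, not a real mail integration; the point is simply that the tool surface exposed to the agent never includes a send method.

```python
# Minimal sketch of "draft-only by default" email access.
# EmailClient, create_draft and AgentEmailTool are hypothetical names.

class EmailClient:
    def create_draft(self, to: str, subject: str, body: str) -> str:
        """Save a draft for human review and return its identifier."""
        print(f"Draft saved for {to}: {subject}")
        return "draft-001"

    def send(self, draft_id: str) -> None:
        """Deliver a message. The agent is never given this method."""
        print(f"Sent {draft_id}")

class AgentEmailTool:
    """The only email surface exposed to the agent: drafting, not sending."""
    def __init__(self, client: EmailClient) -> None:
        self._client = client

    def draft_reply(self, to: str, subject: str, body: str) -> str:
        return self._client.create_draft(to, subject, body)
    # Deliberately no send() here: a human reviews and sends from the mail client.

tool = AgentEmailTool(EmailClient())
tool.draft_reply("customer@example.com", "Re: your query", "Draft reply text")
```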

A practical checklist

Use the checklist below as the first working version for AI agent permissions. Review it when the tool, supplier, workflow or risk level changes. A short sketch after the list shows one way to express these defaults in code.

  • Start with read-only.
  • Separate draft from send.
  • Use scoped tokens.
  • Require approval for side effects.
  • Block deletion and permission changes by default.
  • Review access monthly.
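
As a sketch of how these defaults fit together, the snippet below assumes hypothetical action names and treats anything not explicitly listed as blocked. It illustrates the default-deny pattern, not a complete policy engine.

```python
# Minimal sketch of default-deny agent permissions.
# The action names ("read_ticket", "send_email", ...) are hypothetical.

READ_ONLY = {"read_ticket", "search_kb"}             # allowed without approval
APPROVAL_REQUIRED = {"send_email", "update_record"}  # side effects need a human
BLOCKED = {"delete_record", "change_permissions"}    # never available by default

def decide(action: str) -> str:
    """Return 'allow', 'approve' or 'block' for a requested action."""
    if action in BLOCKED:
        return "block"
    if action in APPROVAL_REQUIRED:
        return "approve"
    if action in READ_ONLY:
        return "allow"
    return "block"  # default deny: anything not listed is blocked

assert decide("read_ticket") == "allow"
assert decide("send_email") == "approve"
assert decide("export_customers") == "block"  # unknown action, denied by default
```

Swapping the sets for your own tool names is usually enough for a first pilot; the important behaviour is the final default-deny line.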

How to roll this out without slowing the team down

For AI agent permissions, begin with the workflow where a mistake would hurt most. One completed improvement in that place is more useful than a broad plan that nobody owns. A short sketch after the steps shows one way to record the resulting decisions.

  1. Name an owner for AI agent permissions.
  2. List the tools, accounts, data or workflows involved.
  3. Decide what is allowed, blocked and approval-only.
  4. Make the rule easy to find and easy to follow.
  5. Add a review date and a reporting route for problems.
  6. Update related posts, policies or checklists when the process changes.
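
One hedged way to capture steps 1 to 5 in a single, reviewable record is sketched below. The field names and example values are illustrative, not a standard; the point is that the owner, scope, rules, review date and reporting route all live in one place.

```python
# Illustrative record of the rollout decisions: owner, scope, rules, review date.
from dataclasses import dataclass
from datetime import date

@dataclass
class AgentPermissionRule:
    owner: str                # named person accountable for the rule
    tools: list[str]          # tools, accounts or workflows in scope
    allowed: list[str]        # actions the agent may take on its own
    approval_only: list[str]  # actions that need human sign-off
    blocked: list[str]        # actions never available to the agent
    review_date: date         # when the rule is next checked
    report_route: str         # where to report problems

rule = AgentPermissionRule(
    owner="Workflow owner",
    tools=["ticketing system"],
    allowed=["read_ticket"],
    approval_only=["update_record"],
    blocked=["delete_record", "change_permissions"],
    review_date=date(2026, 1, 31),
    report_route="security@example.com",
)
```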

Common mistakes

The mistakes below are common around AI agent permissions. They become easier to fix once the team knows who should notice them and what the next action should be. A short sketch after the list shows a simple way to start logging tool calls.

  • Using shared admin accounts.
  • Approving broad access for convenience.
  • Forgetting temporary access.
  • Not logging tool calls.
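
The last mistake is often the cheapest to fix. The sketch below assumes tools are plain Python functions and uses the standard logger for illustration; a real deployment would write to a proper audit store.

```python
# Minimal sketch: log every tool call the agent makes.
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.tools")

def logged_tool(func):
    """Wrap a tool so each call records the tool name, arguments and result."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.info("tool=%s args=%r kwargs=%r", func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        log.info("tool=%s result=%r", func.__name__, result)
        return result
    return wrapper

@logged_tool
def read_ticket(ticket_id: str) -> str:  # hypothetical tool
    return f"contents of {ticket_id}"

read_ticket("TICKET-42")
```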

Internal links and next steps

AI agent permissions belong in access control, MFA and risk management routines. For a broader control set, read the cyber risk register guide and why MFA still fails. If the topic touches personal data, also connect it to personal data sharing and privacy basics.

Questions people usually ask

Can an agent ever have write access?

Yes, but only for narrow workflows with logging, approval and rollback.

What permission is most dangerous?

Anything that combines broad data access with external action, such as exporting records or sending messages.

How often should permissions be reviewed?

At least monthly during pilots and after any tool or workflow change.

Final recommendation

Give AI agents the smallest useful permission set and make side effects approval-only. Write down the rule, test it against a real example, and improve it after the first review. Good security is not a perfect document. It is a repeatable behaviour that survives busy days.

Review permissions after the pilot

Permissions often expand during testing and then remain in place. At the end of each pilot, remove anything the agent did not actually need. This keeps the production setup smaller, easier to explain and less damaging if the agent is later manipulated or misconfigured.

A realistic workplace example

A workflow agent starts by organising tickets, then receives access to update records, send customer emails and change labels. Each permission is convenient in isolation. Together, they let the agent affect customer experience without enough review.

What to monitor

Monitoring AI agent permissions should stay simple. Pick a few signals that reveal whether the control is being followed, ignored or stretched beyond its original purpose. A short sketch after the list shows one way to check for permission drift.

  • Permissions added after launch
  • Actions with customer impact
  • Delete/export rights
  • Ability to change access or roles
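
One way to check the first and third signals is to compare the agent's current grants with the set approved at launch and flag anything new, or anything carrying delete or export rights. The grant names below are illustrative.

```python
# Minimal sketch: flag permission drift against the approved baseline.
APPROVED_AT_LAUNCH = {"read_ticket", "draft_email"}
HIGH_IMPACT_WORDS = ("delete", "export", "change_access")

def review_grants(current: set[str]) -> list[str]:
    """Return findings for the monthly permission review."""
    findings = []
    for grant in sorted(current - APPROVED_AT_LAUNCH):
        findings.append(f"Added after launch: {grant}")
    for grant in sorted(current):
        if any(word in grant for word in HIGH_IMPACT_WORDS):
            findings.append(f"High-impact right present: {grant}")
    return findings

print(review_grants({"read_ticket", "draft_email", "export_customers"}))
# ['Added after launch: export_customers',
#  'High-impact right present: export_customers']
```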

A 30-day improvement plan

Improve AI agent permissions in short cycles. Complete one action, record what changed, then use that evidence to decide the next step.

  1. Separate read, draft and act permissions
  2. Make side effects approval-only
  3. Remove unused rights monthly
  4. Use test data for experiments

Why this should stay practical

Permission design is where AI safety becomes practical. The smaller the permission set, the easier it is to trust and audit.

The strongest control for AI agent permissions is the one people can follow during normal work. If the safe route is clear, quick and visible, it is more likely to become the default.

Decision rules for this topic

For agent permissions, block high-impact actions until the workflow proves it needs them. Convenience is not enough reason to grant power.

  • Block deletion, export and permission changes by default.
  • Separate draft actions from send or publish actions.
  • Review permissions at the end of every pilot.

Who should be involved

Access decisions should include the system owner, the workflow owner and someone who can judge incident impact.

When to revisit the guidance

Revisit permissions at the end of every pilot and after any unexpected output. Remove permissions the agent did not truly need.

The approval boundary

The most important boundary is the point where a suggestion becomes an action. Drafting a reply, planning a workflow or identifying a risk is different from sending, changing, deleting or approving. Put approval at that boundary and the agent becomes much easier to manage.

This is also where logs matter. If an action is approved, the business should be able to see who approved it, what changed and which system was affected. Without that trail, a useful assistant can become a difficult incident to investigate.
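
A minimal sketch of that trail, assuming the business only needs three facts per approved action: who approved it, what changed and which system was affected. The names are illustrative, and the in-memory list stands in for a real audit store.

```python
# Minimal sketch of the approval trail: who approved, what changed, which system.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ApprovalRecord:
    approver: str        # who approved the action
    action: str          # what changed
    system: str          # which system was affected
    approved_at: datetime

audit_log: list[ApprovalRecord] = []

def approve_and_run(approver: str, action: str, system: str, run) -> None:
    """Write the approval record first, then run the side effect."""
    audit_log.append(ApprovalRecord(approver, action, system,
                                    datetime.now(timezone.utc)))
    run()

approve_and_run("j.smith", "update customer record 1001", "CRM",
                lambda: print("record updated"))
```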

For higher-risk workflows, create a permission ladder. The first step can read, the second can draft, the third can suggest a change, and only the final step can act after approval. That ladder makes escalation visible.
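
The ladder can be as simple as an ordered set of levels. The sketch below assumes the four rungs described above, with acting always out of reach until the final rung and, even then, still subject to the approval gate.

```python
# Minimal sketch of a permission ladder: each rung adds capability,
# and only the top rung may act, and only after approval.
from enum import IntEnum

class Rung(IntEnum):
    READ = 1     # can read data
    DRAFT = 2    # can draft replies or plans
    SUGGEST = 3  # can propose a specific change
    ACT = 4      # can execute the change, gated by approval

def can(agent_rung: Rung, needed: Rung) -> bool:
    """An agent may do anything at or below its own rung."""
    return agent_rung >= needed

pilot_agent = Rung.DRAFT
assert can(pilot_agent, Rung.READ)
assert not can(pilot_agent, Rung.ACT)  # acting stays out of reach for the pilot
```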

Free PDF guide

Download The AI Sentinel

A strategic guide to securing the intelligent enterprise: risks, governance and defence-in-depth for 2026.
