AWS Security Blog

Secure AI agent access patterns to AWS resources using Model Context Protocol

AI agents and coding assistants interact with AWS resources through the Model Context Protocol (MCP). Unlike traditional applications with deterministic code paths, agents reason dynamically, choosing different tools or accessing different data depending on context. You must assume an agent can do anything within its granted entitlements, whether OAuth scopes, API keys, or AWS Identity and Access Management (IAM) permissions, and design your controls accordingly. Agents operate at machine speed, so the impact of misconfigured permissions scales quickly.

This blog post focuses on IAM as the authorization layer for AWS resource access and presents three security principles for building deterministic IAM controls for these non-deterministic AI systems. The principles apply whether you’re using AI coding assistants like Kiro and Claude Code, or deploying agents on hosting environments like Amazon Bedrock AgentCore. We cover deployment patterns, then explore each principle with concrete IAM policy examples and implementation guidance.

This post specifically addresses securing the MCP access path, where agents interact with AWS resources through MCP servers. AI coding assistants and agents can also access AWS service APIs directly through general-purpose tools like bash or shell execution, bypassing MCP servers entirely. For this reason, we recommend architecting agents to use MCP servers rather than direct service access where possible. MCP servers provide a layer of abstraction that enables the differentiation controls in principle 3 and creates additional monitoring capabilities through AWS CloudTrail. When agents bypass MCP, the differentiation mechanisms in principle 3 don’t apply, and principles 1 and 2 become your primary controls. We discuss this scope boundary in principle 3.

MCP deployment patterns

Your deployment pattern determines which security principles and implementation approaches apply. Three dimensions define this pattern: where the agent runs, what type of MCP server offers the tools, and your level of control over the agent code. No matter how you connect to it, the MCP server needs AWS credentials to interact with AWS resources.

Where agents run

Agents access AWS resources from three locations: developer machines (where you control the infrastructure), hosting environments (where you control the infrastructure or significant aspects of it), and third-party agent platforms (where you do not control the infrastructure). This post focuses on the first two patterns. Each has a different credential model and different organizational control options.

AI coding assistants and local agents

AI coding assistants (Kiro, Claude Code) or local agent applications represent the first deployment pattern. These assistants run locally on developer machines and connect to MCP servers or use AWS Command Line Interface (AWS CLI) commands to access AWS resources. In this pattern, credentials come from the developer’s local environment. When a developer configures an MCP server in their mcp.json file, they specify which AWS credentials to use: a named profile (which can use credential helpers and the credential provider chain to obtain short-lived credentials), environment variables, or explicit credential configuration. This means the developer controls which IAM principal the agent uses to access AWS.

This creates a governance challenge. Without additional controls, developers often use their admin credentials, shared development roles, or even production roles for agent access. Developer credentials often carry broad permissions designed for interactive use, where human judgment serves as a safeguard. When an agent inherits these permissions, it operates without that judgment at machine speed. Principle 1 explores this risk in detail.

Agents on hosting environments

Agents deployed on hosting environments represent the second deployment pattern. These agents run on infrastructure you manage, not on developer machines. This changes the credential management model. Using Amazon Bedrock AgentCore as an example, when an agent runs on AgentCore Runtime, it uses an execution IAM role that you configure when creating the runtime. The execution role’s permissions apply to all operations the agent performs and cannot be scoped down per-invocation at the runtime configuration level. For more granular control, agents can call AWS Security Token Service (AWS STS) AssumeRole or AssumeRoleWithWebIdentity (collectively referred to as AssumeRole in this post). This obtains temporary credentials with session policies that further restrict permissions beyond the role’s base permissions. Agents built with frameworks like Strands can also initialize individual MCP clients with different credential sets by calling AssumeRole and passing the resulting credentials to each client connection. This enables per-tool credential isolation within a single agent process. The same pattern applies to agents deployed on Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Kubernetes Service (Amazon EKS).

With this centralized execution model, you can implement organizational controls. You define the available IAM roles through infrastructure configuration instead of relying on developer choice. However, you must design these roles carefully to prevent overly permissive access and implement session policies for tool-specific restrictions.

What type of MCP server

MCP servers come in two types: AWS-managed and self-managed. AWS-managed servers are operated by AWS on your behalf. Self-managed servers are servers that you install and run yourself. The server type affects your operational overhead, available features, and how you implement security controls.

AWS offers fully managed MCP servers, including the AWS MCP Server, Amazon EKS MCP Server, and Amazon ECS MCP Server. These AWS-managed servers run on AWS infrastructure and require no installation or maintenance on your part. AWS-managed MCP servers automatically add IAM context keys (aws:ViaAWSMCPService and aws:CalledViaAWSMCP) to every downstream AWS service call. You can write IAM policies that check these keys to distinguish between AI-driven actions and human-initiated actions without any additional configuration.

Self-managed MCP servers include AWS-provided servers from the AWS MCP GitHub repository that you install and run yourself, as well as custom MCP servers that you build from scratch. With self-managed servers, you control the deployment location (local machine, Amazon EC2, Amazon EKS), the configuration, and the maintenance. These servers can be used with either AI coding assistants running locally or agents deployed on hosting environments.

The key difference for security controls is that self-managed servers don’t automatically add IAM context keys for differentiation. If you require differentiation between AI-driven and human-initiated actions, you must modify your MCP server code to call AWS STS AssumeRole with session tags attached, then write IAM policies that check for these tags using the aws:PrincipalTag condition key. Self-managed servers can also be extended to implement dynamic authorization flows, such as mapping inbound OAuth tokens to outbound IAM role assumptions, giving you control over the full authorization chain.

Note one important difference: with AWS-managed MCP servers, AWS injects context keys at the service layer, so callers cannot spoof them. With self-managed servers, the entity calling AssumeRole sets the session tags, so you must trust that your MCP server code hasn’t been modified.
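
The session-tagging approach for a self-managed server can be sketched as follows. This is a minimal, hedged example: the tag key and value (Usage/Agent) and the deny statement are illustrative conventions, not fixed AWS names.

```python
# Session tags that mark every action from this self-managed MCP server
# as agent-driven. The Usage/Agent pair is an illustrative convention;
# align it with your organization's tagging standard.
AGENT_SESSION_TAGS = [{'Key': 'Usage', 'Value': 'Agent'}]

def assume_tagged_agent_role(role_arn, session_name='mcp-agent-session'):
    """Call AWS STS AssumeRole with session tags attached, returning
    temporary credentials whose aws:PrincipalTag/Usage resolves to Agent."""
    import boto3  # local import so this sketch loads without boto3 installed
    sts = boto3.client('sts')
    response = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName=session_name,
        Tags=AGENT_SESSION_TAGS,
        DurationSeconds=3600,
    )
    return response['Credentials']

# IAM policies can then check the tag with the aws:PrincipalTag condition
# key, for example to deny delete operations for agent-tagged sessions.
DENY_DELETE_FOR_AGENT_SESSIONS = {
    'Effect': 'Deny',
    'Action': ['s3:DeleteObject', 's3:DeleteBucket'],
    'Resource': '*',
    'Condition': {'StringEquals': {'aws:PrincipalTag/Usage': 'Agent'}},
}
```

For the tagging call to succeed, the assumed role’s trust policy must also allow the sts:TagSession action for the calling principal.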

The responsibility model differs between server types. With AWS-managed MCP servers, AWS is responsible for server infrastructure, patching, and context key injection. You’re responsible for IAM policy design and credential configuration. With self-managed MCP servers, you’re additionally responsible for server patching, dependency and library supply chain security, session tag implementation, and verifying server integrity. This connects to the supply chain risk described in principle 1. While self-managed servers require more operational overhead to implement and maintain, they give you flexibility and control.

Level of client control

A third dimension shapes your security implementation: whether you control the agent and MCP client code (code-controlled) or are limited to configuring pre-built tools without modifying their runtime behavior (configuration-bound). This determines which security mechanisms are available to you at runtime.

In configuration-bound scenarios, you use an AI coding assistant such as Kiro or Claude Code and configure credentials in your mcp.json file. You select which IAM role or profile the agent uses, but you cannot modify the agent’s runtime behavior. The agent calls AWS APIs using whatever credentials you configured ahead of time, and you cannot inject session policies or tags into those calls programmatically. Your security controls must be in place before the agent runs. You select narrowly scoped roles at configuration time, and your organization enforces guardrails through permission boundaries and service control policies (SCPs). These mechanisms restrict what the agent can do regardless of which role the developer selects.

In code-controlled scenarios, you build or deploy a custom agent on Amazon Bedrock AgentCore, Amazon EC2, Amazon EKS, or your local machine, or you build and run a custom MCP server. Because you control the runtime code, you can implement credential management programmatically. For custom agents, this means calling AssumeRole with session policies scoped to each tool invocation, attaching session tags for differentiation, and obtaining temporary credentials with the minimum permissions each operation requires. For custom MCP servers, you can inject session policies into every AWS API call the server makes, applying a consistent set of restrictions across all operations. Both approaches give you runtime IAM controls that are not available in config-bound scenarios.

Deployment pattern summary

The following table summarizes how these dimensions combine.

Source type | MCP server type | Client control | Credential source | Differentiation mechanism | Example use case
AI coding assistant | AWS-managed MCP | Config-bound | Local (AWS CLI profile, env vars, explicit credentials) | Automatic context keys | Kiro calling AWS-managed MCP server
AI coding assistant | Self-managed MCP (local or remote) | Config-bound | Local (AWS CLI profile, env vars, explicit credentials) | Manual session tags or session policies | Kiro calling local AWS MCP server
Agent on hosting environment | AWS-managed MCP | Code-controlled | Execution role or AssumeRole | Automatic context keys | Amazon Bedrock AgentCore agent calling AWS-managed MCP server
Agent on hosting environment | Self-managed MCP (remote) | Code-controlled | Execution role or AssumeRole | Manual session tags or session policies | Agent calling AWS MCP server deployed on Amazon Bedrock AgentCore

Your deployment pattern and level of client control determine which of the following security principles apply and how you implement them.

Three security principles for agent access

With this understanding of deployment patterns, let’s explore the three security principles that apply across all patterns.

  • Principle 1 – Assume all granted permissions could be used: Design permissions based on the acceptable scope of impact, not intended functionality alone.
  • Principle 2 – Provide organizational guidance on role usage: Enforce permission design through role governance, session policies, permission boundaries, and organizational policies.
  • Principle 3 – Differentiate AI-driven from human-initiated actions: Apply different IAM rules based on whether the action comes from an agent or a human.

Security principle 1: Assume all granted permissions could be used

The first security principle is fundamental. Any permission you grant to an agent can be exercised, regardless of your intended use case. If you give an agent s3:DeleteObject permission with a tool that can call the API, you must assume it can delete any Amazon Simple Storage Service (Amazon S3) object it has access to. This can happen in ways you cannot predict or fully prevent through code review alone. This non-deterministic behavior requires a shift in your approach to IAM permissions.

Traditional applications follow deterministic code paths. You can review the source code, identify every API call, and grant the permissions needed. AI agents operate differently. They make decisions at runtime based on reasoning, context, and learned patterns. You cannot predict which AWS APIs or tools an agent will call or which resources it will access. Static analysis of agent code tells you what tools are available, but not which tools will be invoked or how they’ll be used.

This creates a challenge when developers configure agents to use AWS credentials. Developers commonly use existing IAM roles, such as the role their traditional application uses or their local admin role for the AWS CLI. These roles were designed assuming predictable behavior and human judgment. Your local admin role has s3:* permissions because you exercise judgment on what to delete and when. You understand the context, recognize production resources, and can assess the impact of your actions.

An agent with that same role operates at machine speed without human judgment. It can delete production data through hallucination or be directed through prompt injection to perform unintended actions. It can also make a logical error in its reasoning that leads to unintended operations. The speed and scale at which agents operate increases the potential scope of these issues. An agent can make thousands of API calls in seconds, so the impact of misconfigured permissions scales quickly.

Consider the following scenarios with overly permissive access.

  • Hallucination: The agent misinterprets a user request and performs the wrong action. An agent designed to clean up temporary files might hallucinate that production data is temporary and delete it.
  • Prompt injection: An outside party crafts unexpected input that influences the agent’s reasoning. An agent designed to query Amazon DynamoDB tables could be directed to call dynamodb:PutItem or dynamodb:DeleteItem on resources outside its intended scope.
  • Logic errors: The agent’s reasoning leads to an incorrect conclusion. An agent analyzing S3 storage costs might conclude that frequently accessed production data is unused and delete it to save costs.
  • Tool poisoning: A compromised MCP server or dependency performs unintended operations using the agent’s credentials. An agent with broad S3 and DynamoDB permissions connects to an MCP server whose dependency has been modified to exfiltrate data. The compromised tool reads sensitive objects and writes them to an attacker-controlled location, all within the agent’s granted permissions.

This security principle reframes how you approach IAM permissions for agents. Instead of asking “What does the agent need to do?”, ask “What is the scope of impact if the agent acts outside its intended use case?” Design permissions based on the acceptable scope of access, not only on intended functionality. If an agent needs to read S3 objects, grant s3:GetObject, not s3:*. If it needs to write to specific paths, use resource-level conditions to restrict access to those paths. Consider what tools the agent has access to and what API calls those tools can make. Design permissions that limit what the agent is allowed to perform based on organizational policy. This doesn’t mean agents can’t have write or delete permissions. It means you and your organization must consider what resources those permissions apply to and what safeguards are in place.

Beyond IAM policies, consider implementing data perimeters as an additional layer of defense. Data perimeters use VPC endpoint policies, resource control policies (RCPs), resource policies, and service control policies (SCPs) to restrict access based on identity, resource, and network boundaries. For agents, data perimeters help verify that even if IAM permissions are broader than intended, access is limited to trusted resources from expected networks. For more information, see Building a data perimeter on AWS.

Practical implementation guidance:

  • Apply least privilege rigorously: If an agent needs read access, grant read permissions. If it needs write access, grant write to specific resources, not all resources of that type.
  • Use resource-level restrictions: Employ IAM policy conditions to limit permissions to specific buckets, paths, tables, or other resources. Don’t grant blanket permissions across all resources.
  • Consider read-only alternatives: Evaluate whether the agent’s task can be accomplished with read-only access. Many analysis and reporting tasks don’t require write or delete permissions.
  • Implement comprehensive monitoring: Set up Amazon CloudWatch alarms for unexpected agent actions, unusual access patterns, or operations on sensitive resources. Monitor for sensitive operations like deletions or modifications to production resources.
  • Conduct regular permission audits: As agents gain new tools and capabilities, developers often add permissions incrementally without removing unused ones. An agent that started with read-only access can gradually accumulate write and delete permissions across multiple services. Review agent IAM roles and policies regularly to identify and remove permissions that are no longer needed.
  • Verify MCP server integrity: Verify the provenance and integrity of MCP servers before granting them access to AWS credentials. Maintain an organizational registry of approved MCP servers and their expected behavior, and monitor for unauthorized server deployments that might have assumed execution roles. For more on agentic application risks, see the OWASP Top 10 for Agentic Applications.
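
The first two points can be illustrated with a policy sketch. The bucket and prefix names here are hypothetical; substitute the resources your agent actually needs.

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AgentReadSpecificPrefix",
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::analytics-data-bucket/reports/*"
  }, {
    "Sid": "AgentListSpecificBucket",
    "Effect": "Allow",
    "Action": "s3:ListBucket",
    "Resource": "arn:aws:s3:::analytics-data-bucket",
    "Condition": {
      "StringLike": {
        "s3:prefix": "reports/*"
      }
    }
  }]
}
```

This grants read access to one prefix of one bucket instead of s3:* across all resources, which bounds the scope of impact if the agent acts outside its intended use case.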

Security principle 1 establishes the foundation. Understand the scope of every permission you grant. The next two security principles build on this foundation.

Security principle 2: Provide organizational guidance on role usage

The second security principle addresses organizational governance. Principle 1 requires that you design permissions based on acceptable scope of impact. Principle 2 addresses how your organization enforces that design through role governance, session policies, permission boundaries, and organizational policies.

When developers adopt AI coding assistants and configure MCP servers, they choose which credentials to use. Without organizational controls, developers often use existing roles (such as personal admin roles, shared development roles, or production roles) that were designed for human use with far more permissions than agents need. For agents deployed on hosting environments, you configure execution roles, but the same question applies. What permissions should those roles have, and how do you enforce consistency across deployments? The answer depends on your level of client control.

When you control the agent code

When you build or deploy custom agents on Amazon Bedrock AgentCore, Amazon EC2, Amazon EKS, or locally, you control the runtime code and can implement dynamic credential management. This is the strongest enforcement model because you can scope permissions per tool invocation at runtime. The same applies if you build or modify a custom MCP server. Because you control the server code, you can inject session policies into every AWS API call the server makes.

The IAM role defines the permission ceiling for the agent across all its tools. Instead of creating a separate role for every tool or MCP server, you use session policies to scope down the role’s permissions per operation. When the agent invokes a specific tool, it calls AssumeRole with a session policy that restricts permissions to just what that tool requires. The effective permissions are the intersection of the role’s policies and the session policy. Session policies restrict permissions but never expand them. If a role grants broad permissions but you attach the ReadOnlyAccess managed policy as a session policy, the agent can only perform read operations. You can also use inline session policies for resource-specific restrictions, such as limiting access to specific S3 buckets or DynamoDB tables.

The following example shows how to implement session policies in agent code.

import boto3

# Uses the execution IAM role as part of AgentCore Runtime
sts = boto3.client('sts')

# Assume role with ReadOnlyAccess managed policy as session policy
response = sts.assume_role(
    RoleArn='arn:aws:iam::111122223333:role/AgentDataRole',
    RoleSessionName='agent-data-reader',
    PolicyArns=[
        {'arn': 'arn:aws:iam::aws:policy/ReadOnlyAccess'}
    ],
    DurationSeconds=3600
)

# Use the temporary credentials
credentials = response['Credentials']
s3 = boto3.client(
    's3',
    aws_access_key_id=credentials['AccessKeyId'],
    aws_secret_access_key=credentials['SecretAccessKey'],
    aws_session_token=credentials['SessionToken']
)
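
The preceding example attaches a managed policy ARN. An inline session policy can add resource-specific restrictions instead. The following sketch scopes a tool’s credentials to a single hypothetical bucket; the bucket name is illustrative.

```python
import json

# Inline session policy for one tool invocation; the bucket name is
# illustrative. Effective permissions are the intersection of the role's
# identity-based policies and this document.
TOOL_SESSION_POLICY = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': ['s3:GetObject', 's3:ListBucket'],
        'Resource': [
            'arn:aws:s3:::agent-reports-bucket',
            'arn:aws:s3:::agent-reports-bucket/*'
        ]
    }]
}

def assume_scoped_role(role_arn):
    """Return temporary credentials restricted by the inline session policy."""
    import boto3  # local import so this sketch loads without boto3 installed
    sts = boto3.client('sts')
    response = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName='agent-tool-scoped',
        Policy=json.dumps(TOOL_SESSION_POLICY),
        DurationSeconds=900
    )
    return response['Credentials']
```

A short session duration, such as the 900 seconds shown, further limits exposure if the scoped credentials leak.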

For agents on hosting environments like Amazon Bedrock AgentCore, the execution role serves two purposes. It’s the trust anchor that lets the agent call AssumeRole for tool-specific credentials, and it can supply baseline permissions that all operations need, such as writing logs to CloudWatch. For tool-specific operations that access customer resources, use AssumeRole with session policies to obtain scoped temporary credentials rather than using the execution role’s permissions directly. This centralized execution model simplifies enforcing consistent session policies across all agent deployments. Agents can also attach tags when assuming roles for differentiation purposes (covered in Security principle 3).
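
Tying these pieces together, the role the agent assumes (AgentDataRole from the earlier example) needs a trust policy that allows the execution role to assume it. The following is a sketch; the execution role name is hypothetical.

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowAgentExecutionRoleToAssume",
    "Effect": "Allow",
    "Principal": {
      "AWS": "arn:aws:iam::111122223333:role/AgentCoreExecutionRole"
    },
    "Action": "sts:AssumeRole"
  }]
}
```

If the agent also attaches session tags for differentiation (principle 3), add sts:TagSession to the Action list.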

When you’re configuration bound

When you use an AI coding assistant like Kiro or Claude Code with off-the-shelf MCP servers, you configure credentials in your mcp.json file but cannot modify the agent’s runtime behavior. Your security controls must be established before the agent runs.

Your first control is role selection. As described in the preceding deployment patterns section, AI coding assistants use credentials from the developer’s local environment. Create agent-specific IAM roles with narrower permissions than equivalent human roles, and direct developers to use them. For self-managed MCP servers running locally, the developer specifies the role through environment variables in the mcp.json configuration.

{
  "mcpServers": {
    "awslabs.aws-pricing-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.aws-pricing-mcp-server@latest"],
      "env": {
        "AWS_PROFILE": "agent-dev-role",
        "AWS_REGION": "us-east-1"
      }
    }
  }
}

For AWS-managed MCP servers, the developer connects through the mcp-proxy-for-aws proxy and specifies the role through the profile parameter.

{
  "mcpServers": {
    "aws-mcp": {
      "command": "uvx",
      "args": [
        "mcp-proxy-for-aws@latest",
        "https://aws-mcp.us-east-1.api.aws/mcp",
        "--profile", "agent-dev-role",
        "--metadata", "AWS_REGION=us-east-1"
      ]
    }
  }
}

Role selection is the only control that depends on developer compliance. IAM permission boundaries provide organizational enforcement without requiring code changes or developer cooperation. A permission boundary is a managed policy that your security team attaches to an IAM role to set the maximum permissions that role can grant. The effective permissions are the intersection of the role’s identity-based policies and the permission boundary.

Permission boundaries are most effective on agent-specific roles that your organization creates for agent use: they ensure those roles cannot exceed their intended permissions even if misconfigured. If a developer configures their existing role in mcp.json instead, a permission boundary on that role restricts all use of the role, not just agent use. For AWS-managed MCP servers, principle 3’s context keys address this gap by letting you write IAM policies that restrict actions only when they come through an MCP server, leaving the developer’s direct use of the same role unaffected. For self-managed MCP servers, modifying the server code to AssumeRole into an organization-defined role provides a similar override, and session tags can be attached during that AssumeRole for differentiation (see principle 3).

For multi-account environments, SCPs in AWS Organizations provide guardrails at the account or organizational unit level. SCPs set the maximum permissions for all principals in an account, giving your central governance team control over agent permissions across your organization.
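
For example, a permission boundary for agent-specific roles might cap permissions at read access plus logging. The following is a sketch with illustrative actions; your security team can attach it to a role with the aws iam put-role-permissions-boundary command.

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AgentPermissionCeiling",
    "Effect": "Allow",
    "Action": [
      "s3:Get*",
      "s3:List*",
      "dynamodb:GetItem",
      "dynamodb:Query",
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ],
    "Resource": "*"
  }]
}
```

Even if a developer later attaches a broader identity-based policy to the role, the effective permissions cannot exceed this ceiling.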

Organizational governance at scale

Whether your agents are config-bound or code-controlled, you need organizational mechanisms to enforce consistent governance across teams and accounts.

Tag IAM roles intended for agent use with a consistent identifier, such as a tag key of Usage with a value of Agent. This lets your governance team inventory all agent roles across accounts, identify roles that don’t have permission boundaries, and distinguish agent roles from human roles in audit reports. You can also use tag-based conditions in SCPs to enforce that only properly tagged roles are used for agent operations. For AWS-managed MCP servers, the automatic context keys (principle 3) provide this identification without requiring role tags, but tagging remains useful for role inventory and audit purposes.
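
A tag-based SCP along these lines denies requests that arrive through an AWS-managed MCP server from any role not tagged Usage=Agent. This is a sketch; the tag convention is illustrative, and you should verify the behavior against the services you use.

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyMCPAccessFromUntaggedRoles",
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "Bool": {
        "aws:ViaAWSMCPService": "true"
      },
      "StringNotEquals": {
        "aws:PrincipalTag/Usage": "Agent"
      }
    }
  }]
}
```

Because the Bool condition only matches when the context key is present and true, human-initiated requests that don’t pass through an AWS-managed MCP server are unaffected, while untagged roles are blocked on the MCP path.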

Use CloudTrail to monitor all API calls made by agent sessions and set up CloudWatch alarms for sensitive operations like resource deletion or permission changes. Principle 3 covers how to filter and analyze agent activity using context keys (AWS-managed MCP) and session tags (self-managed MCP).
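
Because CloudTrail management events are delivered to Amazon EventBridge, a rule with a pattern like the following sketch can alert on permission changes to agent roles. The event names shown are illustrative choices; note that data events (such as individual S3 object deletions) are not delivered through this path.

```json
{
  "source": ["aws.iam"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["iam.amazonaws.com"],
    "eventName": [
      "AttachRolePolicy",
      "PutRolePolicy",
      "DeleteRolePermissionsBoundary"
    ]
  }
}
```

Routing matches to an Amazon SNS topic or CloudWatch alarm gives your governance team early warning when an agent role’s permissions change or its boundary is removed.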

For multi-account environments, combine SCPs with permission boundaries and resource control policies (RCPs) for layered enforcement. SCPs set the maximum permissions for principals within your organization at the account or organizational unit level, while permission boundaries constrain individual roles. RCPs enforce controls at the resource level regardless of the caller’s organizational membership, protecting resources even from cross-account access. Verify that the AWS services you use support MCP context keys in RCP evaluation. This layered approach gives your central governance team control over agent permissions across your organization, even when individual teams manage their own accounts and roles. Conduct quarterly reviews of agent roles and session policies to identify permissions that are no longer needed as agent capabilities evolve.

Practical implementation guidance:

  • For code-controlled agents: Implement session policies for every tool invocation. Use AssumeRole with the minimum permissions each operation requires rather than relying on the execution role’s base permissions.
  • For config-bound agents: Create agent-specific IAM roles with narrower permissions than human roles or configure self-managed MCP servers to AssumeRole into an organization-defined role. Have your security team attach permission boundaries to agent-specific roles to enforce maximum permissions regardless of developer role selection.
  • At the organization level: Tag agent roles consistently, enforce guardrails through SCPs, and monitor agent activity through CloudTrail. Conduct quarterly reviews to remove unused permissions.

Security principle 2 gives you organizational control over agent permissions through mechanisms matched to your level of client control. Session policies and dynamic credential scoping enforce permissions at runtime for code-controlled agents. Permission boundaries and SCPs enforce permissions at the organizational level for config-bound agents. The next principle adds a complementary layer of governance at the resource level based on whether a human or agent is performing the action.

Security principle 3: Differentiate AI-driven from human-initiated actions

The third security principle adds an additional level of control on top of principle 2. Where principle 2 governs what permissions an agent has, this principle governs what the agent can do with those permissions based on whether the action is AI-driven or human-initiated.

This principle is essential for two reasons. For AWS-managed MCP servers, you cannot modify the server code to inject session policies or call AssumeRole with scoped credentials. The developer’s credentials flow through as-is. Context keys are your primary mechanism to restrict agent actions differently from human-initiated actions on the same role. For self-managed MCP servers where principle 2’s session policies are already in place, differentiation adds a second layer of defense at the resource level. Even if the session policy is broader than intended, differentiation policies can deny specific dangerous operations when performed through an agent.

For example, you can allow both humans and agents to read Amazon S3 objects, but deny delete operations when accessed through agents. Without a differentiation mechanism, IAM policies can’t distinguish between AI-driven actions and human-initiated actions. If a developer has s3:DeleteObject permission and uses an agent with their credentials, the agent also has s3:DeleteObject permission with no way to restrict it.

Differentiation gives you granular governance. Allow human-initiated actions with broad permissions while restricting agent actions to narrower permissions. Apply different rules based on context and implement progressive restrictions. Allow read operations for everyone, require approval for AI-driven write operations, and deny delete operations for agent actions entirely. Maintain audit trails showing which actions were AI-driven versus human-initiated, essential for compliance and security investigations.

When agents bypass MCP servers

Differentiation through condition keys and session tags applies when the agent accesses AWS through an MCP server. AI coding assistants like Kiro and Claude Code have access to general-purpose tools, including bash, shell, and code execution. When an agent uses a bash tool to run an AWS CLI command like aws s3 rm s3://my-bucket/my-object or executes a Python script that calls boto3 directly, the request goes straight to AWS using the developer’s existing credentials. The request bypasses MCP servers entirely. The aws:ViaAWSMCPService condition key isn’t set, session tags from MCP server AssumeRole calls aren’t applied, and IAM policies conditioned on these values don’t evaluate.

This means a deny policy like "Condition": {"Bool": {"aws:ViaAWSMCPService": "true"}} blocks the agent when it calls Amazon S3 through a managed MCP server, but doesn’t block the same agent when it runs the equivalent AWS CLI command through a bash tool. The agent has two paths to the same AWS API, and differentiation controls govern only one of them.

The condition keys work as designed, differentiating MCP-mediated access from direct access. This is a scope boundary. Differentiation controls secure the MCP access path. For the direct access path, principles 1 and 2 are your controls. Least privilege on the underlying IAM role (principle 1) and organizational guardrails like permission boundaries and SCPs (principle 2) apply regardless of how the agent reaches AWS. If the role doesn’t have s3:DeleteObject permission, the agent can’t delete objects through a bash tool or through an MCP server.

Restricting which tools an agent can access is a complementary control outside the scope of IAM. You can use agent frameworks and hosting environments such as Amazon Bedrock AgentCore to limit the set of available tools, removing general-purpose execution capabilities for agents that interact with AWS exclusively through MCP servers. When you combine tool restriction with the IAM controls in this post, you close the gap between the MCP access path and the direct access path.

AWS-managed MCP servers: Automatic context keys

AWS-managed MCP servers, including the AWS MCP Server, Amazon EKS MCP Server, and Amazon ECS MCP Server, offer differentiation by default. They automatically add two IAM context keys to every downstream AWS service call. The first, aws:ViaAWSMCPService, is a boolean set to true when the request comes through any AWS-managed MCP server. The second, aws:CalledViaAWSMCP, is a string containing the MCP server name, such as aws-mcp.amazonaws.com, eks-mcp.amazonaws.com, or ecs-mcp.amazonaws.com. No configuration is required on your part. You only need to write IAM policies that check for these keys to apply different rules to agent actions.

The following IAM policy denies delete operations when accessed through any AWS-managed MCP server.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowS3ReadOperations",
    "Effect": "Allow",
    "Action": [
      "s3:GetObject",
      "s3:ListBucket"
    ],
    "Resource": "*"
  }, {
    "Sid": "DenyDeleteWhenAccessedViaMCP",
    "Effect": "Deny",
    "Action": [
      "s3:DeleteObject",
      "s3:DeleteBucket"
    ],
    "Resource": "*",
    "Condition": {
      "Bool": {
        "aws:ViaAWSMCPService": "true"
      }
    }
  }]
}

When a request doesn’t come through an AWS-managed MCP server, the aws:ViaAWSMCPService condition key isn’t present in the request context. The Deny statement only applies when the key is explicitly set to true, so human-initiated actions are unaffected by this policy.

You can also restrict operations to specific MCP servers. The following policy allows EKS operations only when they come through the EKS MCP server, not through another AWS-managed MCP server such as the AWS MCP Server.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowEKSOperationsViaEKSMCP",
    "Effect": "Allow",
    "Action": "eks:*",
    "Resource": "*",
    "Condition": {
      "StringEquals": {
        "aws:CalledViaAWSMCP": "eks-mcp.amazonaws.com"
      }
    }
  }, {
    "Sid": "DenyEKSOperationsViaOtherMCP",
    "Effect": "Deny",
    "Action": "eks:*",
    "Resource": "*",
    "Condition": {
      "Bool": {
        "aws:ViaAWSMCPService": "true"
      },
      "StringNotEquals": {
        "aws:CalledViaAWSMCP": "eks-mcp.amazonaws.com"
      }
    }
  }]
}

Self-managed MCP servers: Manual session tags

Self-managed MCP servers, whether AWS-provided servers from the AWS MCP GitHub repository or custom servers you build yourself, don't automatically add IAM context keys. To implement differentiation with self-managed servers, you must configure the MCP server to add session tags when assuming IAM roles. This requires modifying your MCP server to call AWS STS AssumeRole with tags attached; the role's trust policy must also allow the sts:TagSession action. The tags remain active for the duration of the assumed role session and can be referenced in IAM policies using the aws:PrincipalTag condition key. This approach gives you flexibility and control over the session tag configuration. To maintain consistency, verify that all MCP server instances add the appropriate tags.

The following example shows how to configure your MCP server to add session tags.

import boto3

sts = boto3.client('sts')

# Session tags attached at AssumeRole time stay active for the duration of
# the session and can be referenced via the aws:PrincipalTag condition key
response = sts.assume_role(
    RoleArn='arn:aws:iam::111122223333:role/MCPServerRole',
    RoleSessionName='mcp-server-session',
    Tags=[
        {'Key': 'AccessType', 'Value': 'AI'},
        {'Key': 'Source', 'Value': 'AgentRuntime'},
        {'Key': 'MCPServer', 'Value': 'org-data-server'}
    ]
)

# Use the temporary credentials from response['Credentials']
credentials = response['Credentials']

After your MCP server has added session tags, you can write IAM policies that check for these tags to differentiate agent actions.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowS3ReadOperations",
    "Effect": "Allow",
    "Action": [
      "s3:GetObject",
      "s3:ListBucket"
    ],
    "Resource": "*"
  }, {
    "Sid": "DenyDeleteWhenAccessedViaAI",
    "Effect": "Deny",
    "Action": [
      "s3:DeleteObject",
      "s3:DeleteBucket"
    ],
    "Resource": "*",
    "Condition": {
      "StringEquals": {
        "aws:PrincipalTag/AccessType": "AI"
      }
    }
  }]
}

Session tags and session policies are both passed to AssumeRole, but serve different purposes. Session policies (covered in security principle 2) constrain what permissions the agent has. Session tags (covered here in security principle 3) mark the session as AI-driven, enabling IAM policies to differentiate between agent and human actions. You can use both in the same AssumeRole call for defense-in-depth. The session policy constrains what the agent can do. The session tags let IAM policies apply different rules based on the actor type.

The following example uses both session policies and session tags together.

import boto3

sts = boto3.client('sts')

# Assume role with both managed session policy and tags
response = sts.assume_role(
    RoleArn='arn:aws:iam::111122223333:role/AgentDataRole',
    RoleSessionName='agent-data-reader',
    PolicyArns=[                              # Principle 2: Constrains permissions
        {'arn': 'arn:aws:iam::aws:policy/ReadOnlyAccess'}
    ],
    Tags=[                                    # Principle 3: Enables differentiation
        {'Key': 'AccessType', 'Value': 'AI'},
        {'Key': 'Source', 'Value': 'AgentRuntime'},
        {'Key': 'MCPServer', 'Value': 'org-data-server'}
    ],
    DurationSeconds=3600
)

CloudTrail logging and audit trails

Both differentiation mechanisms generate CloudTrail logs for audit trails. For AWS-managed MCP servers, downstream AWS API calls include the MCP service identifier in the invokedBy, sourceIPAddress, and userAgent fields. You can filter on these fields to isolate agent activity. MCP-originated downstream calls are classified as data events, so you must enable data event logging on your CloudTrail trail to capture them.

{
  "eventVersion": "1.11",
  "userIdentity": {
    "type": "AssumedRole",
    "principalId": "AROAEXAMPLE:developer-session",
    "arn": "arn:aws:sts::111122223333:assumed-role/DeveloperRole/developer-session",
    "accountId": "111122223333",
    "sessionContext": {
      "sessionIssuer": {
        "type": "Role",
        "principalId": "AROAEXAMPLE",
        "arn": "arn:aws:iam::111122223333:role/DeveloperRole",
        "accountId": "111122223333",
        "userName": "DeveloperRole"
      }
    },
    "invokedBy": "aws-mcp.amazonaws.com"
  },
  "eventSource": "s3.amazonaws.com",
  "eventName": "GetObject",
  "sourceIPAddress": "aws-mcp.amazonaws.com",
  "userAgent": "aws-mcp.amazonaws.com",
  "eventType": "AwsApiCall",
  "managementEvent": false,
  "eventCategory": "Data"
}

For self-managed MCP servers with session tags, the tags appear in the requestParameters.principalTags field of the AssumeRole CloudTrail event. You can correlate the session name from the AssumeRole event to downstream API calls to trace agent activity.

{
  "eventSource": "sts.amazonaws.com",
  "eventName": "AssumeRole",
  "requestParameters": {
    "roleArn": "arn:aws:iam::111122223333:role/MCPServerRole",
    "roleSessionName": "mcp-server-session",
    "principalTags": {
      "AccessType": "AI",
      "Source": "AgentRuntime",
      "MCPServer": "org-data-server"
    }
  }
}

With these logs, you can query CloudTrail to find all AI-driven actions and analyze patterns of agent behavior. You can also identify unexpected or unauthorized operations and maintain compliance audit trails. Set up CloudWatch alarms to detect agent actions on sensitive resources or unusual patterns that indicate unintended access or misconfiguration.
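As a sketch of such a query, the following Python classifies parsed CloudTrail events as AI-driven using the fields from the log examples above: invokedBy for AWS-managed MCP servers and the principalTags of the AssumeRole event for self-managed servers. The sample events are illustrative; in practice you would feed in events exported from CloudTrail or queried through Athena or CloudWatch Logs Insights.

```python
# MCP service identifiers that AWS-managed MCP servers stamp into invokedBy
AWS_MCP_SERVICES = {
    "aws-mcp.amazonaws.com",
    "eks-mcp.amazonaws.com",
    "ecs-mcp.amazonaws.com",
}

def is_ai_driven(event):
    # AWS-managed MCP servers: downstream calls carry the service in invokedBy
    if event.get("userIdentity", {}).get("invokedBy") in AWS_MCP_SERVICES:
        return True
    # Self-managed MCP servers: the AssumeRole event carries the session tags
    tags = event.get("requestParameters", {}).get("principalTags", {})
    return tags.get("AccessType") == "AI"

events = [
    {"eventName": "GetObject",
     "userIdentity": {"invokedBy": "aws-mcp.amazonaws.com"}},
    {"eventName": "AssumeRole",
     "requestParameters": {"principalTags": {"AccessType": "AI"}}},
    {"eventName": "GetObject",
     "userIdentity": {"type": "IAMUser"}},  # human-initiated, not matched
]

ai_events = [e for e in events if is_ai_driven(e)]  # the first two events match
```

For self-managed servers, a second correlation step is needed in practice: join the roleSessionName from the tagged AssumeRole event to the session ARN on downstream API calls to attribute those calls to the agent.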

Things to consider

When deciding between AWS-managed and self-managed MCP servers, consider the trade-offs. AWS-managed MCP servers offer the most straightforward path: context keys are added automatically with no configuration on your part. Self-managed MCP servers require modifying code to add session tags, but they give you complete control over the tags and let you implement custom functionality not available in AWS-managed servers. Organizations can use both approaches: AWS-managed servers for standard AWS operations and self-managed servers for specialized use cases.

Practical implementation guidance:

  • Assess direct access paths: Evaluate whether your agents have access to general-purpose tools (bash, shell, code execution) that can bypass MCP servers. If they do, rely on principles 1 and 2 for those paths and consider restricting tool availability where possible.
  • Choose a differentiation mechanism: Select based on your MCP server type: context keys for AWS-managed servers, session tags for self-managed servers.
  • For AWS-managed MCP: Write IAM policies that check aws:ViaAWSMCPService and aws:CalledViaAWSMCP condition keys. No MCP server configuration needed.
  • For self-managed MCP: Modify MCP server code to add session tags when assuming roles. Verify consistent tag application across all instances.
  • Update IAM policies: Add differentiation conditions to existing policies. Test in non-production first to verify behavior.
  • Monitor CloudTrail logs: Verify differentiation is working by checking for context keys or session tags in CloudTrail events.
  • Set up alerts: Configure CloudWatch alarms for AI-driven sensitive operations or policy violations.
  • Perform regular audits: Review IAM policies quarterly to verify differentiation conditions remain correct as agent capabilities evolve.
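To make the alerting bullet above concrete, a CloudWatch Logs metric filter on the log group that receives your CloudTrail events can count MCP-mediated delete operations, which you can then alarm on. The pattern below is a sketch using the invokedBy field from the earlier log example; the operations you consider sensitive will differ.

```
{ ($.userIdentity.invokedBy = "aws-mcp.amazonaws.com") &&
  (($.eventName = "DeleteObject") || ($.eventName = "DeleteBucket")) }
```

Attach the pattern to the log group with PutMetricFilter, then create a CloudWatch alarm on the resulting metric to get notified when an agent performs a delete through the managed MCP server.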

Conclusion

Securing AI agent access to AWS resources requires building deterministic IAM controls for non-deterministic AI systems. The three security principles give you a defense-in-depth framework that adapts to your deployment pattern and level of client control.

Your implementation path depends on your situation. Start with principle 1. Audit current agent permissions and default to read-only access where possible. Next, implement principle 2. For config-bound scenarios, establish permission boundaries and select agent-specific roles. For code-controlled scenarios, implement dynamic session policies scoped to each tool invocation. Finally, add principle 3 differentiation based on your MCP server type. Use automatic context keys with AWS-managed MCP servers, or configure session tags with self-managed servers.

By applying these three security principles, you can use AI agents while maintaining the governance and compliance controls your organization requires.

Riggs Goodman III

Riggs is a Principal Solution Architect at AWS. His current focus is on AI security, providing technical guidance, architecture patterns, and leadership for customers and partners to build AI workloads on AWS. Internally, Riggs focuses on driving overall technical strategy and innovation across AWS service teams to address customer and partner challenges.