
Amazon S3 Files — Complete Setup & IaC Guide

A hands-on, Notion-ready reference for the new Amazon S3 Files feature (launched April 7, 2026). Covers what it is, how it works, how to provision it with Terraform and AWS CDK (TypeScript), how to inspect the mounted bucket from your terminal, and what dashboards you get for monitoring. Status: Generally Available · Regions: all commercial AWS Regions · Last verified: April 21, 2026


Table of Contents

  1. What Amazon S3 Files is

  2. Key capabilities & numbers

  3. How it works under the hood

  4. Architecture diagram (mental model)

  5. Prerequisites

  6. Setup with the AWS Console (fastest path)

  7. Setup with the AWS CLI (authoritative commands)

  8. Setup with Terraform

  9. Setup with AWS CDK (TypeScript)

  10. Mounting from EC2, ECS, EKS, and Lambda

  11. Terminal commands to inspect your bucket-as-filesystem

  12. Dashboards & observability (CloudWatch, CloudTrail)

  13. IAM & security model

  14. S3 Files vs Mountpoint vs EFS vs FSx

  15. Limitations & gotchas

  16. Pricing notes

  17. Reference links


1. What Amazon S3 Files is

Amazon S3 Files is a shared, POSIX-style file system that sits in front of a regular S3 general-purpose bucket and makes that bucket accessible over NFS v4.1+ from any AWS compute — EC2, ECS, EKS, Fargate, Lambda, and Batch. You do not duplicate data: reads and writes flow between a high-performance cache layer (built on Amazon EFS internally) and the underlying S3 bucket, and changes made through the file system show up as regular S3 objects, while changes made in S3 show up in the file system. In practical terms:

  • Your application does open(), read(), write(), ls, cp, rm against a mount point like /mnt/s3files/.

  • The same bytes are simultaneously accessible via s3://bucket-name/key and the S3 API.

  • Up to 25,000 compute resources can attach to the same file system at once.

  • It is the first cloud object store with a fully-featured, high-performance native file system interface — no FUSE, no custom connector, no new API for the application.


2. Key capabilities & numbers

| Capability | Value |
| --- | --- |
| Protocol | NFS v4.1+ |
| Backing storage | EFS cache layer + S3 bucket (source of truth) |
| Active data latency | ~1 ms |
| Aggregate read throughput | Multiple TB/s (4 TB/s+ cited in launch materials) |
| IOPS | 10M+ file-system IOPS per bucket |
| Concurrent compute clients | Up to 25,000 |
| Cache retention window | 1–365 days, default 30 |
| Large read threshold | ≥1 MiB reads stream straight from S3 |
| Sync S3 → file system | Seconds (can occasionally take ~1 min) |
| Sync file system → S3 | Within ~1 minute |
| Encryption | TLS 1.3 in transit, SSE-S3 or SSE-KMS at rest |
| Auth | IAM (always on, cannot be disabled) |
| Regions | All commercial AWS Regions |

3. How it works under the hood

Under the covers, S3 Files is built on Amazon EFS and introduces a new NFS filesystem type called s3files that the Linux mount command understands (via the amazon-efs-utils package v3.0.0+). Data flow:

  1. Read path. When your app reads a file, S3 Files lazily loads the file (or just its metadata) onto the high-performance cache. Subsequent reads come from cache at ~1 ms. Reads ≥ 1 MiB stream directly from S3 and incur only standard S3 GET charges — no filesystem cache cost.

  2. Write path. Writes hit the high-performance cache first, are batched, then flushed to S3 as new objects or new S3 versions. S3 bucket versioning is required for this reason.

  3. Change detection. S3 Files creates managed EventBridge rules (named DO-NOT-DELETE-S3-Files*) so that changes made via the S3 API are detected and reflected in the file system.

  4. Cache expiry. Files that haven't been touched within your configured window (1–365 days, default 30) expire from the cache. They stay in S3 and are re-fetched on next access.

  5. Consistency model. NFS close-to-open consistency — i.e., once one client closes a file, other clients see the new version when they open it.
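The tiered read path above can be modeled as a few lines of pure logic. This is an illustrative sketch of the documented behavior (threshold and tier names taken from this guide), not the actual client implementation:

```python
ONE_MIB = 1024 * 1024  # documented large-read threshold

def read_path(size_bytes: int, in_cache: bool) -> str:
    """Where a read is served from, per the documented data flow."""
    if size_bytes >= ONE_MIB:
        return "s3-direct"            # streams from S3; standard GET charges only
    if in_cache:
        return "cache"                # ~1 ms latency
    return "lazy-load-then-cache"     # first touch pulls the file into the cache

print(read_path(4 * ONE_MIB, in_cache=True))   # s3-direct
print(read_path(64 * 1024, in_cache=True))     # cache
```

The takeaway for capacity planning: only sub-1-MiB reads consume cache ops, so a workload of large sequential reads bills mostly as S3 GETs.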


4. Architecture diagram (mental model)

┌──────────────────────────────────────────────┐
│ Your VPC                                     │
│                                              │
│  ┌───────────┐        ┌──────────────────┐   │
│  │ EC2 /     │  NFS   │  Mount Target    │   │
│  │ ECS /     ├───────▶│  (one per AZ)    │   │
│  │ EKS /     │  2049  │                  │   │
│  │ Lambda    │        └────────┬─────────┘   │
│  └───────────┘                 │             │
└────────────────────────────────┼─────────────┘
                                 │
                     ┌───────────▼─────────────┐
                     │  S3 File System         │
                     │  (fs-xxxx, backed by    │
                     │  EFS high-perf cache)   │
                     └───────────┬─────────────┘
                                 │ two-way sync
                     ┌───────────▼─────────────┐
                     │  S3 General Purpose     │
                     │  Bucket (versioned,     │
                     │  SSE-S3 or SSE-KMS)     │
                     └─────────────────────────┘

Three resources, in order: file system → mount target(s) → mount command.


5. Prerequisites

Directly from the AWS prerequisites page:

  • An AWS account.

  • A compute resource (EC2/ECS/EKS/Lambda) and an S3 general-purpose bucket in the same Region.

  • S3 Versioning enabled on the bucket (required for sync).

  • Bucket encryption must be SSE-S3 or SSE-KMS.

  • amazon-efs-utils v3.0.0+ installed on the EC2 instance (shared client for EFS and S3 Files).

  • Two IAM roles:

    • A service role that S3 Files assumes to read/write your bucket and manage EventBridge rules.
    • A compute role (e.g. EC2 instance profile) with the AmazonS3FilesClientFullAccess or AmazonS3FilesClientReadOnlyAccess managed policy plus an inline policy granting direct s3:GetObject/s3:ListBucket.
  • Security groups allowing TCP 2049 between the compute security group and the mount-target security group.

Tip. If you create the file system through the AWS Console, it auto-creates the service IAM role, one mount target per AZ in your default VPC, and one access point for the file system.
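Before creating the file system, a script can verify the two bucket prerequisites. The sketch below validates the response shapes of S3's get-bucket-versioning and get-bucket-encryption calls (as returned by boto3); wiring it to a live client is left out so the example stays self-contained:

```python
def versioning_ok(resp: dict) -> bool:
    # get_bucket_versioning returns {} for never-configured buckets,
    # or {"Status": "Enabled"} / {"Status": "Suspended"}
    return resp.get("Status") == "Enabled"

def encryption_ok(resp: dict) -> bool:
    # get_bucket_encryption response; S3 Files requires SSE-S3 or SSE-KMS
    rules = resp.get("ServerSideEncryptionConfiguration", {}).get("Rules", [])
    algos = {r.get("ApplyServerSideEncryptionByDefault", {}).get("SSEAlgorithm")
             for r in rules}
    return bool(algos & {"AES256", "aws:kms"})

print(versioning_ok({"Status": "Enabled"}))  # True
```

In real use, feed these the results of `boto3.client("s3").get_bucket_versioning(Bucket=...)` and `get_bucket_encryption(Bucket=...)` and fail fast before calling create-file-system.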


6. Setup with the AWS Console (fastest path)

  1. Open the S3 console → General purpose buckets → select your bucket → File systems tab → Create file system.

  2. Confirm the VPC (default VPC is fine for a test).

  3. Click Create. The console provisions the file system, one mount target per AZ, an access point, and the service IAM role.

  4. On the file system's Overview page, choose Attach under Attach to an EC2 instance, pick your EC2 instance, enter a mount path (e.g. /mnt/s3files/), and follow the generated mount command in CloudShell.

  5. Attach the managed policy AmazonS3FilesClientFullAccess to the EC2 instance's IAM role.


7. Setup with the AWS CLI (authoritative commands)

These are the exact AWS-documented commands. Replace placeholders.

7.1 Create the file system

aws s3files create-file-system \
--region <aws-region> \
--bucket <bucket-arn> \
--role-arn <iam-role-arn>

  • <bucket-arn> — e.g. arn:aws:s3:::my-bucket

  • <iam-role-arn> — the service role S3 Files assumes (see §13)

The response returns a JSON description containing the file system ID (fs-xxxxxxxx…). Save it.

7.2 Create mount targets (one per AZ)

A mount target is the ENI that lives in your VPC and exposes the file system to clients. Create one per AZ you use.

aws s3files create-mount-target \
--region <aws-region> \
--file-system-id <fs-id> \
--subnet-id <subnet-id>

Creation takes up to ~5 minutes.
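Because creation is asynchronous, automation should poll for readiness rather than sleep blindly. Here is a generic helper with the status lookup injected so the sketch runs anywhere; in real use the lambda would call describe-mount-targets and read the mount target's lifecycle state (the exact field name is an assumption):

```python
import time

def wait_until(get_status, want="available", timeout_s=300, interval_s=5,
               sleep=time.sleep):
    """Poll get_status() until it returns `want`, or raise after timeout_s."""
    waited = 0
    while True:
        status = get_status()
        if status == want:
            return status
        if waited >= timeout_s:
            raise TimeoutError(f"still '{status}' after {timeout_s}s")
        sleep(interval_s)
        waited += interval_s

# Stubbed usage: the "mount target" becomes available on the third poll.
states = iter(["creating", "creating", "available"])
print(wait_until(lambda: next(states), sleep=lambda s: None))  # available
```

Swap the stub for a closure over your AWS SDK/CLI call; a 5-second interval against a ~5-minute creation window keeps API traffic negligible.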

7.3 Describe / verify


# Look up a file system
aws s3files get-file-system \
--region <aws-region> \
--file-system-id <fs-id>

# List all file systems in a Region
aws s3files list-file-systems --region <aws-region>

# List mount targets for a file system
aws s3files describe-mount-targets \
--region <aws-region> \
--file-system-id <fs-id>

7.4 Mount it from the EC2 shell

sudo mkdir -p /mnt/s3files
sudo mount -t s3files <fs-id>:/ /mnt/s3files

Full examples of filesystem checks are in §11.

7.5 Tear it down


# Unmount first
sudo umount /mnt/s3files

# Delete mount targets (required before deleting fs)
aws s3files delete-mount-target --mount-target-id <mt-id>

# Delete file system
aws s3files delete-file-system --file-system-id <fs-id>


8. Setup with Terraform

Important caveat as of 21 April 2026: HashiCorp has a tracking issue (GitHub #47324) for native Terraform support — aws_s3files_file_system and aws_s3files_mount_target resources are being added, targeted for AWS provider v6.40.0. Until that release ships and your configuration is upgraded to it, the community-recommended pattern is terraform_data + local-exec calling the AWS CLI. Both flavors are shown below.

8.1 Variables and provider

terraform {
  required_version = ">= 1.6.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.80, < 7.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

variable "aws_region" {
  type    = string
  default = "us-east-1"
}

variable "bucket_name" {
  type    = string
  default = "my-s3files-bucket"
}

variable "subnet_ids" {
  type = list(string) # one per AZ you want to cover
}

variable "vpc_id" {
  type = string
}

variable "app_sg_id" {
  type        = string
  description = "Security group of the compute that mounts the file system"
}

8.2 Bucket prerequisites (versioning + encryption)

resource "aws_s3_bucket" "data" {
  bucket = var.bucket_name
}

resource "aws_s3_bucket_versioning" "data" {
  bucket = aws_s3_bucket.data.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

8.3 Service IAM role (what S3 Files assumes)

data "aws_caller_identity" "current" {}

locals {
  account_id = data.aws_caller_identity.current.account_id
  region     = var.aws_region
}

resource "aws_iam_role" "s3files_service" {
  name = "S3FilesServiceRole-${var.bucket_name}"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "AllowS3FilesAssumeRole"
      Effect    = "Allow"
      Principal = { Service = "elasticfilesystem.amazonaws.com" }
      Action    = "sts:AssumeRole"
      Condition = {
        StringEquals = { "aws:SourceAccount" = local.account_id }
        ArnLike      = { "aws:SourceArn" = "arn:aws:s3files:${local.region}:${local.account_id}:file-system/*" }
      }
    }]
  })
}

resource "aws_iam_role_policy" "s3files_service" {
  role = aws_iam_role.s3files_service.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "S3BucketPermissions"
        Effect    = "Allow"
        Action    = ["s3:ListBucket", "s3:ListBucketVersions"]
        Resource  = aws_s3_bucket.data.arn
        Condition = { StringEquals = { "aws:ResourceAccount" = local.account_id } }
      },
      {
        Sid    = "S3ObjectPermissions"
        Effect = "Allow"
        Action = [
          "s3:AbortMultipartUpload",
          "s3:DeleteObject*",
          "s3:GetObject*",
          "s3:List*",
          "s3:PutObject*"
        ]
        Resource  = "${aws_s3_bucket.data.arn}/*"
        Condition = { StringEquals = { "aws:ResourceAccount" = local.account_id } }
      },
      {
        Sid    = "EventBridgeManage"
        Effect = "Allow"
        Action = [
          "events:DeleteRule", "events:DisableRule", "events:EnableRule",
          "events:PutRule", "events:PutTargets", "events:RemoveTargets"
        ]
        Condition = { StringEquals = { "events:ManagedBy" = "elasticfilesystem.amazonaws.com" } }
        Resource  = ["arn:aws:events:*:*:rule/DO-NOT-DELETE-S3-Files*"]
      },
      {
        Sid    = "EventBridgeRead"
        Effect = "Allow"
        Action = [
          "events:DescribeRule", "events:ListRuleNamesByTarget",
          "events:ListRules", "events:ListTargetsByRule"
        ]
        Resource = ["arn:aws:events:*:*:rule/*"]
      }
    ]
  })
}

8.4 Security group for the mount target

resource "aws_security_group" "s3files_mt" {
  name        = "s3files-mount-target"
  description = "Allow NFS 2049 from app compute"
  vpc_id      = var.vpc_id
}

resource "aws_vpc_security_group_ingress_rule" "nfs_in" {
  security_group_id            = aws_security_group.s3files_mt.id
  referenced_security_group_id = var.app_sg_id
  from_port                    = 2049
  to_port                      = 2049
  ip_protocol                  = "tcp"
}

8.5 File system + mount targets — native resources (when v6.40.0 is out)


# Expected shape once aws_s3files_file_system ships. Confirm arg names in the
# provider CHANGELOG before apply.
resource "aws_s3files_file_system" "this" {
  bucket   = aws_s3_bucket.data.arn
  role_arn = aws_iam_role.s3files_service.arn
}

resource "aws_s3files_mount_target" "per_az" {
  for_each           = toset(var.subnet_ids)
  file_system_id     = aws_s3files_file_system.this.id
  subnet_id          = each.value
  security_group_ids = [aws_security_group.s3files_mt.id]
}

output "file_system_id" {
  value = aws_s3files_file_system.this.id
}

8.6 File system + mount targets — fallback (terraform_data + AWS CLI)

Use this until provider v6.40.0 lands. It shells out to the same AWS CLI commands shown in §7.

resource "terraform_data" "s3files_fs" {
  input = {
    region     = var.aws_region
    bucket_arn = aws_s3_bucket.data.arn
    role_arn   = aws_iam_role.s3files_service.arn
  }

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"] # pipefail requires bash, not sh
    command     = <<-EOT
      set -euo pipefail
      FS_ID=$(aws s3files create-file-system \
        --region ${self.input.region} \
        --bucket ${self.input.bucket_arn} \
        --role-arn ${self.input.role_arn} \
        --query FileSystemId --output text)
      echo "$FS_ID" > ${path.module}/.s3files-fs-id
    EOT
  }

  provisioner "local-exec" {
    when    = destroy
    command = <<-EOT
      FS_ID=$(cat ${path.module}/.s3files-fs-id)
      # destroy all mount targets first, then delete the file system
      for MT in $(aws s3files describe-mount-targets --region ${self.input.region} \
          --file-system-id $FS_ID \
          --query 'MountTargets[].MountTargetId' --output text); do
        aws s3files delete-mount-target --region ${self.input.region} --mount-target-id $MT
      done
      aws s3files delete-file-system --region ${self.input.region} --file-system-id $FS_ID
    EOT
  }
}

data "local_file" "fs_id" {
  depends_on = [terraform_data.s3files_fs]
  filename   = "${path.module}/.s3files-fs-id"
}

resource "terraform_data" "mount_targets" {
  for_each = toset(var.subnet_ids)
  input = {
    fs_id     = trimspace(data.local_file.fs_id.content)
    subnet_id = each.value
  }

  provisioner "local-exec" {
    command = <<-EOT
      aws s3files create-mount-target \
        --region ${var.aws_region} \
        --file-system-id ${self.input.fs_id} \
        --subnet-id ${self.input.subnet_id}
    EOT
  }
}

8.7 Compute-side IAM (EC2 instance profile)

resource "aws_iam_role" "app_ec2" {
  name = "S3FilesEC2Role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "client_full" {
  role       = aws_iam_role.app_ec2.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FilesClientFullAccess"
}

resource "aws_iam_role_policy" "app_s3_read" {
  role = aws_iam_role.app_ec2.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      { Effect = "Allow", Action = ["s3:GetObject", "s3:GetObjectVersion"], Resource = "${aws_s3_bucket.data.arn}/*" },
      { Effect = "Allow", Action = "s3:ListBucket", Resource = aws_s3_bucket.data.arn }
    ]
  })
}

resource "aws_iam_instance_profile" "app_ec2" {
  name = "S3FilesEC2Profile"
  role = aws_iam_role.app_ec2.name
}


9. Setup with AWS CDK (TypeScript)

The AWS CDK L2 construct for S3 Files is still rolling out (the feature shipped April 7, 2026). Until @aws-cdk/aws-s3files-alpha is stable, use a mix of L1 / L2 constructs with AwsCustomResource where needed.

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import {
  AwsCustomResource,
  AwsCustomResourcePolicy,
  PhysicalResourceId,
  PhysicalResourceIdReference,
} from 'aws-cdk-lib/custom-resources';

export class S3FilesStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: cdk.StackProps & { vpc: ec2.IVpc }) {
    super(scope, id, props);

    // 1. Bucket with required settings
    const bucket = new s3.Bucket(this, 'DataBucket', {
      bucketName: 'my-s3files-bucket',
      versioned: true, // required by S3 Files
      encryption: s3.BucketEncryption.S3_MANAGED, // SSE-S3 (or KMS_MANAGED / KMS)
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      enforceSSL: true,
    });

    // 2. Service role that S3 Files assumes
    const serviceRole = new iam.Role(this, 'S3FilesServiceRole', {
      assumedBy: new iam.ServicePrincipal('elasticfilesystem.amazonaws.com', {
        conditions: {
          StringEquals: { 'aws:SourceAccount': this.account },
          ArnLike: { 'aws:SourceArn': `arn:aws:s3files:${this.region}:${this.account}:file-system/*` },
        },
      }),
    });

    bucket.grantReadWrite(serviceRole);
    serviceRole.addToPolicy(new iam.PolicyStatement({
      actions: ['s3:ListBucket', 's3:ListBucketVersions'],
      resources: [bucket.bucketArn],
      conditions: { StringEquals: { 'aws:ResourceAccount': this.account } },
    }));
    serviceRole.addToPolicy(new iam.PolicyStatement({
      actions: [
        'events:PutRule', 'events:DeleteRule', 'events:EnableRule', 'events:DisableRule',
        'events:PutTargets', 'events:RemoveTargets',
      ],
      resources: ['arn:aws:events:*:*:rule/DO-NOT-DELETE-S3-Files*'],
      conditions: { StringEquals: { 'events:ManagedBy': 'elasticfilesystem.amazonaws.com' } },
    }));
    serviceRole.addToPolicy(new iam.PolicyStatement({
      actions: ['events:DescribeRule', 'events:ListRules', 'events:ListRuleNamesByTarget', 'events:ListTargetsByRule'],
      resources: ['arn:aws:events:*:*:rule/*'],
    }));

    // 3. Security group for the mount target
    const mtSg = new ec2.SecurityGroup(this, 'S3FilesMountTargetSg', {
      vpc: props.vpc,
      description: 'NFS 2049 inbound for S3 Files mount target',
    });

    // 4. Create the file system via AwsCustomResource (until the L2 construct ships)
    const fileSystem = new AwsCustomResource(this, 'S3FilesFileSystem', {
      onCreate: {
        service: 's3files',
        action: 'createFileSystem',
        parameters: { Bucket: bucket.bucketArn, RoleArn: serviceRole.roleArn },
        physicalResourceId: PhysicalResourceId.fromResponse('FileSystemId'),
      },
      onDelete: {
        service: 's3files',
        action: 'deleteFileSystem',
        // Resolves at delete time to the physical resource ID captured on create
        parameters: { FileSystemId: new PhysicalResourceIdReference() },
      },
      policy: AwsCustomResourcePolicy.fromStatements([
        new iam.PolicyStatement({
          actions: ['s3files:CreateFileSystem', 's3files:DeleteFileSystem', 'iam:PassRole'],
          resources: ['*'],
        }),
      ]),
    });
    const fsId = fileSystem.getResponseField('FileSystemId');

    // 5. One mount target per private subnet
    props.vpc.privateSubnets.forEach((subnet, i) => {
      new AwsCustomResource(this, `S3FilesMountTarget${i}`, {
        onCreate: {
          service: 's3files',
          action: 'createMountTarget',
          parameters: {
            FileSystemId: fsId,
            SubnetId: subnet.subnetId,
            SecurityGroups: [mtSg.securityGroupId],
          },
          physicalResourceId: PhysicalResourceId.fromResponse('MountTargetId'),
        },
        policy: AwsCustomResourcePolicy.fromStatements([
          new iam.PolicyStatement({ actions: ['s3files:CreateMountTarget'], resources: ['*'] }),
        ]),
      });
    });

    // 6. Compute-side IAM: EC2 role that can mount + read bucket
    const ec2Role = new iam.Role(this, 'AppEc2Role', {
      assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
      managedPolicies: [iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonS3FilesClientFullAccess')],
    });
    bucket.grantRead(ec2Role);

    new cdk.CfnOutput(this, 'FileSystemId', { value: fsId });
  }
}

Once the native CDK L2 construct lands (track @aws-cdk/aws-s3files-alpha), collapse the AwsCustomResource blocks into new s3files.FileSystem(...) and fileSystem.addMountTarget(...).


10. Mounting from EC2, ECS, EKS, and Lambda

10.1 EC2 — one-shot


# Install the client (Amazon Linux)
sudo yum -y install amazon-efs-utils

# Or, on other distros:
curl https://amazon-efs-utils.aws.com/efs-utils-installer.sh | sudo sh -s -- --install

# Mount
sudo mkdir -p /mnt/s3files
sudo mount -t s3files fs-0123456789abcdef0:/ /mnt/s3files

The mount helper automatically enforces TLS 1.3 and IAM authentication — neither can be disabled. Its defaults are nfsvers=4.2, rsize=1048576, wsize=1048576, hard, timeo=600, retrans=2, noresvport, tls, iam.

10.2 EC2 — auto-mount on boot via /etc/fstab

fs-0123456789abcdef0:/  /mnt/s3files  s3files  _netdev,nofail  0  0

  • _netdev is required (network-dependent mount) — without it, the instance can hang on boot.

  • nofail is recommended so the instance still boots if the mount fails.

To pin to a specific access point:

fs-xxxx:/  /mnt/s3files  s3files  _netdev,accesspoint=fsap-xxxx  0  0

Test with sudo mount -a then findmnt -T /mnt/s3files.

10.3 ECS / Fargate / EKS containers

Attach the file system as a volume using the standard EFS/NFS volume plumbing — ECS task definitions and EKS PersistentVolume specs accept the S3 Files fs-xxxx ID the same way. Your task role needs AmazonS3FilesClientFullAccess (or the read-only equivalent) plus the same inline s3:GetObject/s3:ListBucket grants.

10.4 AWS Lambda

Lambda functions mount the file system at a path like /mnt/s3files via the function's file-system configuration. The function's execution role needs the same compute-side IAM as EC2.

10.5 Cross-VPC / cross-Region mount

From a peered VPC or Transit Gateway, bypass DNS and pass the mount target's IP explicitly:

sudo mount -t s3files \
-o mounttargetip=10.0.12.34 \
fs-0123456789abcdef0:/ /mnt/s3files

For cross-Region, also edit /etc/amazon/efs/s3files-utils.conf and uncomment the region = <source-region> line.

10.6 Unmount

sudo umount /mnt/s3files
findmnt /mnt/s3files # no output once the mount is gone


11. Terminal commands to inspect your bucket-as-filesystem

This is the list you asked for — every useful shell command for verifying the mount and comparing file system ↔ S3 views.

11.1 Is it mounted? What options?


# Is the mount listed by the kernel?
findmnt -T /mnt/s3files

# Free / used space (shows exabyte-scale pseudo-size)
df -h /mnt/s3files

# All NFS mounts on the box
mount | grep s3files

# s3files mounts over NFS — confirm the kernel has NFS filesystem support
cat /proc/filesystems | grep nfs

Expected df -h output:

Filesystem      Size  Used Avail Use% Mounted on
fs-xxx.xxx...   8.0E  129M  8.0E   1% /mnt/s3files

11.2 Browse the bucket through the file system

cd /mnt/s3files

# List everything (respects S3 prefixes as directories)
ls -la

# Recursive listing with sizes
ls -lhR | head -n 50

# Tree view (if tree is installed)
sudo yum -y install tree && tree -L 3 /mnt/s3files

# Deep search
find /mnt/s3files -type f -name "*.parquet" | head

# Word-count a file, stream a log, whatever
wc -l /mnt/s3files/logs/2026-04-21.log
tail -f /mnt/s3files/logs/app.log

11.3 Write a file and confirm it materializes in S3

echo "Hello S3 Files" > /mnt/s3files/hello.txt

# Within ~1 minute, the same key appears in S3
aws s3 ls s3://my-s3files-bucket/hello.txt
aws s3api list-object-versions \
--bucket my-s3files-bucket \
--prefix hello.txt

# Fetch it back via the S3 API
aws s3 cp s3://my-s3files-bucket/hello.txt - | cat
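Because file system → S3 sync completes within about a minute rather than instantly, scripts that hand off to the S3 API should poll for the object instead of assuming it is already there. A sketch with the head-object call injected so it runs anywhere; in real use you would pass a closure around boto3's head_object and catch its ClientError instead of the stand-in LookupError:

```python
def wait_for_object(head, attempts=12, delay_s=5, sleep=lambda s: None):
    """Call head() until it succeeds (object visible in S3) or attempts run out."""
    for _ in range(attempts):
        try:
            head()
            return True
        except LookupError:   # stand-in for a 404 from the real S3 client
            sleep(delay_s)
    return False

# Stub: the "object" appears after two failed polls.
calls = {"n": 0}
def fake_head():
    calls["n"] += 1
    if calls["n"] < 3:
        raise LookupError("404")

print(wait_for_object(fake_head))  # True
```

12 attempts at 5-second spacing gives the ~1-minute sync window plus headroom.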

11.4 Cross-check: same bytes through both interfaces?


# Sum via file system
sha256sum /mnt/s3files/data/big.parquet

# Sum via S3 API
aws s3 cp s3://my-s3files-bucket/data/big.parquet - | sha256sum

11.5 Inspect S3 Files infrastructure from the CLI


# File systems in this region
aws s3files list-file-systems --region us-east-1

# Detail for one
aws s3files get-file-system --file-system-id fs-xxx

# Mount targets
aws s3files describe-mount-targets --file-system-id fs-xxx

# The IP that a given mount target is using
aws s3files describe-mount-targets \
--file-system-id fs-xxx \
--query 'MountTargets[*].[MountTargetId,SubnetId,IpAddress,AvailabilityZoneName]' \
--output table

# Access points attached to the file system
aws s3files describe-access-points --file-system-id fs-xxx

11.6 Quick health check script

#!/usr/bin/env bash
set -euo pipefail

FS_ID="${1:?usage: $0 <fs-id> <mount-dir>}"
MNT="${2:?}"

echo "== mount status =="
findmnt "$MNT" || { echo "NOT MOUNTED"; exit 1; }

echo "== df =="
df -hT "$MNT"

echo "== round-trip write test =="
STAMP="health-$(date +%s).txt"
echo "ok" > "$MNT/$STAMP"
sleep 70
aws s3api head-object --bucket "$(aws s3files get-file-system \
--file-system-id "$FS_ID" --query Bucket --output text | sed 's|^arn:aws:s3:::||')" \
--key "$STAMP" && echo "✔ object visible in S3"
rm "$MNT/$STAMP"


12. Dashboards & observability

Yes — you get a real observability surface, not just a mounted directory.

12.1 Built-in AWS Console dashboard

The S3 console's File systems tab for a bucket gives you:

  • Mount target status per AZ.

  • Attached access points.

  • Current client connection count.

  • Storage used on the high-performance cache layer.

  • Per-file-system CloudWatch widgets.

12.2 CloudWatch metrics (namespace hints)

S3 Files exposes metrics through CloudWatch under the AWS/S3Files namespace (metrics shared with its EFS underpinnings). Useful ones:

| Metric | Meaning |
| --- | --- |
| StorageBytes | Bytes held in the high-performance cache layer |
| ClientConnections | Number of NFS clients currently mounted |
| DataReadIOBytes / DataWriteIOBytes | I/O throughput |
| TotalIOBytes | Aggregate I/O |
| PercentIOLimit | How close you are to saturation |
| PermittedThroughput / MeteredIOBytes | Capacity vs used |
| SyncErrors (custom dimension) | Failed S3 ↔ FS sync events |
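When building dashboards or alarms programmatically, it helps to factor out the query shape. The sketch below constructs one CloudWatch GetMetricData query entry for the namespace and dimension used in this section — pure dict construction, no AWS call, so it is easy to unit-test before wiring it to boto3's get_metric_data:

```python
def metric_query(fs_id: str, metric: str, period: int = 60,
                 stat: str = "Average") -> dict:
    """One MetricDataQueries entry (shape per CloudWatch GetMetricData)."""
    return {
        "Id": metric.lower(),  # must start lowercase per the API
        "MetricStat": {
            "Metric": {
                "Namespace": "AWS/S3Files",
                "MetricName": metric,
                "Dimensions": [{"Name": "FileSystemId", "Value": fs_id}],
            },
            "Period": period,
            "Stat": stat,
        },
    }

queries = [metric_query("fs-xxx", m) for m in ("StorageBytes", "ClientConnections")]
print(queries[0]["MetricStat"]["Metric"]["MetricName"])  # StorageBytes
```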

Build a dashboard with:

aws cloudwatch put-dashboard \
--dashboard-name S3FilesOverview \
--dashboard-body file://dashboard.json

Minimal dashboard.json:

{
  "widgets": [
    {
      "type": "metric",
      "properties": {
        "metrics": [
          [ "AWS/S3Files", "StorageBytes", "FileSystemId", "fs-xxx" ],
          [ ".", "ClientConnections", ".", "." ],
          [ ".", "TotalIOBytes", ".", "." ]
        ],
        "view": "timeSeries",
        "stat": "Average",
        "period": 60,
        "title": "S3 Files — fs-xxx"
      }
    }
  ]
}

12.3 CloudWatch alarms

aws cloudwatch put-metric-alarm \
--alarm-name S3Files-HighClientConnections \
--metric-name ClientConnections \
--namespace AWS/S3Files \
--statistic Average --period 60 \
--threshold 20000 --comparison-operator GreaterThanThreshold \
--evaluation-periods 2 \
--dimensions Name=FileSystemId,Value=fs-xxx \
--alarm-actions arn:aws:sns:us-east-1:123456789012:oncall

12.4 CloudTrail

All management-plane calls (CreateFileSystem, CreateMountTarget, DeleteFileSystem, etc.) land in CloudTrail as events from the s3files.amazonaws.com service. Query with Athena or the console's Event history.

12.5 Mount-helper logs

On each EC2 client, the mount helper and its watchdog write diagnostic logs useful for debugging NFS / TLS hiccups:

sudo journalctl -u amazon-efs-mount-watchdog -f
sudo ls /var/log/amazon/efs/

12.6 Third-party dashboards

  • Grafana CloudWatch data source — point it at the AWS/S3Files namespace; any EFS dashboard template lightly re-dimensioned works.

  • Datadog — its AWS integration picks up S3 Files metrics automatically.


13. IAM & security model

There are two IAM roles at play. Do not confuse them.

13.1 Service role (S3 Files → your bucket)

Consumed by the elasticfilesystem.amazonaws.com service principal (S3 Files is layered on EFS internally). Needs S3 read/write, KMS use (if SSE-KMS), and EventBridge management confined to rules named DO-NOT-DELETE-S3-Files*. Official policy (reproduced in §8.3 as Terraform) — summary:

  • s3:ListBucket, s3:ListBucketVersions on the bucket.

  • s3:AbortMultipartUpload, s3:DeleteObject*, s3:GetObject*, s3:List*, s3:PutObject* on objects.

  • kms:GenerateDataKey, kms:Encrypt, kms:Decrypt, kms:ReEncryptFrom, kms:ReEncryptTo scoped via kms:ViaService = s3.<region>.amazonaws.com.

  • Scoped EventBridge manage + broad EventBridge read.

  • Trust policy: principal elasticfilesystem.amazonaws.com, conditioned on aws:SourceAccount and aws:SourceArn = arn:aws:s3files:<region>:<account>:file-system/*.

13.2 Compute role (your EC2/Lambda/task → S3 Files)

Attach one managed policy plus an inline S3 read policy:

  • AmazonS3FilesClientFullAccess — read + write through the mount.

  • AmazonS3FilesClientReadOnlyAccess — read only.

  • AmazonElasticFileSystemUtils — lets the mount helper emit CloudWatch metrics.

The inline S3 read policy (speeds up large reads by bypassing the cache):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:GetObjectVersion"],
      "Resource": "arn:aws:s3:::<bucket>/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<bucket>"
    }
  ]
}

Granular IAM actions instead of the managed policy: s3files:ClientMount, s3files:ClientWrite, s3files:ClientRootAccess.

13.3 Encryption

  • In transit: TLS 1.3, mandatory. The mount helper always adds tls and iam.

  • At rest: SSE-S3 or SSE-KMS (matches the bucket).

  • FIPS mode: flip fips_mode_enabled = true in /etc/amazon/efs/s3files-utils.conf.

13.4 POSIX permissions

S3 Files stores UID/GID and file-mode bits as object metadata in S3. The file system enforces standard POSIX access checks on top of IAM — both must pass.
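That dual gate can be modeled directly: an access succeeds only if the IAM layer allows it and the POSIX mode bits do. A simplified sketch covering only the owner/group/other triad — real NFS evaluation also handles supplementary groups and root squash:

```python
import stat

def posix_allows(mode: int, file_uid: int, file_gid: int,
                 uid: int, gid: int, want_read: bool, want_write: bool) -> bool:
    """Pick the rwx class that applies to this caller and test the bits."""
    if uid == file_uid:
        r, w = mode & stat.S_IRUSR, mode & stat.S_IWUSR
    elif gid == file_gid:
        r, w = mode & stat.S_IRGRP, mode & stat.S_IWGRP
    else:
        r, w = mode & stat.S_IROTH, mode & stat.S_IWOTH
    return (not want_read or bool(r)) and (not want_write or bool(w))

def access_allowed(iam_allows: bool, **posix_kwargs) -> bool:
    # Both layers must pass, per the S3 Files security model.
    return iam_allows and posix_allows(**posix_kwargs)

print(access_allowed(True, mode=0o640, file_uid=1000, file_gid=1000,
                     uid=1000, gid=1000, want_read=True, want_write=True))  # True
```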


14. S3 Files vs Mountpoint vs EFS vs FSx

| | S3 Files | Mountpoint for S3 | EFS | FSx |
| --- | --- | --- | --- | --- |
| Access pattern | Shared read/write NFS | FUSE client, read-heavy | Shared NFS | Shared (Lustre / ONTAP / OpenZFS / Windows) |
| Source of truth | S3 bucket | S3 bucket | EFS | FSx |
| POSIX semantics | Full | Partial (no rename, no random write) | Full | Full |
| Max clients | 25,000 | N/A (client-side) | Thousands | Varies |
| Latency | ~1 ms (cache) | Network + S3 | Sub-ms | Sub-ms |
| Data in S3? | Yes, always | Yes, always | No | No (except tiering) |
| Best for | Agents, ML training, collaboration | Large sequential reads, ETL | General-purpose NAS | HPC, Windows, NetApp-native |
| Cost model | Pay for cache + S3 requests | S3 requests only | GB/month tiered | Provisioned |

Rule of thumb: read-heavy and sequential → Mountpoint. Interactive / shared / writes → S3 Files.


15. Limitations & gotchas

  • Only S3 general-purpose buckets — not directory buckets, vector buckets, or S3 Tables buckets.

  • Versioning must be on — file-system creation is rejected otherwise.

  • Encryption must be SSE-S3 or SSE-KMS.

  • One mount target per AZ per VPC. If you need more AZs, cover them explicitly.

  • Mount helper is Linux only. Windows clients are not supported.

  • _netdev is required in /etc/fstab — without it, instances can hang on boot.

  • Sync is not instantaneous. Expect seconds from S3 → FS, up to ~1 minute from FS → S3.

  • Don't touch the managed EventBridge rules named DO-NOT-DELETE-S3-Files* — the service needs them.

  • Terraform native support is pending (provider v6.40.0); use the CLI fallback until then.

  • Provider-specific API fields may shift slightly between preview and GA CDK/Terraform resources — always cross-check arg names with the latest CHANGELOG before production apply.

  • Not a POSIX file system in every edge case — it is close-to-open consistent (NFS semantics), not strict locking.


16. Pricing notes

Pay-as-you-go, no minimum commitments:

  • Active working-set storage on the high-performance cache layer (GB-month).

  • Small-file reads and all writes through the file system (metered ops).

  • Underlying S3 request charges during sync (standard GET/PUT pricing).

  • Large reads (≥1 MiB) stream from S3 directly and incur only S3 GET costs — no file-system op charge.

  • Cache retention of 1–365 days (default 30) governs how long data stays "warm" before it expires. Longer retention = higher cache storage bill.

Headline claim from AWS: up to 90% cheaper than cycling data between S3 and a separate file system. Check the S3 pricing page for the exact per-region GB-month and request rates.
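A back-of-envelope calculator makes the retention trade-off concrete. Both per-GB rates below are placeholders, not real prices — substitute the current numbers from the S3 pricing page:

```python
def monthly_cache_cost(working_set_gb: float,
                       cache_rate_gb_month: float = 0.30,   # placeholder rate
                       s3_rate_gb_month: float = 0.023) -> dict:  # placeholder rate
    """Hypothetical monthly storage cost split between cache layer and S3."""
    return {
        "cache": round(working_set_gb * cache_rate_gb_month, 2),
        "s3": round(working_set_gb * s3_rate_gb_month, 2),
    }

print(monthly_cache_cost(100))  # {'cache': 30.0, 's3': 2.3}
```

The lever: shrinking the retention window shrinks the working set held on the cache layer, moving cost from the "cache" bucket to plain S3 storage plus re-fetch requests.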

