
Pick the right storage in 30 seconds — object, block, file, or managed file system
Storage type determines everything: object = S3, block = EBS, elastic POSIX file = EFS, managed file system = FSx
| Feature | Amazon S3 (infinite object storage, anywhere) | Amazon EBS (high-performance block storage for EC2) | Amazon EFS (serverless elastic POSIX shared file system) | Amazon FSx (fully managed specialty file systems) |
|---|---|---|---|---|
| **Storage Type.** Storage type is the first filter on every exam question: object = S3, block = EBS, shared POSIX = EFS, specialty FS = FSx. | Object storage (key-value) | Block storage (raw volumes) | Network file system (NFS v4.0/4.1) | Managed file system (Lustre, SMB/NTFS, NFS, iSCSI depending on flavor) |
| **Access Protocol.** If the question mentions SMB or Windows ACLs → FSx for Windows; HPC/Lustre → FSx for Lustre; NFS shared across many Linux instances → EFS. | REST/HTTP API, AWS SDKs, S3 Select, Athena, pre-signed URLs | Attached as a block device to a single EC2 instance (or Multi-Attach for io1/io2) | NFS v4.0/4.1 — mountable by thousands of EC2/ECS/Lambda/EKS clients simultaneously | Lustre: POSIX; Windows FS: SMB/DFS; ONTAP: NFS + SMB + iSCSI + S3; OpenZFS: NFS |
| **Attachment / Concurrency.** Multi-Attach EBS is a common trap — it still requires a cluster-aware file system (e.g., GFS2) and is limited to a single AZ. EFS is the go-to for true shared Linux file access. | Unlimited concurrent HTTP clients globally | Single EC2 instance by default; io1/io2 Multi-Attach: up to 16 Nitro EC2 instances in the same AZ | Thousands of concurrent NFS clients across multiple AZs, VPCs (via VPC peering), and on-premises (via DX/VPN) | Lustre: thousands of compute nodes; Windows FS: hundreds of SMB clients; ONTAP: multi-protocol, multi-AZ; OpenZFS: NFS clients |
| **Durability.** FSx for Lustre SCRATCH file systems are NOT replicated and are designed for temporary, short-duration HPC workloads — do not use them for durable storage. | 99.999999999% (11 nines) — automatic multi-AZ replication for Standard; S3 One Zone-IA stores data in a single AZ | Designed for 99.8%–99.9% durability (0.1%–0.2% annual failure rate); snapshots stored in S3 (11 nines); io2 is designed for 99.999% durability | 99.999999999% (11 nines) for Regional (multi-AZ); One Zone EFS stores data in a single AZ | Varies by type: Windows FS Multi-AZ = highly durable; Lustre Persistent = replicated; Scratch = no replication (temporary) |
| **Availability SLA** | 99.99% (Standard); 99.9% (Standard-IA, Intelligent-Tiering); 99.5% (One Zone-IA) | 99.99% under the EC2 SLA; io2 volumes are additionally designed for 99.999% durability (other volume types 99.8%–99.9%) | 99.99% (Regional/Multi-AZ); 99.9% (One Zone) | 99.99% (Windows FS Multi-AZ, ONTAP Multi-AZ); Single-AZ variants lower |
| **Scalability / Capacity.** EFS is the only file system here that scales to petabytes automatically with zero capacity management. S3 also scales infinitely but is object storage, not a file system. | Virtually unlimited — no bucket size limit, unlimited objects; individual objects up to 5 TB | gp2/gp3/io1/io2: up to 16 TiB per volume; io2 Block Express: up to 64 TiB; st1/sc1: up to 16 TiB | Automatically scales from 0 to petabytes — no provisioning required | Lustre: up to hundreds of PB with an S3 backend; Windows FS: up to 64 TiB per file system; ONTAP: SSD tier with virtually unlimited capacity-pool tiering to S3; OpenZFS: up to 512 TiB |
| **Performance Tiers / Volume Types.** io2 Block Express is the highest-performance EBS option (256,000 IOPS) but costs significantly more than gp3; exam questions often test whether io2 is actually needed or gp3 suffices. | No performance tiers — throughput scales with request rate; use S3 Transfer Acceleration or multipart upload for large objects | gp3 (baseline 3,000 IOPS, up to 16,000 IOPS, up to 1,000 MB/s); gp2 (3 IOPS/GiB, max 16,000 IOPS); io2 Block Express (up to 256,000 IOPS, up to 4,000 MB/s); io1 (up to 64,000 IOPS); st1 (throughput-optimized HDD, up to 500 MB/s); sc1 (cold HDD, up to 250 MB/s) | General Purpose (default, low latency); Max I/O (higher aggregate throughput, higher latency, legacy — not available for new One Zone FS); Elastic Throughput (default, auto-scales); Provisioned Throughput (fixed MiB/s); Bursting Throughput | Lustre: SSD (sub-ms latency, up to hundreds of GB/s aggregate) or HDD (throughput-optimized); Windows FS: SSD or HDD; ONTAP: SSD; OpenZFS: SSD |
| **Latency.** For databases requiring sub-millisecond latency → EBS io2; for HPC requiring a sub-millisecond shared file system → FSx for Lustre. S3 is never the answer for low-latency I/O. | Milliseconds to tens of milliseconds (HTTP overhead); not suitable for low-latency block I/O | Sub-millisecond (SSD volumes: gp3, io1, io2); single-digit milliseconds (HDD: st1, sc1) | General Purpose: low single-digit milliseconds; Max I/O: higher latency; One Zone: slightly lower latency than Multi-AZ | Lustre SSD: sub-millisecond; Windows FS SSD: sub-millisecond; ONTAP SSD: sub-millisecond; HDD variants: higher |
| **Multi-Region / Global Access.** S3 is the ONLY storage service with native multi-region access points; for global file system needs, the patterns are FSx for ONTAP with SnapMirror or S3 with CRR. | Global by design — Cross-Region Replication (CRR), Multi-Region Access Points, Transfer Acceleration via CloudFront edge locations | Single AZ only — snapshots can be copied across regions; no native global access | Single region (multi-AZ within the region); EFS Replication to another region for DR | Single region; ONTAP supports SnapMirror for cross-region replication; Windows FS supports DFS Replication |
| **EC2 Lifecycle Dependency.** EBS is AZ-locked — you cannot attach an EBS volume to an EC2 instance in a different AZ without snapshotting and restoring it. | Completely independent — persists without any EC2 instance | Independent of the EC2 lifecycle: data volumes persist after instance termination by default, while root volumes are deleted unless DeleteOnTermination=false; must be in the same AZ as the EC2 instance | Completely independent — persists without EC2; accessible via a mount target in each AZ | Completely independent — persists without EC2; accessed via ENI/endpoint |
| **Encryption.** S3 SSE-KMS allows audit trails via CloudTrail for every key usage — important for compliance workloads; SSE-S3 has no per-request KMS cost; DSSE-KMS adds dual-layer encryption for regulated industries. | SSE-S3 (AES-256, AWS-managed), SSE-KMS (customer-managed key via KMS), SSE-C (customer-provided key), DSSE-KMS (dual-layer); TLS in transit | AES-256 via AWS KMS; encryption can be enforced by default account-wide; encrypted snapshots; snapshots of unencrypted volumes are unencrypted | AES-256 at rest via KMS; TLS in transit (enforced via mount option); encryption at rest can be enforced at file system creation | AES-256 via KMS for all types; Windows FS also supports SMB encryption in transit; ONTAP supports encryption at rest and in transit |
| **Access Control.** EFS Access Points enforce a specific POSIX user identity and root directory — critical for multi-tenant container workloads (ECS/EKS). FSx for Windows is the only AWS storage with native Windows ACLs and AD integration. | IAM policies, bucket policies, S3 ACLs (legacy — recommend disabling), S3 Access Points, VPC endpoints, Block Public Access, Object Ownership | IAM policies for API actions (CreateVolume, AttachVolume, etc.); OS-level permissions once mounted; resource-based policies for snapshots | IAM file system policies (resource-based), EFS Access Points, POSIX permissions (UID/GID), NFS ACLs, security groups on mount targets | Windows FS: Active Directory integration, Windows ACLs, SMB; Lustre: POSIX; ONTAP: AD + POSIX + S3-compatible; OpenZFS: POSIX + NFS ACLs |
| **Versioning & Object Lock.** For WORM compliance on backup data across multiple services (not just S3), use AWS Backup Vault Lock; for WORM on S3 objects specifically, use S3 Object Lock in Compliance mode. | Native versioning (per bucket); S3 Object Lock (WORM — Governance and Compliance modes); MFA Delete for extra protection | No versioning — snapshots provide point-in-time recovery; AWS Backup can enforce retention | No native versioning — AWS Backup provides point-in-time recovery; EFS-to-EFS restore | Windows FS: Volume Shadow Copy (VSS) for user-driven restores; ONTAP: SnapLock for WORM; Lustre: no versioning; OpenZFS: ZFS snapshots |
| **Lifecycle Management.** EFS Intelligent-Tiering can reduce costs by up to 92% for infrequently accessed files; S3 Intelligent-Tiering automatically moves objects between access tiers with no retrieval fees. | S3 Lifecycle policies — transition between storage classes (Standard → IA → Glacier → Glacier Deep Archive) or expire objects automatically | No lifecycle policies — use Amazon Data Lifecycle Manager (DLM) for automated snapshot creation, retention, and cross-region copy | EFS Intelligent-Tiering auto-moves files not accessed in 7/14/30/60/90 days to the Infrequent Access (IA) tier and moves them back on access | Lustre: integrates with S3 as a backing store (HSM-like); ONTAP: FabricPool tiering to S3; Windows FS: no auto-tiering |
| **Backup & Disaster Recovery.** AWS Backup is the unified backup service covering S3, EBS, EFS, FSx, RDS, DynamoDB, and more — use it for centralized backup governance and compliance across storage services. | S3 Cross-Region Replication (CRR), Same-Region Replication (SRR), Batch Replication, Versioning + MFA Delete, AWS Backup for S3 | EBS snapshots (incremental, stored in S3), Amazon Data Lifecycle Manager, AWS Backup, Fast Snapshot Restore (FSR) for instant restores | AWS Backup (recommended), EFS-to-EFS Backup (legacy), EFS Replication to another region | AWS Backup integration for all types; Windows FS: VSS-consistent backups; ONTAP: SnapMirror, SnapVault; Lustre: S3 data repository as backup |
| **Pricing Model.** gp3 is ~20% cheaper than gp2 for the same capacity AND provides a free 3,000 IOPS baseline (vs. gp2, which gives only 100 IOPS for a 33 GiB volume) — always prefer gp3 unless there is a specific reason for gp2. | Pay per GB stored (varies by storage class) + per request (GET, PUT, LIST) + data transfer out; no minimum; S3 Intelligent-Tiering adds a per-object monitoring fee | Pay per GB provisioned per month (regardless of use) + provisioned IOPS (io1/io2) + provisioned throughput (gp3 above baseline) + snapshot storage; minimum 1 GiB | Pay per GB stored per month (Standard or IA tier) + provisioned throughput (if not elastic); no minimum; Elastic Throughput is billed per GB of data read/written | Pay per GB/month for storage + MB/s/month for throughput capacity; pricing varies significantly by type (Lustre, Windows FS, ONTAP, OpenZFS) |
| **Cost Optimization.** The #1 EBS cost optimization on exams: migrate gp2 → gp3. The #1 EFS cost optimization: enable Intelligent-Tiering. The #1 S3 cost optimization for archives: Glacier Deep Archive. | Use S3 Intelligent-Tiering for unknown access patterns; S3 Glacier Instant Retrieval for archives needing millisecond access; S3 Glacier Deep Archive for long-term cold storage (~$0.00099/GB/month); S3 Storage Lens for visibility | Migrate gp2 → gp3 (same or better performance, ~20% cheaper); delete unattached volumes; use st1/sc1 for throughput-heavy, cost-sensitive workloads; right-size volumes | Enable EFS Intelligent-Tiering (the IA tier is up to 92% cheaper than Standard); use One Zone EFS for dev/test (47% cheaper than Regional); right-size provisioned throughput | Use FSx for Lustre SCRATCH for temporary HPC jobs; use HDD storage for Windows FS when latency allows; leverage ONTAP deduplication/compression; use S3 tiering with ONTAP FabricPool |
| **Supported Compute / Integration.** Lambda can mount EFS natively (great for sharing state/models across invocations); Lambda CANNOT mount EBS volumes and accesses S3 via the SDK only. | EC2, Lambda (event triggers + SDK), ECS/EKS (SDK), EMR (native EMRFS), Athena, Glue, SageMaker, Redshift (COPY/UNLOAD), CloudFront (origin) | EC2 only (block device); ECS/EKS via the EBS CSI driver; not directly accessible from Lambda or on-premises | EC2, ECS (task storage), EKS (EFS CSI driver), Lambda (mounted file system), on-premises via DX/VPN, AWS DataSync | EC2, ECS/EKS (CSI drivers), on-premises (via DX/VPN for Windows FS and ONTAP), SageMaker (Lustre), AWS Batch (Lustre), ParallelCluster (Lustre) |
| **On-Premises Access.** On-premises servers needing NFS access to cloud storage → EFS via DX/VPN; SMB/Windows → FSx for Windows via DX/VPN; object storage from on-premises → S3 via Storage Gateway (S3 File Gateway). | Via internet/HTTPS or VPC endpoint (PrivateLink); AWS Storage Gateway (S3 File Gateway, Tape Gateway) for on-premises integration | NOT directly accessible from on-premises; must go through EC2 | Via AWS Direct Connect or Site-to-Site VPN + NFS mount; AWS DataSync for bulk migration | Windows FS: via DX/VPN (SMB); ONTAP: via DX/VPN (NFS/SMB/iSCSI); Lustre: typically cloud-only; OpenZFS: via DX/VPN |
| **Use Case Fit.** 'Lift and shift Windows file server' → FSx for Windows; 'HPC cluster / genomics / ML training at scale' → FSx for Lustre; 'shared NFS for Linux containers' → EFS. | Static websites, data lakes, media storage, backups, archives, ML training datasets, log aggregation, cross-service data sharing | OS boot volumes, databases (RDS uses EBS under the hood), transactional applications, low-latency block I/O, SAP HANA, Oracle DB | Shared home directories, CMS (WordPress), CI/CD build artifacts, containerized workloads needing shared storage, ML training (shared dataset access) | HPC/genomics (Lustre), Windows enterprise apps (Windows FS), NetApp migrations (ONTAP), legacy NFS workloads (OpenZFS), SAP (ONTAP), Splunk (Lustre) |
| **Data Transfer / Migration.** AWS DataSync is the go-to service for migrating file data to/from S3, EFS, and FSx; it handles scheduling, encryption, integrity verification, and bandwidth throttling automatically. | AWS DataSync (from on-prem or other clouds), S3 Transfer Acceleration, AWS Snowball/Snowmobile for petabyte scale, S3 Batch Operations | AWS DataSync (EBS → S3 or EFS), snapshot copies across regions/accounts, AWS Elastic Disaster Recovery | AWS DataSync (bidirectional, on-prem ↔ EFS), EFS Replication, AWS Backup restore | AWS DataSync (all FSx types), ONTAP SnapMirror, Windows FS Robocopy/DFS-R, Lustre S3 data repository sync |
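The S3 lifecycle row above maps directly onto the configuration shape that boto3's `put_bucket_lifecycle_configuration` accepts. A minimal sketch — the rule ID, prefix, and day thresholds are illustrative placeholders, not recommendations:

```python
# Illustrative S3 lifecycle configuration in the shape boto3's
# put_bucket_lifecycle_configuration expects. Prefix and day values
# are placeholders for this example.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
            # Delete objects after ~7 years of retention.
            "Expiration": {"Days": 2555},
        }
    ]
}

def transition_schedule(cfg: dict) -> list:
    """Flatten a lifecycle config into (days, storage_class) pairs."""
    return [
        (t["Days"], t["StorageClass"])
        for rule in cfg["Rules"]
        for t in rule.get("Transitions", [])
    ]
```

With boto3 this would be applied as `s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=lifecycle)`; the helper above just makes the Standard → IA → Glacier → Deep Archive progression explicit.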
Summary
Choose S3 for object/unstructured data at any scale with HTTP access. Choose EBS for high-performance block storage tightly coupled to a single EC2 instance (databases, boot volumes). Choose EFS when multiple Linux clients need concurrent shared POSIX file system access with automatic scaling. Choose FSx when you need a fully managed specialty file system — Lustre for HPC, Windows File Server for SMB/AD environments, ONTAP for enterprise NetApp migrations, or OpenZFS for ZFS workloads.
🎯 Decision Tree
- Need block storage for an EC2 database? → EBS (gp3 by default, io2 for >16K IOPS)
- Need a shared file system for multiple Linux EC2 instances/containers/Lambda? → EFS
- Need Windows SMB / Active Directory integration? → FSx for Windows File Server
- Need HPC / ML training / Lustre? → FSx for Lustre
- Need NetApp ONTAP features (dedup, compression, multi-protocol)? → FSx for ONTAP
- Need object storage / data lake / HTTP access / unlimited scale? → S3
- Need on-premises NFS/SMB access to cloud storage? → EFS or FSx via Direct Connect/VPN, or S3 File Gateway
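The decision tree can be written down as a small function — an illustrative encoding for self-testing, with the checks ordered from most specific requirement to most general:

```python
def choose_storage(*, windows_smb=False, hpc_lustre=False, netapp_ontap=False,
                   shared_linux_fs=False, block_for_ec2=False, object_store=False):
    """Illustrative encoding of the decision tree above.
    Most specific requirements are checked first."""
    if windows_smb:
        return "FSx for Windows File Server"
    if hpc_lustre:
        return "FSx for Lustre"
    if netapp_ontap:
        return "FSx for NetApp ONTAP"
    if shared_linux_fs:
        return "EFS"
    if block_for_ec2:
        return "EBS"
    if object_store:
        return "S3"
    return "re-examine requirements"
```

The ordering matters: an HPC workload may also want "shared Linux file access", but Lustre is the more specific (and correct) match, so it is tested first.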
gp2 vs gp3 is a guaranteed exam topic: gp3 provides a 3,000 IOPS baseline FREE regardless of volume size, while gp2 scales at 3 IOPS/GiB with a 100 IOPS floor (so a 334 GiB gp2 volume gets only ~1,000 IOPS, and small volumes get just the 100 IOPS minimum). gp3 is ~20% cheaper AND better — always recommend migrating gp2 → gp3 unless asked about a specific reason to stay on gp2. You can change volume type with zero downtime using Elastic Volumes.
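The published gp2 baseline formula (3 IOPS/GiB, 100 IOPS floor, 16,000 IOPS cap) versus gp3's flat baseline can be written down directly for quick mental checks:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 baseline: 3 IOPS per GiB, floored at 100, capped at 16,000."""
    return min(max(3 * size_gib, 100), 16_000)

def gp3_baseline_iops(size_gib: int) -> int:
    """gp3 baseline: 3,000 IOPS regardless of volume size
    (size argument kept only for symmetry with gp2)."""
    return 3_000
```

A 10 GiB gp2 volume sits at the 100 IOPS floor while the same gp3 volume gets 3,000 IOPS; gp2 only catches up to gp3's free baseline at 1,000 GiB.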
S3 Object Lock ≠ AWS Backup Vault Lock. S3 Object Lock (Governance/Compliance mode) protects individual S3 objects from deletion/modification — it is S3-only. AWS Backup Vault Lock protects backup recovery points stored in a Backup Vault from deletion — it works across EBS snapshots, EFS backups, RDS backups, etc. Exam questions about WORM for backup data across multiple AWS services → Backup Vault Lock. WORM for S3 objects specifically → S3 Object Lock Compliance mode.
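For the S3 side, the shape of the configuration passed to `put_object_lock_configuration` makes the Governance/Compliance distinction concrete. A sketch — the 365-day retention is illustrative, not a compliance recommendation:

```python
# Shape of the ObjectLockConfiguration argument to S3's
# put_object_lock_configuration. Retention period is a placeholder.
object_lock = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            # COMPLIANCE: nobody, including root, can shorten or remove
            # retention. GOVERNANCE can be bypassed with the
            # s3:BypassGovernanceRetention permission.
            "Mode": "COMPLIANCE",
            "Days": 365,
        }
    },
}
```

Note that Object Lock must be enabled when the bucket is created (or via AWS Support for existing buckets), and it requires versioning.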
EBS is AZ-locked. An EBS volume can only be attached to EC2 instances in the SAME Availability Zone. To move data to another AZ: create a snapshot → restore snapshot in the target AZ. EFS and FSx mount targets are accessible across AZs within a region. S3 is region-scoped but globally accessible via HTTP. This AZ constraint for EBS is tested heavily in HA/DR architecture questions.
Lambda can mount EFS (not EBS, not FSx directly). When an exam question asks about Lambda needing persistent shared file system access (e.g., sharing ML models, shared state between invocations), the answer is EFS. Lambda accesses S3 via the SDK, not as a mounted file system. EFS mount for Lambda requires the Lambda function to be in a VPC and an EFS access point.
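The wiring uses the `FileSystemConfigs` entry accepted by Lambda's `create_function` / `update_function_configuration`. A sketch — the access point ARN is a placeholder, and the local mount path must live under `/mnt/`:

```python
# Shape of a Lambda FileSystemConfigs entry. The EFS access point ARN
# below is a placeholder account/ID, not a real resource.
fs_config = {
    "Arn": ("arn:aws:elasticfilesystem:us-east-1:123456789012:"
            "access-point/fsap-0123456789abcdef0"),
    "LocalMountPath": "/mnt/models",
}

def valid_mount_path(cfg: dict) -> bool:
    """Lambda requires LocalMountPath to start with /mnt/."""
    return cfg["LocalMountPath"].startswith("/mnt/")
```

In practice this is passed as `FileSystemConfigs=[fs_config]`, and the function's code then reads/writes under `/mnt/models` like any local directory.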
FSx for Lustre can be linked to an S3 bucket as a data repository. Files are lazily loaded from S3 on first access and can be exported back to S3. This makes FSx for Lustre the ideal solution for HPC workloads that need to process data stored in S3 at sub-millisecond latency — a common exam pattern for genomics, financial modeling, and ML training.
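The S3 link is expressed through the `LustreConfiguration` block of FSx's `create_file_system` call. A sketch of the request shape — the bucket name and subnet ID are placeholders, and newer deployments typically use data repository associations rather than `ImportPath`/`ExportPath`:

```python
# Sketch of create_file_system arguments linking FSx for Lustre to an
# S3 data repository. Bucket and subnet are placeholders.
lustre_request = {
    "FileSystemType": "LUSTRE",
    "StorageCapacity": 1200,  # GiB; capacity is provisioned in fixed increments
    "SubnetIds": ["subnet-0123456789abcdef0"],
    "LustreConfiguration": {
        # SCRATCH_2 = no replication: temporary, high-burst HPC data only.
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://example-genomics-data",          # lazy-load source
        "ExportPath": "s3://example-genomics-data/results",  # write-back target
    },
}
```

Files listed in the import bucket appear in the file system's namespace immediately but are only fetched from S3 on first access, which is what makes the S3-at-Lustre-latency pattern work.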
EBS root volumes have DeleteOnTermination=true by default; additional data volumes have DeleteOnTermination=false by default. This means if you terminate an EC2 instance, the root volume is deleted but data volumes persist. Exam scenario: 'data lost after instance termination' → check if root volume was used for data storage.
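These defaults show up in the block-device mappings passed to EC2's `run_instances`. A sketch — device names are typical for Amazon Linux but vary by AMI:

```python
# Block-device mappings as passed to EC2 run_instances. The root volume
# keeps its default DeleteOnTermination=True; the data volume is
# explicitly marked to persist after instance termination.
block_device_mappings = [
    {
        "DeviceName": "/dev/xvda",  # root volume (AMI-dependent name)
        "Ebs": {"VolumeSize": 8, "VolumeType": "gp3"},
    },
    {
        "DeviceName": "/dev/xvdf",  # data volume: survives termination
        "Ebs": {"VolumeSize": 100, "VolumeType": "gp3",
                "DeleteOnTermination": False},
    },
]
```

The flag can also be flipped on a running instance via `modify_instance_attribute`, which is the usual fix when data was mistakenly placed on a root volume.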
For Windows workloads needing shared file storage with Active Directory integration and Windows ACLs — the ONLY correct answer is FSx for Windows File Server. EFS does not support SMB or Windows ACLs. EBS cannot be shared across instances natively. S3 has no file system semantics.
The #1 exam trap: choosing io2 when gp3 would suffice. Exam scenarios often describe a workload needing 'better performance than gp2' or 'higher IOPS' — the correct answer is almost always to migrate to gp3 first (free 3,000 IOPS baseline, up to 16,000 IOPS, ~20% cheaper than gp2). Only choose io2 when you genuinely need >16,000 IOPS, >1,000 MB/s throughput, or io2's higher durability (designed for 99.999%). The second biggest trap: confusing S3 Object Lock with AWS Backup Vault Lock — they protect different things at different layers.
CertAI Tutor · DEA-C01, SAP-C02, CLF-C02, SAA-C03, DVA-C02, DOP-C02, SCS-C02 · 2026-02-22