
Persistent, high-performance block storage designed for EC2 — from boot volumes to mission-critical databases
Amazon Elastic Block Store (EBS) provides durable, low-latency block storage volumes that attach to EC2 instances and persist independently of instance lifecycle. EBS volumes live in a single Availability Zone and are automatically replicated within that AZ for durability. With multiple volume types spanning SSD and HDD tiers, EBS serves everything from general-purpose workloads to ultra-high IOPS transactional databases and sequential big-data processing.
Provide EC2 instances with durable, resizable, high-performance block storage that survives instance stop/terminate and can be independently snapshotted, encrypted, and managed.
Encryption at rest (AES-256)
Uses customer-managed KMS keys (CMKs) or AWS-managed keys (aws/ebs). Encryption covers data at rest, data in transit between EC2 and EBS, snapshots, and volumes restored from encrypted snapshots.
Encryption in transit (EC2 ↔ EBS)
Automatically included when EBS encryption is enabled — no separate TLS configuration needed.
Default encryption (account-level)
Can be enabled per-region so all new volumes and snapshots are encrypted automatically. Does NOT retroactively encrypt existing volumes.
Live volume modification (Elastic Volumes)
Resize, change volume type, or adjust IOPS/throughput on a live, attached volume without detaching or stopping the instance. May require OS-level partition extension.
Snapshots (point-in-time backup)
Stored in S3 (AWS-managed, not visible in your S3 console). Incremental after first snapshot. Used to create AMIs or restore volumes.
Fast Snapshot Restore (FSR)
Eliminates the first-access latency penalty on volumes restored from snapshots. Enabled per snapshot, per AZ. Additional cost applies.
Multi-Attach (io1/io2 only)
Attach one io1/io2 volume to up to 16 Nitro instances in the same AZ. Requires cluster-aware file system.
EBS Direct APIs
Read/write snapshot data directly without needing to restore to a volume. Used for backup/restore software and data validation.
Amazon Data Lifecycle Manager (DLM)
Automate snapshot creation, retention, and cross-region copy policies without Lambda or custom scripts.
Boot volume support
gp2, gp3, io1, io2 support boot. st1 and sc1 do NOT support boot volumes.
RAID configuration
RAID 0 (striping for performance) and RAID 1 (mirroring for fault tolerance) are supported via OS-level configuration. AWS does not manage RAID.
NVMe interface (io2 Block Express)
io2 Block Express uses NVMe-oF (NVMe over Fabrics) for sub-millisecond latency.
99.999% durability (io2)
io2 offers 5-nine durability vs 99.8-99.9% for gp2/gp3/io1. Critical for compliance-sensitive workloads.
Cross-account snapshot sharing
Snapshots can be shared with specific AWS accounts or made public. Encrypted snapshots require sharing the KMS key as well.
CloudWatch metrics integration
VolumeReadOps, VolumeWriteOps, VolumeQueueLength, BurstBalance (gp2/st1/sc1) are key metrics for monitoring EBS performance.
Primary Block Storage for EC2
High frequency: EBS volumes attach to EC2 instances as root or data volumes. Use gp3 for general workloads, io2 for high-IOPS databases, st1/sc1 for sequential big-data. EBS-optimized instances ensure dedicated I/O bandwidth. Nitro-based instances are required for maximum IOPS on io1/io2.
Encryption with Customer-Managed Keys
High frequency: EBS encryption uses KMS to encrypt volume data, snapshots, and in-transit data between EC2 and EBS. Customer-managed keys (CMKs) enable key rotation control, cross-account sharing, and audit via CloudTrail. AWS-managed keys (aws/ebs) are simpler but lack granular control — important for compliance scenarios.
Snapshot Storage and Lifecycle
High frequency: EBS snapshots are stored in AWS-managed S3 (not visible in your buckets). Use S3 as the durable backup tier — restore snapshots to new EBS volumes in any AZ or region. EBS Direct APIs allow reading snapshot data without restoring, enabling lightweight backup validation.
Managed Database Storage
High frequency: RDS uses EBS volumes under the hood (gp2, gp3, io1, io2). RDS Multi-AZ replicates EBS data synchronously to a standby. Understanding EBS volume types helps right-size RDS storage for IOPS-intensive vs general workloads. gp3 is now the recommended default for most RDS instances.
Block vs Shared File Storage Decision
High frequency: EBS is single-AZ, single-instance (except Multi-Attach). EFS is a managed NFS file system accessible from multiple instances across multiple AZs simultaneously. Choose EBS for single-instance low-latency block I/O; choose EFS for shared access, auto-scaling storage, or cross-AZ file sharing.
Hybrid Storage with Cached Volumes
Medium frequency: Storage Gateway's Volume Gateway in cached mode stores primary data in S3 and caches frequently accessed data on-premises; volumes are backed up as EBS snapshots, which serve as the restore source. In stored mode, primary data stays on-premises with asynchronous backups to S3 as EBS snapshots.
Centralized Backup Policy
Medium frequency: AWS Backup provides a unified backup solution for EBS volumes across accounts and regions. Supports backup plans with retention policies, cross-region copy, and compliance reporting — preferred over manual snapshot scripts for enterprise governance.
Performance Monitoring and Alerting
Medium frequency: Monitor BurstBalance (critical for gp2/st1/sc1), VolumeQueueLength (sustained high values indicate the volume cannot keep up with I/O demand), and VolumeReadBytes/VolumeWriteBytes. Set alarms on BurstBalance < 20% to detect gp2 volumes that need an upgrade to gp3 or io2.
gp3 is ALWAYS the default choice for new general-purpose workloads. It is ~20% cheaper than gp2, provides 3,000 IOPS and 125 MiB/s baseline FREE at any size, and allows independent IOPS/throughput scaling up to 16,000 IOPS / 1,000 MiB/s. If a question asks to optimize cost on EBS, migrating gp2 → gp3 is almost always the correct answer.
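The gp2 vs gp3 comparison above can be sketched as a quick calculation. This is a minimal sketch: the IOPS formulas follow the documented gp2 (3 IOPS/GiB, min 100, max 16,000) and gp3 (flat 3,000) baselines, while the per-GiB monthly rates are illustrative us-east-1 list prices — verify current pricing before relying on them.

```python
def gp2_baseline_iops(size_gib: int) -> int:
    # gp2 baseline scales at 3 IOPS per GiB, floored at 100, capped at 16,000.
    return min(max(3 * size_gib, 100), 16_000)

def gp3_baseline_iops(size_gib: int) -> int:
    # gp3 delivers 3,000 IOPS at any size, included in the base price.
    return 3_000

# Illustrative monthly per-GiB rates (assumed us-east-1 list prices).
GP2_RATE, GP3_RATE = 0.10, 0.08

def monthly_storage_cost(size_gib: int, rate: float) -> float:
    return round(size_gib * rate, 2)

# A 500 GiB volume: gp2 gives 1,500 baseline IOPS for $50/month,
# while gp3 gives 3,000 baseline IOPS for $40/month.
print(gp2_baseline_iops(500), monthly_storage_cost(500, GP2_RATE))
print(gp3_baseline_iops(500), monthly_storage_cost(500, GP3_RATE))
```

The takeaway matches the exam heuristic: at any size, gp3 is both cheaper and at least as fast at baseline.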
EBS volumes are AZ-scoped — they CANNOT be directly attached across AZs. To move an EBS volume to another AZ or region: create a snapshot → restore in target AZ/region. This is the ONLY way and is heavily tested in architecture and migration scenarios.
io2 vs io1 decision: Always prefer io2 over io1 for new deployments. io2 provides 500:1 IOPS-to-GiB ratio (vs 50:1 for io1), 99.999% durability (vs 99.8-99.9%), and same price. io1 is legacy — exam scenarios asking about 'highest durability provisioned IOPS SSD' should select io2.
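The 500:1 vs 50:1 ratio difference can be made concrete with a small helper (a sketch; the cap reflects the standard 64,000 IOPS Nitro limit, and io2 Block Express raises it further):

```python
def max_provisioned_iops(size_gib: int, volume_type: str) -> int:
    # io1 allows up to 50 provisioned IOPS per GiB; io2 allows up to 500.
    # Both cap at 64,000 IOPS on Nitro instances (io2 Block Express goes higher).
    ratio = {"io1": 50, "io2": 500}[volume_type]
    return min(size_gib * ratio, 64_000)

# A 100 GiB volume: io1 tops out at 5,000 IOPS, io2 reaches 50,000.
print(max_provisioned_iops(100, "io1"), max_provisioned_iops(100, "io2"))
```

In practice this means io2 hits high IOPS targets with far less provisioned capacity, which is another reason io1 is considered legacy.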
st1 and sc1 CANNOT be used as boot volumes. If a question mentions boot volume or OS disk, eliminate st1/sc1 immediately. Only gp2, gp3, io1, io2 are bootable.
EBS encryption with customer-managed KMS keys (CMKs) is required for regulatory compliance (PCI-DSS, HIPAA) — not AWS-managed keys. AWS-managed keys (aws/ebs) cannot be rotated on demand, audited at the key level, or shared cross-account. When a question mentions compliance + encryption, CMKs are the answer.
gp3 beats gp2 in cost AND performance — migrating gp2→gp3 is zero-downtime and always cost-optimal. When asked 'most cost-effective EBS volume,' gp3 is correct unless IOPS >16,000 or throughput >1,000 MiB/s is required.
EBS is AZ-locked. Moving data to another AZ or region ALWAYS requires: Snapshot → (Copy) → Restore. There is no direct cross-AZ EBS attachment or replication.
Compliance encryption requires Customer-Managed KMS Keys (CMKs), NOT AWS-managed keys. CMKs enable on-demand rotation, audit trails, cross-account sharing, and immediate key revocation — all required by PCI-DSS, HIPAA, and similar frameworks.
The gp2 burst credit model is a trap: small gp2 volumes (e.g., 100 GiB) have only 300 IOPS sustained baseline but burst to 3,000 IOPS using I/O credits. If credits are exhausted (CloudWatch BurstBalance = 0), performance drops to baseline. This is why gp3 is superior — it provides 3,000 IOPS baseline with NO burst credit system.
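The credit drain described above can be estimated directly. This is a minimal model using the documented gp2 parameters: a 5.4 million credit bucket, credits earned at the baseline rate (3 IOPS/GiB), and spent at the actual I/O rate.

```python
BUCKET_CREDITS = 5_400_000  # gp2 I/O credit bucket capacity
BURST_IOPS = 3_000          # gp2 burst ceiling for volumes under 1 TiB

def gp2_burst_seconds(size_gib: int, demand_iops: int = BURST_IOPS) -> float:
    """Seconds a full credit bucket sustains demand_iops before the volume
    falls back to its baseline (credits accrue at the baseline rate)."""
    baseline = min(max(3 * size_gib, 100), 16_000)
    if demand_iops <= baseline:
        return float("inf")  # baseline alone covers the demand; no drain
    return BUCKET_CREDITS / (demand_iops - baseline)  # net drain per second

# A 100 GiB gp2 volume (300 IOPS baseline) sustains a full 3,000 IOPS
# burst for 2,000 seconds (~33 minutes) before BurstBalance hits zero.
print(gp2_burst_seconds(100))
```

This is exactly the failure mode CloudWatch's BurstBalance metric exposes, and the reason gp3's flat 3,000 IOPS baseline removes the trap entirely.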
EBS Multi-Attach (io1/io2 only, up to 16 Nitro instances, same AZ) requires a cluster-aware file system like GFS2 or OCFS2. Standard Linux ext4 or Windows NTFS will cause data corruption with Multi-Attach. This distinction appears in questions about shared storage for clustered applications.
Deleting an EBS snapshot does NOT break the restore chain. AWS internally promotes dependent data to the next snapshot in the chain. You can safely delete any snapshot and still restore from remaining ones. This is frequently tested as a misconception.
Instance store vs EBS: Instance store is ephemeral (data lost on stop, hibernate, or termination), physically attached (lowest possible latency), and cannot be detached. EBS persists across stop/start/reboot. For questions asking about 'temporary scratch space with maximum I/O performance', instance store is correct. For 'persistent data', EBS is correct.
Elastic Volumes allow live modification of volume type, size, IOPS, and throughput without downtime. After resizing, you MUST extend the OS file system (e.g., growpart + resize2fs on Linux) — EBS only expands the block device, not the partition or file system. This two-step process is tested in operations scenarios.
EBS snapshots are stored in S3 (AWS-managed, not in your S3 buckets). They can be shared across accounts and copied across regions. Encrypted snapshot sharing requires sharing the KMS key with the target account — otherwise the target account cannot decrypt the snapshot.
For maximum EBS performance on io1/io2, the EC2 instance MUST be Nitro-based and EBS-optimized. Non-Nitro instances cap at 32,000 IOPS even on io1 volumes provisioned for 64,000 IOPS. Always check instance type when troubleshooting EBS performance shortfalls.
Common Mistake
io2 is needed for any high-performance database workload
Correct
gp3 supports up to 16,000 IOPS and 1,000 MiB/s — sufficient for most databases including mid-tier RDS, MySQL, PostgreSQL, and MongoDB. io2 is only justified when you need >16,000 IOPS, >1,000 MiB/s throughput, or 99.999% durability (SAP HANA, Oracle RAC, financial transaction systems).
Over-provisioning io2 when gp3 suffices wastes significant money. Exam questions will present a workload needing 10,000 IOPS and ask which volume type is 'most cost-effective' — gp3 is correct, not io2. Remember: gp3 max = 16,000 IOPS; only exceed this with io2.
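The decision rule above can be captured in a few lines (a sketch of the stated gp3 limits; the function name is illustrative):

```python
def cheapest_ssd_volume(iops: int, throughput_mib_s: int,
                        need_five_nines: bool = False) -> str:
    """Pick the most cost-effective SSD volume type: gp3 covers up to
    16,000 IOPS and 1,000 MiB/s; beyond either limit, or when 99.999%
    durability is required, provisioned-IOPS io2 is needed."""
    exceeds_gp3 = iops > 16_000 or throughput_mib_s > 1_000
    return "io2" if exceeds_gp3 or need_five_nines else "gp3"

# The classic exam scenario: a 10,000 IOPS workload -> gp3, not io2.
print(cheapest_ssd_volume(10_000, 500))
```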
Common Mistake
Migrating from gp2 to gp3 requires downtime or data migration
Correct
Elastic Volumes allows in-place, live migration from gp2 to gp3 with zero downtime. The volume type, IOPS, and throughput can all be changed while the volume is attached and in use. This is a cost-optimization best practice with no operational risk.
Candidates avoid gp2→gp3 migration thinking it requires a maintenance window. It does not. AWS Compute Optimizer and Cost Explorer both flag gp2 volumes as optimization opportunities. On exams, 'migrate gp2 to gp3 for cost savings without downtime' is always a valid answer.
Common Mistake
AWS-managed KMS keys (aws/ebs) provide sufficient encryption for compliance workloads
Correct
AWS-managed keys are adequate for encryption at rest but insufficient for compliance frameworks requiring key management control. Customer-managed keys (CMKs) are required for: on-demand key rotation, key deletion/disabling, cross-account snapshot sharing with encryption, CloudTrail audit at the key level, and satisfying PCI-DSS / HIPAA / FedRAMP key custody requirements.
Exam scenarios will describe a compliance requirement (e.g., 'must demonstrate key rotation every 90 days' or 'must revoke access to data immediately') — AWS-managed keys rotate annually on AWS's schedule and cannot be manually rotated or deleted. CMKs are the only correct answer for these scenarios.
Common Mistake
Instance store volumes can be used for persistent application data since they are faster than EBS
Correct
Instance store is 100% ephemeral. Data is permanently lost when an instance is stopped, hibernated, or terminated (but survives reboots). It cannot be snapshotted, detached, or reattached. Use instance store ONLY for temporary data: caches, buffers, scratch space, or data that is replicated elsewhere (e.g., Hadoop HDFS replicated across multiple nodes).
This is one of the most dangerous misconceptions in AWS. Architectures relying on instance store for databases, user uploads, or application state will lose data on any instance lifecycle event. The exam tests this by describing a scenario where instance store is used for 'fast persistent storage' — this is always wrong.
Common Mistake
EBS replication across AZs means my data is safe if an AZ fails
Correct
EBS replication is WITHIN a single AZ only — it protects against hardware failures within that AZ (disk, node failures) but NOT against full AZ outages. For AZ-level fault tolerance, you must use snapshots (restore in another AZ), EFS (inherently multi-AZ), or application-level replication (e.g., EC2 Auto Scaling across AZs with separate EBS volumes).
The phrase 'automatically replicated within its Availability Zone' in AWS docs misleads candidates into thinking EBS provides cross-AZ durability. It does not. This distinction is critical for designing highly available architectures and appears frequently in SAA-C03 HA design questions.
Common Mistake
Enabling default EBS encryption at the account level encrypts all existing volumes
Correct
Default encryption only applies to NEW volumes and NEW snapshots created after the setting is enabled. Existing unencrypted volumes are NOT retroactively encrypted. To encrypt an existing volume: create a snapshot → copy the snapshot with encryption enabled → restore the encrypted snapshot to a new volume → swap the volume.
Security audit scenarios will ask 'a new policy requires all EBS volumes to be encrypted — what is the correct approach?' Enabling default encryption alone is insufficient. The four-step process (snapshot → encrypted copy → restore → swap) is the tested answer for existing volumes.
Common Mistake
Deleting intermediate EBS snapshots will cause data loss for remaining snapshots
Correct
EBS snapshots are incremental but each snapshot is independently restorable. When you delete an intermediate snapshot, AWS automatically moves any unique data blocks referenced by that snapshot to the next snapshot in the chain. No data is lost, and all remaining snapshots remain fully restorable.
This misconception causes administrators to hoard old snapshots 'just in case,' incurring significant snapshot storage costs. The exam tests this with questions like 'which statement about EBS snapshot deletion is correct?' — the correct answer is that deletion is safe and all remaining snapshots stay fully restorable.
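The block-promotion behavior can be modeled by treating each snapshot as a map from block ID to the data it first captured. This is a toy model of the semantics, not the EBS API:

```python
def delete_snapshot(chain: list[dict], index: int) -> list[dict]:
    """Deleting an intermediate snapshot promotes its unique blocks into the
    next snapshot, so every remaining snapshot stays fully restorable."""
    removed = chain[index]
    rest = chain[:index] + chain[index + 1:]
    if index < len(rest):  # a later snapshot exists to inherit the blocks
        rest[index] = {**removed, **rest[index]}  # later writes win
    return rest

def restore(chain: list[dict], index: int) -> dict:
    """Restoring snapshot N merges every block captured up to and including N."""
    volume: dict = {}
    for snap in chain[: index + 1]:
        volume.update(snap)
    return volume

# Three incremental snapshots; deleting the middle one changes nothing
# about what the final snapshot restores to.
chain = [{1: "a", 2: "b"}, {2: "b2"}, {3: "c"}]
print(restore(chain, 2))
print(restore(delete_snapshot(chain, 1), 1))
```

Both restores produce the same volume contents, which is why deleting intermediate snapshots is safe.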
GIST for volume types: G=General (gp2/gp3), I=IOPS-intensive (io1/io2), S=Sequential throughput (st1), T=Tundra/Cold (sc1) — only G and I can boot!
gp3 > gp2 in EVERY dimension: cheaper, faster baseline, independent scaling, no burst credits — '3 is greater than 2' in every way
io2 vs io1 ratio: io2 = 500:1, io1 = 50:1 — io2 offers 10× the IOPS-to-GiB ratio and two extra durability nines (99.999% vs 99.9%, i.e., a 100× lower annual failure rate)
EBS AZ = AZ-locked Block Storage: 'EBS stays home' — it never leaves its AZ without a snapshot
SNAP-COPY-RESTORE for cross-AZ/region migration: Snapshot → Copy (optional, for region) → Restore in target AZ
Compliance encryption = CMK: 'Compliance needs Control' — Customer-Managed Keys give you control; AWS-managed keys give you convenience
CertAI Tutor · SAA-C03, SAP-C02, DEA-C01, DOP-C02, CLF-C02, DVA-C02 · 2026-02-21