
Master the two pillars of AWS data protection — where data lives vs. where data travels
Encryption at rest protects data stored on physical or logical media (S3, EBS, RDS, DynamoDB, etc.) from unauthorized access if the storage medium is compromised. Encryption in transit protects data moving between clients and services — or between AWS services — from interception or man-in-the-middle attacks using protocols like TLS/SSL. Together, they form a defense-in-depth strategy that appears across every AWS certification, from Cloud Practitioner through Security Specialty.
Exam questions test whether you can correctly identify WHICH type of encryption solves a given compliance or security scenario, and WHICH AWS service/feature enables it — confusing the two types or their enabling mechanisms is the most common failure point.
Encryption at Rest via Server-Side Encryption (SSE-S3 / SSE-KMS / SSE-C)
Data is encrypted before being written to storage and decrypted when read back. AWS services like S3, EBS, RDS, DynamoDB, and Redshift support server-side encryption. SSE-S3 uses AES-256 with AWS-managed keys. SSE-KMS uses AWS KMS keys (historically called Customer Master Keys, or CMKs), giving audit trails via CloudTrail. SSE-C lets the customer supply their own key material per request.
When regulatory requirements (HIPAA, PCI-DSS, FedRAMP) mandate that stored data be unreadable if physical media is stolen or decommissioned. Also use SSE-KMS when you need key rotation, cross-account access, or a full audit trail of every decryption event.
SSE-KMS adds latency and API call costs to KMS (per-request pricing). SSE-C requires the client to manage and transmit the key with every request — AWS never stores the key. SSE-S3 is free but offers no customer control over key material.
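To make the three SSE modes concrete, here is a minimal sketch of the request-level fields that select each mode on an S3 PutObject call. The parameter names are the real S3 API fields; the key alias is a placeholder.

```python
# SSE-S3: AWS manages the key; nothing to configure beyond the mode itself.
sse_s3 = {"ServerSideEncryption": "AES256"}

# SSE-KMS: a KMS-backed key, identified by ID, ARN, or alias.
# "alias/my-app-key" is a hypothetical customer-managed key alias.
sse_kms = {
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "alias/my-app-key",
}
```

With boto3 these dicts would be splatted into the upload call, e.g. `s3.put_object(Bucket=..., Key=..., Body=..., **sse_kms)`; SSE-C uses a separate set of per-request fields covered later in this guide.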
Encryption in Transit via TLS/SSL
Data moving between a client and an AWS endpoint — or between AWS services — is protected using Transport Layer Security (TLS 1.2 minimum recommended; TLS 1.3 preferred). AWS services enforce HTTPS endpoints. ACM (AWS Certificate Manager) provisions and manages TLS certificates for ALB, CloudFront, API Gateway, and more at no additional charge for public certificates.
Any time data crosses a network boundary — user to application, microservice to microservice, on-premises to AWS via Direct Connect or VPN, or Lambda to RDS. Required by compliance frameworks to prevent eavesdropping and tampering.
TLS adds CPU overhead for handshake and encryption/decryption. Terminating TLS at a load balancer (ALB) means traffic between ALB and backend targets may be unencrypted unless you also configure end-to-end TLS (re-encryption). This is a frequent exam trap.
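On the client side, the "TLS 1.2 minimum, 1.3 preferred" guidance can be enforced directly. A minimal sketch using Python's standard library:

```python
import ssl

# A client-side TLS context that refuses anything below TLS 1.2.
# create_default_context() also enables certificate verification,
# which protects against man-in-the-middle attacks.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

The context would then be passed to, for example, `http.client.HTTPSConnection(host, context=ctx)`; AWS SDKs apply equivalent defaults, but explicit pinning of the minimum version documents the compliance requirement in code.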
Client-Side Encryption (CSE)
The application encrypts data BEFORE sending it to AWS. AWS stores or transmits ciphertext only — AWS never sees the plaintext. The AWS Encryption SDK and DynamoDB Encryption Client are purpose-built libraries for this pattern. Keys are managed entirely by the customer (often backed by KMS or an HSM).
When you cannot trust AWS with plaintext data — e.g., extreme compliance scenarios, sovereign data requirements, or zero-trust architectures. Also used when you need end-to-end encryption where even AWS employees cannot access plaintext.
Highest operational complexity. Search, indexing, and querying encrypted fields becomes difficult or impossible without careful design. Key management burden falls entirely on the customer.
Encryption in Transit for Internal AWS Traffic (VPC, PrivateLink, VPN)
Traffic within a VPC between EC2 instances is NOT automatically encrypted by default. AWS Site-to-Site VPN uses IPSec, and AWS Client VPN uses TLS (it is OpenVPN-based), to encrypt traffic between on-premises networks or remote clients and AWS. AWS PrivateLink keeps traffic on the AWS backbone, reducing exposure, but does not itself encrypt the payload. For inter-node encryption within a VPC, you must implement TLS at the application layer or rely on supported Nitro-based instance types, which automatically encrypt in-transit traffic between one another.
When compliance requires encryption of ALL data in motion, including east-west traffic inside a VPC. Use VPN for on-premises connectivity. Use application-layer TLS for service-to-service calls inside a VPC when data sensitivity demands it.
PrivateLink is often misidentified as providing encryption — it provides network isolation, not payload encryption. VPN adds cost and complexity for on-premises connectivity.
Double Encryption / Envelope Encryption
AWS KMS uses envelope encryption: data is encrypted with a Data Encryption Key (DEK), and the DEK itself is encrypted with a KMS Customer Master Key (CMK). Only the encrypted DEK is stored alongside the ciphertext. To decrypt, KMS first decrypts the DEK, then the DEK decrypts the data. This pattern applies to both at-rest and in-transit scenarios and is the foundation of how KMS scales to large data volumes.
When you need to encrypt large objects efficiently (KMS has a 4 KB plaintext limit for direct encryption), require key rotation without re-encrypting all data, or need to share encrypted data across accounts by granting CMK access.
Adds conceptual complexity. Candidates must understand that rotating a CMK does NOT automatically re-encrypt existing data — old DEKs encrypted with the old CMK version remain valid until explicitly re-encrypted.
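The envelope pattern is easier to remember once you see its structure. The sketch below mirrors the KMS flow locally: a toy XOR cipher stands in for AES-GCM purely to make the example runnable — real code would call KMS `GenerateDataKey` and a real cipher (e.g. via the AWS Encryption SDK).

```python
import os
from itertools import cycle

def toy_cipher(key: bytes, data: bytes) -> bytes:
    # XOR stand-in for a real symmetric cipher: the same call
    # both encrypts and decrypts. NOT real cryptography.
    return bytes(a ^ b for a, b in zip(data, cycle(key)))

kek = os.urandom(32)  # stands in for the KMS CMK (never leaves KMS in reality)
dek = os.urandom(32)  # GenerateDataKey returns a plaintext DEK like this

# Encrypt the data with the DEK, then the DEK with the CMK;
# only the *encrypted* DEK is stored alongside the ciphertext.
ciphertext = toy_cipher(dek, b"large object payload")
encrypted_dek = toy_cipher(kek, dek)
stored = {"ciphertext": ciphertext, "encrypted_dek": encrypted_dek}

# To decrypt: ask KMS to unwrap the DEK, then decrypt locally.
recovered_dek = toy_cipher(kek, stored["encrypted_dek"])
plaintext = toy_cipher(recovered_dek, stored["ciphertext"])
```

Note how the 4 KB KMS limit never matters: KMS only ever touches the 32-byte DEK, while the bulk data is encrypted locally.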
STEP 1 — Identify WHERE the data security requirement applies:
• Data on disk / in a database / in object storage? → Encryption AT REST
• Data moving over a network / API call / streaming? → Encryption IN TRANSIT
• Both? → Apply both independently.
STEP 2 — For Encryption AT REST, choose the key management model:
• No key control needed, simplest setup → SSE-S3 (AWS manages everything)
• Need audit trail, key rotation, cross-account access → SSE-KMS (CMK in KMS)
• Must supply your own key material per request, AWS must never store key → SSE-C
• AWS must never see plaintext at all → Client-Side Encryption (CSE)
STEP 3 — For Encryption IN TRANSIT, choose the enforcement mechanism:
• Public-facing web app / API → ACM certificate + HTTPS on ALB/CloudFront/API Gateway
• On-premises to AWS → Site-to-Site VPN (IPSec) or Direct Connect + VPN overlay
• Service-to-service inside VPC → Application-layer TLS (enforce via security groups + ACM Private CA)
• Need true end-to-end encryption (no TLS termination at the LB) → Configure re-encryption on ALB or use NLB with TCP pass-through
STEP 4 — Compliance overlay:
• HIPAA / PCI-DSS / FedRAMP → Both at-rest AND in-transit encryption required
• Audit trail required → SSE-KMS + CloudTrail
• Customer-controlled keys required → KMS CMK (not AWS-managed key)
• Zero-trust / sovereign data → CSE with customer-held keys
Encryption at rest and encryption in transit are INDEPENDENT controls — enabling one does NOT enable the other. A question describing an S3 bucket with SSE-S3 enabled but accessed over HTTP is still vulnerable to in-transit interception. You must enforce HTTPS separately (an S3 bucket policy that denies requests when the aws:SecureTransport condition key is false).
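Such a deny policy can be sketched as follows — the bucket name is a placeholder, but `aws:SecureTransport` and the statement structure are the real IAM condition key and policy grammar:

```python
import json

# S3 bucket policy that denies ANY request arriving over plain HTTP.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-bucket",       # bucket-level actions
            "arn:aws:s3:::example-bucket/*",     # object-level actions
        ],
        # aws:SecureTransport is "false" when the request was not made over TLS.
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
print(json.dumps(policy, indent=2))
```

Because it is a Deny, it overrides any Allow — even an authorized principal cannot fetch objects over HTTP once this is attached.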
KMS CMK encryption (SSE-KMS) generates a CloudTrail log entry for EVERY decrypt operation. This is the correct answer when a question asks for 'audit trail of who accessed encrypted data' — SSE-S3 does NOT provide this granularity.
ALB terminates TLS by default — traffic between ALB and EC2 targets travels unencrypted inside the VPC unless you configure HTTPS listeners on the target group AND install certificates on the backend instances. NLB in TCP mode passes through encrypted traffic without termination (true end-to-end TLS).
AWS KMS direct encryption has a 4 KB plaintext size limit. For anything larger, envelope encryption must be used: generate a Data Encryption Key (DEK) via GenerateDataKey API, encrypt the data locally with the DEK, then store the encrypted DEK alongside the ciphertext. The AWS Encryption SDK automates this pattern.
Rotating a KMS CMK (automatic annual rotation or manual) does NOT re-encrypt existing data. Old ciphertext encrypted with a previous key version can still be decrypted because KMS retains all prior key material. Re-encryption of existing data requires explicit re-encrypt API calls or application-level re-encryption.
AWS PrivateLink and VPC Endpoints keep traffic on the AWS network backbone and prevent it from traversing the public internet — but they do NOT encrypt the payload. Do not confuse network isolation with encryption. If the question asks for encryption in transit, TLS is still required even with PrivateLink.
RDS supports encryption at rest using KMS, but it MUST be enabled at creation time — you cannot enable encryption on an existing unencrypted RDS instance directly. The correct migration path is: take a snapshot → copy the snapshot with encryption enabled → restore from the encrypted snapshot.
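The copy step of that migration path boils down to one API call. A sketch of the parameters for boto3's `rds.copy_db_snapshot` — the snapshot names and key alias are placeholders, but the parameter names are the real RDS CopyDBSnapshot fields:

```python
# Copying a snapshot WITH a KmsKeyId produces an encrypted copy;
# restoring from that copy yields an encrypted instance.
copy_args = {
    "SourceDBSnapshotIdentifier": "mydb-unencrypted-snap",
    "TargetDBSnapshotIdentifier": "mydb-encrypted-snap",
    "KmsKeyId": "alias/my-rds-key",  # presence of a key triggers encryption
}
```

In real code this would be invoked as `boto3.client("rds").copy_db_snapshot(**copy_args)`, followed by `restore_db_instance_from_db_snapshot` against the encrypted copy.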
S3 default encryption (bucket-level) ensures all new objects are encrypted at rest automatically, but existing objects are NOT retroactively encrypted. To encrypt existing objects, use S3 Batch Operations with a PUT COPY operation.
Common Mistake
Enabling HTTPS on a website means data is also encrypted at rest on the server.
Correct
HTTPS (TLS) only protects data IN TRANSIT between the client and server. Once data arrives at the server and is written to disk or a database, a completely separate mechanism (SSE, EBS encryption, RDS encryption, etc.) is required to protect it at rest.
Exam scenarios describe systems with 'HTTPS enabled' and ask which additional control is needed — the answer is always an at-rest encryption mechanism. Conflating the two is the most common conceptual error at the Cloud Practitioner and Solutions Architect Associate level.
Common Mistake
Using an AWS-managed key (aws/s3, aws/ebs) gives the same control and auditability as a customer-managed CMK.
Correct
AWS-managed keys are controlled entirely by AWS — you cannot set key policies, cannot grant cross-account access, and cannot disable or delete them on demand (their use still appears in CloudTrail, but you get none of the policy control). Customer-managed CMKs give you full control over key policies and rotation, audit granularity, and cross-account sharing capability.
Questions about 'customer control over encryption keys' or 'audit trail of data access' always require a customer-managed CMK, not an AWS-managed key. This distinction is heavily tested on Security Specialty and SAA-C03.
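To see what "customer control" means in practice, here is a sketch of a key-policy statement that only a customer-managed CMK can carry — AWS-managed keys accept no custom key policy at all. The account ID is a placeholder; the actions are real KMS API actions:

```python
# Key-policy statement granting another AWS account permission to
# decrypt with this CMK — impossible with an AWS-managed key.
cross_account_stmt = {
    "Sid": "AllowPartnerAccountDecrypt",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": ["kms:Decrypt", "kms:DescribeKey"],
    "Resource": "*",  # in a key policy, "*" means this key itself
}
```

This statement would be appended to the `Statement` list of the CMK's key policy; the external account then delegates access to its own principals via IAM.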
Common Mistake
Data traveling within a VPC between two EC2 instances is automatically encrypted.
Correct
Traffic within a VPC is NOT encrypted by default. AWS provides network isolation (only your instances can see the traffic), but the payload is not encrypted unless you implement application-layer TLS or use supported Nitro-based instance types, which automatically encrypt in-transit traffic between one another.
Compliance frameworks like PCI-DSS require encryption of ALL cardholder data in transit, including internal east-west traffic. Assuming VPC isolation equals encryption will lead to compliance failures and wrong exam answers.
Common Mistake
SSE-C means AWS manages a customer-provided key on the customer's behalf.
Correct
With SSE-C, the customer provides the key with EVERY API request. AWS uses the key to encrypt/decrypt and then immediately discards it — AWS never stores the key. The customer is solely responsible for key storage and delivery. If the key is lost, the data is permanently unrecoverable.
SSE-C is often confused with SSE-KMS with imported key material. The critical difference is that SSE-C requires the client to transmit the key on every call, while SSE-KMS with imported material stores the key in KMS. This distinction appears in Security Specialty questions about key custody.
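The "key on every call" behavior of SSE-C is visible in the request fields themselves. A sketch of the fields S3 expects on every SSE-C request (put_object and get_object alike) — the field names are the real S3 API parameters, shown base64-encoded as the raw REST API expects; boto3 will also accept raw key bytes and compute the MD5 for you:

```python
import base64
import hashlib
import os

# The customer generates and holds this key; AWS never stores it.
key = os.urandom(32)  # 256-bit key, managed entirely by the customer

sse_c_params = {
    "SSECustomerAlgorithm": "AES256",
    "SSECustomerKey": base64.b64encode(key).decode(),
    "SSECustomerKeyMD5": base64.b64encode(hashlib.md5(key).digest()).decode(),
}
```

Every subsequent GET of the object must resend these same fields — lose the key and the object is permanently unrecoverable, which is exactly the custody trade-off the exam tests.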
Common Mistake
Enabling KMS key rotation automatically re-encrypts all data encrypted with that key.
Correct
KMS key rotation only generates a new backing key for future encryption operations. Existing ciphertext is NOT re-encrypted. KMS retains all previous backing key versions to decrypt old ciphertext. To re-encrypt existing data with the new key version, you must explicitly call the ReEncrypt API or re-encrypt at the application level.
This is a classic trap in Security Specialty and SAA-C03 questions about key lifecycle management. Candidates assume rotation solves the problem of old key exposure — it does not retroactively protect already-encrypted data.
REST = Refrigerator (data is STORED, cold, at rest — lock the fridge with KMS). TRANSIT = Train (data is MOVING — seal the train car with TLS). Two different locks for two different states.
SSE options in order of customer control: S3 (AWS owns everything) → KMS (shared control, audit trail) → C (customer owns key, must bring it every time) → CSE (customer encrypts before AWS ever sees it). More letters = more customer responsibility.
TLS termination at ALB = 'The pipe ends at the door' — what happens inside the house (VPC) is a separate concern. NLB TCP pass-through = 'The pipe goes all the way through' — end-to-end sealed.
Assuming that enabling HTTPS (TLS/in-transit encryption) also protects data at rest, OR assuming that enabling at-rest encryption (SSE-KMS) also protects data as it travels over the network — these are always two separate, independent controls that must each be explicitly configured.
CertAI Tutor · 2026-02-22