
Fully managed, multi-active, key-value and document database built for single-digit millisecond performance at any scale
Amazon DynamoDB is a fully managed, serverless, NoSQL database service that delivers consistent single-digit millisecond performance at any scale. It supports both key-value and document data models, offers built-in security, backup and restore, in-memory caching via DAX, and multi-Region, multi-active replication through Global Tables. DynamoDB eliminates the operational overhead of managing database infrastructure, making it ideal for high-traffic applications that require massive horizontal scalability.
Provide a fully managed, highly available, horizontally scalable NoSQL database for applications requiring consistent low-latency reads and writes at any throughput level — without managing servers, clusters, or replication.
Use When
Avoid When
Provisioned Capacity Mode
Specify RCUs and WCUs in advance. Use Auto Scaling to adjust based on CloudWatch metrics. Best for predictable, steady-state workloads.
On-Demand Capacity Mode (Pay-Per-Request)
No capacity planning required. DynamoDB instantly accommodates traffic spikes. Best for unpredictable workloads or new applications. Higher per-request cost than provisioned.
DynamoDB Accelerator (DAX)
Fully managed, in-memory cache for DynamoDB. Reduces read latency from milliseconds to microseconds. API-compatible with DynamoDB SDK. Does NOT support strongly consistent reads or transactions.
DynamoDB Streams
Ordered stream of item-level changes (INSERT, MODIFY, REMOVE). 24-hour retention. Integrates natively with AWS Lambda for event-driven processing.
Kinesis Data Streams for DynamoDB
Alternative to DynamoDB Streams with configurable retention (up to 365 days), fan-out to multiple consumers, and enhanced monitoring via Kinesis.
Global Tables (Multi-Region, Multi-Active)
Active-active replication across multiple AWS Regions. Last-writer-wins conflict resolution. Requires DynamoDB Streams enabled. Version 2019.11.21 is current.
Point-in-Time Recovery (PITR)
Continuous backups with restore to any second in the last 35 days. Restores to a new table. No performance impact on the source table.
On-Demand Backup and Restore
Full table backups with no performance impact. Retained until explicitly deleted. Restores to a new table in the same or different Region.
Time to Live (TTL)
Automatically delete expired items based on a timestamp attribute. No WCU cost for TTL deletions. Deletion may lag up to 48 hours.
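A minimal sketch of setting a TTL attribute on write, assuming a hypothetical `Sessions` table and an `expires_at` TTL attribute (the attribute must hold a Unix epoch timestamp in seconds, stored as a Number):

```python
import time

# Sketch: build a PutItem payload whose "expires_at" attribute drives TTL.
# Table and attribute names are illustrative assumptions.
def build_session_item(session_id, ttl_seconds=3600, now=None):
    now = int(now if now is not None else time.time())
    return {
        "TableName": "Sessions",  # hypothetical table
        "Item": {
            "session_id": {"S": session_id},
            # TTL attribute: epoch seconds, type Number (as a string)
            "expires_at": {"N": str(now + ttl_seconds)},
        },
    }

item = build_session_item("abc123", ttl_seconds=3600, now=1_700_000_000)
# Would be passed to boto3: boto3.client("dynamodb").put_item(**item)
```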
Transactions (ACID)
TransactWriteItems and TransactGetItems provide all-or-nothing atomicity across up to 100 items in one or more tables. Consumes 2x capacity units.
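A sketch of the request shape TransactWriteItems expects, using an illustrative all-or-nothing balance transfer between two items in a hypothetical `Accounts` table (each item in the transaction consumes 2x normal capacity units):

```python
# Sketch: an atomic debit + credit. If the ConditionExpression on the
# debit fails (insufficient balance), the entire transaction is rolled
# back. Table and attribute names are illustrative assumptions.
def build_transfer(from_id, to_id, amount):
    return {
        "TransactItems": [
            {
                "Update": {
                    "TableName": "Accounts",  # hypothetical table
                    "Key": {"account_id": {"S": from_id}},
                    "UpdateExpression": "SET balance = balance - :amt",
                    # Fails the whole transaction if funds are insufficient
                    "ConditionExpression": "balance >= :amt",
                    "ExpressionAttributeValues": {":amt": {"N": str(amount)}},
                }
            },
            {
                "Update": {
                    "TableName": "Accounts",
                    "Key": {"account_id": {"S": to_id}},
                    "UpdateExpression": "SET balance = balance + :amt",
                    "ExpressionAttributeValues": {":amt": {"N": str(amount)}},
                }
            },
        ]
    }

req = build_transfer("alice", "bob", 25)
# Would be passed to boto3.client("dynamodb").transact_write_items(**req)
```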
Conditional Writes
Use ConditionExpression to implement optimistic locking. Write succeeds only if condition is met (e.g., attribute_not_exists for idempotent puts).
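The idempotent-put pattern can be sketched as follows; the table and attribute names are illustrative assumptions:

```python
# Sketch: idempotent create via a conditional put. The write succeeds
# only if no item with this partition key exists; otherwise DynamoDB
# rejects it with ConditionalCheckFailedException.
def build_idempotent_put(user_id, email):
    return {
        "TableName": "Users",  # hypothetical table
        "Item": {"user_id": {"S": user_id}, "email": {"S": email}},
        "ConditionExpression": "attribute_not_exists(user_id)",
    }

# Would be passed to boto3.client("dynamodb").put_item(**build_idempotent_put(...))
```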
PartiQL Support
SQL-compatible query language for DynamoDB. Supports SELECT, INSERT, UPDATE, DELETE. Does NOT support JOINs — still bound by DynamoDB's key-based access patterns.
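A sketch of a parameterized PartiQL statement as passed to ExecuteStatement; the `Orders` table name is an illustrative assumption. Note that without a WHERE clause on the partition key, such a statement degrades to a full scan:

```python
# Sketch: a parameterized PartiQL SELECT. Access is still key-driven:
# filtering on order_id (assumed to be the partition key) makes this an
# efficient lookup rather than a scan.
def build_partiql_query(order_id):
    return {
        "Statement": 'SELECT * FROM "Orders" WHERE order_id = ?',
        "Parameters": [{"S": order_id}],
    }

# Would be passed to boto3.client("dynamodb").execute_statement(**build_partiql_query("o1"))
```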
Local Secondary Indexes (LSI)
Same partition key as base table, different sort key. Must be created at table creation. Shares table throughput. Supports strongly consistent reads.
Global Secondary Indexes (GSI)
Different partition key and/or sort key from base table. Can be created/deleted anytime. Has its own throughput. Only supports eventually consistent reads.
Encryption at Rest
Enabled by default using AWS owned keys. Can use AWS managed keys (aws/dynamodb) or customer managed keys (CMK) via AWS KMS.
VPC Endpoints (Gateway Endpoint)
DynamoDB supports Gateway VPC Endpoints (free). Traffic stays within AWS network. Does not require NAT Gateway or Internet Gateway.
Fine-Grained Access Control (IAM)
IAM policies can restrict access to specific tables, items (using dynamodb:LeadingKeys condition), or attributes.
Export to S3
Export DynamoDB table data to S3 in DynamoDB JSON or Amazon Ion format without consuming RCUs. Requires PITR enabled. Supports incremental exports.
Import from S3
Import data from S3 (CSV, DynamoDB JSON, Amazon Ion) into a new DynamoDB table. Does not consume WCUs during import.
Auto Scaling
Automatically adjusts provisioned throughput based on CloudWatch utilization metrics. Configure min/max capacity and target utilization percentage.
Table Classes
DynamoDB Standard (default) and DynamoDB Standard-IA (Infrequent Access). Standard-IA reduces storage costs by ~60% but increases per-request costs. Best for rarely accessed data.
Strongly Consistent Reads
Returns the most up-to-date data. Consumes 2x RCUs vs eventually consistent. NOT available on GSIs — only on base table and LSIs.
Scan and Query Operations
Query uses key conditions for efficient retrieval. Scan reads every item in the table (expensive). Use FilterExpression to reduce returned data (but capacity is consumed for all scanned items).
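A sketch of a Query request combining a key condition with a FilterExpression, using an illustrative `Orders` table keyed on customer and date. The filter is applied after the read, so capacity is consumed for every item matching the key condition, not just those returned:

```python
# Sketch: Query with key condition (efficient, narrows the read) plus a
# FilterExpression (post-read, trims the response only). Names are
# illustrative assumptions.
def build_recent_orders_query(customer_id, since_iso):
    return {
        "TableName": "Orders",
        "KeyConditionExpression": "customer_id = :c AND order_date >= :d",
        "FilterExpression": "order_status = :s",  # applied AFTER the read
        "ExpressionAttributeValues": {
            ":c": {"S": customer_id},
            ":d": {"S": since_iso},
            ":s": {"S": "SHIPPED"},
        },
    }

# Would be passed to boto3.client("dynamodb").query(**build_recent_orders_query(...))
```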
Deletion Protection
Prevents accidental table deletion. Must be explicitly disabled before a table can be deleted. Does not protect against item-level deletes.
Event-Driven Stream Processing
High exam frequency. DynamoDB Streams triggers Lambda functions on item-level changes (INSERT, MODIFY, REMOVE). Used for real-time aggregations, cross-table denormalization, notifications, and audit logging. Lambda polls the stream as an event source mapping. Batch size and parallelization factor are configurable.
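A minimal sketch of a Lambda handler wired to a DynamoDB Streams event source mapping. The event shape follows the documented Streams record format; the counting logic is an illustrative placeholder for real aggregation work:

```python
# Sketch: Lambda handler for a DynamoDB Streams event source mapping.
# Each record carries eventName = INSERT | MODIFY | REMOVE.
def handler(event, context=None):
    counts = {"INSERT": 0, "MODIFY": 0, "REMOVE": 0}
    for record in event["Records"]:
        counts[record["eventName"]] += 1
    return counts

# Example invocation with a minimal synthetic event:
sample = {"Records": [{"eventName": "INSERT"}, {"eventName": "REMOVE"}]}
# handler(sample) returns {"INSERT": 1, "MODIFY": 0, "REMOVE": 1}
```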
Data Lake Export / Tiered Storage
High exam frequency. Export DynamoDB tables to S3 for analytics (Athena, EMR, Redshift Spectrum) without impacting table performance. Requires PITR enabled. Supports full and incremental exports. Also used for large binary object storage — store S3 URL in DynamoDB, object in S3.
Cache-Aside Pattern
High exam frequency. ElastiCache (Redis or Memcached) sits in front of DynamoDB to cache frequently read items. Application checks cache first; on miss, reads from DynamoDB and populates cache. Reduces DynamoDB RCU consumption and latency for hot items. Use when you need general-purpose caching beyond DynamoDB.
Serverless REST API Backend
High exam frequency. API Gateway → Lambda → DynamoDB is the canonical serverless CRUD application pattern. API Gateway can also integrate directly with DynamoDB via AWS service integrations (no Lambda needed for simple operations). Fully serverless, scales to zero, pay-per-use.
High-Retention Change Data Capture
High exam frequency. Enable Kinesis Data Streams for DynamoDB to capture item-level changes with up to 365-day retention and multiple consumer support. Superior to native DynamoDB Streams (24-hour retention) for compliance, audit, and replay scenarios.
Polyglot Persistence
High exam frequency. Use DynamoDB for high-throughput key-value lookups (session state, user profiles) alongside RDS/Aurora for relational data requiring complex queries and JOINs. Each database serves its optimal use case. Microservices architectures commonly use this pattern.
GraphQL API with Real-Time Subscriptions
High exam frequency. AppSync uses DynamoDB as a data source for GraphQL APIs. Supports real-time subscriptions via DynamoDB Streams. Enables offline data synchronization for mobile/web apps using Amplify DataStore.
Session State Store
High exam frequency. EC2-based web applications store user session state in DynamoDB for shared, stateless session management across multiple instances. TTL automatically expires old sessions. Enables horizontal scaling without sticky sessions.
Workflow State Persistence
High exam frequency. Step Functions uses DynamoDB to store and retrieve workflow state between steps. Step Functions' DynamoDB SDK integrations, combined with optimistic locking via condition expressions, enable idempotent state transitions.
Modernization Migration Pattern
High exam frequency. Migrate monolithic RDS/Aurora relational workloads to DynamoDB for high-scale, low-latency access patterns. Use AWS DMS or custom ETL via Lambda+Streams for data migration. Common in SAP-C02 workload modernization scenarios.
GSIs only support EVENTUALLY consistent reads — NEVER strongly consistent. LSIs support both eventually and strongly consistent reads. If a question requires strongly consistent reads on an alternate key, the answer requires an LSI, not a GSI.
LSIs must be created at TABLE CREATION TIME and CANNOT be added, modified, or deleted afterward. If a question describes adding a secondary index to an existing table with a different sort key, the answer is GSI, not LSI.
DynamoDB Transactions (TransactWriteItems/TransactGetItems) consume 2x the capacity units of equivalent non-transactional operations. This doubles cost and affects capacity planning. Always flag this in cost-optimization questions.
Global Tables are MULTI-ACTIVE (all replicas are writable), NOT active-passive. This is fundamentally different from RDS Multi-AZ (passive standby) and RDS Read Replicas (read-only). On exams, 'active-active multi-region' = DynamoDB Global Tables.
DAX (DynamoDB Accelerator) does NOT support strongly consistent reads or transactional operations. If a question requires strongly consistent reads with caching, DAX is the WRONG answer — you must read directly from DynamoDB.
FilterExpression in Scan/Query does NOT reduce the capacity units consumed — DynamoDB reads and charges for ALL items scanned before filtering. To reduce RCU consumption, use better key design and Query instead of Scan.
GSIs = eventually consistent reads ONLY. LSIs = strongly consistent reads supported BUT must be created at table creation and cannot be added later. If a question needs strongly consistent reads on an alternate key on an existing table — it's an architectural problem requiring a new table with LSI.
Global Tables are ACTIVE-ACTIVE (all Regions writable) — NOT active-passive like RDS Multi-AZ. Any question describing 'multi-region, low-latency writes, active-active' = DynamoDB Global Tables.
FilterExpression does NOT reduce RCU consumption — you pay for all items read before filtering. DAX does NOT support strongly consistent reads or transactions. TTL deletions are free but lag up to 48 hours. These three facts eliminate wrong answers on dozens of exam questions.
BatchWriteItem does NOT support UpdateItem operations — only PutItem and DeleteItem. If you need to batch update items, you must use individual UpdateItem calls or TransactWriteItems (up to 100 items).
TTL deletions are FREE (no WCU cost) and best-effort (up to 48-hour lag). Expired items may still be returned in reads — always filter by your TTL attribute if precision matters. Never use TTL for security-sensitive expiration.
DynamoDB uses a Gateway VPC Endpoint (not Interface Endpoint). Gateway endpoints are FREE and route traffic within the AWS network without needing NAT Gateway or Internet Gateway. This matters for cost-optimized VPC architecture questions.
Hot partition problem: If all reads/writes go to the same partition key value, you hit the 1000 WCU / 3000 RCU per-partition limit and get throttled even if your table has much higher aggregate capacity. Solution: Use high-cardinality partition keys, write sharding, or add random suffixes.
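The write-sharding remedy above can be sketched as a deterministic suffix scheme; the shard count and key format are illustrative assumptions:

```python
import hashlib

# Sketch: deterministic write sharding. Appending a bounded suffix derived
# from a stable per-item value spreads one hot logical key across
# NUM_SHARDS partitions; reads fan out across all suffixes.
NUM_SHARDS = 10  # illustrative choice

def sharded_key(base_key, discriminator):
    # Hash a stable attribute so the same item always lands on the same
    # shard (needed for idempotent writes and targeted single-item reads).
    h = int(hashlib.sha256(discriminator.encode()).hexdigest(), 16)
    return f"{base_key}#{h % NUM_SHARDS}"

def all_shard_keys(base_key):
    # A read of the full logical key queries every shard suffix.
    return [f"{base_key}#{i}" for i in range(NUM_SHARDS)]
```

The trade-off is that sharding multiplies read fan-out by the shard count, so pick the smallest number of shards that stays under the per-partition limits.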
DynamoDB Streams has 24-hour retention. For longer retention, fan-out, or replay capabilities, use Kinesis Data Streams for DynamoDB (up to 365 days). This is the correct answer when questions mention audit trails or compliance retention requirements.
Conditional writes (ConditionExpression) implement optimistic locking. Use 'attribute_not_exists(pk)' to prevent overwriting existing items — this is the correct pattern for idempotent item creation. Pessimistic locking is NOT natively supported in DynamoDB.
DynamoDB Standard-IA (Infrequent Access) table class reduces storage costs significantly but increases per-request costs. Choose Standard-IA for tables with large amounts of data that are rarely accessed — NOT for hot tables.
PartiQL in DynamoDB looks like SQL but does NOT support JOINs, subqueries across tables, or aggregate functions like SUM/COUNT. It's syntactic sugar over DynamoDB's existing API — access patterns are still key-driven.
When exporting DynamoDB to S3, PITR must be enabled. The export does NOT consume RCUs and does NOT affect table performance. Exports are in DynamoDB JSON or Amazon Ion format and can be queried with Athena.
Common Mistake
DynamoDB is a document database like MongoDB, so it supports complex queries, JOINs, and aggregations similar to relational databases.
Correct
DynamoDB is a key-value and document database optimized for single-item lookups by primary key. It does NOT support JOINs, cross-table queries, or SQL aggregations. All access patterns must be designed around the partition key and sort key. Complex queries require pre-computed results stored in the table or offloading to a search/analytics service.
This misconception causes candidates to recommend DynamoDB for relational use cases. On exams, if a question mentions JOINs, complex queries, or relational integrity — the answer is RDS/Aurora, not DynamoDB. DynamoDB excels at known access patterns, not ad-hoc queries.
Common Mistake
DynamoDB Global Tables are like RDS Multi-AZ — one active Region and one passive standby for failover.
Correct
DynamoDB Global Tables are ACTIVE-ACTIVE (multi-active). Every Region is a fully writable replica. Writes in any Region replicate to all other Regions, typically within one second. Conflict resolution uses last-writer-wins based on timestamps. This is fundamentally different from RDS Multi-AZ (passive standby, automatic failover) or RDS Read Replicas (read-only).
Exams frequently test the distinction between active-active and active-passive replication. 'Multi-Region active-active database with low-latency local writes' = DynamoDB Global Tables. 'Automatic failover with a standby' = RDS Multi-AZ. Confusing these leads to wrong architecture answers.
Common Mistake
DAX (DynamoDB Accelerator) is just like ElastiCache — a general-purpose in-memory cache that can be used in front of any database.
Correct
DAX is purpose-built EXCLUSIVELY for DynamoDB. It is API-compatible with DynamoDB (same SDK calls, just point to DAX endpoint) and only works with DynamoDB. ElastiCache (Redis/Memcached) is a general-purpose cache that works with any database. DAX also does NOT support strongly consistent reads or transactional operations — those bypass DAX and go directly to DynamoDB.
Questions that ask about caching for RDS, Aurora, or other databases should NEVER have DAX as the answer — use ElastiCache. Questions about microsecond latency for DynamoDB reads with minimal code changes = DAX. The API compatibility of DAX (no code changes beyond endpoint) is a key exam differentiator.
Common Mistake
Adding a FilterExpression to a DynamoDB Scan or Query reduces the read capacity units consumed because fewer items are returned.
Correct
FilterExpression is applied AFTER DynamoDB reads all items that match the key condition. You are charged for ALL items read (scanned), not just the items returned after filtering. A Scan with FilterExpression that returns 10 items but reads 10,000 items consumes capacity for 10,000 items. The only way to reduce capacity consumption is to use a more selective key condition (Query vs Scan) or better index design.
This is one of the most tested DynamoDB cost and performance misconceptions. Candidates assume filtering = less cost. On exams, if a question asks how to reduce RCU consumption, the answer involves better key design, Query instead of Scan, or appropriate indexes — NOT FilterExpression.
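A toy model of the billing behavior described above, with invented item data, makes the point concrete: capacity tracks items examined, and the filter only trims the response.

```python
# Sketch: simulate a Scan with a FilterExpression. The "examined" count
# models what you are billed RCUs for; "returned" is what the client sees.
def simulate_scan(items, predicate):
    examined = len(items)  # capacity is consumed for ALL of these
    returned = [i for i in items if predicate(i)]
    return examined, returned

# 10,000 synthetic items, of which only 10 match the filter.
items = [{"pk": str(n), "active": n % 1000 == 0} for n in range(10_000)]
examined, returned = simulate_scan(items, lambda i: i["active"])
# examined is 10000 even though only 10 items come back
```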
Common Mistake
You can add a Local Secondary Index (LSI) to an existing DynamoDB table at any time, just like a Global Secondary Index (GSI).
Correct
LSIs can ONLY be created at table creation time. They cannot be added, modified, or deleted after the table exists. GSIs, on the other hand, can be created and deleted at any time on an existing table. This is a critical architectural constraint — if you realize you need an LSI after table creation, you must create a new table with the LSI and migrate your data.
This immutability of LSIs is tested constantly. If a question says 'an existing table needs a new access pattern with the same partition key but different sort key and strongly consistent reads,' the correct architectural answer is to rebuild the table with an LSI — not add one later. GSIs can be added anytime but don't support strongly consistent reads.
Common Mistake
DynamoDB TTL immediately deletes items when they expire, making it suitable for precise time-based expiration (e.g., invalidating security tokens at an exact time).
Correct
TTL deletions are BEST-EFFORT and can lag up to 48 hours after the expiration timestamp. Expired items may still be returned in reads until they are actually deleted. For security-sensitive expiration (tokens, sessions that must be invalid immediately), you must check the TTL attribute value in your application logic and treat expired items as invalid, regardless of whether DynamoDB has deleted them yet.
Using TTL as a security mechanism is an anti-pattern. Exams test whether you understand that TTL is a cost/storage optimization tool, not a precision timing mechanism. The correct pattern for token invalidation is application-level expiry checking combined with TTL for eventual cleanup.
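The application-level expiry check described above can be sketched as follows, assuming an illustrative `expires_at` TTL attribute in DynamoDB's attribute-value format:

```python
import time

# Sketch: treat the TTL attribute as the source of truth at read time,
# because DynamoDB's background deletion may lag up to 48 hours.
def is_live(item, now=None):
    now = now if now is not None else time.time()
    return int(item["expires_at"]["N"]) > now

token = {"expires_at": {"N": "1700000000"}}
assert is_live(token, now=1_699_999_999)      # not yet expired
assert not is_live(token, now=1_700_000_001)  # expired, even if still stored
```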
Common Mistake
DynamoDB Streams and Kinesis Data Streams for DynamoDB are the same thing with different names.
Correct
These are two distinct mechanisms with different capabilities. Native DynamoDB Streams: 24-hour retention, up to 2 concurrent consumers per shard, integrated with Lambda. Kinesis Data Streams for DynamoDB: configurable retention up to 365 days, unlimited consumers via Enhanced Fan-Out, richer monitoring via Kinesis Data Streams metrics, and can feed Kinesis Data Firehose directly. Choose Kinesis when you need longer retention, more consumers, or Firehose integration.
This distinction is increasingly tested as Kinesis Data Streams for DynamoDB becomes more widely adopted. For compliance/audit scenarios requiring long-term change capture, Kinesis is the correct answer. For simple Lambda triggers, native DynamoDB Streams is simpler and sufficient.
Common Mistake
DynamoDB is an in-memory database like Redis/ElastiCache, which is why it's so fast.
Correct
DynamoDB is a DURABLE, disk-based database (SSDs) that achieves single-digit millisecond performance through its distributed architecture, SSD storage, and efficient indexing — not because it stores data in memory. Redis/ElastiCache stores data in RAM (volatile by default, though Redis supports persistence). DynamoDB data survives instance failures, AZ failures, and even Region failures (with Global Tables). DAX adds an in-memory caching layer ON TOP of DynamoDB for microsecond reads.
Confusing DynamoDB with in-memory databases leads to wrong answers about durability, persistence, and appropriate use cases. On exams: DynamoDB = durable, scalable NoSQL. ElastiCache Redis = in-memory, sub-millisecond, volatile (unless persistence configured). DAX = in-memory cache specifically for DynamoDB.
LSI = 'Locked at Start, Identical partition key' — LSIs are locked in at table creation and share the same partition key as the base table. GSI = 'Go anywhere, Start anytime' — GSIs can be created anytime and use any key.
DAX = 'DynamoDB Access eXclusively' — DAX works ONLY with DynamoDB, not any other database. For anything else, use ElastiCache.
FILTER ≠ FEWER charges: FilterExpression filters AFTER reading — you pay for everything DynamoDB touches, not just what it returns. Think of it as a bouncer who reads every ID but only lets some people in — the club still paid for all the checks.
Global Tables = 'All Writers, No Losers' — every Region can write (active-active), and last-writer-wins resolves conflicts. RDS Multi-AZ = 'One Writer, One Waiter' — one active, one passive standby.
TTL = 'Try To Leave (eventually)' — TTL items try to leave (be deleted) but may linger up to 48 hours after expiration. Never trust TTL for security-critical expiration.
Transaction cost rule: 'Double or Nothing' — Transactions cost 2x the capacity of regular operations. Always double your capacity estimate when transactions are in play.
Batch limits to memorize: BatchGetItem = 100 items / 16 MB. BatchWriteItem = 25 items / 16 MB. Transactions = 100 items / 4 MB. Remember: 'Get 100, Write 25, Transact 100 but tighter (4 MB)'.
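The BatchWriteItem 25-item limit typically shows up in code as a chunking loop; a minimal sketch (the required retry of `UnprocessedItems` is deliberately elided):

```python
# Sketch: split write requests into BatchWriteItem-sized chunks of 25.
# Real code must also retry any UnprocessedItems returned per batch.
BATCH_WRITE_LIMIT = 25

def chunk(requests, size=BATCH_WRITE_LIMIT):
    return [requests[i:i + size] for i in range(0, len(requests), size)]

# 60 synthetic PutRequest entries -> batches of 25, 25, 10
writes = [{"PutRequest": {"Item": {"pk": {"S": str(n)}}}} for n in range(60)]
batches = chunk(writes)
# Each batch would go to boto3.client("dynamodb").batch_write_item(
#     RequestItems={"MyTable": batch})  # "MyTable" is a placeholder
```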
CertAI Tutor · DVA-C02, SAA-C03, SAP-C02, DEA-C01, DOP-C02, CLF-C02 · 2026-02-21