
Stop guessing which compute service to use — master the decision framework that appears on every AWS certification exam
Serverless functions vs container orchestration vs managed Kubernetes vs serverless containers — know exactly which to pick
| Feature | Lambda (serverless, event-driven function execution) | ECS (AWS-native managed container orchestration) | EKS (managed Kubernetes for any workload) | Fargate (serverless compute engine for containers) |
|---|---|---|---|---|
| **Compute Model**. Critical trap: Fargate is NOT an orchestrator. It is a launch type/compute engine used BY ECS or EKS. You cannot use Fargate alone — you always pair it with ECS or EKS. | Serverless functions — no servers, no containers to manage | Container orchestration — you define tasks and services on EC2 or Fargate | Managed Kubernetes control plane — nodes can be EC2, Fargate, or EKS Auto Mode | Serverless container runtime — a launch engine used by ECS and EKS, not a standalone orchestrator |
| **Max Execution / Run Duration**. Lambda's 15-minute hard limit is one of the most tested limits on all exams. If a workload exceeds 15 minutes, Lambda is wrong — use ECS/EKS/Fargate or Step Functions chaining. | 15 minutes per invocation (standard). Durable Lambda functions can run stateful workflows up to 1 year. | No time limit — tasks run as long as needed | No time limit — pods run as long as needed | No time limit — inherits from the ECS/EKS task definition |
| **Max Memory**. Lambda's 10,240 MB memory limit is a hard limit from the live docs. Memory allocation also controls vCPU allocation in Lambda proportionally. | 10,240 MB (10 GB) per function | Defined per task — limited by EC2 instance size or Fargate task size | Defined per pod — limited by node size or Fargate profile | Up to 120 GB per task (ECS Fargate); varies by platform version |
| **Concurrency / Scaling Model**. Lambda Provisioned Concurrency eliminates cold starts by pre-warming execution environments — critical for latency-sensitive APIs. | Up to 1,000 concurrent executions per Region by default (a soft limit that can be raised). Scales per invocation automatically. | Scales via ECS Service Auto Scaling (target tracking, step, scheduled), at the task level | Scales via Kubernetes HPA, VPA, Cluster Autoscaler, or Karpenter — most flexible but most complex | Scales with ECS/EKS scaling policies — no infrastructure to scale manually |
| **Infrastructure Management**. EKS Standard = AWS manages the control plane, YOU manage nodes. EKS Auto Mode = AWS manages both. This distinction appears frequently in SAA-C03 and SAP-C02. | Zero — AWS manages everything, including OS, runtime patching, and scaling | You manage EC2 instances with the EC2 launch type; zero with the Fargate launch type | AWS manages the control plane; you manage nodes unless using Auto Mode or Fargate | Zero — AWS manages the underlying compute infrastructure |
| **Container Support**. Lambda CAN run container images (up to 10 GB). This is a common misconception — candidates assume Lambda only runs ZIP packages. | Yes — supports container images up to 10 GB as well as ZIP deployment packages | Yes — the primary use case; runs Docker containers natively | Yes — runs any OCI-compliant container image via Kubernetes pods | Yes — runs containers defined in ECS task definitions or EKS pods |
| **Pricing Model**. For cost-optimization questions: Lambda wins for sporadic/unpredictable traffic. ECS on EC2 with Reserved Instances wins for steady, predictable high-throughput workloads. EKS has a baseline cluster cost, making it expensive for small workloads. | Pay per request + duration (GB-seconds). Free tier: 1M requests/month + 400,000 GB-seconds/month. | No charge for ECS itself — you pay for the EC2 instances or Fargate capacity underneath | $0.10/hour per cluster for the control plane + EC2/Fargate costs for worker nodes | Pay per vCPU-second and GB-second for tasks — no charge for idle capacity |
| **Cold Start Latency**. Lambda Provisioned Concurrency = pre-warmed environments; eliminates cold starts, billed continuously. Lambda SnapStart (for Java) = restores from a snapshot, dramatically reducing cold-start time. | Cold starts occur when new execution environments are initialized; use Provisioned Concurrency to eliminate them | Task startup latency (seconds to minutes for the EC2 launch type); the Fargate launch type is faster | Pod scheduling latency; faster with pre-warmed nodes | Container startup latency — typically faster than ECS on EC2 for a first run, but slower than pre-warmed EC2 |
| **Stateful Workloads**. Lambda is stateless between invocations. For stateful multi-step processes, combine Lambda with Step Functions or use ECS/EKS with persistent volumes. | Stateless by design. Durable Lambda functions enable stateful multi-step workflows up to 1 year; use Step Functions for orchestration. | Supports stateful workloads with EFS/EBS volume mounts | Full stateful-workload support with PersistentVolumes (EBS, EFS, FSx) | Supports EFS mounts for persistent storage; EBS support available for ECS Fargate |
| **Networking / VPC**. Lambda in a VPC does NOT mean traffic stays off the public internet — it means the function can ACCESS VPC resources. Lambda still needs a NAT Gateway to reach the internet from within a VPC. This is a top exam trap. | Can run inside a VPC (with ENI creation) or outside; VPC Lambda has cold-start implications and requires subnets and security groups | Tasks run in a VPC by default, with full VPC networking control | Pods run in a VPC using the VPC CNI plugin, with full networking control | Tasks/pods run in a VPC; each Fargate task gets its own ENI (Elastic Network Interface) |
| **Event-Driven / Trigger Support**. Lambda is the ONLY service here with native, built-in event source mappings. For event-driven architectures, Lambda is almost always the right answer unless the workload exceeds Lambda limits. | Native event-driven triggers: S3, DynamoDB Streams, Kinesis, SQS, SNS, API Gateway, EventBridge, Cognito, ALB, and more | Not natively event-driven — use EventBridge + ECS RunTask for event-triggered tasks | Not natively event-driven — use KEDA (Kubernetes Event-Driven Autoscaling) or custom controllers | Inherits from ECS/EKS — use EventBridge to trigger ECS tasks on Fargate |
| **Kubernetes Compatibility**. If the question mentions "existing Kubernetes workloads", "kubectl", "Helm charts", or "Kubernetes operators", EKS is the answer. ECS does NOT support Kubernetes APIs. | No Kubernetes — proprietary AWS execution model | No Kubernetes — AWS proprietary orchestration (ECS API, not the Kubernetes API) | Full Kubernetes API compatibility — kubectl, Helm, operators, and CRDs all work | Works with EKS (Kubernetes pods on Fargate) — the Kubernetes API is still used |
| **GPU / Specialized Hardware**. For ML training, deep learning, or GPU workloads: EKS or ECS on EC2 with GPU instances. SageMaker is often the better answer for ML training specifically. Fargate and Lambda do NOT support GPUs. | No GPU support for standard functions | Yes — GPU-enabled EC2 instances (p3, p4, g4, g5) via ECS Managed Instances or the EC2 launch type | Yes — GPU nodes with the NVIDIA device plugin for Kubernetes; best for ML training workloads | No GPU support — Fargate does not support GPU workloads |
| **Multi-Region / Hybrid**. EKS Anywhere and ECS Anywhere allow running containers on-premises with the same AWS APIs. This appears in hybrid-architecture questions on SAP-C02 and SAA-C03. | Regional service — use Lambda@Edge or CloudFront Functions for edge execution | Regional — ECS Anywhere extends to on-premises servers | Regional — EKS Anywhere for on-premises; EKS Hybrid Nodes for hybrid cloud | Regional only — no on-premises or edge support |
| **Operational Complexity**. Complexity spectrum: Lambda < ECS Fargate < ECS EC2 < EKS Auto Mode < EKS Fargate < EKS EC2. Always match complexity to team capability and workload requirements. | Lowest — zero infrastructure, automatic scaling, built-in HA | Moderate — lower with the Fargate launch type than with EC2 | Highest — Kubernetes expertise required, though Auto Mode reduces the burden | Low — removes node management from ECS and EKS |
| **Long-Running Batch Jobs**. AWS Batch is built ON TOP of ECS/Fargate. For batch workloads >15 min, the answer is AWS Batch (on ECS/Fargate), ECS directly, or EKS — not Lambda alone. | Not ideal for jobs >15 minutes (use AWS Batch, or Step Functions with Lambda for orchestration) | Excellent — ECS scheduled tasks and AWS Batch on ECS for batch workloads | Excellent — Kubernetes Jobs and CronJobs for batch processing | Excellent — run batch jobs as Fargate tasks with no idle-infrastructure cost |
| **Deployment Package / Artifact**. Lambda supports BOTH ZIP and container-image deployments. Container-image Lambda functions can be up to 10 GB — useful for large ML inference models. | ZIP file (up to 50 MB compressed, 250 MB unzipped) OR a container image up to 10 GB | Docker/OCI container image stored in ECR or any registry | Docker/OCI container image — any OCI-compliant registry | Docker/OCI container image via an ECS task definition or EKS pod spec |
| **Service Mesh / Advanced Networking**. For microservices requiring a service mesh, mutual TLS, or advanced traffic management: EKS > ECS > Lambda. EKS gives the most control for complex microservice architectures. | No native service mesh — use API Gateway for routing | AWS App Mesh integration; Service Connect for service discovery | Full service-mesh support: AWS App Mesh, Istio, Linkerd; ingress controllers | Supports App Mesh when running on ECS/EKS |
| **CI/CD & DevOps Integration**. CodeDeploy Blue/Green deployments work natively with ECS (via ALB target-group switching). Lambda also supports CodeDeploy traffic shifting with aliases. EKS uses Kubernetes-native rolling updates or GitOps tools. | CodePipeline, CodeDeploy (canary/linear/all-at-once deployments), SAM, CDK | CodePipeline + CodeDeploy Blue/Green deployments with ALB; native ECS rolling updates | Full GitOps support: ArgoCD, Flux; also CodePipeline, Helm, kubectl apply | Inherits ECS/EKS deployment strategies |
| **Observability**. All four services integrate with CloudWatch. EKS offers the most observability options, including Prometheus/Grafana for Kubernetes-native metrics. Lambda has automatic CloudWatch Logs integration — no agent needed. | CloudWatch Logs (automatic), CloudWatch Metrics, X-Ray tracing, Lambda Insights | CloudWatch Container Insights, CloudWatch Logs, X-Ray, FireLens log routing | CloudWatch Container Insights, CloudWatch Logs, X-Ray, Prometheus/Grafana, AWS Distro for OpenTelemetry | CloudWatch Container Insights, CloudWatch Logs, X-Ray — inherits from ECS/EKS |
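The Lambda memory-to-vCPU proportionality mentioned in the Max Memory row can be sketched numerically. The AWS docs state that a function gets the equivalent of one full vCPU at 1,769 MB; this is a rough illustration of that linear relationship, not an official calculator.

```python
# Sketch: Lambda CPU share scales linearly with configured memory.
# The 1,769 MB = 1 vCPU anchor point is from the AWS Lambda docs;
# the 10,240 MB ceiling is the hard limit from the table above.
LAMBDA_MIN_MB = 128
LAMBDA_MAX_MB = 10_240
MB_PER_VCPU = 1_769  # documented point where a function gets 1 full vCPU

def approx_vcpus(memory_mb: int) -> float:
    """Approximate vCPU share for a given Lambda memory setting."""
    if not LAMBDA_MIN_MB <= memory_mb <= LAMBDA_MAX_MB:
        raise ValueError(f"memory must be {LAMBDA_MIN_MB}-{LAMBDA_MAX_MB} MB")
    return memory_mb / MB_PER_VCPU

print(round(approx_vcpus(1_769), 2))   # 1.0
print(round(approx_vcpus(10_240), 2))  # 5.79
```

This is why a CPU-bound Lambda function often runs faster (and sometimes cheaper) at a higher memory setting: you are buying CPU, not just RAM.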
Summary
Use Lambda for event-driven, short-duration (≤15 min), stateless workloads where zero infrastructure management is the priority. Use ECS (with Fargate for serverless containers, or EC2 for control/cost at scale) for containerized applications without Kubernetes requirements. Use EKS when you need Kubernetes API compatibility, existing Kubernetes tooling (Helm, operators), or the most advanced container orchestration capabilities. Remember: Fargate is always a launch type for ECS or EKS — it is never used alone.
🎯 Decision Tree
- Is it event-driven AND runs <15 min AND stateless? → Lambda.
- Does it need containers AND Kubernetes APIs/kubectl/Helm? → EKS (with EC2 for GPU/control, Fargate for serverless, or Auto Mode for managed).
- Does it need containers WITHOUT Kubernetes complexity? → ECS (Fargate for no infra management, EC2 for cost optimization at scale or GPU).
- Is the question about the compute engine underneath ECS or EKS tasks? → Fargate (serverless) or EC2 (managed).
- Needs on-premises? → ECS Anywhere or EKS Anywhere.
- Needs GPU/ML training? → EKS or ECS on EC2 with GPU instances (NOT Fargate, NOT Lambda).
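The decision tree above can be sketched as a pure function. The parameter names and returned labels are illustrative shorthand, not official AWS terminology; note that the GPU and on-premises checks come first because they eliminate options regardless of the other answers.

```python
# Sketch of the exam decision tree as a function. Inputs and return
# strings are illustrative; checks are ordered so that hard constraints
# (GPU, on-premises) filter before the general questions.
def pick_compute(event_driven: bool, runtime_min: float, stateless: bool,
                 needs_containers: bool, needs_kubernetes: bool,
                 needs_gpu: bool, on_premises: bool) -> str:
    if needs_gpu:
        return "EKS or ECS on EC2 GPU instances"   # Fargate/Lambda: no GPU
    if on_premises:
        return "ECS Anywhere or EKS Anywhere"
    if event_driven and runtime_min < 15 and stateless:
        return "Lambda"
    if needs_kubernetes:
        return "EKS (EC2, Fargate, or Auto Mode)"
    if needs_containers:
        return "ECS (Fargate or EC2 launch type)"
    return "re-examine requirements"

# A short, stateless S3-triggered job lands on Lambda:
print(pick_compute(True, 5, True, False, False, False, False))  # Lambda
```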
Fargate is NEVER a standalone service. It is always a launch type for ECS or EKS. 'Fargate vs ECS' is a false comparison — they work together. Any question asking you to choose between them is testing whether you know Fargate is a compute engine, not an orchestrator.
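This "Fargate is a launch type" point is visible in the API itself: there is no standalone Fargate API, only a `launchType` field inside an ECS `RunTask` call. A minimal sketch of that request shape, with placeholder cluster, task-definition, subnet, and security-group names:

```python
# Sketch: Fargate appears only as the launchType of an ECS RunTask call.
# All names/IDs here are placeholders; the dict mirrors the boto3
# ecs.run_task(**request) parameter shape.
def fargate_run_task_request(cluster: str, task_def: str,
                             subnets: list[str], sgs: list[str]) -> dict:
    return {
        "cluster": cluster,            # the ECS cluster Fargate runs in
        "taskDefinition": task_def,    # an ECS task definition, required
        "launchType": "FARGATE",       # Fargate = launch type, not a service
        "count": 1,
        "networkConfiguration": {      # each Fargate task gets its own ENI
            "awsvpcConfiguration": {
                "subnets": subnets,
                "securityGroups": sgs,
                "assignPublicIp": "DISABLED",
            }
        },
    }

req = fargate_run_task_request("demo-cluster", "demo-task:1",
                               ["subnet-aaa"], ["sg-bbb"])
# Then: boto3.client("ecs").run_task(**req)
```

There is no way to express "run this container on Fargate" without naming an ECS cluster and task definition (or, on the EKS side, a Fargate profile attached to a cluster).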
Lambda's 15-minute hard limit is the #1 decision filter. If a workload exceeds 15 minutes, Lambda is eliminated. Use ECS/EKS/Fargate for long-running tasks, AWS Batch for batch jobs, or Step Functions + Lambda for orchestrated multi-step workflows that need state.
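The Step Functions option above works by splitting the job into pieces that each fit inside 15 minutes and chaining them. A minimal Amazon States Language definition, built as a Python dict with placeholder Lambda ARNs:

```python
# Sketch: chain several short Lambda invocations with Step Functions so
# the overall workflow can exceed Lambda's 15-minute limit. The ARNs are
# placeholders; the output is a minimal Amazon States Language document.
import json

def chained_workflow(step_arns: list[str]) -> str:
    """Build an ASL state machine that runs each Lambda in sequence."""
    states = {}
    for i, arn in enumerate(step_arns):
        name = f"Step{i + 1}"
        state = {"Type": "Task", "Resource": arn}
        if i + 1 < len(step_arns):
            state["Next"] = f"Step{i + 2}"   # hand off to the next piece
        else:
            state["End"] = True              # last piece ends the workflow
        states[name] = state
    return json.dumps({"StartAt": "Step1", "States": states})

definition = chained_workflow(["arn:aws:lambda:...:function:part-1",
                               "arn:aws:lambda:...:function:part-2"])
```

Each Task state stays under the 15-minute limit while the state machine carries intermediate state between steps for up to a year (Standard workflows).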
EKS = Kubernetes API compatibility. If the exam question mentions kubectl, Helm charts, Kubernetes operators, CRDs, or 'existing Kubernetes workloads', the answer is EKS. ECS uses a completely different API and does NOT support Kubernetes tooling.
Lambda in a VPC does NOT automatically keep traffic off the public internet. Lambda in a VPC needs a NAT Gateway to reach the internet, and VPC Endpoints to privately access AWS services (S3, DynamoDB). Without these, a VPC Lambda has NO internet or AWS service access.
ECS control plane is FREE (EC2 launch type). EKS costs $0.10/hour (~$72/month) per cluster for the control plane. For cost optimization questions with small workloads or dev environments, ECS is cheaper than EKS even if both use EC2 for compute.
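A rough back-of-envelope comparison makes the control-plane cost concrete. The $0.10/hour EKS fee is from the text above; the Lambda unit prices below are assumed us-east-1 x86 list prices and may differ by region or change over time, and the free tier is ignored for simplicity.

```python
# Rough monthly-cost sketch for the control-plane comparison above.
# Lambda unit prices are ASSUMED us-east-1 list prices; the $0.10/hour
# EKS control-plane fee comes from the text. Free tier ignored.
HOURS_PER_MONTH = 730

def lambda_monthly_cost(requests: int, avg_ms: int, memory_gb: float) -> float:
    per_million_requests = 0.20      # assumed list price, USD
    per_gb_second = 0.0000166667     # assumed list price, USD
    gb_seconds = requests * (avg_ms / 1000) * memory_gb
    return requests / 1e6 * per_million_requests + gb_seconds * per_gb_second

def eks_control_plane_monthly_cost() -> float:
    return 0.10 * HOURS_PER_MONTH    # $0.10/hour per cluster, from the text

# A small dev workload: 100k requests/month, 200 ms average, 512 MB.
print(round(lambda_monthly_cost(100_000, 200, 0.5), 2))  # 0.19
print(round(eks_control_plane_monthly_cost(), 2))        # 73.0
```

For a workload this small, the EKS cluster fee alone is hundreds of times the entire Lambda bill, before any worker-node cost, which is the point the exam is testing.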
Lambda supports container images up to 10 GB — not just ZIP packages. For large ML models or complex dependencies, Lambda container images eliminate the need to move to ECS/EKS just because of package size.
GPU workloads (ML training, deep learning, CUDA) require ECS or EKS on EC2 GPU instances. Neither Fargate nor standard Lambda supports GPUs. SageMaker is often the best answer for ML training specifically.
EKS Standard = AWS manages control plane only (you manage nodes). EKS Auto Mode = AWS manages BOTH control plane AND nodes. This distinction is increasingly tested as EKS Auto Mode reduces the gap between EKS and ECS Fargate operational complexity.
For event-driven architectures, Lambda is almost always correct for the compute layer — it has native event source mappings for S3, SQS, Kinesis, DynamoDB Streams, SNS, EventBridge, API Gateway, and more. ECS/EKS require EventBridge + RunTask for event-triggered execution.
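The "EventBridge + RunTask" pattern for ECS can be sketched as the target payload EventBridge needs. This builds the `Targets` entry for a `put_targets` call; all ARNs, names, and subnet IDs are placeholders.

```python
# Sketch: event-triggering an ECS task via EventBridge, since ECS has no
# native event source mappings. Builds the Targets entry for
# events.put_targets; ARNs and IDs are placeholders.
def ecs_event_target(cluster_arn: str, task_def_arn: str, role_arn: str,
                     subnets: list[str]) -> dict:
    return {
        "Id": "run-ecs-task",
        "Arn": cluster_arn,                 # the target is the ECS cluster
        "RoleArn": role_arn,                # role EventBridge assumes to RunTask
        "EcsParameters": {                  # RunTask details for the rule
            "TaskDefinitionArn": task_def_arn,
            "TaskCount": 1,
            "LaunchType": "FARGATE",        # no idle capacity between events
            "NetworkConfiguration": {
                "awsvpcConfiguration": {"Subnets": subnets}
            },
        },
    }

target = ecs_event_target("arn:aws:ecs:...:cluster/demo",
                          "arn:aws:ecs:...:task-definition/demo:1",
                          "arn:aws:iam::...:role/events-role",
                          ["subnet-aaa"])
# Then: boto3.client("events").put_targets(Rule="my-rule", Targets=[target])
```

Contrast this with Lambda, where the same wiring is a one-step event source mapping or trigger, which is why Lambda usually wins event-driven questions.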
The #1 exam trap is treating Fargate as a competitor to ECS/EKS rather than understanding it is a launch type WITHIN ECS and EKS. Candidates who choose 'Fargate' as a complete answer without specifying ECS or EKS are architecturally wrong. The second biggest trap is eliminating Lambda for 'long-running workflows' without considering Step Functions or Durable Lambda functions, which extend Lambda's reach to multi-step, stateful processes lasting up to 1 year.
CertAI Tutor · DEA-C01, SAA-C03, DOP-C02, DVA-C02, SCS-C02, AIF-C01, SAP-C02, CLF-C02 · 2026-02-22