
Managed Kubernetes control plane so you focus on apps, not infrastructure
Amazon Elastic Kubernetes Service (EKS) is a fully managed Kubernetes control plane that eliminates the need to install, operate, and maintain your own Kubernetes masters. EKS runs upstream Kubernetes, is certified Kubernetes conformant, and integrates deeply with AWS services like IAM, VPC, ECR, ALB, and CloudWatch. Worker nodes can run on EC2 (self-managed or managed node groups), AWS Fargate, or AWS Outposts, giving teams flexible compute options without sacrificing Kubernetes portability.
Run production-grade, portable Kubernetes workloads on AWS without managing the Kubernetes control plane, etcd, or master nodes — while retaining full Kubernetes API compatibility.
Managed Control Plane (etcd + API server)
AWS manages HA masters across multiple AZs; you never SSH into control plane nodes
Managed Node Groups
AWS automates EC2 provisioning, AMI updates, and draining for rolling upgrades
Fargate compute for pods
Serverless pods — no node management; does NOT support DaemonSets, privileged containers, or GPU workloads
AWS Outposts support
Run EKS worker nodes on Outposts for low-latency on-premises workloads; control plane remains in AWS region
EKS Anywhere (on-premises)
Separate product using EKS Distro; runs on-premises with optional AWS connectivity
IAM Roles for Service Accounts (IRSA)
Fine-grained IAM permissions per Kubernetes pod via OIDC federation — the recommended way to grant AWS API access to pods
EKS Pod Identity (newer alternative to IRSA)
Pod-level IAM permissions without managing an OIDC provider; introduced as a streamlined alternative to IRSA
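A minimal IRSA sketch: annotate the pod's service account with the IAM role it should assume (account ID, role name, and namespace below are placeholders):

```yaml
# IRSA: the EKS OIDC provider lets pods using this service account
# assume the annotated IAM role. Names and ARN are illustrative.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader            # hypothetical service account
  namespace: app             # hypothetical namespace
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/s3-reader-role
```

Pods that reference `serviceAccountName: s3-reader` receive temporary credentials for that role only, instead of inheriting the node's EC2 instance role.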
AWS Load Balancer Controller (ALB/NLB ingress)
Provisions ALB for Kubernetes Ingress resources and NLB for LoadBalancer services
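A minimal Ingress sketch assuming the AWS Load Balancer Controller is installed (service name, port, and path are illustrative):

```yaml
# The controller watches Ingress resources with ingressClassName: alb
# and provisions an Application Load Balancer for them.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web                  # hypothetical app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # route directly to pod IPs (VPC CNI)
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```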
Amazon VPC CNI plugin
Pods get native VPC IP addresses — enables VPC security groups, routing, and direct pod-to-pod communication
Security Groups for Pods
Assign EC2 security groups directly to individual pods (requires VPC CNI); NOT supported on Fargate
EBS CSI Driver (persistent volumes)
Dynamically provision EBS volumes for stateful workloads; EBS is AZ-scoped — pods must schedule in the same AZ as their volume
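A StorageClass sketch for the EBS CSI driver; `WaitForFirstConsumer` delays volume creation until the pod is scheduled, so the volume is created in the pod's AZ rather than an arbitrary one:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-topology         # hypothetical name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
# Topology-aware binding: avoids the AZ-mismatch mount failure
# described above by provisioning in the consuming pod's AZ.
volumeBindingMode: WaitForFirstConsumer
```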
EFS CSI Driver (shared persistent volumes)
ReadWriteMany access mode — multiple pods across AZs can share the same EFS filesystem
Cluster Autoscaler / Karpenter
Karpenter is the AWS-native next-gen node provisioner — faster and more cost-efficient than Cluster Autoscaler; both work with EKS
Horizontal Pod Autoscaler (HPA)
Scales pod replicas based on CPU/memory or custom metrics (via Metrics Server or KEDA)
Vertical Pod Autoscaler (VPA)
Adjusts pod resource requests/limits automatically; do not combine it with HPA on the same CPU/memory metric, as the two autoscalers will conflict
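A typical HPA manifest, assuming Metrics Server is installed (Deployment name, replica bounds, and threshold are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # hypothetical
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # hypothetical Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```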
AWS Secrets Manager / SSM Parameter Store integration
Via Secrets Store CSI Driver — mount secrets as volumes or env vars without storing them in Kubernetes Secrets (etcd)
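A sketch of a `SecretProviderClass` for the AWS provider (the Secrets Manager secret name is a placeholder); the pod then mounts it through a CSI volume:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: db-credentials       # hypothetical
spec:
  provider: aws
  parameters:
    objects: |
      # objectName is a hypothetical Secrets Manager secret
      - objectName: "prod/db-password"
        objectType: "secretsmanager"
```

Combine with IRSA so only the consuming pod's service account is allowed to read that secret.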
Amazon GuardDuty EKS Protection
Analyzes Kubernetes audit logs and runtime behavior for threat detection
DaemonSets on Fargate
CRITICAL: DaemonSets are NOT supported on Fargate — use EC2 node groups for workloads requiring DaemonSets (logging agents, monitoring agents)
Privileged containers on Fargate
Fargate enforces a security boundary — no privileged pods, no host networking, no hostPath volumes
GPU workloads on Fargate
GPU instance types require EC2 node groups; Fargate does not support GPU-accelerated pods
Windows node groups
Windows Server containers supported on EC2 node groups; NOT supported on Fargate
EKS Blueprints / CDK constructs
Infrastructure-as-code accelerators for production-ready EKS clusters with add-ons pre-configured
Amazon CloudWatch Container Insights
Cluster, node, pod, and container-level metrics and logs — requires CloudWatch agent or Fluent Bit DaemonSet on EC2 nodes
Serverless Kubernetes Pods
Use Fargate profiles to run specific namespaces or label-selected pods on serverless compute — eliminates node group management for suitable workloads. Critical constraint: Fargate does NOT support DaemonSets, privileged containers, GPUs, or Windows containers. Best for stateless, variable-load microservices.
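As one way to define a Fargate profile, an eksctl `ClusterConfig` fragment might look like this (cluster, namespace, and label names are placeholders):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo                 # hypothetical cluster
  region: us-east-1
fargateProfiles:
  - name: serverless-apps
    selectors:
      # Pods in this namespace carrying this label run on Fargate;
      # everything else still schedules onto EC2 node groups.
      - namespace: web
        labels:
          compute: fargate
```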
IAM Roles for Service Accounts (IRSA) / EKS Pod Identity
Grant individual pods fine-grained AWS API permissions via OIDC federation (IRSA) or the newer EKS Pod Identity feature. Eliminates the anti-pattern of storing AWS credentials in pods or using overly broad EC2 instance role permissions. Each pod/service account gets its own IAM role.
Private Container Image Registry
Store, version, scan, and pull container images from ECR. EKS nodes authenticate to ECR automatically via EC2 instance role. ECR image scanning (basic and enhanced via Inspector) integrates into CI/CD pipelines. Cross-account ECR access requires explicit resource-based policies.
Managed and Self-Managed Node Groups
EC2 worker nodes provide full Kubernetes feature support including DaemonSets, GPUs, Windows, privileged containers, and custom AMIs. Managed node groups automate lifecycle management (provisioning, updates, drain). Use Spot instances in separate node groups for cost optimization with Karpenter or Cluster Autoscaler.
Event-Driven Sidecar or Trigger Pattern
Lambda is NOT a replacement for EKS containers — they serve different purposes. Lambda can trigger EKS jobs via EventBridge or SNS, or EKS pods can invoke Lambda for short-lived tasks. Do not migrate long-running containerized services to Lambda; use EKS or ECS instead.
Container Orchestration Choice
ECS is AWS-native, simpler, no control plane cost, and integrates seamlessly with AWS services. EKS is Kubernetes-compatible, portable, and required for teams with existing Kubernetes investments. They are NOT interchangeable for Kubernetes migrations — ECS uses task definitions, not Kubernetes manifests.
Kubernetes Ingress and Service Load Balancing
AWS Load Balancer Controller provisions ALB for Ingress resources (HTTP/HTTPS, path-based routing, WAF integration) and NLB for LoadBalancer-type Services (TCP/UDP, static IPs, PrivateLink). ALB is recommended for HTTP microservices; NLB for TCP workloads requiring static IPs or ultra-low latency.
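An NLB-backed Service sketch using AWS Load Balancer Controller annotations (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tcp-app              # hypothetical
  annotations:
    # Hand this Service to the AWS Load Balancer Controller (NLB),
    # not the legacy in-tree provider (which would create a CLB).
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: tcp-app
  ports:
    - port: 443
      targetPort: 8443
      protocol: TCP
```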
Cluster Observability
Container Insights provides cluster/node/pod metrics. Fluent Bit DaemonSet (on EC2) ships logs to CloudWatch Logs. On Fargate, logging uses the built-in log router (Fluent Bit as a sidecar — no DaemonSet possible). GuardDuty EKS Protection analyzes audit logs for threats.
Secure Secret Injection
Secrets Store CSI Driver mounts Secrets Manager or SSM Parameter Store secrets as volumes into pods. Avoids storing sensitive data in Kubernetes Secrets (stored in etcd, base64-encoded, not encrypted by default unless envelope encryption is enabled). Combine with IRSA for pod-level secret access control.
Hybrid On-Premises Kubernetes
EKS worker nodes run on Outposts racks for low-latency, data-residency, or disconnected requirements. The Kubernetes control plane remains in the AWS region — requires reliable connectivity for control plane operations. For fully disconnected on-premises, use EKS Anywhere instead.
Intelligent Node Autoscaling
Karpenter is an open-source, AWS-native node provisioner that provisions right-sized EC2 instances directly (bypassing Auto Scaling Groups) in seconds. Supports Spot interruption handling, consolidation, and mixed instance types. Preferred over Cluster Autoscaler for new EKS deployments due to speed and cost efficiency.
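A hedged `NodePool` sketch (fields below follow the Karpenter v1 schema; the API has changed across releases, so verify against your installed version):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        # Allow Spot with On-Demand fallback for cost optimization
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default        # hypothetical EC2NodeClass
  disruption:
    # Repack workloads onto fewer, cheaper nodes when possible
    consolidationPolicy: WhenEmptyOrUnderutilized
```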
EKS and ECS are NOT interchangeable for Kubernetes migrations. ECS uses AWS-specific task definitions and does not understand Kubernetes manifests, Helm charts, or kubectl. A question asking how to migrate a Kubernetes workload to AWS with minimal refactoring always points to EKS, not ECS.
Fargate on EKS does NOT support DaemonSets, privileged containers, GPU instances, Windows containers, or hostPath volumes. Any question describing a workload that requires a DaemonSet (e.g., a logging agent or monitoring agent on every node) CANNOT use Fargate — it requires EC2 node groups.
EKS charges $0.10/hour per cluster control plane regardless of workload. This means even an idle EKS cluster costs ~$72/month. For cost-optimization questions, ECS has no control plane charge — this is a real cost differentiator when choosing between EKS and ECS for simple containerized applications.
IRSA (IAM Roles for Service Accounts) is the secure way to give pods AWS API permissions. The anti-pattern is using the EC2 instance role for all pods on a node — this gives every pod on that node the same permissions. IRSA uses OIDC federation to assign per-pod IAM roles. EKS Pod Identity is the newer, simpler alternative.
EBS persistent volumes in EKS are Availability Zone-scoped. If a pod with an EBS-backed PVC is rescheduled to a node in a different AZ, it will fail to mount the volume. Use EFS (ReadWriteMany, multi-AZ) for shared or cross-AZ persistent storage, or use topology-aware scheduling to keep pods in the same AZ as their EBS volumes.
When a question asks how to migrate existing Kubernetes workloads to AWS with minimal refactoring, the answer is ALWAYS EKS — not ECS. ECS is AWS-native and incompatible with Kubernetes manifests, Helm, and kubectl.
Fargate on EKS does NOT support DaemonSets, privileged containers, or GPUs. Any workload requiring a DaemonSet (logging/monitoring agents) MUST use EC2 node groups — this eliminates Fargate as an answer for those scenarios.
EKS charges $0.10/hour per cluster control plane on top of compute costs. For cost-optimization questions comparing EKS vs. ECS for simple containerized applications, ECS wins on cost because it has no cluster management fee.
Self-managed Kubernetes on EC2 (without EKS) does NOT reduce complexity — it increases it dramatically. You become responsible for etcd backups, control plane HA, Kubernetes upgrades, security patching of masters, and API server availability. Exam questions about 'reducing operational overhead' always favor EKS over self-managed Kubernetes.
EKS Anywhere is for running Kubernetes on-premises using the EKS distribution (EKS-D). It is a DIFFERENT product from EKS in the cloud. For hybrid architecture questions: use EKS Anywhere for fully on-premises, EKS + Outposts for on-premises nodes with cloud control plane, and EKS in the cloud for cloud-native workloads.
Kubernetes etcd data in EKS is encrypted at rest by default using AWS-managed keys. You can enable envelope encryption using a customer-managed KMS key for Kubernetes Secrets — this is an additional security layer beyond the default. This matters for compliance and security architecture questions.
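One way to enable envelope encryption at cluster creation, sketched as an eksctl fragment (cluster name and key ARN are placeholders):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: secure-cluster       # hypothetical
  region: us-east-1
secretsEncryption:
  # Customer-managed KMS key used to envelope-encrypt Kubernetes Secrets
  keyARN: arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID
```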
The AWS Load Balancer Controller must be installed as an add-on to use ALB Ingress or NLB Service types in EKS. It is NOT built in. The classic Kubernetes in-tree load balancer creates Classic Load Balancers (CLBs), which are legacy — exam questions about modern EKS architectures expect ALB (for HTTP/HTTPS Ingress) or NLB (for TCP/UDP Services).
Karpenter is the modern, preferred node autoscaler for EKS. It provisions nodes directly without Auto Scaling Groups (ASGs), responds faster than Cluster Autoscaler, and supports node consolidation to reduce cost. For new architectures, recommend Karpenter over Cluster Autoscaler. Both are valid for the exam but Karpenter is increasingly featured.
EKS extended support bills clusters at a higher hourly rate for Kubernetes versions beyond the standard 14-month support window. This is a cost trap in real environments and a newer exam topic — always recommend upgrading clusters on a regular cadence to avoid extended support charges.
Common Mistake
ECS and EKS are interchangeable — you can migrate a Kubernetes application to ECS without code changes
Correct
ECS and EKS use completely different orchestration models. ECS uses Task Definitions, Services, and clusters with no Kubernetes API compatibility. Kubernetes manifests, Helm charts, kubectl commands, CRDs, and RBAC policies are meaningless in ECS. Migrating a Kubernetes app to ECS requires significant refactoring. For Kubernetes portability, EKS is the only AWS-managed option.
This is the #1 EKS misconception on certification exams. Questions will describe a company with existing Kubernetes workloads and ask how to migrate to AWS — the answer is always EKS (not ECS) when the requirement is 'minimal refactoring' or 'Kubernetes compatibility'.
Common Mistake
Running self-managed Kubernetes on EC2 reduces complexity and gives you more control with less overhead than EKS
Correct
Self-managed Kubernetes dramatically INCREASES operational complexity. You become responsible for: installing and configuring etcd, setting up HA control plane across AZs, performing Kubernetes version upgrades manually, patching master node OS, backing up etcd, and maintaining API server availability. EKS offloads all of this with a 99.95% SLA. The only reason to self-manage is for very specific customization needs not supported by EKS.
Exam questions about 'reducing operational overhead' or 'minimizing undifferentiated heavy lifting' always favor EKS over self-managed Kubernetes. The managed control plane is the entire value proposition of EKS.
Common Mistake
AWS Fargate on EKS can handle all enterprise Kubernetes workload types, making EC2 node groups unnecessary
Correct
Fargate on EKS has significant constraints: no DaemonSets, no privileged containers, no GPU support, no Windows containers, no hostPath volumes, no host networking, and limited instance size flexibility. Enterprise workloads commonly require DaemonSets for log collection (Fluentd, Fluent Bit) and monitoring (Datadog agent, Prometheus node exporter) — these REQUIRE EC2 node groups. Most production EKS clusters use a hybrid model: Fargate for stateless microservices, EC2 for DaemonSet-dependent or stateful workloads.
Exam questions will describe a workload needing a DaemonSet or privileged container and ask if Fargate is suitable — the answer is NO. Knowing Fargate's specific limitations is more important than knowing what it supports.
Common Mistake
Lambda can replace containerized microservices on EKS for any workload to reduce costs
Correct
Lambda has a 15-minute maximum execution timeout, limited ephemeral storage (up to 10 GB), and is designed for stateless, event-driven, short-duration functions. Long-running services, streaming processors, stateful applications, and workloads requiring specific runtimes or system dependencies are NOT suitable for Lambda. EKS (or ECS) is the correct choice for persistent, long-running containerized services. Lambda's container image support lets you package a function as a container image, but it remains subject to all Lambda limits.
Questions will try to trick you into choosing Lambda as a 'serverless' option for containerized applications — Lambda and EKS solve fundamentally different problems. Lambda = event-driven functions; EKS = long-running containerized services.
Common Mistake
EKS is free because AWS manages the control plane — you only pay for EC2 instances
Correct
EKS charges $0.10 per hour (~$72/month) per cluster for the managed control plane, IN ADDITION to EC2 or Fargate compute costs. A development cluster with minimal EC2 nodes still incurs the $72/month cluster fee. ECS, by contrast, has no cluster management fee. For cost-sensitive architectures with simple containerized workloads, ECS is more cost-effective than EKS.
Cost-optimization questions on the exam require knowing the EKS control plane charge exists. Choosing EKS for a simple web application when ECS would suffice is a cost anti-pattern.
Common Mistake
Kubernetes Secrets in EKS are encrypted and secure by default
Correct
Kubernetes Secrets in etcd are base64-encoded (NOT encrypted) by default. EKS encrypts etcd data at rest using AWS-managed keys, but Kubernetes Secrets themselves are not envelope-encrypted unless you explicitly enable KMS envelope encryption for the cluster. Additionally, any pod in the same namespace can read a Secret by default unless RBAC is configured. Use AWS Secrets Manager with the Secrets Store CSI Driver for production-grade secret management.
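To see why base64 is not encryption, note that a Kubernetes Secret's `data` values decode trivially (the name and value below are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-secret            # hypothetical
type: Opaque
data:
  # base64 of "password123" — anyone with read access to the Secret
  # (or to unencrypted etcd data) can decode this in one command.
  password: cGFzc3dvcmQxMjM=
```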
Security architecture questions about protecting sensitive configuration data in Kubernetes should lead to Secrets Manager + CSI Driver or KMS envelope encryption, not just Kubernetes Secrets.
EKS Fargate CANNOT: 'D-P-G-W' — No DaemonSets, No Privileged containers, No GPUs, No Windows. If a workload needs any of these, use EC2 node groups.
EKS vs ECS decision: 'K for Kubernetes = K for Keep your manifests' — if you have Kubernetes manifests/Helm charts, use EKS. If you're building new on AWS with no Kubernetes investment, ECS is simpler and cheaper.
IRSA = 'I Restrict Service Accounts' — IAM Roles for Service Accounts gives each pod its own IAM identity instead of sharing the node's EC2 role.
EBS = 'Exactly one Box, one Shelf' — EBS volumes are AZ-locked. One pod, one AZ. For multi-AZ shared storage, use EFS.
EKS control plane cost reminder: '$0.10/hr = roughly $72/month per cluster' — even an empty cluster costs money; ECS has no such charge.
CertAI Tutor · SAA-C03, SAP-C02, DEA-C01, DOP-C02, CLF-C02, DVA-C02 · 2026-02-21