
Automate your release pipeline from source to production — fully managed, event-driven, and deeply integrated with the AWS ecosystem.
AWS CodePipeline is a fully managed continuous delivery service that automates the build, test, and deploy phases of your release process every time there is a code change. It integrates natively with AWS services like CodeBuild, CodeDeploy, CloudFormation, Lambda, and S3, as well as third-party tools like GitHub, Jenkins, and Jira. CodePipeline enables fast, reliable application and infrastructure updates using a visual pipeline model with stages, actions, and transitions.
Orchestrate and automate end-to-end CI/CD workflows across build, test, and deployment stages without managing infrastructure.
Visual Pipeline Editor
Drag-and-drop pipeline creation in the AWS Console with stage/action configuration.
Pipeline V2 Type (Enhanced features)
V2 pipelines support triggers with filtering (branch, file path, tags), variables, and per-execution metadata. V1 is legacy.
Pipeline Variables
V2 pipelines support pipeline-level and stage-level variables that can be passed between actions.
Manual Approval Actions
Pause pipeline execution and send SNS notifications for human approval before proceeding.
Parallel Actions (runOrder)
Actions with the same runOrder integer within a stage execute in parallel.
Cross-Region Actions
Deploy to multiple AWS regions from a single pipeline using cross-region artifact replication buckets.
Cross-Account Deployments
Use cross-account IAM roles to deploy to different AWS accounts from a central pipeline account.
EventBridge Integration (native)
Pipeline state changes emit events to EventBridge. V2 pipelines use EventBridge for source triggers natively.
CloudWatch Metrics and Alarms
Pipeline execution metrics available in CloudWatch for monitoring success/failure rates.
CloudTrail Audit Logging
All API calls logged in CloudTrail for compliance and audit purposes.
Webhook Triggers (GitHub/GitHub Enterprise)
Event-driven source triggers via webhooks — preferred over polling.
Source: Amazon S3
Trigger pipelines on S3 object changes using EventBridge or polling.
Source: AWS CodeCommit
Native integration; note CodeCommit is no longer accepting new customers as of 2024.
Source: GitHub / GitHub Enterprise
Uses GitHub App connection (preferred) or OAuth. GitHub App connections are more secure and support fine-grained permissions.
Source: Bitbucket / GitLab
Via AWS CodeStar Connections (now called AWS CodeConnections).
Source: Amazon ECR
Trigger pipelines on new container image pushes to ECR repositories.
Deploy: AWS CloudFormation
Create, update, delete stacks and change sets. Supports StackSets for multi-account/region deployments.
Deploy: AWS CodeDeploy
Blue/green and in-place deployments to EC2, ECS, Lambda.
Deploy: Amazon ECS
Standard and blue/green (with CodeDeploy) ECS deployments.
Deploy: AWS Elastic Beanstalk
Deploy application versions to Beanstalk environments.
Deploy: AWS Service Catalog
Automate product version updates in Service Catalog.
Invoke: AWS Lambda
Run Lambda functions as pipeline actions for custom logic, testing, or notifications.
Invoke: AWS Step Functions
Trigger Step Functions state machines from pipeline actions for complex orchestration.
Notifications via SNS
Manual approval actions send SNS notifications. Pipeline events can also route to SNS via EventBridge.
Dead-Letter Queue (DLQ) for EventBridge targets
Must be explicitly configured on EventBridge rule targets — NOT automatic. Critical for production pipelines.
Encryption with AWS KMS
Artifact store S3 bucket can use SSE-KMS with a customer-managed key (CMK) for cross-account scenarios.
Resource-based policies / IAM
CodePipeline uses service roles with IAM policies. Cross-account requires trust policies on both sides.
Pipeline as Code (AWS CDK / CloudFormation / Terraform)
Pipelines can be fully defined as infrastructure-as-code.
Trigger Filtering (V2 only)
V2 pipelines support filtering triggers by branch name, file path glob patterns, and Git tags.
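As a sketch, a V2 push trigger with branch and file-path filters can be declared like this (field names follow the CodePipeline PipelineTriggerDeclaration shape; the source action name and glob patterns are placeholders — verify against the current API reference before use):

```python
def v2_push_trigger(source_action, branches=None, paths=None, tags=None):
    """Build a V2 trigger declaration with optional Git push filters.

    Shape follows the CodePipeline PipelineTriggerDeclaration /
    GitPushFilter API; all filter values are caller-supplied placeholders.
    """
    push_filter = {}
    if branches:
        push_filter["branches"] = {"includes": branches}
    if paths:
        push_filter["filePaths"] = {"includes": paths}
    if tags:
        push_filter["tags"] = {"includes": tags}
    return {
        "providerType": "CodeStarSourceConnection",
        "gitConfiguration": {
            "sourceActionName": source_action,
            "push": [push_filter],
        },
    }

# Trigger only on pushes to main or release branches that touch src/
trigger = v2_push_trigger(
    "GitHubSource", branches=["main", "release/*"], paths=["src/**"]
)
```

The resulting dict slots into the `triggers` list of a V2 pipeline definition, whether created via the CLI, boto3, or infrastructure as code.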
Execution Modes: QUEUED, SUPERSEDED, PARALLEL
V2 pipelines support configurable execution modes. SUPERSEDED (the default, matching V1's behavior) cancels older in-progress runs when a new one starts.
Build and Test Stage
High frequency. CodeBuild is used as the Build and/or Test action provider in CodePipeline. CodePipeline passes source artifacts to CodeBuild via S3, CodeBuild executes the buildspec.yml, and output artifacts are passed to subsequent stages. This is the most common CodePipeline integration pattern.
Blue/Green and In-Place Deployment
High frequency. CodeDeploy is used as the Deploy action provider. Supports EC2/on-premises in-place, EC2 blue/green, ECS blue/green, and Lambda canary/linear/all-at-once deployments. CodePipeline passes the deployment artifact; CodeDeploy handles the rollout strategy.
Artifact Store and Source Provider
High frequency. S3 serves dual roles: (1) as the mandatory artifact store between pipeline stages, and (2) as a source provider to trigger pipelines on S3 object uploads. For cross-region pipelines, separate artifact buckets must exist in each target region.
Custom Logic Invoke Action
High frequency. Lambda functions can be invoked as pipeline actions for custom testing, notifications, environment setup, or approval logic. Lambda must call back to CodePipeline with PutJobSuccessResult or PutJobFailureResult, or the action will time out.
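A minimal handler sketch for this contract (the `client` parameter is added here purely for testability; `run_custom_checks` is a hypothetical stand-in for your action's actual logic):

```python
def run_custom_checks():
    """Hypothetical placeholder for the action's custom logic."""
    pass

def handler(event, context, client=None):
    """CodePipeline Lambda invoke action.

    The function MUST report the job result explicitly via the
    CodePipeline API; returning normally or raising an exception
    signals nothing on its own.
    """
    if client is None:  # real invocations build a boto3 client
        import boto3
        client = boto3.client("codepipeline")
    # The job ID arrives in the event payload and keys the callback
    job_id = event["CodePipeline.job"]["id"]
    try:
        run_custom_checks()
        client.put_job_success_result(jobId=job_id)
    except Exception as exc:
        client.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(exc)},
        )
```

Without the `put_job_success_result` / `put_job_failure_result` call, the action sits in InProgress until its timeout expires, however quickly the function itself returns.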
Pipeline Monitoring and Event-Driven Automation
High frequency. CodePipeline emits state change events to EventBridge (pipeline started, succeeded, failed, stage changed, action changed). These events can trigger Lambda, SNS, SQS, or other targets. CloudWatch metrics provide pipeline execution success/failure rates for alarms.
Infrastructure as Code Deployment Pipeline
High frequency. CloudFormation is used as the Deploy action to create/update/delete stacks. Supports CREATE_UPDATE, DELETE_ONLY, CHANGE_SET_EXECUTE modes. CloudFormation StackSets can be used for multi-account/multi-region deployments. This is the canonical IaC pipeline pattern.
Human-in-the-Loop Approval Gate
High frequency. A Manual Approval action pauses the pipeline and sends an SNS notification to approvers. Approvers use the console, CLI, or SDK to approve/reject. This is used between staging and production stages to enforce human oversight.
Container Image CI/CD Pipeline
High frequency. An ECR image push triggers the pipeline via EventBridge. CodeBuild builds and pushes the Docker image to ECR. CodeDeploy (blue/green) or the ECS direct deploy action updates the ECS service with the new task definition revision.
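One way to wire the ECR trigger is an EventBridge rule whose event pattern matches successful image pushes; a sketch (the repository name "my-app" is a placeholder):

```python
# EventBridge event pattern matching successful image pushes to a
# single ECR repository. Attach this pattern to a rule whose target
# is the pipeline (via a role that allows codepipeline:StartPipelineExecution).
ecr_push_pattern = {
    "source": ["aws.ecr"],
    "detail-type": ["ECR Image Action"],
    "detail": {
        "action-type": ["PUSH"],
        "result": ["SUCCESS"],
        "repository-name": ["my-app"],  # placeholder repository name
    },
}
```

Filtering on `result: SUCCESS` avoids triggering the pipeline on failed push attempts.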
Complex Orchestration Invoke
High frequency. Step Functions state machines can be triggered as pipeline actions for workflows requiring complex branching, retries, or parallel processing that exceed what a single Lambda can handle.
Multi-Account Deployment Pipeline
High frequency. A central 'tools' account hosts the pipeline. Cross-account IAM roles in target accounts allow CodePipeline to deploy. The artifact S3 bucket and KMS key must grant access to target account IAM roles. This is the AWS Organizations recommended pattern.
EventBridge DLQ is NOT configured automatically — you MUST explicitly configure a Dead-Letter Queue on EventBridge rule targets for production CodePipeline triggers. Without a DLQ, failed event deliveries are silently dropped.
Lambda invoke actions in CodePipeline MUST call back PutJobSuccessResult or PutJobFailureResult to the CodePipeline API, or the action will hang until it times out. This is a mandatory contract — forgetting it is the #1 Lambda action failure.
For cross-region CodePipeline deployments, you MUST create a separate S3 artifact bucket in EACH target region. CodePipeline replicates artifacts to these buckets. Without this, cross-region deploy actions will fail.
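In the pipeline definition this takes the form of an `artifactStores` map keyed by region; a sketch with placeholder bucket names:

```python
def artifact_stores(buckets_by_region):
    """Map each region a pipeline acts in to its own S3 artifact store,
    as required for cross-region actions. Bucket names are placeholders;
    each bucket must already exist in its region."""
    return {
        region: {"type": "S3", "location": bucket}
        for region, bucket in buckets_by_region.items()
    }

stores = artifact_stores({
    "us-east-1": "pipeline-artifacts-use1",
    "eu-west-1": "pipeline-artifacts-euw1",
})
```

A single-region pipeline uses the singular `artifactStore` field instead; the plural map is what enables cross-region replication.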
Pipeline V2 type is the modern standard — it supports trigger filtering (branch, file path, tags), pipeline variables, and three execution modes (QUEUED, SUPERSEDED, PARALLEL). V1 is legacy and lacks these features. Exam questions increasingly reference V2 behaviors.
For cross-account deployments, the artifact S3 bucket policy AND the KMS key policy in the tools account must explicitly grant permissions to the deployment account's IAM role. Missing either one causes access denied errors — both are required.
ALWAYS configure a Dead-Letter Queue (SQS) on EventBridge rule targets for CodePipeline triggers. DLQs are NOT automatic — without them, throttled or failed event deliveries are silently dropped, causing missed pipeline triggers.
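A sketch of the target entry passed to EventBridge's `put_targets` with an explicit DLQ (all ARNs are placeholders, and the retry values are illustrative, not recommendations):

```python
def rule_target_with_dlq(target_arn, role_arn, dlq_arn, target_id="codepipeline"):
    """Target entry for events.put_targets with an explicit SQS DLQ.

    DeadLetterConfig must be set per target -- it is never added for
    you. All ARNs here are caller-supplied placeholders.
    """
    return {
        "Id": target_id,
        "Arn": target_arn,          # e.g. the pipeline ARN
        "RoleArn": role_arn,        # role allowing StartPipelineExecution
        "DeadLetterConfig": {"Arn": dlq_arn},  # undeliverable events land here
        "RetryPolicy": {
            "MaximumRetryAttempts": 4,
            "MaximumEventAgeInSeconds": 3600,
        },
    }

target = rule_target_with_dlq(
    "arn:aws:codepipeline:us-east-1:111111111111:my-pipeline",
    "arn:aws:iam::111111111111:role/eventbridge-invoke",
    "arn:aws:sqs:us-east-1:111111111111:pipeline-trigger-dlq",
)
```

The DLQ's SQS queue policy must also allow `sqs:SendMessage` from the EventBridge rule, or dead-lettering itself will fail silently.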
Lambda invoke actions MUST explicitly call PutJobSuccessResult or PutJobFailureResult via the CodePipeline API. Lambda return values and exceptions do NOT auto-signal CodePipeline — omitting these calls leaves the action stuck in InProgress until timeout.
Cross-account deployments require BOTH the S3 artifact bucket policy AND the KMS key policy in the tools account to grant the target account's IAM role access. One without the other causes AccessDenied — both authorization layers are independently required.
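The two independent statements might look like this (account IDs, ARNs, and the action lists are illustrative; trim them to least privilege for your setup):

```python
TOOLS_BUCKET = "arn:aws:s3:::tools-artifact-bucket"      # placeholder
TARGET_ROLE  = "arn:aws:iam::222222222222:role/deploy"   # placeholder

# Statement for the artifact bucket policy (tools account): lets the
# target account's deploy role read artifacts.
bucket_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": TARGET_ROLE},
    "Action": ["s3:GetObject", "s3:GetBucketLocation", "s3:ListBucket"],
    "Resource": [TOOLS_BUCKET, TOOLS_BUCKET + "/*"],
}

# Independent statement for the KMS key policy (also tools account):
# required separately when the bucket uses SSE-KMS.
key_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": TARGET_ROLE},
    "Action": ["kms:Decrypt", "kms:DescribeKey"],
    "Resource": "*",  # key policies scope to the key they are attached to
}
```

Granting only one of the two still yields AccessDenied: S3 authorizes the object read, KMS authorizes the decrypt, and each check is evaluated on its own.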
Custom action job workers use a POLLING model — they call PollForJobs on the CodePipeline API. They are NOT pushed events. Design custom action workers with polling loops and handle the nonce token correctly.
Manual Approval actions use SNS for notifications, but the approval itself is performed via the CodePipeline console, CLI (aws codepipeline put-approval-result), or SDK. SNS just notifies — it does not perform the approval.
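A sketch of the arguments for `put_approval_result` (the token must first be read from `get_pipeline_state` for the pending approval action; all names here are placeholders):

```python
def approval_kwargs(pipeline, stage, action, token, approved, summary):
    """Arguments for codepipeline.put_approval_result (or the matching
    `aws codepipeline put-approval-result` CLI call). The token is
    obtained from get_pipeline_state for the pending approval action."""
    return {
        "pipelineName": pipeline,
        "stageName": stage,
        "actionName": action,
        "token": token,
        "result": {
            "status": "Approved" if approved else "Rejected",
            "summary": summary,
        },
    }

kwargs = approval_kwargs(
    "my-pipeline", "Prod", "ManualApproval",
    token="example-token",  # placeholder; real token comes from the API
    approved=True, summary="Reviewed staging metrics",
)
```

An SNS subscriber (for example a Lambda) could automate this call based on its own checks, but that is custom code; the notification alone never approves anything.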
EventBridge is ASYNCHRONOUS — throttling EventBridge does NOT block the source event from being received. It means the delivery to the target (e.g., Lambda, SQS) may be delayed or dropped. This is why DLQ configuration is essential.
CodePipeline pricing differs between V1 and V2: V1 charges $1/pipeline/month (first pipeline free); V2 charges per action execution minute. For cost optimization questions, V2 is cheaper for infrequent pipelines; V1 may be cheaper for pipelines with very long-running actions.
Actions within a stage that have the SAME runOrder value execute IN PARALLEL. Actions with different runOrder values execute SEQUENTIALLY within the stage. This is how you design fan-out test parallelism within a single stage.
CodePipeline source actions for S3 require versioning to be enabled on the source S3 bucket. Without versioning, CodePipeline cannot detect object changes reliably.
The SUPERSEDED execution mode (the default, and the only behavior V1 supports) means that if a new pipeline execution starts while one is already running, the older execution is superseded (cancelled). Use QUEUED mode to process all executions in order, or PARALLEL mode to run them simultaneously.
CodePipeline integrates with AWS Chatbot to send pipeline notifications to Slack or Microsoft Teams — useful for DevOps team visibility without building custom Lambda notification functions.
Common Mistake
EventBridge throttling blocks new events from being received, so CodePipeline will never miss a trigger event.
Correct
EventBridge is fully asynchronous. Throttling affects delivery to targets (Lambda, SQS, etc.), NOT reception of events. If a target is throttled and no DLQ is configured, the event delivery attempt is eventually abandoned and the pipeline trigger is silently lost.
This is one of the most dangerous misconceptions in production pipelines. The fix is to ALWAYS configure a Dead-Letter Queue (SQS) on EventBridge rule targets for CodePipeline triggers. Exam questions test whether you know DLQ configuration is manual and not automatic.
Common Mistake
Lambda automatically scales to handle any volume of EventBridge events sent to a CodePipeline trigger, so no throttling or dropped events are possible.
Correct
Lambda has concurrency limits (default 1000 per region, adjustable). If EventBridge sends more events than Lambda can handle concurrently, Lambda throttles the invocations. Without a DLQ on the EventBridge rule, throttled events are dropped after retry exhaustion — not queued indefinitely.
Lambda auto-scaling is real but bounded. The key insight is that EventBridge → Lambda delivery failures, including throttles that prevent the event from ever being accepted, require a DLQ at the EventBridge rule level; the Lambda function's own DLQ only covers async events the function has already accepted. Exam questions test this distinction.
Common Mistake
CodePipeline's Manual Approval action sends an SNS notification that, when the subscriber responds, automatically approves the pipeline.
Correct
SNS in Manual Approval is notification-only. The actual approval must be performed by a human using the AWS Console, AWS CLI (aws codepipeline put-approval-result), or SDK. SNS cannot approve a pipeline — it only notifies the approver.
Candidates confuse 'notification' with 'action'. SNS is a fire-and-forget notification service. The approval is a separate, explicit human action against the CodePipeline API. You could build a Lambda subscriber to auto-approve based on logic, but that requires custom code.
Common Mistake
Custom action job workers in CodePipeline receive push notifications (webhooks) when a job is available, similar to how GitHub webhooks work.
Correct
Custom action job workers use a POLLING model exclusively. They must repeatedly call PollForJobs on the CodePipeline API to check for available jobs. CodePipeline does not push jobs to workers — workers must pull them.
This architectural distinction matters for designing reliable custom action integrations. Workers need polling loops with appropriate sleep intervals and must handle the continuation token (nonce) correctly. Exam questions may present a scenario where a custom action isn't executing and ask why — the answer is often that the job worker stopped polling.
Common Mistake
For cross-account CodePipeline deployments, only the IAM role in the target account needs to be configured — CodePipeline handles artifact access automatically.
Correct
Cross-account deployments require BOTH: (1) the S3 artifact bucket policy in the tools account must grant the target account's IAM role access, AND (2) the KMS key policy (if using SSE-KMS) must also grant the target account's IAM role decrypt permissions. Missing either one causes AccessDenied errors.
AWS resource-based policies and KMS key policies are independent authorization layers. IAM role trust alone is insufficient when the artifact bucket uses KMS encryption. This is a classic multi-step authorization failure that appears in SAP-C02 and DOP-C02 scenario questions.
Common Mistake
A Lambda function invoked as a CodePipeline action will automatically signal success or failure back to CodePipeline based on whether the function throws an exception.
Correct
Lambda actions in CodePipeline require the function to EXPLICITLY call PutJobSuccessResult or PutJobFailureResult on the CodePipeline API. An exception or successful Lambda return does NOT automatically signal CodePipeline. Without these API calls, the action will remain in 'InProgress' state until it times out.
This is the #1 implementation mistake with Lambda pipeline actions. The Lambda function receives a job ID in the event payload and must use it to report back. Forgetting this means pipelines appear stuck. Exam questions test this explicit callback requirement.
Common Mistake
CodePipeline V1 and V2 pipelines have identical features — V2 is just a newer UI.
Correct
V2 pipelines have significantly enhanced capabilities: trigger filtering by branch/file path/tags, pipeline-level variables, three execution modes (QUEUED, SUPERSEDED, PARALLEL), and per-action-minute pricing. V1 lacks all of these. V2 is the recommended type for all new pipelines.
Exam questions increasingly reference V2-specific features. If a question describes trigger filtering on file paths or branch names, it's describing V2 behavior. Knowing the V1/V2 distinction helps you answer 'which pipeline type supports X?' questions correctly.
STAB = Source → Test → Approve → Build/Deploy — the canonical CodePipeline stage order mnemonic
DLQ = 'Don't Lose Queue-events' — always configure a DLQ on EventBridge targets for production pipelines
POLL not PUSH = Custom action job workers POLL for jobs; they are never pushed to
BOTH keys unlock cross-account = S3 bucket policy AND KMS key policy must BOTH grant target account access
Lambda MUST call back = PutJobSuccessResult / PutJobFailureResult — Lambda doesn't auto-signal CodePipeline
V2 = Variables, Versioned-triggers, Varied-execution-modes — the three V2 superpowers
runOrder SAME = SIMULTANEOUS, runOrder DIFFERENT = SEQUENTIAL (within a stage)
CertAI Tutor · DOP-C02, DVA-C02, SAP-C02, DEA-C01, CLF-C02 · 2026-02-21