How to Set Up a CI/CD Pipeline with GitHub Actions and AWS: Complete Guide
By Braincuber Team
Published on April 2, 2026
In this article, we will learn how to set up a CI/CD pipeline with GitHub Actions and AWS. I have divided the guide into three parts to help you work through it. First, we will cover some important terminology so you are not lost in a pile of buzzwords. Second, we will set up continuous integration so we can automatically run builds and tests. Finally, we will set up continuous delivery so we can automatically deploy our code to AWS. The result is a beginner-friendly, step-by-step guide to automating your entire software delivery pipeline.
What You'll Learn:
- What a CI/CD Pipeline is and how it speeds up feature delivery
- The five core concepts of GitHub Actions: jobs, workflows, events, actions, and runners
- How to configure GitHub Actions for continuous integration (build and test)
- How to set up AWS Elastic Beanstalk for application hosting
- How to configure continuous delivery with automatic deployment to AWS
- The difference between Continuous Delivery and Continuous Deployment
Automated Testing
Run tests automatically on every push or pull request to catch bugs before they reach production.
One-Click Deployment
Deploy to AWS Elastic Beanstalk automatically with zero manual intervention after code review approval.
Secure Secrets Management
Store AWS credentials as GitHub repository secrets, never exposing sensitive data in your codebase.
Version Control Integration
Git-based workflows with automatic triggers on push, pull request, or scheduled events.
Part One: Demystifying the Hefty Buzzwords
The key to making sense of the title of this piece lies in understanding the terms CI/CD Pipeline, GitHub Actions, and AWS.
What Is a CI/CD Pipeline?
A CI/CD Pipeline is simply a development practice. It tries to answer this one question: How can we ship quality features to our production environment faster? In other words, how can we hasten the feature release process without compromising on quality?
Without a CI/CD pipeline, each step in the feature delivery cycle is performed manually by a developer. To build the source code, someone on your team has to run the build command by hand; the same goes for running tests and deploying.
The CI/CD approach is a radical shift from this manual way of working. It rests on a simple premise: we can speed up the feature release process considerably if we automate steps like building, testing, deploying to UAT, and finally deploying to production each time a team member pushes a change to the shared repo.
| Concept | Description | Key Characteristic |
|---|---|---|
| Continuous Integration | Build process is initiated and tests run on a new change | Automated build and test on every commit |
| Continuous Delivery | Newly integrated change is automatically deployed to UAT, then manually to production | Manual gate before production deployment |
| Continuous Deployment | Update in UAT is automatically deployed to production as an official release | Fully automated, no manual intervention |
Note: If the deployment from the UAT environment to the production environment is initiated manually, then it is a Continuous Integration/Continuous Delivery setup. Otherwise, it is a Continuous Integration/Continuous Deployment setup.
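In GitHub Actions, that manual gate before production is typically modeled with environments. A minimal sketch (the environment name `production` and its required-reviewer protection rule, which you would configure in the repo settings, are assumptions for illustration):

```yaml
jobs:
  deploy-production:
    runs-on: ubuntu-latest
    # The job pauses for manual approval only if the "production"
    # environment has a required-reviewers protection rule configured
    # in the repository settings.
    environment: production
    steps:
      - run: echo "Deploying to production..."
```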
What Are GitHub Actions?
In the CI/CD pipeline, GitHub Actions is the part that automates the boring stuff. Think of it as a plugin that comes bundled with every GitHub repository you create. The plugin sits on your repo and executes whatever tasks you tell it to, and you usually describe those tasks in a YAML configuration file.
At the core of GitHub Actions lie five concepts:
Jobs
The tasks you command GitHub Actions to execute through the YAML config file. A job could be building your source code, running tests, or deploying the built code to a remote server.
Workflows
Essentially automated processes that contain one or more logically related jobs. For example, you could put the build and run tests jobs into the same workflow, and the deployment job into a different workflow. GitHub Actions considers each configuration file that you put in the .github/workflows folder in your repo a workflow.
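Following the split described above, a repo might look like this (the file names are illustrative; any `.yml` file in the folder counts as a workflow):

```
.github/
└── workflows/
    ├── build-and-test.yml   # workflow with the build and test jobs
    └── deploy.yml           # workflow with the deployment job
```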
Events
Literally the events that trigger the execution of a job by GitHub Actions. For example, is it on-PR to main? Is it on-push to main? Is it on-merge to main? A job can only be executed by a GitHub Action when some event happens. You could also schedule jobs (e.g., at 2am everyday).
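The scheduled trigger mentioned above is expressed in cron syntax. For instance, an `on:` block that reacts to pushes and pull requests on main, and also runs every day at 2am UTC, would look like this (a minimal sketch):

```yaml
on:
  push:
    branches: [main]      # run on every push to main
  pull_request:
    branches: [main]      # run on every PR targeting main
  schedule:
    - cron: '0 2 * * *'   # run at 02:00 UTC every day
```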
Actions
Reusable units of code that you can call from your config file. You can write your own custom actions or use existing ones from the GitHub Marketplace.
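For example, instead of writing shell commands to clone your repo onto the runner, you reuse the official checkout action from the Marketplace:

```yaml
steps:
  - name: Checkout code
    uses: actions/checkout@v2   # a marketplace action: owner/repo@version
  - name: Say hello
    run: echo "Hello"           # a plain shell step, for contrast
```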
Runners
The remote computers that GitHub Actions uses to execute your jobs. GitHub provides Ubuntu Linux, Microsoft Windows, and macOS runners to run your workflows.
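If you want the same job to run on several runner types, a build matrix can fan it out across operating systems (a sketch):

```yaml
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}   # one job instance per OS in the matrix
    steps:
      - run: echo "Running on ${{ matrix.os }}"
```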
What Is AWS?
AWS stands for Amazon Web Services. It is a platform owned by Amazon, and this platform allows you access to a broad range of cloud computing services.
Cloud computing platforms like AWS save you the stress of having to set up your own hardware infrastructure from scratch. Instead, they let you upload your application to one of their pre-configured computers over the internet. In return, you pay them a fee.
In its simplest form, cloud computing is about storing or executing (sometimes both) certain things on someone else's computer, usually over a network.
| Service Category | Purpose | Example Services |
|---|---|---|
| Compute Service | Upload and execute source code that powers applications | Elastic Beanstalk, EC2, Lambda |
| Storage Service | Persist media files and static assets | Amazon S3, EFS |
| Database Service | Manage relational and NoSQL databases | Amazon RDS, DynamoDB |
Part Two: Continuous Integration - How to Automatically Run Builds and Tests
In this section, we will configure GitHub Actions to automatically run builds and tests on every push or pull request to the main branch of a repo.
Prerequisites
- A Django project set up locally, with at least one view defined that returns some response
- A testcase written for the view(s) you have defined
Now that you have a Django project set up locally, let us configure GitHub Actions.
How to Configure GitHub Actions
Okay, so we have our project set up. We also have a testcase written for the view we have defined, and most importantly, we have pushed our change to GitHub.
The goal is to have GitHub trigger a build and run our tests each time we push or open a pull request on main/master. We just pushed our change to main, but GitHub Actions did not trigger the build or run our tests.
Why not? Because we have not defined a workflow yet. Remember, a workflow is where we specify the jobs we want GitHub Actions to execute.
Every GitHub repo has an Actions tab. If you navigate to it, you will know whether a repo has any workflows defined: a repo with a workflow will show a list under the heading "All Workflows" in the Actions tab.
To define one, create a YAML file in the .github/workflows folder of your project (the file name is up to you, e.g. build-and-test.yml):

```yaml
name: Build and Test

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Python Environment
        uses: actions/setup-python@v2
        with:
          python-version: '3.x'
      - name: Install Dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Run Tests
        run: |
          python manage.py test
```
Let us make sense of each line in the file above:
- name: Build and Test - This is the name of our workflow. When you navigate to the actions tab, each workflow you define will be identified by the name you give it here on that list.
- on: - This is where you specify the events that should trigger the execution of our workflow. In our config file we passed it two events. We specified the main branch as the target branch.
- jobs: - Remember, a workflow is just a collection of jobs.
- test: - This is the name of the job we have defined in this workflow. You could name it anything, really. Notice it is the only job, with no separate build job? Python is an interpreted language, so there is no compile step to run.
- runs-on: - GitHub provides Ubuntu Linux, Microsoft Windows, and macOS runners to run your workflows. This is where you specify the type of runner you want to use. In our case, we are using the Ubuntu Linux runner.
- steps: - A job is made up of a series of steps that are usually executed sequentially on the same runner. Each step is marked by a hyphen. name gives the step a label, and each step is either a shell script (defined with run) or an action (defined with uses).
Now that you have defined a workflow by adding the config file to the .github/workflows folder, commit and push your change to your remote repo. If you navigate to the Actions tab of the repo, you should see a workflow named Build and Test listed there.
Part Three: Continuous Delivery - How to Automatically Deploy Our Code to AWS
In this section, we will see how we can have GitHub Actions automatically deploy our code to AWS on push or pull request to the main branch. AWS offers a broad range of services. For this tutorial, we will be using a compute service called Elastic Beanstalk.
Compute Service? Elastic Beanstalk? What Does That Mean?
Remember we mentioned that cloud computing is all about storing and executing certain things on someone else's computer over the internet? For example, we can store and execute source code, or we can simply store media files. Amazon knows this, and as a result, their cloud infrastructure encompasses a plethora of service categories, each letting us do one of those certain things.
Each service in a category just presents us with a different way of solving the problem that the category it belongs to addresses. For example, each service in the compute category provides us with a different approach to deploying and executing our application code on the cloud - one problem, different approaches. Elastic Beanstalk is one of the services in the compute category. Others are, but not limited to, EC2 and Lambda.
Of all the compute services, why Elastic Beanstalk? Well, because it is one of the easiest to work with.
Our Deployment Architecture
For brevity's sake we are going with the Continuous Delivery setup. In addition, we are going to have just one deployment environment that will serve as our UAT environment.
In summary, this is how our deployment setup is going to work: on push or pull request to main, GitHub Actions will test and upload our source code to Amazon S3. The code is then pulled from Amazon S3 to our Elastic Beanstalk environment. Picture the flow this way:
GitHub → Amazon S3 → Elastic Beanstalk
Why are we not pushing directly to Elastic Beanstalk, you might ask? The only other way to upload code directly to an Elastic Beanstalk instance with our current setup is the AWS Elastic Beanstalk CLI (EB CLI). The EB CLI runs shell commands that prompt for interactive input. Deploying from a local machine, we would be there to type in the responses; but with our current setup, those commands run on GitHub runners, where nobody is there to respond.
With the approach we have picked, we run a shell command that uploads our code to S3 and another that pulls the uploaded code into our Elastic Beanstalk instance. These commands do not prompt for input, so routing through Amazon S3 is the easiest way to go.
Step 1: Setup an AWS Account
Create an IAM user. To keep things simple, when adding permissions, attach the "AdministratorAccess" policy to the user (this has some security pitfalls, though). To accomplish this, follow the steps in modules 1 and 2 of the AWS setup environment guide.
In the end, make sure to grab and keep your AWS access key ID and secret access key. We will need them in the subsequent sections.
Step 2: Setup Your Elastic Beanstalk Environment
Once logged into your AWS account, take the following steps to set up your Elastic Beanstalk environment.
Search for Elastic Beanstalk
Search for "elastic beanstalk" in the search field in the AWS console. Then click on the Elastic Beanstalk service.
Create a New Environment
Click on the "Create a New Environment" prompt. Make sure to select "Web server environment" in the next step.
Configure Environment Details
Submit an application name, an environment name, and also select a platform. For this tutorial, we are going with the Python platform.
Wait for Environment Creation
Once you submit the form, after a while your application and its associated environment will be created. You should see the names you submitted displayed on the left side bar. Grab the application name and the environment name. We will be needing them in the subsequent steps.
Step 3: Configure Your Project for Elastic Beanstalk
By default, Elastic Beanstalk looks for a file named application.py in our project and uses it to run our application. We do not have that file, so we need to tell Elastic Beanstalk to use our project's wsgi.py instead. Create a folder named .ebextensions at the root of your project and add a config file to it (e.g. .ebextensions/django.config) with the following content:
```yaml
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: django_github_actions_aws.wsgi:application
```
One last thing you need to do in this section is to go to your settings.py file and update the ALLOWED_HOSTS setting to allow all hosts:
```python
ALLOWED_HOSTS = ['*']
```
Security Warning: Wildcard Hosts
Using the wildcard * for ALLOWED_HOSTS has major security drawbacks; we are only using it here for demo purposes. In production, you should list your actual host names (for example, your Elastic Beanstalk environment's domain).
Step 4: Update Your Workflow File
There are five important pieces of information we need to complete this step: application name, environment name, access key id, secret access key, and the server region (after login, you can grab the region from the right-most section of the navbar).
Because the access key ID and secret access key are sensitive data, we will store them as repository secrets and reference them from our workflow file. To do that, head over to the Settings tab of your repo, then go to Secrets and variables > Actions. There, you can create your secrets as key-value pairs.
Now update your workflow file to add a deployment job:

```yaml
name: Build, Test and Deploy

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Python Environment
        uses: actions/setup-python@v2
        with:
          python-version: '3.x'
      - name: Install Dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Run Tests
        run: |
          python manage.py test

  deploy:
    needs: [test]
    runs-on: ubuntu-latest
    steps:
      - name: Checkout source code
        uses: actions/checkout@v2
      - name: Generate deployment package
        run: zip -r deploy.zip . -x '*.git*'
      - name: Deploy to EB
        uses: einaregilsson/beanstalk-deploy@v20
        with:
          aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          application_name: django-github-actions-aws
          environment_name: django-github-actions-aws
          version_label: 12348
          region: "us-east-2"
          deployment_package: deploy.zip
```
Key concepts in the deployment job:
- needs: [test] - Simply tells GitHub Actions to only start executing the deployment job after the test job has been completed with a passing status.
- Generate deployment package - Creates a zip file of your entire project, excluding the .git directory.
- Deploy to EB - Uses an existing action, einaregilsson/beanstalk-deploy@v20. Remember how we said actions are reusable pieces of code that take care of frequently repeated tasks? This is one of those actions.
To reinforce the above, remember that our deployment was supposed to go through the following steps: GitHub → Amazon S3 → Elastic Beanstalk. However, throughout this tutorial, we did not do any Amazon S3 setup. Furthermore, in our workflow file we did not upload to an S3 bucket nor did we pull from an S3 bucket to our Elastic Beanstalk environment.
Normally, we are supposed to do all that, but we did not here - because under the hood, the einaregilsson/beanstalk-deploy@v20 action does all the heavy lifting for us. You can also create your own action that takes care of some repetitive tasks and make it available to other developers through the GitHub Marketplace.
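If you did want to do it by hand, the equivalent workflow steps would look roughly like this (a hedged sketch using the AWS CLI; the bucket name `my-deploy-bucket` is a placeholder, the runner would need AWS credentials configured, and the real action handles many more edge cases):

```yaml
      - name: Upload package to S3
        run: aws s3 cp deploy.zip s3://my-deploy-bucket/deploy-12348.zip
      - name: Create EB application version
        run: |
          aws elasticbeanstalk create-application-version \
            --application-name django-github-actions-aws \
            --version-label 12348 \
            --source-bundle S3Bucket=my-deploy-bucket,S3Key=deploy-12348.zip
      - name: Deploy the new version to the environment
        run: |
          aws elasticbeanstalk update-environment \
            --environment-name django-github-actions-aws \
            --version-label 12348
```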
Now that you have updated your workflow file locally, commit and push the change to your remote. Your jobs will run and your code will be deployed to the Elastic Beanstalk environment you created. And that is it. We are done!
Need Help Setting Up CI/CD Pipelines?
Braincuber's DevOps experts can help you architect, implement, and optimize CI/CD pipelines with GitHub Actions, AWS, and more. 500+ successful cloud projects delivered.
Frequently Asked Questions
What is the difference between Continuous Delivery and Continuous Deployment?
Continuous Delivery automatically deploys to UAT but requires manual approval for production. Continuous Deployment automatically deploys all the way to production without any manual intervention.
Why use GitHub Actions instead of Jenkins or CircleCI?
GitHub Actions is fully integrated into GitHub, requires no separate infrastructure, offers 2000 free build minutes per month for private repos, and has a growing marketplace of reusable actions.
How do I store AWS credentials securely in GitHub?
Go to your repository Settings > Secrets and variables > Actions. Add your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as repository secrets. Access them in workflows using ${{ secrets.AWS_ACCESS_KEY_ID }}.
Why route deployment through Amazon S3 instead of directly to Elastic Beanstalk?
Direct EB CLI deployment requires interactive input which is not possible on GitHub runners. The beanstalk-deploy action handles the S3 upload and EB deployment automatically without interactive prompts.
What is the needs keyword in GitHub Actions?
The needs keyword creates job dependencies. When deploy needs test, the deployment job only runs after the test job completes successfully. If tests fail, deployment is skipped.
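For example, with three jobs you can express a fan-in dependency, where deployment waits on several upstream jobs (a minimal sketch):

```yaml
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - run: echo "lint"
  test:
    runs-on: ubuntu-latest
    steps:
      - run: echo "test"
  deploy:
    needs: [lint, test]   # runs only after both jobs succeed
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy"
```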
