How to Set Up Automated Deployment in AWS with Python Boto3
By Braincuber Team
Published on May 8, 2026
Manually provisioning servers and deploying applications is tedious, error-prone, and does not scale. This complete tutorial shows you how to automate the entire deployment pipeline in AWS using Python and the Boto3 SDK. You will learn to programmatically provision EC2 instances, configure network security, set up IAM roles, and deploy a real Flask web application from GitHub, all from a single Python script. By the end, a single command launches your server, installs dependencies, clones your repository, and starts your web app.
What You Will Learn:
- How to install and configure Boto3, the AWS SDK for Python
- How to programmatically create and manage AWS IAM users and credentials
- How to read CSV credentials into your Python application
- How to create security groups and IAM roles for EC2 access
- How to provision Ubuntu EC2 instances from Python code
- How to execute remote commands on EC2 using AWS SSM
- How to deploy a Flask web app from GitHub to a live server
Prerequisites
| Requirement | Details |
|---|---|
| Python | 3.6 or higher installed on your local machine |
| AWS Account | Active AWS account with billing enabled |
| Git | Git installed for cloning repositories to the server |
| AWS Knowledge | Basic familiarity with EC2, IAM, and security groups |
| Linux Terminal | Familiarity with basic Bash commands (Ubuntu examples used) |
What is Automated Deployment?
Automated deployment is the practice of using scripts and tools to provision infrastructure and deploy applications without manual intervention. Instead of SSHing into a server, installing dependencies by hand, and copying files manually, a single script handles the entire process. This approach is fundamental to DevOps, continuous integration, and continuous deployment (CI/CD) pipelines.
Amazon Web Services provides the AWS SDK for Python (Boto3) to interact with its services programmatically. Boto3 handles the low-level HTTP operations, authentication, request signing, and error handling so you can focus on the logic of your infrastructure. In this tutorial, you use Boto3 to provision an EC2 instance, configure networking, and deploy a Flask application all from a single Python script.
Infrastructure as Code
Define your entire infrastructure in Python code. Provision servers, configure networking, and deploy applications programmatically without clicking through the AWS console.
Repeatable Deployments
Run the same script multiple times and get identical infrastructure every time. Eliminate configuration drift and the it-works-on-my-machine problem.
Time Savings
What takes 30 minutes of clicking through the AWS console takes seconds with an automated script. Deploy multiple environments (dev, staging, production) with a single command.
Auditable History
Scripts are version-controlled. Every infrastructure change is tracked in Git with a clear history of who changed what and when.
Step 1: Set Up Your Python Environment and Install Boto3
Create a new project directory and open your favorite Python IDE. Create a main file called `app.py` and add a simple print statement to verify your environment is working:

```python
print("Hello Python!")
```
Run the file to confirm your Python environment is set up correctly. Next, install the AWS SDK for Python. Open a terminal and run:

```shell
sudo pip3 install boto3
```

If you prefer not to install system-wide, create a virtual environment first and drop the `sudo`. Add `import boto3` at the top of your Python file. This gives you access to the full AWS SDK, which handles authentication, request signing, retries, and error handling for all AWS services.
Create an IAM User and Download Credentials
Before Boto3 can interact with AWS, you need programmatic access credentials. Go to the AWS console, navigate to the Identity and Access Management (IAM) panel, click Users, then Add user. Enter a username and tick the box for Programmatic access. On the permissions step, create a new group with the AdministratorAccess policy for this tutorial. Click through the remaining steps and download the CSV file containing your access key ID and secret access key. Copy this CSV into your project root directory.
Security Warning
AdministratorAccess grants full control over your AWS account. For production systems, follow the principle of least privilege by assigning only the permissions your application needs. Never commit credential CSV files to version control. Add credentials.csv to your .gitignore file immediately.
Read Credentials from CSV into Python
Create a new file called `creds.py` that parses the downloaded CSV file and extracts the access key ID and secret key. The CSV contains headers in the first row and the actual credentials in the second row. Use Python's built-in `csv` module to read and parse it. This class encapsulates credential management so your main application code stays clean.
```python
import csv


class Creds:
    """Parses the credentials CSV downloaded from the AWS IAM console."""

    def __init__(self, creds_file):
        with open(creds_file) as file:
            reader = csv.reader(file, delimiter=",")
            next(reader)  # skip the header row
            creds_line = next(reader)  # credentials are on the second row
            self.username = creds_line[0]
            self.access_key_id = creds_line[2]
            self.secret_key = creds_line[3]
```
In your main `app.py` file, import the `Creds` class and initialize it with the path to your downloaded CSV:

```python
from creds import Creds

creds = Creds("credentials.csv")
```
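The column layout the `Creds` class assumes (user name in column 0, access key ID in column 2, secret key in column 3) matches the `new_user_credentials.csv` file the IAM console generates. You can sanity-check that parsing logic against a made-up CSV without touching real credentials; the values below are placeholders, not real keys:

```python
import csv
import io

# A two-row CSV mimicking the IAM console's download format.
# All values are placeholders for illustration only.
sample = (
    "User name,Password,Access key ID,Secret access key,Console login link\n"
    "tutorial-user,,AKIAEXAMPLE123,wJalrExampleSecretKey,https://console.aws.amazon.com\n"
)

reader = csv.reader(io.StringIO(sample), delimiter=",")
next(reader)               # skip the header row
creds_line = next(reader)  # credentials live on the second row

username = creds_line[0]
access_key_id = creds_line[2]
secret_key = creds_line[3]

print(username, access_key_id)
```

If AWS ever changes the column order of the downloaded CSV, this is the first place the class would break, so a quick check like this is cheap insurance.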
Create EC2 Client, Security Group, and IAM Role
Create a Boto3 EC2 client using your credentials. You also need to set up a security group that opens ports 22 (SSH), 80 (HTTP), 443 (HTTPS), and 5000 (Flask default port) to the internet. Then create an IAM role with EC2 as the trusted entity and AdministratorAccess policy. Copy the security group ID and IAM role name into global variables.
```python
import time

import boto3
from botocore.exceptions import ClientError

REGION = "us-east-2"
SECURITY_GROUP = "sg-0c7a3bfa35c85f8ce"
IAM_PROFILE = "Python-Tutorial"
GIT_URL = "https://github.com/hsauers5/hellopython"

ec2 = boto3.client(
    'ec2',
    aws_access_key_id=creds.access_key_id,
    aws_secret_access_key=creds.secret_key,
    region_name=REGION
)
```
Creating Security Groups in the AWS Console
Navigate to EC2 then Network and Security then Security Groups in the AWS console. Create a new security group and add inbound rules for ports 22 (SSH), 80 (HTTP), 443 (HTTPS), and 5000 (Flask). Set the source to 0.0.0.0/0 for this tutorial. Copy the security group ID after creation. For the IAM role, go to IAM, Roles, Create role, select EC2, attach AdministratorAccess, and note the role name.
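The console steps above can also be expressed in code. As a sketch (the helper function name and structure are my own, not part of the tutorial's script), the function below builds the `IpPermissions` list that `ec2.authorize_security_group_ingress` accepts for the four ports used here:

```python
def build_ingress_rules(ports, cidr="0.0.0.0/0"):
    """Build the IpPermissions structure accepted by
    ec2.authorize_security_group_ingress for a list of TCP ports."""
    return [
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": cidr}],
        }
        for port in ports
    ]


# Rules for SSH, HTTP, HTTPS, and the Flask dev server
rules = build_ingress_rules([22, 80, 443, 5000])
print(len(rules), rules[0]["FromPort"])
```

With a security group already created, applying these rules would look like `ec2.authorize_security_group_ingress(GroupId=SECURITY_GROUP, IpPermissions=rules)`. As with `AdministratorAccess`, a `0.0.0.0/0` source is fine for a tutorial but should be narrowed in production.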
Provision an EC2 Ubuntu Instance via Boto3
Now write the provision_server() function that calls ec2.run_instances() with the Ubuntu Server 18.04 AMI ID, t2.micro instance type, your key pair, security group, and IAM role. The function waits 60 seconds for the instance to be provisioned before returning its ID. Add error handling with ClientError to catch and print any provisioning failures.
```python
def provision_server():
    image_id = "ami-0f65671a86f061fcd"  # Ubuntu Server 18.04 LTS (us-east-2)
    instance_type = "t2.micro"
    keypair_name = "robot"
    try:
        response = ec2.run_instances(
            ImageId=image_id,
            InstanceType=instance_type,
            KeyName=keypair_name,
            SecurityGroupIds=[SECURITY_GROUP],
            IamInstanceProfile={'Name': IAM_PROFILE},
            MinCount=1,
            MaxCount=1
        )
        print(response['Instances'][0])
        print("Provisioning instance...")
        time.sleep(60)  # give the instance time to reach the running state
        return str(response['Instances'][0]['InstanceId'])
    except ClientError as e:
        print(e)
        return None
```
Key Pair Requirement
Before running this function, create an EC2 key pair from the AWS console. Navigate to EC2 then Network and Security then Key Pairs. Create one with a name that matches the keypair_name variable in your code. The key pair is required for EC2 instance launch even if you access the server through SSM.
List EC2 Instances and Execute Remote Commands via SSM
Create a get_instance_ids() function that calls ec2.describe_instances() and returns a list of all instance IDs in your region. Then build a send_command_aws() function using the AWS SSM (Systems Manager) client to execute shell commands on a remote instance. Send commands using AWS-RunShellScript document and retrieve the output with get_command_invocation().
```python
def get_instance_ids():
    instance_id_list = []
    response = ec2.describe_instances()
    # describe_instances groups instances into reservations,
    # so walk every reservation rather than only the first
    for reservation in response['Reservations']:
        for instance in reservation['Instances']:
            instance_id_list.append(instance['InstanceId'])
    return instance_id_list


def send_command_aws(commands=["echo hello"], instance="i-06cca6072e593a0ac"):
    ssm_client = boto3.client(
        'ssm',
        aws_access_key_id=creds.access_key_id,
        aws_secret_access_key=creds.secret_key,
        region_name=REGION
    )
    response = ssm_client.send_command(
        InstanceIds=[instance],
        DocumentName="AWS-RunShellScript",
        Parameters={'commands': commands},
    )
    command_id = response['Command']['CommandId']
    time.sleep(5)  # brief pause before polling for the command result
    output = ssm_client.get_command_invocation(
        CommandId=command_id,
        InstanceId=instance,
    )
    print(output)
```
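The dict returned by `get_command_invocation` includes fields such as `Status`, `StandardOutputContent`, and `StandardErrorContent`. A small sketch of pulling just the useful parts out of such a response; the sample dict below is a hand-written stand-in for an SSM response, not real output:

```python
def extract_command_output(invocation):
    """Pull status and stdout/stderr out of a get_command_invocation response."""
    return {
        "status": invocation.get("Status"),
        "stdout": invocation.get("StandardOutputContent", "").strip(),
        "stderr": invocation.get("StandardErrorContent", "").strip(),
    }


# A hand-written stand-in for an SSM response, for illustration only
fake_invocation = {
    "Status": "Success",
    "StandardOutputContent": "hello\n",
    "StandardErrorContent": "",
}

result = extract_command_output(fake_invocation)
print(result)
```

Checking `Status` before trusting the output matters in practice: a command that fails on the server still returns an invocation record, just with a `Failed` status and the error text in `StandardErrorContent`.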
Generate Deployment Commands and Deploy Flask from GitHub
Write a generate_git_commands() function that builds a list of shell commands to run on the remote server. The commands update apt, install Git, Python 3, pip3, clone your GitHub repository, install Python dependencies like Flask, and start the web application. This function dynamically generates commands based on the repository URL and package list, making it reusable for different projects.
```python
def generate_git_commands(git_url=GIT_URL, start_command="sudo python3 hellopython/app.py",
                          pip3_packages=[], additional_commands=[]):
    commands = []
    if git_url.endswith(".git"):
        git_url = git_url[:-4]  # strip the .git suffix
    # git clone creates a directory named after the last path segment
    repo_name = git_url.rsplit('/', 1)[-1]
    commands.append("sudo apt-get update")
    commands.append("sudo apt-get install -y git")
    commands.append("sudo apt-get install -y python3")
    commands.append("sudo apt-get install -y python3-pip")
    commands.append("sudo rm -rf " + repo_name)  # remove any previous clone
    commands.append("pip3 --version")
    commands.append("sudo git clone " + git_url)
    for dependency in pip3_packages:
        commands.append("sudo pip3 install " + dependency)
    for command in additional_commands:
        commands.append(command)
    commands.append(start_command)
    return commands
```
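The trickiest part of this function is deriving the repository directory name from the URL, since `git clone` names the directory after the last path segment (minus any `.git` suffix). That step can be checked in isolation; the standalone helper below mirrors the logic inside `generate_git_commands`:

```python
def repo_name_from_url(git_url):
    """Derive the directory name `git clone` will create from a repo URL."""
    if git_url.endswith(".git"):
        git_url = git_url[:-4]
    return git_url.rsplit("/", 1)[-1]


print(repo_name_from_url("https://github.com/hsauers5/hellopython"))
print(repo_name_from_url("https://github.com/hsauers5/hellopython.git"))
```

Both forms of the URL yield `hellopython`, which is why the generated `rm` and `git clone` commands stay consistent whether or not the URL carries a `.git` suffix.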
Run the Full Deployment
With all the functions defined, the final step is to wire everything together. Add this line to the bottom of your program and run it:
```python
send_command_aws(
    commands=generate_git_commands(GIT_URL, pip3_packages=["flask"]),
    instance=provision_server()
)
```
Execute your script with `python3 app.py`. The script will provision a new EC2 instance, wait for it to be ready, then execute the deployment commands. After the script finishes, go to the EC2 console, copy the instance public DNS, append `:5000` to it, and open it in your browser to see your deployed Flask application running on AWS.
Understanding the Full Deployment Flow
| Function | Purpose |
|---|---|
| Creds() | Parses AWS credentials CSV and provides access key ID and secret key |
| provision_server() | Launches a t2.micro Ubuntu EC2 instance with configured security group and IAM role |
| get_instance_ids() | Queries all EC2 instances in the region and returns their IDs |
| send_command_aws() | Executes shell commands on a remote EC2 instance via AWS SSM |
| generate_git_commands() | Builds a sequence of commands to install dependencies, clone repo, and start the app |
The complete code for this tutorial is available on GitHub at github.com/hsauers5/AWS-Deployment. This simple but powerful pattern forms the foundation of modern infrastructure-as-code and CI/CD pipelines. You can extend it to support multiple environments, add load balancers, configure auto-scaling groups, and integrate with monitoring services like CloudWatch.
Beyond This Tutorial
For production deployments, consider using AWS Elastic Beanstalk for managed deployments, AWS CodeDeploy for automated rollouts, or Terraform for declarative infrastructure management. The Boto3 approach in this tutorial gives you fine-grained control and is ideal for understanding how AWS services work at the API level.
Frequently Asked Questions
What is the difference between Boto3 and the AWS CLI?
Boto3 is the AWS SDK for Python that lets you interact with AWS services programmatically within your Python code. The AWS CLI is a command-line tool that performs the same operations from your terminal. Boto3 gives you more control over error handling, flow logic, and integration with other Python libraries, while the CLI is better for quick ad-hoc operations and shell scripting.
Does t2.micro qualify for the AWS free tier?
Yes, the t2.micro instance type is eligible for the AWS Free Tier, which includes 750 hours per month for the first 12 months. The AMI used in this tutorial (Ubuntu Server 18.04 LTS) is also free tier eligible. Keep track of additional resources like EBS storage volumes as they may incur costs beyond the free tier limits.
Why does the provision_server function wait 60 seconds?
After calling `run_instances`, the EC2 instance takes time to transition from pending to running state. The 60-second sleep is a simple way to wait for the instance to be fully provisioned. In production, use waiter functions like `ec2.get_waiter('instance_running')` for robust polling instead of a fixed sleep.
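A boto3 waiter repeatedly calls a describe API until the target state is reached or it gives up. The minimal sketch below shows that loop in the abstract; the `wait_until` helper and the fake state sequence are illustrative, not part of boto3:

```python
import time


def wait_until(check, interval=1.0, max_attempts=40):
    """Generic waiter loop: call check() until it returns True
    or the attempt budget runs out. boto3 waiters implement the
    same pattern against APIs like describe_instances."""
    for _ in range(max_attempts):
        if check():
            return True
        time.sleep(interval)
    return False


# Simulate an instance that reports "running" on the third poll
states = iter(["pending", "pending", "running"])
ok = wait_until(lambda: next(states) == "running", interval=0.01)
print(ok)
```

With a real client, the equivalent one-liner would be `ec2.get_waiter('instance_running').wait(InstanceIds=[instance_id])`, which polls on your behalf and raises if the instance never reaches the running state.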
Can I deploy a different application instead of Flask?
Yes, the `generate_git_commands` function is designed to be reusable. Change `GIT_URL` to your repository, update `pip3_packages` with your dependencies, and modify the `start_command` to match your application's entry point. The deployment pattern works for any Python web application and can be adapted for Node.js, Go, or other runtimes by modifying the installation commands.
Is SSM command execution available on all EC2 instances?
SSM requires the Systems Manager agent to be installed and the instance to have an IAM role that grants SSM permissions. Amazon Linux 2 and Ubuntu Server 18.04 AMIs come with the SSM agent pre-installed. The IAM role must include the AmazonSSMManagedInstanceCore policy for the instance to accept SSM commands.
Need Help with AWS Infrastructure?
Our cloud experts can help you design automated deployments, optimize AWS costs, and build scalable infrastructure for your applications.
