How to Build and Push Docker Images to AWS ECR: Complete Tutorial
By Braincuber Team
Published on April 1, 2026
Building and pushing Docker images to AWS ECR is one of the most essential skills for modern cloud deployment. You have likely encountered the classic problem where code runs perfectly on your machine but throws errors on someone else's server. Docker solves this problem by packaging your application with all its dependencies into a portable, reproducible container.
In this step-by-step tutorial, you will learn how to Dockerize your application, configure the AWS CLI, create an Amazon Elastic Container Registry repository, and push your Docker images to the cloud. This beginner-friendly guide covers everything from Docker installation to a fully deployed container in AWS ECR.
What You'll Learn:
- How to install, configure, and log in to Docker Desktop
- How to create a Dockerfile and .dockerignore for your project
- How to build, tag, and run Docker images locally with port forwarding
- How to install AWS CLI and configure IAM credentials
- How to create an ECR repository using the AWS CLI
- How to authenticate Docker with AWS ECR and push images
Containerize Applications
Package your app with all dependencies into a portable Docker image that runs identically on any machine or cloud environment.
AWS ECR Registry
Store your Docker images in Amazon Elastic Container Registry with IAM security, lifecycle policies, and seamless ECS integration.
Secure Authentication
Authenticate Docker with AWS using IAM credentials and ECR authorization tokens for secure image push and pull operations.
Cloud Deployment Ready
Push images to ECR and deploy them on Amazon ECS, EKS, or EC2 for production-grade container orchestration at scale.
Prerequisites for This Tutorial
Before you begin this step-by-step guide, make sure you have the following requirements in place:
| Requirement | Details |
|---|---|
| Docker Account | Free account at hub.docker.com/signup |
| Docker Knowledge | Basic understanding of Docker concepts and commands |
| AWS Account | Active AWS account with console access |
| IAM Knowledge | Basic understanding of IAM users, policies, and ECS/ECR |
| Sample Web App | Any application to containerize (Node.js, Python, etc.) |
Need a Sample Application?
If you do not have a web app, you can clone this Express.js + MongoDB example: github.com/joshi-kaushal/members-only. This application will be used throughout this tutorial.
Step 1: Install and Configure Docker Desktop
The first step in this complete tutorial is setting up Docker on your machine. Docker Desktop provides everything you need to build, run, and manage containers with a clean graphical interface.
Download Docker Desktop
Go to docker.com/get-started and download the installer for your operating system. Windows and Mac users get the full Desktop application with GUI. Linux users should follow the official Docker CE installation guide for their distribution.
Install and Launch
Run the installer and follow the prompts. Docker Desktop installs the Docker Engine, Docker CLI, Docker Compose, and Kubernetes (optional). After installation, launch Docker Desktop and wait for it to start the Docker daemon.
Verify Installation
Open your terminal and run docker --version. You should see output like Docker version 24.0.7, build afdd53b. This confirms Docker is properly installed.
Create Docker Account and Login
Create a free account at hub.docker.com/signup. Then authenticate your local Docker CLI by running docker login in your terminal. Enter your credentials and you should see Login Succeeded.
docker --version
docker login
Step 2: Create a Dockerfile for Your Project
Now that Docker is installed, you need to Dockerize your project. This means creating a Dockerfile that contains all the instructions needed to build your application image. The Dockerfile defines the base image, working directory, dependencies, and startup command.
Understanding Dockerfile Instructions
A Dockerfile is a text file without any extension placed in the root of your project directory. Each instruction in the Dockerfile creates a new layer in the resulting Docker image, which enables caching and efficient rebuilds.
| Instruction | Purpose |
|---|---|
| FROM | Sets the base image (e.g., node:12.17.0). Use the exact version from your package.json. |
| WORKDIR | Sets the working directory inside the container (e.g., /app). |
| COPY | Copies files from your host machine into the container filesystem. |
| RUN | Executes commands during image build (e.g., npm install). |
| ENV | Sets environment variables available to the container at runtime. |
| EXPOSE | Documents which port the container listens on (does not publish it). |
| CMD | Specifies the default command to run when the container starts. Only one CMD per image. |
Create the .dockerignore File
Before writing the Dockerfile, create a .dockerignore file in your project root. This file works much like .gitignore and prevents unnecessary files from being copied into your Docker image. The most important entry is node_modules because dependencies will be installed fresh inside the container.
node_modules
npm-debug.log
.git
.env
Write the Dockerfile
Here is a complete Dockerfile for a Node.js Express application. This configuration follows best practices for layer caching by copying package.json first, installing dependencies, and then copying the rest of the application code.
FROM node:12.17.0
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
ENV PORT=3000
EXPOSE 3000
CMD [ "npm", "start" ]
Key Dockerfile Concepts:
- Layer caching: Copying package.json before the rest of the code means npm install is cached unless dependencies change
- Exec form CMD: Using the array syntax ["npm", "start"] runs the command directly without a shell session
- Exact version pinning: Using node:12.17.0 instead of node:latest ensures reproducible builds
Step 3: Build and Run the Docker Image Locally
With the Dockerfile ready, you can now build your first Docker image. The docker build command reads the Dockerfile and creates an image layer by layer.
Build the Docker Image
Run docker build -t <name-tag> . from your project root. The -t flag tags the image with a memorable name. The dot at the end specifies the build context (current directory). You will see each Dockerfile instruction executing in order, ending with Successfully built.
Run the Container with Port Forwarding
Execute docker run -p 3000:3000 <name-tag> to start the container. The -p 3000:3000 flag maps port 3000 on your host machine to port 3000 inside the container. Without this flag, the container runs but is not accessible from your browser.
Verify in Browser
Open http://localhost:3000/ in your browser. If everything is configured correctly, you will see your application running inside the Docker container on your local machine.
# Build the image
docker build -t myapp:v1 .
# Run with port forwarding
docker run -p 3000:3000 myapp:v1
# Verify running containers
docker ps
Understanding Port Forwarding
The -p hostPort:containerPort flag is essential. Even though EXPOSE 3000 is declared in the Dockerfile, it only documents the port. The -p flag actually creates the network bridge that makes the container accessible from your host machine.
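To keep the two sides of the mapping straight, here is a small pure-shell illustration (no Docker required) that splits a hostPort:containerPort string the same way docker run interprets it; the 8080:3000 mapping is a hypothetical example of serving the app on a different host port:

```shell
# A -p mapping is hostPort:containerPort. Splitting it with shell
# parameter expansion shows which side is which:
MAPPING="8080:3000"
HOST_PORT="${MAPPING%%:*}"       # left side: port on your machine
CONTAINER_PORT="${MAPPING##*:}"  # right side: port the app listens on
echo "browser connects to localhost:$HOST_PORT"
echo "container app listens on port $CONTAINER_PORT"

# With this mapping, the earlier image would be reachable at
# http://localhost:8080/ instead of :3000 (not run here):
# docker run -p "$MAPPING" myapp:v1
```

The container side must match the port your app actually listens on (3000 in this tutorial); the host side can be any free port.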
Step 4: Install and Configure AWS CLI
To push Docker images to AWS ECR, you need the AWS Command Line Interface (CLI) installed and configured on your system. AWS CLI enables you to interact with all AWS services directly from your terminal, including ECR, ECS, IAM, and more.
Install AWS CLI
Download the AWS CLI installer from the official AWS documentation. For Windows, download the MSI installer and follow the setup wizard. For Mac, use the pkg installer. For Linux, use your distribution's package manager or the bundled installer.
After installation, restart your terminal and verify with:
aws --version
Create an IAM User for AWS CLI
You should never use your root AWS account credentials for CLI access. Instead, create a dedicated IAM user with the minimum required permissions:
Go to IAM Console
Navigate to the IAM section of the AWS Management Console and click Add User.
Enable Programmatic Access
When creating the user, ensure Access Key - Programmatic Access is checked. This generates the access key ID and secret access key needed for CLI configuration.
Attach Access Policies
Attach the AmazonEC2ContainerRegistryFullAccess managed policy to the user; this grants the ECR permissions needed to push and pull images. If you also plan to deploy on ECS, attach AmazonECS_FullAccess as well. For production, consider creating a custom policy with only the specific ECR actions you need.
Save Access Keys
After creating the user, download or copy the Access Key ID and Secret Access Key. You will not be able to view the secret key again after this step. Store them securely.
Configure AWS CLI
Run the configuration command and enter your IAM credentials:
aws configure
# You will be prompted for:
# AWS Access Key ID [None]: <your-access-key>
# AWS Secret Access Key [None]: <your-secret-key>
# Default region name [None]: us-east-1
# Default output format [None]: <press Enter to skip>
# Verify configuration
aws configure list
Security Best Practice
Never commit your AWS credentials to version control. The aws configure command stores credentials in ~/.aws/credentials which is automatically ignored by most .gitignore templates. For team environments, consider using AWS SSO or IAM roles instead of static access keys.
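For reference, aws configure writes two plain-text files under ~/.aws. The values below are placeholders showing the layout, not real credentials:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = <your-access-key>
aws_secret_access_key = <your-secret-key>

# ~/.aws/config
[default]
region = us-east-1
```

Deleting or editing these files is how you rotate or remove CLI credentials on a machine.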
Step 5: Create an ECR Repository
Amazon Elastic Container Registry (ECR) is a fully managed container registry that makes it easy to store, manage, and deploy Docker container images. Before pushing images, you need to create a repository to hold them.
Create the Repository
Run the following command to create a new ECR repository. Use the same name as your project for simplicity. The response will include a JSON object with the repository details including the URI.
Note the Repository URI
The URI looks like <account-id>.dkr.ecr.<region>.amazonaws.com/<repo-name>. Save this URI because you will need it for tagging and pushing your Docker image.
Verify in AWS Console
Navigate to the ECR section in the AWS Management Console to confirm the repository was created. You will not see any images yet since none have been pushed.
aws ecr create-repository --repository-name myapp --region us-east-1
# Response includes:
# "repositoryUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp"
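Since the repository URI from that response is reused in the login, tag, and push commands, it can help to assemble it once from its parts. A minimal shell sketch, using the placeholder account ID 123456789012:

```shell
# The ECR URI follows a fixed pattern:
#   <account-id>.dkr.ecr.<region>.amazonaws.com/<repo-name>
ACCOUNT_ID="123456789012"   # placeholder; use your 12-digit account ID
REGION="us-east-1"
REPO_NAME="myapp"

ECR_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO_NAME}"
echo "$ECR_URI"
```

Exporting ECR_URI in your shell session avoids retyping the long URI in the commands that follow.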
Step 6: Authenticate Docker with AWS ECR
Before Docker can push images to ECR, it must authenticate with AWS. The aws ecr get-login-password command retrieves an authorization token using the GetAuthorizationToken API. This token is valid for 12 hours.
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# Expected output: Login Succeeded
Authentication Breakdown:
- --username AWS: AWS provides this as the default username for ECR authentication
- --password-stdin: Pipes the token securely without exposing it in shell history
- Repository URI: Must match the exact URI of your ECR repository
- Token expiry: The authorization token expires after 12 hours. Re-run the command for new sessions.
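Because the token expires every 12 hours, a small helper that reprints the full login command for your region and account can save typing. This is a hypothetical convenience function: it only echoes the command so you can review it, then copy-paste (or pipe to sh) to run it:

```shell
# Print the ECR login command for a given region and account ID.
# Hypothetical helper; it does not execute anything itself.
ecr_login_cmd() {
  local region="$1" account_id="$2"
  echo "aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${account_id}.dkr.ecr.${region}.amazonaws.com"
}

ecr_login_cmd us-east-1 123456789012
```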
Step 7: Tag and Push the Docker Image to ECR
This is the final step in the process. You need to tag your local Docker image with the ECR repository URI, then push it to the remote registry.
Tag the Local Image
The docker tag command creates a new reference to your local image using the ECR repository URI as the tag. This does not create a copy of the image, only a new reference pointer.
# Tag the local image with ECR URI
docker tag myapp:v1 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:v1
# Verify the tag was created
docker images | grep myapp
Push to ECR
Now push the tagged image to your ECR repository. The upload speed depends on your image size and internet connection. You will see progress bars for each layer being uploaded.
# Push to ECR
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:v1
# Output shows each layer being pushed:
# The push refers to repository [123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp]
# abc123456789: Pushed
# def987654321: Pushed
# v1: digest: sha256:abc123... size: 1234
Verify the Push:
- Go to the ECR section in the AWS Management Console
- Click on your repository name
- You should see your image listed with the tag v1
- Copy the Image URI for use in ECS task definitions or Kubernetes deployments
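If you push new versions often, the tag-then-push pair can be wrapped in one helper. A minimal dry-run sketch (it prints the commands rather than executing them, so you can review them before running; remove the echo prefixes to execute for real):

```shell
# Print the docker tag and push commands for a local image and an ECR URI.
# Hypothetical helper: echoes the commands instead of running them.
ecr_push_cmds() {
  local image="$1" ecr_uri="$2"
  echo "docker tag ${image} ${ecr_uri}"
  echo "docker push ${ecr_uri}"
}

ecr_push_cmds myapp:v1 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:v1
```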
Complete Command Reference
Here is a quick reference of all the commands used in this tutorial, from Docker installation to ECR push:
# 1. Verify Docker
docker --version
docker login
# 2. Build image
docker build -t myapp:v1 .
# 3. Test locally
docker run -p 3000:3000 myapp:v1
# 4. Configure AWS CLI
aws configure
# 5. Create ECR repo
aws ecr create-repository --repository-name myapp --region us-east-1
# 6. Authenticate Docker with ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# 7. Tag and push
docker tag myapp:v1 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:v1
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:v1
What Happens Next After Pushing to ECR
Once your Docker image is in ECR, you can deploy it using various AWS services:
Amazon ECS
Deploy containers on Amazon Elastic Container Service with Fargate serverless compute or EC2-backed clusters for full control.
Amazon EKS
Run containers on Amazon Elastic Kubernetes Service for orchestration with Kubernetes-native tooling and auto-scaling.
Amazon EC2
Pull images directly onto EC2 instances and run them with docker-compose or systemd for simple deployment scenarios.
AWS CodePipeline
Build a CI/CD pipeline that automatically builds, tests, and pushes images to ECR on every code commit.
Need Help with Docker and AWS?
Braincuber has deployed 500+ containerized applications on AWS. Our experts can help you architect, Dockerize, and deploy your applications on ECS, EKS, or EC2.
Frequently Asked Questions
How do I push a Docker image to AWS ECR?
Authenticate Docker with ECR using aws ecr get-login-password, tag your local image with the ECR repository URI using docker tag, then push with docker push <ecr-repo-uri>. You must have AWS CLI configured with an IAM user that has ECR access.
What IAM permissions are needed to push Docker images to ECR?
Your IAM user needs AmazonEC2ContainerRegistryFullAccess or a custom policy with ecr:GetAuthorizationToken, layer upload, and image push permissions. The managed policy covers all required actions.
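As a sketch of such a custom policy, the minimal actions for pushing to a single repository look roughly like this (the region, account ID, and repository name are placeholders; note that ecr:GetAuthorizationToken cannot be scoped to one repository and must apply to all resources):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/myapp"
    }
  ]
}
```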
Why is my docker push to ECR failing with no basic auth credentials?
This error means Docker is not authenticated with ECR. Run aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com to authenticate. The token expires after 12 hours.
How do I create an ECR repository using AWS CLI?
Run aws ecr create-repository --repository-name <repo_name> --region <region_name>. This creates a private repository in your AWS account. Note down the repository URI from the response for tagging and pushing images.
What is the difference between Docker Hub and AWS ECR?
Docker Hub is a public registry by default (with paid private options), while ECR is a fully managed private container registry integrated with AWS services like ECS and EKS. ECR offers IAM-based access control, image scanning, lifecycle policies, and seamless AWS integration.
