How to Deploy a Node.js Docker Container to AWS Lambda: Step by Step
By Braincuber Team
Published on March 7, 2026
A client came to us bleeding $1,250 a month running 4 Node.js microservices on EC2 instances. The APIs handled sporadic traffic — huge spikes during Monday product drops, then dead silence at 3 AM. They were paying for servers to sit idle 82% of the time. We containerized their APIs, pushed the Docker images to AWS ECR, and deployed them to AWS Lambda. Their new monthly compute bill? $3.14. This complete tutorial shows you exactly how to build the same zero-cost-when-idle serverless architecture from scratch without rewriting your entire codebase.
What You'll Learn:
- Why ZIP uploads to Lambda fail for modern Node.js applications
- How to write a multi-stage Dockerfile that Lambda accepts seamlessly
- Step by step setup of an AWS Elastic Container Registry (ECR)
- How to securely authenticate Docker to AWS using IAM policies
- The exact CLI commands to build, tag, and push images to ECR
- How to deploy the container to Lambda and expose it via a public HTTPS URL
Why Docker? The End of the "ZIP File Upload" Era
Everyone starts with serverless by zipping up an index.js file and uploading it to the AWS Console. That works for a 10-line calculator script. It fails spectacularly when your Node.js API relies on native dependencies, the sharp image-processing library, or Puppeteer's Chromium binary. ZIP deployments are capped at 250 MB unzipped (50 MB zipped), and there is no guarantee that the environment you tested locally matches the Lambda runtime. Lambda Container Support changes the game. It lets you package your app as a standard Docker image (up to 10 GB) and deploy it serverless. You develop locally in exactly the environment your code runs in on AWS. No nasty runtime surprises.
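As a starting point, here is a minimal single-stage Dockerfile sketch built on the official Lambda Node.js base image. The file name (index.js) and the exported function name (handler) are illustrative assumptions; a true multi-stage build would add a separate build stage for compiling TypeScript or pruning dev dependencies before copying the artifacts into this final image.

```dockerfile
# Minimal sketch: official AWS Lambda base image for Node.js 20
FROM public.ecr.aws/lambda/nodejs:20

# Install production dependencies into the Lambda task root
COPY package*.json ${LAMBDA_TASK_ROOT}/
RUN npm ci --omit=dev

# Copy the application code (index.js is an assumed entry file)
COPY index.js ${LAMBDA_TASK_ROOT}/

# Entry point: file "index", exported function "handler"
CMD ["index.handler"]
```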
Step by Step: Deploying Your Containerized API
Create Your AWS ECR Repository
Before you can deploy to Lambda, you need a place to store your Docker image. That's Amazon Elastic Container Registry (ECR). Log into the AWS Console and search for ECR. Click Create repository. Name it lambda-api-practice. Set the Tag Mutability to Mutable (this lets you overwrite the "latest" tag during rapid development). Click create, then copy the Repository URI. It will look like 123456789012.dkr.ecr.us-east-1.amazonaws.com/lambda-api-practice. Save this — you'll need it when building the image.
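If you prefer the terminal to clicking through the console, the same repository can be created with the AWS CLI. The repository name and region below match this walkthrough; adjust them to your setup. The command prints the Repository URI you need for tagging.

```shell
# Create the ECR repository (tags are mutable by default)
# and print its URI for use in the build/tag step
aws ecr create-repository \
  --repository-name lambda-api-practice \
  --region us-east-1 \
  --query 'repository.repositoryUri' \
  --output text
```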
Grant Your Local Machine Access to ECR
Your laptop can't just push images to AWS blindly. You need an IAM user. Go to the IAM dashboard and click Add users. Name it ecr-deployer. In permissions, choose Attach policies directly and select AmazonEC2ContainerRegistryPowerUser. This scopes the user to ECR push/pull operations and nothing else. Create the user, then go to their "Security Credentials" tab. Generate an Access Key (select the CLI use case). Run aws configure in your terminal and paste in the Access Key ID and Secret Access Key. Never hardcode these into your source code.
Login to ECR and Build the Image
Now you authenticate your local Docker daemon with AWS using a pipe command. This securely grabs a temporary password from the AWS CLI and feeds it to docker login. Once logged in, build your Docker image, ensuring you tag it exactly with the ECR URI you copied in Step 1. Using the wrong region or account ID in the tag will make the push fail, typically with a "no basic auth credentials" or "repository does not exist" error.
# 1. Login to ECR (Replace with your region and account ID)
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# 2. Build and Tag the Image (on Apple Silicon, add --platform linux/amd64 if your function targets x86_64)
docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/lambda-api-practice:latest .
# 3. Push to ECR
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/lambda-api-practice:latest
Create the Lambda Function from Your Image
Head to the Lambda Console. Click Create function. Do not use Author from scratch — select Container image. Name your function. Click Browse images, select your ECR repository, and pick the latest tag you just pushed. Set the architecture (x86_64 vs arm64) to match how you built your Docker image locally. Click Create. AWS provisions the function, pulls your image, and prepares it for execution on its Firecracker microVM infrastructure.
Generate an HTTPS Endpoint with Function URLs
Your Lambda is alive, but isolated. You need an API endpoint. Traditionally, you'd configure API Gateway (which adds complexity and cost). Forget that. Go to the Lambda Configuration tab, select Function URL, and click Create. Set Auth type to NONE (for a public REST API test). Enable CORS if your frontend connects from a different domain. Save it. AWS generates a unique https://xyz...lambda-url.region.on.aws link. Hit it with Postman or cURL. You just deployed a serverless API.
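For a quick smoke test from the terminal (the URL below is a placeholder pattern, not a real endpoint; substitute the exact Function URL the console generated):

```shell
# Replace the placeholder with your generated Function URL
curl -i https://<your-url-id>.lambda-url.us-east-1.on.aws/
```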
The "Cold Start" Reality
A 10GB Docker container won't boot instantly. The first request to your API after a period of inactivity triggers a "Cold Start," adding roughly 1 to 3 seconds of latency. If you're building an internal API or a webhook listener, this doesn't matter. If you're rendering an initial customer checkout page, it kills conversions. Use Provisioned Concurrency to keep instances warm if you need consistent sub-100ms response times.
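If cold starts do matter for your workload, Provisioned Concurrency can also be enabled from the CLI. This is a sketch: my-api and the qualifier 1 are placeholders for your function name and a published version or alias (Provisioned Concurrency cannot target $LATEST).

```shell
# Keep two execution environments warm for version 1 of the function
aws lambda put-provisioned-concurrency-config \
  --function-name my-api \
  --qualifier 1 \
  --provisioned-concurrent-executions 2
```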
Why Businesses Make the Serverless Switch
Cost-Efficiency
You pay per invocation and per millisecond of compute time. Zero traffic equals a $0 bill. A startup processing 1.2 million webhook events a month paid $14.50 on Lambda. EC2 would demand a $150/month always-on server to handle the sporadic load.
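You can sanity-check figures like these yourself, since the math is simple. Below is a sketch using the published us-east-1 x86_64 rates at the time of writing ($0.20 per million requests, $0.0000166667 per GB-second); the workload numbers are illustrative, and the free tier is ignored for simplicity.

```javascript
// Back-of-the-envelope Lambda cost estimator.
// Rates are published us-east-1 x86_64 prices; check the AWS
// pricing page for current values. Free tier is ignored.
const PRICE_PER_REQUEST = 0.20 / 1_000_000; // $0.20 per 1M requests
const PRICE_PER_GB_SECOND = 0.0000166667;   // duration price per GB-second

function monthlyLambdaCost({ invocations, avgDurationMs, memoryMb }) {
  const requestCost = invocations * PRICE_PER_REQUEST;
  // Billed compute = invocations × seconds per invocation × GB of memory
  const gbSeconds = invocations * (avgDurationMs / 1000) * (memoryMb / 1024);
  const computeCost = gbSeconds * PRICE_PER_GB_SECOND;
  return requestCost + computeCost;
}

// Illustrative workload: 1.2M invocations, 200 ms average, 512 MB memory
const cost = monthlyLambdaCost({
  invocations: 1_200_000,
  avgDurationMs: 200,
  memoryMb: 512,
});
console.log(cost.toFixed(2)); // ≈ 2.24
```

At this scale the bill is dominated by compute duration, not request count, which is why trimming average execution time or right-sizing memory moves the needle far more than shaving invocations.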
Infinite Scalability
No load balancers to configure. No autoscaling groups to tune. When traffic jumps from 10 req/sec to 1,000 req/sec, AWS automatically spins up additional execution environments to absorb the load, up to your account's concurrency quota (1,000 concurrent executions by default, raisable via a quota increase request). It handles Black Friday traffic without breaking a sweat.
Frequently Asked Questions
What is the maximum Docker image size AWS Lambda accepts?
AWS Lambda supports container images up to 10 GB in size. This is a massive upgrade from the 250 MB (unzipped) limit of traditional Lambda ZIP uploads, allowing you to bundle large machine learning models or complex native Node.js binaries.
My push to ECR failed with "no basic auth credentials." Why?
Your Docker daemon isn't authenticated with AWS. You must run the aws ecr get-login-password command and pipe it into docker login before you attempt to push. The authentication token expires every 12 hours.
Is AWS Serverless bad for background jobs?
Yes and no. Lambda functions have a strict 15-minute execution timeout. If your background job processes a 20GB video file over 45 minutes, Lambda will force-kill it. For long-running background tasks, AWS Fargate or AWS Batch are better architectural fits.
Still Burning Your Budget on Idle EC2 Instances?
If you're paying thousands a month to keep Nginx and Node.js servers humming along at 2% CPU utilization, you are losing money. We specialize in refactoring clunky servers into sleek, infinite-scaling serverless architectures on AWS. Let's look at your infrastructure. We'll identify exactly which APIs should move to Lambda and calculate the savings before we write a line of code.
