How to Upload Large Files with AWS S3 Multipart: Complete Guide
By Braincuber Team
Published on March 17, 2026
Uploading large files efficiently is crucial for modern applications. Traditional single-part uploads can be slow, unreliable, and prone to complete failure when dealing with files over 100MB. AWS S3 multipart upload divides a large file into smaller chunks (typically 5-100MB each), uploads them independently, and combines them into the final object. This approach provides better fault tolerance, resume capability, and significantly faster uploads. This complete tutorial will guide you through implementing a robust multipart upload system with a Node.js backend and a React frontend.
What You'll Learn:
- How AWS S3 multipart upload works and its benefits
- Setting up AWS S3 bucket with proper policies
- Creating Node.js backend with Express and AWS SDK
- Implementing multipart upload endpoints and logic
- Building React frontend with progress tracking
- Error handling and retry mechanisms
- Security best practices and IAM configurations
- Testing and optimizing upload performance
Understanding Multipart Upload Architecture
Before diving into implementation, let's understand how multipart upload transforms large file handling:
Traditional Upload
Single file upload to server, then to S3. Prone to timeouts, memory issues, and complete failure on network interruption. No resume capability.
Multipart Upload
File split into parts (5-100MB), uploaded independently, combined by S3. Fault-tolerant, resumable, 3-5x faster for large files. Automatic retry on failures.
Speed: 3-5x faster than traditional uploads
Reliability: Automatic retry and resume capability
Memory: 90% less server memory usage
Scalability: Handles files up to 5TB efficiently
Prerequisites and Setup
Before starting, ensure you have the following tools and accounts ready:
AWS Account and IAM User
Create an AWS account with an IAM user that has programmatic access to S3 and the permissions multipart uploads need (s3:PutObject, s3:AbortMultipartUpload, s3:ListMultipartUploadParts, s3:ListBucketMultipartUploads). Note that in IAM there is no separate s3:UploadPart action; uploading parts and creating/completing the multipart upload are all covered by s3:PutObject.
Development Environment
Node.js 16+ installed, npm or yarn package manager, code editor (VS Code recommended), and basic knowledge of JavaScript, React, and Express.js.
Required Packages
Backend: express, dotenv, aws-sdk, multer, cors. Frontend: react, axios. Development tools: nodemon (for development), AWS CLI for bucket management.
Step 1: Setting Up AWS S3
First, let's create and configure the S3 bucket that will store our uploaded files:
Create S3 Bucket
Log into the AWS Management Console, navigate to the S3 service, and create a new bucket with a globally unique name (e.g., 'large-file-uploads-2024'). Keep the default settings initially; we'll configure access via IAM policies.
Configure Bucket Policy
Navigate to the Permissions tab, edit the Bucket Policy, and apply a policy that allows public read access (for downloads) while keeping uploads restricted to your backend. Note that the bucket's Block Public Access settings must be disabled for a public-read policy to take effect.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```
Step 2: Node.js Backend Implementation
Now let's create the Express.js backend that will handle multipart uploads to S3:
Initialize Node.js Project
Create project directory and initialize with npm. Install required packages: express, dotenv, aws-sdk, multer, and cors for cross-origin requests.
Create Server Configuration
Set up Express server with CORS middleware, AWS SDK configuration, and body parsing limits. Configure environment variables for AWS credentials and region.
Set Up Middleware
Configure CORS for frontend communication, set body parser limits (50MB), and initialize AWS S3 service with credentials from environment variables.
```bash
# Install backend dependencies
npm install express dotenv aws-sdk multer cors
# Development: auto-restart on file changes
npx nodemon app.js
# Production: run under the PM2 process manager
pm2 start app.js
```
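The environment variables the server reads might look like the fragment below. All values are placeholders; never commit real credentials to source control.

```ini
# .env — placeholder values only
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=us-east-1
S3_BUCKET=your-bucket-name
PORT=4000
```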
Initialize Upload Endpoint
The upload initialization endpoint creates the multipart upload and returns the upload ID to the frontend:
POST /uploads/init
Creates multipart upload with unique ID
Returns uploadId and partSize to frontend
Generates pre-signed URLs for each part
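A minimal sketch of the init logic, assuming the AWS SDK v2 S3 client is created elsewhere and injected (the function name `initUpload`, the 10MB part size, and the request shape are illustrative, not prescribed by this guide):

```javascript
// To create the real client:
//   const AWS = require('aws-sdk');            // npm install aws-sdk
//   const s3 = new AWS.S3({ region: process.env.AWS_REGION });
const PART_SIZE = 10 * 1024 * 1024; // 10MB per part

async function initUpload(s3, bucket, { fileName, fileSize }) {
  // 1. Ask S3 to start a multipart upload; it returns an UploadId
  //    that every subsequent part/complete call must reference.
  const { UploadId } = await s3
    .createMultipartUpload({ Bucket: bucket, Key: fileName })
    .promise();

  // 2. Pre-sign one uploadPart URL per part so the browser can PUT
  //    each chunk directly to S3 without exposing credentials.
  const partCount = Math.ceil(fileSize / PART_SIZE);
  const urls = await Promise.all(
    Array.from({ length: partCount }, (_, i) =>
      s3.getSignedUrlPromise('uploadPart', {
        Bucket: bucket,
        Key: fileName,
        UploadId,
        PartNumber: i + 1, // S3 part numbers are 1-based
        Expires: 3600,     // URL valid for one hour
      })
    )
  );

  return { uploadId: UploadId, partSize: PART_SIZE, urls };
}
```

Injecting the `s3` client keeps the handler easy to unit-test with a stub, with no AWS account required.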
Upload Part Endpoint
This endpoint handles individual part uploads and tracks progress:
POST /uploads/:uploadId/part/:partNumber
Uploads individual parts to S3
Validates part size and order
Updates upload progress in database
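The part endpoint's core is a single UploadPart call; a hedged sketch (the `uploadPart` helper name and parameters are illustrative, and the S3 client is injected as above):

```javascript
// Uploads one part and returns its ETag, which the client must save:
// S3 requires every part's ETag to complete the multipart upload.
async function uploadPart(s3, bucket, key, uploadId, partNumber, body) {
  const { ETag } = await s3
    .uploadPart({
      Bucket: bucket,
      Key: key,
      UploadId: uploadId,
      PartNumber: partNumber, // 1-based, matching the init plan
      Body: body,             // Buffer or stream from the request
    })
    .promise();
  return { partNumber, etag: ETag };
}
```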
Complete Upload Endpoint
The final endpoint combines all uploaded parts into the complete file:
POST /uploads/:uploadId/complete
Calls S3 CompleteMultipartUpload to combine parts
Returns final S3 object URL
Cleans up temporary upload data
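A sketch of the complete step (helper name illustrative, S3 client injected): S3's CompleteMultipartUpload call needs every part's number and ETag, listed in ascending PartNumber order.

```javascript
async function completeUpload(s3, bucket, key, uploadId, parts) {
  // Parts may arrive out of order; S3 requires ascending PartNumber.
  const sorted = [...parts].sort((a, b) => a.partNumber - b.partNumber);
  const { Location } = await s3
    .completeMultipartUpload({
      Bucket: bucket,
      Key: key,
      UploadId: uploadId,
      MultipartUpload: {
        Parts: sorted.map((p) => ({ PartNumber: p.partNumber, ETag: p.etag })),
      },
    })
    .promise();
  return Location; // final S3 object URL
}
```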
Step 3: React Frontend Implementation
Now let's create the React frontend that will handle file splitting and upload coordination:
Initialize React Project
Create React app using Create React App. Install axios for HTTP requests and set up component structure for file upload functionality.
Create Upload Component
Build component with file input, progress bar, and upload status. Implement chunking logic to split large files into optimal part sizes (5-100MB per part).
Implement Progress Tracking
Track upload progress with visual indicators. Show percentage complete, upload speed, estimated time remaining, and handle pause/resume functionality.
```bash
# Scaffold the frontend and add the HTTP client
npx create-react-app s3-upload-client
cd s3-upload-client && npm install axios
npm start        # development server
npm run build    # production build
```
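The chunking logic from the steps above can be sketched as a small pure function (the name `splitIntoParts` is illustrative). `Blob.slice` is cheap: it creates a view, not a copy, so memory stays low until each chunk is actually read.

```javascript
// Split a File/Blob into fixed-size parts for multipart upload.
function splitIntoParts(file, partSize = 10 * 1024 * 1024) {
  const parts = [];
  for (let start = 0; start < file.size; start += partSize) {
    parts.push({
      partNumber: parts.length + 1,              // S3 parts are 1-based
      blob: file.slice(start, start + partSize), // slice clamps to file.size
    });
  }
  return parts;
}
```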
Advanced Features and Optimization
Let's implement advanced features to make our upload system production-ready:
Error Handling and Retry Logic
Implement exponential backoff for failed uploads, automatic retry with maximum attempts, and detailed error logging for debugging network issues.
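Exponential backoff can be implemented as a small wrapper; the defaults below (5 attempts, 500ms base delay) are illustrative, not prescribed.

```javascript
// Retry fn with exponentially growing delays: 500ms, 1s, 2s, 4s, ...
async function withRetry(fn, { maxAttempts = 5, baseDelayMs = 500 } = {}) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn(attempt);
    } catch (err) {
      if (attempt >= maxAttempts) throw err; // give up, surface the error
      // Add random jitter so many clients don't retry in lockstep.
      const delay = baseDelayMs * 2 ** (attempt - 1) + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Each part upload is wrapped individually, so one flaky part never forces the whole file to restart.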
Security Best Practices
Use temporary signed URLs with expiration, validate file types and sizes, implement rate limiting, and sanitize all inputs to prevent malicious uploads.
Performance Optimization
Optimal part size of 5-10MB for most networks, parallel upload of multiple parts, compression before upload, and CDN integration for faster downloads.
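Parallel part uploads need a concurrency cap so a 1GB file doesn't open 100 simultaneous requests. A minimal worker-pool sketch (names and the default limit of 4 are illustrative):

```javascript
// Upload all parts with at most `limit` in flight at once.
// `uploadOne(part, partNumber)` is whatever performs a single part upload.
async function uploadAll(parts, uploadOne, limit = 4) {
  const results = new Array(parts.length);
  let next = 0; // shared cursor; safe because JS is single-threaded
  async function worker() {
    while (next < parts.length) {
      const i = next++;
      results[i] = await uploadOne(parts[i], i + 1);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, parts.length) }, worker)
  );
  return results; // in original part order
}
```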
Production Optimization Tip
For production deployments, consider AWS Transfer Acceleration for faster uploads to S3 from distant locations. This can reduce upload times by up to 50% for international users.
Testing and Implementation
Let's test our implementation and ensure everything works correctly:
Test Part Upload
Upload a test file (100MB+) and verify individual parts are created correctly. Check S3 console for multipart upload in progress and validate part numbers and ETags.
Test Complete Upload
Verify the final object is created correctly, check file integrity, and test download functionality. Monitor network tab for upload performance and error handling.
Performance Testing
Test with different file sizes (100MB, 500MB, 1GB+) and network conditions. Monitor memory usage, upload speeds, and error rates to optimize part size and concurrent uploads.
| File Size | Optimal Parts | Upload Time | Memory Usage |
|---|---|---|---|
| 100MB | 20 parts (5MB each) | 2-3 minutes | Low (50MB) |
| 500MB | 50 parts (10MB each) | 5-8 minutes | Medium (200MB) |
| 1GB | 100 parts (10MB each) | 10-15 minutes | High (400MB) |
| 5GB | 500 parts (10MB each) | 30-45 minutes | Very High (2GB) |
Testing Checklist
✅ File splits into correct part sizes
✅ All parts upload successfully
✅ Final object combines correctly
✅ Download link works
✅ Progress tracking functional
✅ Error handling tested
✅ Memory usage within limits
✅ CORS configured properly
Deployment and Scaling Considerations
When moving to production, consider these scaling and deployment factors:
Load Balancing
Use multiple backend instances behind a load balancer for high-volume uploads. Consider auto-scaling based on CPU and memory usage during peak times.
Database Scaling
Use Redis or DynamoDB for upload metadata and session management. Implement connection pooling and database indexing for fast queries.
Monitoring and Logging
Implement CloudWatch for metrics, use structured logging (JSON format), and set up alerts for failed uploads, high memory usage, and unusual patterns.
Recommended Stack: React + Node.js + AWS S3 + Redis + CloudWatch
Hosting: AWS ECS/EKS or Docker containers
CDN: CloudFront for download acceleration
Database: RDS PostgreSQL with connection pooling
Frequently Asked Questions
What's the optimal part size for multipart upload?
5-10MB is optimal for most networks. Larger parts reduce retry impact on failures but increase memory usage. Smaller parts increase overhead and API calls.
How many parts should I split my file into?
Divide file size by optimal part size (5-10MB). A 1GB file = 100-200 parts. AWS supports up to 10,000 parts per multipart upload. More parts = better parallelism but higher complexity.
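The arithmetic above can be made concrete: pick a preferred part size, then grow it if the 10,000-part limit would otherwise be exceeded (the `planParts` helper is a sketch, not part of this guide's codebase).

```javascript
const MAX_PARTS = 10000;               // S3 limit per multipart upload
const MIN_PART = 5 * 1024 * 1024;      // S3 minimum part size (except last part)

function planParts(fileSize, preferredPartSize = 10 * 1024 * 1024) {
  let partSize = Math.max(preferredPartSize, MIN_PART);
  if (Math.ceil(fileSize / partSize) > MAX_PARTS) {
    // Grow the part size just enough to stay within 10,000 parts.
    partSize = Math.ceil(fileSize / MAX_PARTS);
  }
  return { partSize, partCount: Math.ceil(fileSize / partSize) };
}
```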
Should I use presigned URLs for uploads?
Yes, for security. Presigned URLs expire after a set time and limit access to specific files. They allow direct uploads to S3 from browser while keeping your AWS credentials secure.
How do I handle upload failures?
Implement exponential backoff, retry failed parts individually, and provide clear error messages. Allow users to resume uploads from the last successful part.
What's the maximum file size for multipart upload?
5TB per object and 5GB per part; the minimum part size is 5MB (except for the last part). AWS S3 supports up to 10,000 parts per upload, making it suitable for extremely large files when implemented correctly.
Need Help Implementing AWS S3 Multipart Upload?
Our experts can help you design and implement scalable file upload systems, optimize performance, set up proper security, and deploy production-ready solutions for handling large files efficiently.
