How to Build AWS CDK v2 Three-Tier Serverless Application: Complete Guide
By Braincuber Team
Published on April 4, 2026
What You'll Learn:
- Initialize an AWS CDK v2 project with TypeScript and bootstrap your AWS account
- Build the data tier with DynamoDB table and OneTable schema modeling
- Create Lambda handlers with NodejsFunction construct and ARM64 architecture
- Set up HTTP API Gateway with CORS configuration and Lambda integrations
- Build a React frontend with Vite, TypeScript, and esbuild bundling
- Deploy the presentation tier with S3, CloudFront Distribution, and AwsCustomResource
A three-tier web application has a presentation layer, an application layer, and a data layer. This familiar pattern is fertile ground for learning new technologies like the AWS Cloud Development Kit (CDK). In this tutorial, we will create a simple note-taking application using a DynamoDB table, HTTP API endpoints, Lambda handlers, and a React frontend served through the CloudFront Content Delivery Network (CDN). All of it is written in TypeScript, and all of it can be deployed to an AWS account with a single command. This step-by-step guide shows exactly how to build and deploy a full-stack serverless application.
Getting Started: Prerequisites and Setup
To begin, we will need an AWS account and credentials available on the command line. All of the resources deployed in this tutorial should remain within the free tier; however, a credit card is still required to sign up for an AWS account.
When working with AWS, it is a good idea to install the AWS CLI. You will also need to have a recent version of Node.js installed.
AWS Account and CLI
An AWS account with credentials configured via AWS CLI. All resources should remain within the free tier.
Node.js and npm
A recent version of Node.js installed for running the CDK CLI, esbuild, and the React development server.
CDK Bootstrap
Run npx cdk bootstrap to deploy the bootstrap stack that manages assets in your AWS account.
TypeScript Knowledge
Basic familiarity with TypeScript, React, and AWS services will help you follow along with this tutorial.
How to Initialize the CDK Application
To get started, we can use the cdk command-line utility to scaffold an application.
mkdir cdk-three-tier-serverless && cd cdk-three-tier-serverless
npx cdk init app --language=typescript
This will create some files to get us started and download the necessary dependencies.
CDK v1 vs v2 - What is the Difference?
AWS CDK v2 became generally available in December 2021. AWS announced that v1 would enter a maintenance phase, with support ending in June 2023. The primary difference is that v2 consolidates all stable constructs into the single aws-cdk-lib package, which eliminates the version-matching problems of v1's per-service packages. Published constructs built for v1 need to be updated before they can work in v2 applications.
How to Bootstrap Your AWS Account
In order to use our AWS account with the CDK, we must first bootstrap the account by deploying a small stack that manages assets in the account. Do this by running npx cdk bootstrap at the command line. It is best to do this after initializing a project; otherwise, the bootstrap command will prompt for additional information.
Best Practice: IAM Roles from Bootstrap
The bootstrap will create several roles that can be used to deploy, manage assets, and look up resource ARNs. Although you can complete this tutorial with a user that has the AdministratorAccess policy, that is not a best practice. Consider constructing a fine-grained policy with sts:AssumeRole and iam:PassRole actions targeting only the CDK-created roles.
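As a sketch, such a scoped policy might look like the following. The role-name pattern and the ACCOUNT_ID placeholder are assumptions; match them to the role names your bootstrap actually created (they include the bootstrap qualifier, `hnb659fds` by default).

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sts:AssumeRole", "iam:PassRole"],
      "Resource": ["arn:aws:iam::ACCOUNT_ID:role/cdk-*"]
    }
  ]
}
```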
Building the Data Tier: DynamoDB Table
We will start by building out the data tier. We will be able to deploy our application each step of the way and check our progress in the AWS Console.
The init operation will have created a file called lib/cdk-three-tier-serverless-stack.ts; we can start there to build out our application. First, let us remove the commented-out code and add a Table declaration. Note that, unlike in CDK v1 applications, there is no need to install additional packages to start using DynamoDB.
import { RemovalPolicy, Stack, StackProps } from 'aws-cdk-lib';
import { AttributeType, BillingMode, Table } from 'aws-cdk-lib/aws-dynamodb';
import { Construct } from 'constructs';
export class CdkThreeTierServerlessStack extends Stack {
constructor(scope: Construct, id: string, props?: StackProps) {
super(scope, id, props);
const table = new Table(this, 'NotesTable', {
billingMode: BillingMode.PAY_PER_REQUEST,
partitionKey: { name: 'pk', type: AttributeType.STRING },
removalPolicy: RemovalPolicy.DESTROY,
sortKey: { name: 'sk', type: AttributeType.STRING },
tableName: 'NotesTable',
});
}
}
We can immediately deploy this table using npx cdk deploy and then inspect it in the console. The table uses PAY_PER_REQUEST billing mode, which means we only pay for the read and write operations we actually perform, making it ideal for applications with unpredictable traffic patterns.
Modeling Data using DynamoDB OneTable
OneTable is a library for modeling data and managing queries in DynamoDB. The concept behind it is that several different entities can be modeled in the same DynamoDB table (single-table design), a practice endorsed by many experts in the field. Our simple application has only a single note entity, but we will use OneTable anyway because it helps manage our schema. Since DynamoDB is a NoSQL database, the schema is not defined at table creation; instead, we define it in application code.
To begin, we need to install dependencies:
npm i @aws-sdk/client-dynamodb dynamodb-onetable
We are going to create two Lambda functions in a moment and we will want to share a model between them. Let us create a fns folder under lib and create files called notesTable.ts, readFunction.ts and writeFunction.ts.
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { Entity, Table } from 'dynamodb-onetable';
import Dynamo from 'dynamodb-onetable/Dynamo';
const client = new Dynamo({ client: new DynamoDBClient({}) });
const schema = {
indexes: {
primary: { hash: 'pk', sort: 'sk' },
},
models: {
note: {
type: { required: true, type: 'string', value: 'note' },
pk: { type: 'string', value: 'note' },
sk: { type: 'string', value: '${date}' },
note: { required: true, type: 'string' },
date: { required: true, type: 'string' },
subject: { required: true, type: 'string' },
},
},
version: '0.1.0',
params: { typeField: 'type' },
format: 'onetable:1.0.0',
} as const;
export type NoteType = Entity<typeof schema.models.note>;
const table = new Table({
client, name: 'NotesTable', schema, timestamps: true,
});
export const Notes = table.getModel<NoteType>('note');
We are defining the properties type, subject, note, and date for the model, all of type string. We also indicate that the partition key will always be set to the literal note. This is fine for a small sample application, but for something larger it would make sense to use a value like a user ID or account ID, based on the queries and access patterns the application requires.
The sort key and the date field will contain exactly the same data. This duplication is intentional: it lets the table hold other kinds of entities as well, including some that may not be sorted by date.
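To make the duplication concrete, here is a standalone sketch (makeItem is illustrative, not part of the tutorial code) of the item shape OneTable writes once the sk value template '${date}' is resolved:

```typescript
// The shape of a note item as stored in DynamoDB: the sk attribute
// duplicates the date attribute because the schema's sk template is '${date}'.
type NoteItem = {
  pk: string;
  sk: string;
  type: string;
  note: string;
  date: string;
  subject: string;
};

const makeItem = (note: string, subject: string, date: string): NoteItem => ({
  pk: 'note', // fixed partition key shared by every note
  sk: date,   // sort key duplicates the date attribute
  type: 'note',
  note,
  date,
  subject,
});

const item = makeItem('Hello', 'Greeting', '2024-01-01T00:00:00.000Z');
console.log(item.sk === item.date); // true
```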
Building the Application Tier: Lambda and API Gateway
Our application tier will consist of some Lambda functions and an API Gateway to connect them to the Internet.
Lambda Handlers
We will now fill in our Lambda handlers. We can add extra typings to make it easier to work in a TypeScript environment.
npm i -D @types/aws-lambda
Thanks to OneTable abstracting away much of the complexity of dealing with DynamoDB, our Lambda handlers are quite simple. Our read function executes a find operation and returns the result.
import type { APIGatewayProxyResultV2 } from 'aws-lambda';
import { Notes } from './notesTable';
export const handler = async (): Promise<APIGatewayProxyResultV2> => {
const notes = await Notes.find({ pk: 'note' }, { limit: 10, reverse: true });
return { body: JSON.stringify(notes), statusCode: 200 };
};
Adding the limit and reverse parameters means the query will return the ten most recent notes, automatically sorted by the sort key.
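A quick aside on why ISO date strings work here: DynamoDB sorts string keys lexicographically, and ISO-8601 timestamps happen to sort chronologically that way. A standalone sketch:

```typescript
// ISO-8601 timestamps sort lexicographically in chronological order,
// which is what makes them work as a DynamoDB sort key: reversing the
// query yields the most recent notes first.
const dates = [
  '2024-03-01T09:00:00.000Z',
  '2023-12-25T18:30:00.000Z',
  '2024-01-15T12:00:00.000Z',
];
// Plain string sort, then reverse: newest first.
const newestFirst = [...dates].sort().reverse();
console.log(newestFirst[0]); // '2024-03-01T09:00:00.000Z'
```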
Our write function is similarly quite simple.
import type {
APIGatewayProxyEventV2,
APIGatewayProxyResultV2,
} from 'aws-lambda';
import { Notes } from './notesTable';
export const handler = async (
event: APIGatewayProxyEventV2
): Promise<APIGatewayProxyResultV2> => {
const body = event.body;
if (body) {
const notes = await Notes.create(JSON.parse(body));
return { body: JSON.stringify(notes), statusCode: 200 };
}
return { body: 'Error, invalid input!', statusCode: 400 };
};
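To see the handler's control flow without deploying anything, here is a standalone sketch with the OneTable call stubbed out (createNote is a stand-in for Notes.create, not part of the tutorial code):

```typescript
// Minimal local sketch of the write handler's validation logic.
// The real handler calls Notes.create; here it is stubbed so the
// sketch runs without AWS credentials or a table.
type Event = { body?: string };
type Result = { body: string; statusCode: number };

const createNote = async (input: unknown) => input; // stub for Notes.create

const handler = async (event: Event): Promise<Result> => {
  if (event.body) {
    const note = await createNote(JSON.parse(event.body));
    return { body: JSON.stringify(note), statusCode: 200 };
  }
  return { body: 'Error, invalid input!', statusCode: 400 };
};

// A request with no body is rejected with a 400.
handler({}).then((r) => console.log(r.statusCode)); // 400
```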
The NodejsFunction Construct
Returning to our stack, we now need to create the function constructs. Our Lambda functions will be written in TypeScript and thus will require a transpilation step before they can run in the Lambda runtime.
Fortunately, the CDK provides a NodejsFunction construct that will take care of this for us. NodejsFunction uses esbuild, a very fast transpiler. esbuild is not a direct dependency of CDK, so we will need to install it to avoid the slower fallback, which builds in Docker.
npm i -D esbuild
// In stack file:
import { Architecture } from 'aws-cdk-lib/aws-lambda';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';
import { RetentionDays } from 'aws-cdk-lib/aws-logs';
const readFunction = new NodejsFunction(this, 'ReadNotesFn', {
architecture: Architecture.ARM_64,
entry: `${__dirname}/fns/readFunction.ts`,
logRetention: RetentionDays.ONE_WEEK,
});
const writeFunction = new NodejsFunction(this, 'WriteNoteFn', {
architecture: Architecture.ARM_64,
entry: `${__dirname}/fns/writeFunction.ts`,
logRetention: RetentionDays.ONE_WEEK,
});
// Grant permissions
table.grantReadData(readFunction);
table.grantWriteData(writeFunction);
Our list of imports is growing, but all of them were installed along with aws-cdk-lib, so there is no versioning to worry about. Note the final two lines, which grant each function the table access it needs via table.grantReadData() and table.grantWriteData().
HTTP API Gateway
We will build our user-facing API using API Gateway's HTTP API, a lower-cost, lower-latency alternative to REST API. At the time this tutorial was written, the CDK constructs for HTTP API were published as separate alpha modules, so we need to install additional packages to use them (in recent CDK releases these constructs have since graduated into aws-cdk-lib).
npm i @aws-cdk/aws-apigatewayv2-alpha @aws-cdk/aws-apigatewayv2-integrations-alpha
import {
CorsHttpMethod, HttpApi, HttpMethod,
} from '@aws-cdk/aws-apigatewayv2-alpha';
import { HttpLambdaIntegration } from '@aws-cdk/aws-apigatewayv2-integrations-alpha';
import { CfnOutput } from 'aws-cdk-lib';
const api = new HttpApi(this, 'NotesApi', {
corsPreflight: {
allowHeaders: ['Content-Type'],
allowMethods: [CorsHttpMethod.GET, CorsHttpMethod.POST],
allowOrigins: ['*'],
},
});
const readIntegration = new HttpLambdaIntegration('ReadIntegration', readFunction);
const writeIntegration = new HttpLambdaIntegration('WriteIntegration', writeFunction);
api.addRoutes({
integration: readIntegration,
methods: [HttpMethod.GET],
path: '/notes',
});
api.addRoutes({
integration: writeIntegration,
methods: [HttpMethod.POST],
path: '/notes',
});
new CfnOutput(this, 'HttpApiUrl', { value: api.apiEndpoint });
API Gateway automatically generates a URL for our endpoint. We could attach a custom domain, but that would incur extra cost, so we will use the generated URL for now. It is convenient to output that URL from our stack so we do not need to look it up in the console; that is what the CfnOutput import and the final line above accomplish.
Now let us deploy it again with npx cdk deploy. We will be rewarded with output that looks something like this:
Outputs:
CdkThreeTierServerlessStack.HttpApiUrl = https://g50qzchav1.execute-api.us-east-1.amazonaws.com
We can immediately open the API URL in a web browser and see the working API. Since nothing is in the database yet, we will just get an empty array back.
Capture the API URL
For a nicer developer experience, we can store that URL in a local config file for use in our project by adding the --outputs-file argument to our deploy command. We can add this to our npm scripts so each deploy writes a config.json. It is probably a good idea to add config.json to our .gitignore.
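As a sketch, the npm script could look like this (the script name and output path are assumptions; the file just needs to land where the frontend can fetch it):

```json
{
  "scripts": {
    "deploy": "cdk deploy --outputs-file config.json"
  }
}
```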
Building the Presentation Tier: React and CloudFront
Lastly let us build out the presentation layer. We will use React in this tutorial. The presentation layer will be served via a CloudFront Distribution, but it can be built and deployed as part of our CDK application.
React App Setup
A nice thing about full-stack TypeScript applications is that we can manage all our dependencies in one place. We are going to build a React application in TypeScript, bundle it with esbuild, and use Vite, a tool that adds a dev server with live reload and a few other quality-of-life features on top of esbuild.
npm i react react-dom
npm i -D @types/react @types/react-dom @vitejs/plugin-react-refresh vite
By convention, Vite expects an index.html at the root of the project, which refers directly to a main.tsx entry point. Let us create a new directory under lib called web and add App.tsx, index.css, main.tsx, and utils.ts in that subdirectory.
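A minimal index.html at the project root might look like the following (the script path assumes main.tsx lives under lib/web as described; adjust if your layout differs):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>Notes</title>
  </head>
  <body>
    <div id="root"></div>
    <script type="module" src="/lib/web/main.tsx"></script>
  </body>
</html>
```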
Since we are adding React to the project, we need to modify our tsconfig.json adding the following keys and values:
"jsx": "react",
"esModuleInterop": true,
"allowSyntheticDefaultImports": true,
"lib": ["DOM", "es2018"]
import React, { useEffect, useState } from 'react';
import { NoteType } from '../fns/notesTable';
import { getNotes, saveNote } from './utils';
const App = () => {
const [body, setBody] = useState('');
const [notes, setNotes] = useState<NoteType[]>([]);
const [subject, setSubject] = useState('');
useEffect(() => {
getNotes().then((n) => setNotes(n));
}, []);
const clickHandler = async () => {
if (body && subject) {
setBody('');
setSubject('');
await saveNote({
date: new Date().toISOString(),
note: body,
subject,
type: 'note',
});
const n = await getNotes();
setNotes(n);
}
};
return (
<div>
<div>
<input onChange={(e) => setSubject(e.target.value)}
placeholder="Note Subject" type="text" value={subject} />
<textarea onChange={(e) => setBody(e.target.value)}
placeholder="Note Body" value={body}></textarea>
<button onClick={clickHandler}>save</button>
</div>
<table>
<thead><tr><th>Subject</th><th>Note</th><th>Date</th></tr></thead>
<tbody>
{notes.map((note: NoteType) => (
<tr key={note.date}>
<td>{note.subject}</td>
<td>{note.note}</td>
<td>{new Date(note.date).toLocaleString()}</td>
</tr>
))}
</tbody>
</table>
</div>
);
};
export default App;
We still need to build out our HTTP client in utils.ts. Here we have an extra step: we fetch the HTTP API URL from the config.json file we created earlier, so local development works without copy-pasting URLs.
import { NoteType } from '../fns/notesTable';
let url = '';
const getUrl = async () => {
if (url) return url;
const response = await fetch('./config.json');
url = `${(await response.json()).CdkThreeTierServerlessStack.HttpApiUrl}/notes`;
return url;
};
export const getNotes = async () => {
const result = await fetch(await getUrl());
return await result.json();
};
export const saveNote = async (note: NoteType) => {
await fetch(await getUrl(), {
body: JSON.stringify(note),
headers: { 'Content-Type': 'application/json' },
method: 'POST',
mode: 'cors',
});
};
With all that done, we can start our development server using npx vite and view the web application at http://localhost:3000 (or whichever port Vite reports). The server will detect changes and reload automatically.
CloudFront Distribution
In this section, we will add several more constructs to cdk-three-tier-serverless-stack.ts. Our web application will consist of an S3 Bucket for storage, a CloudFront Distribution, a build step for the React application, and a Custom Resource that provides our API URL to the web application.
Creating an S3 Bucket in CDK is easy. Note that while S3 websites are possible, this will not be an S3 website, because we want to use CloudFront for a global CDN and HTTPS.
import { BlockPublicAccess, Bucket } from 'aws-cdk-lib/aws-s3';
import {
Distribution, OriginAccessIdentity, ViewerProtocolPolicy,
} from 'aws-cdk-lib/aws-cloudfront';
import { S3Origin } from 'aws-cdk-lib/aws-cloudfront-origins';
const websiteBucket = new Bucket(this, 'WebsiteBucket', {
autoDeleteObjects: true,
blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
removalPolicy: RemovalPolicy.DESTROY,
});
const originAccessIdentity = new OriginAccessIdentity(this, 'OriginAccessIdentity');
websiteBucket.grantRead(originAccessIdentity);
const distribution = new Distribution(this, 'Distribution', {
defaultBehavior: {
origin: new S3Origin(websiteBucket, { originAccessIdentity }),
viewerProtocolPolicy: ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
},
defaultRootObject: 'index.html',
errorResponses: [
{ httpStatus: 404, responseHttpStatus: 200, responsePagePath: '/index.html' },
],
});
This S3 Bucket has no public access. Instead, we grant access through the CloudFront Distribution: the OriginAccessIdentity construct gives CloudFront the read access it needs. The Distribution is configured for a single-page application like React (404s fall back to index.html) and will upgrade all traffic to HTTPS.
Bundling the React Application
For the next part, we are going to add one new helper library, fs-extra. This will make it easier to copy our build files around in the application.
npm i -D @types/fs-extra fs-extra
// In stack file:
import { execSync, ExecSyncOptions } from 'child_process';
import { join } from 'path';
import { copySync } from 'fs-extra';
import { BucketDeployment, Source } from 'aws-cdk-lib/aws-s3-deployment';
import { DockerImage } from 'aws-cdk-lib';
const execOptions: ExecSyncOptions = {
stdio: ['ignore', process.stderr, 'inherit'],
};
const bundle = Source.asset(join(__dirname, 'web'), {
bundling: {
command: ['sh', '-c', 'echo "Docker build not supported. Please install esbuild."'],
image: DockerImage.fromRegistry('alpine'),
local: {
tryBundle(outputDir: string) {
try { execSync('esbuild --version', execOptions); } catch { return false; }
execSync('npx vite build', execOptions);
copySync(join(__dirname, '../dist'), outputDir); // copySync is recursive by default
return true;
},
},
},
});
new BucketDeployment(this, 'DeployWebsite', {
destinationBucket: websiteBucket,
distribution,
logRetention: RetentionDays.ONE_DAY,
prune: false,
sources: [bundle],
});
The bundler runs vite build, which writes our transpiled web application to dist/, then copies those files into the CDK staging directory (usually cdk.out). The BucketDeployment construct rounds this out by actually shipping our files to the target S3 Bucket.
AwsCustomResource for config.json
All this is pretty good, but we will still be lacking a config.json file that will help the React application know our HTTP API URL. We could deploy the stack once, generating the file, then bundle it up and ship it, but that means we would have to deploy twice to stand up our application. It would be better to generate this file on the fly the first time we deploy.
We can do that with AwsCustomResource. The Custom Resource will implicitly create a Lambda Function that can receive the generated URL, then make an AWS SDK call to store it in S3 where our Web Application can find it. All of this can be done with just a few lines of code!
import {
AwsCustomResource, AwsCustomResourcePolicy, PhysicalResourceId
} from 'aws-cdk-lib/custom-resources';
import { PolicyStatement } from 'aws-cdk-lib/aws-iam';
new AwsCustomResource(this, 'ApiUrlResource', {
logRetention: RetentionDays.ONE_DAY,
onUpdate: {
action: 'putObject',
parameters: {
Body: Stack.of(this).toJsonString({
[this.stackName]: { HttpApiUrl: api.apiEndpoint },
}),
Bucket: websiteBucket.bucketName,
CacheControl: 'max-age=0, no-cache, no-store, must-revalidate',
ContentType: 'application/json',
Key: 'config.json',
},
physicalResourceId: PhysicalResourceId.of('config'),
service: 'S3',
},
policy: AwsCustomResourcePolicy.fromStatements([
new PolicyStatement({
actions: ['s3:PutObject'],
resources: [websiteBucket.arnForObjects('config.json')],
}),
]),
});
new CfnOutput(this, 'DistributionDomain', {
value: distribution.distributionDomainName,
});
Now we can visit the Distribution URL and see our working application! We will see our existing notes and can add new ones as well!
Three-Tier Architecture Summary
| Tier | AWS Services | Purpose |
|---|---|---|
| Data Tier | DynamoDB, OneTable | NoSQL database for storing notes with partition key and sort key schema |
| Application Tier | Lambda, API Gateway HTTP API | Serverless compute with read/write handlers and CORS-enabled API endpoints |
| Presentation Tier | S3, CloudFront, React, Vite | Static website hosting with global CDN, HTTPS, and SPA routing support |
Deployment Commands Reference
# Step 1: Initialize project
mkdir cdk-three-tier-serverless && cd cdk-three-tier-serverless
npx cdk init app --language=typescript
# Step 2: Bootstrap AWS account
npx cdk bootstrap
# Step 3: Deploy the full stack
npx cdk deploy
# Step 4: Deploy with outputs file
npx cdk deploy --outputs-file config.json
# Step 5: Start local React dev server
npx vite
# Step 6: Clean up resources when done
npx cdk destroy
Frequently Asked Questions
What is the difference between AWS CDK v1 and v2?
CDK v2 bundles all stable AWS constructs into a single aws-cdk-lib package, whereas v1 required a separate package for each service and careful version matching between them. Support for v1 ended in June 2023; v2 is the current standard.
What is DynamoDB OneTable and why use it?
OneTable is a tool for managing DynamoDB queries by defining schemas in application code. Since DynamoDB is NoSQL, the schema is not defined at table creation. OneTable simplifies querying and modeling multiple entities in a single table.
Why use HTTP API instead of REST API in API Gateway?
HTTP API is a lower-cost alternative to REST API with lower latency. It supports CORS, Lambda integrations, and JWT authorizers. For most serverless applications, HTTP API provides all the features needed at a reduced cost.
What is the purpose of AwsCustomResource in this tutorial?
AwsCustomResource generates the config.json file on the fly during deployment, storing the API URL in S3. This eliminates the need to deploy twice - once to get the URL and again to bundle it with the React app.
How do I clean up all resources after this tutorial?
Run npx cdk destroy to remove the stack and all resources to avoid incurring bills. This will delete the DynamoDB table, Lambda functions, API Gateway, S3 bucket, and CloudFront distribution created during the tutorial.
Need Help with AWS Architecture?
Our experts can help you design and deploy scalable serverless applications using AWS CDK and modern cloud-native patterns.
