How to Build Serverless REST API with AWS Lambda: Complete Tutorial
By Braincuber Team
Published on April 2, 2026
As developers, we are always trying to optimize everything from how people communicate to how people buy things, with the goal of making humans more productive. In that spirit, the software development landscape has seen a dramatic rise in developer productivity tools, especially in the software infrastructure space. Innovators are creating solutions that let developers focus on writing actual business logic rather than on mundane deployment concerns. The need to improve developer experience and cut down costs is one of the core drivers of serverless computing. This tutorial is a step-by-step beginner's guide to setting up a rudimentary serverless REST API with AWS Lambda and API Gateway.
What You'll Learn:
- What serverless computing is and how it evolved from traditional servers
- The difference between IaaS, PaaS, and Serverless cloud computing models
- Function as a Service (FaaS) vs Backend as a Service (BaaS)
- How to create and configure an AWS Lambda function
- How to expose Lambda functions via AWS API Gateway
- How to test and deploy a complete serverless REST API
No Infrastructure Management
Focus entirely on writing business logic while AWS handles servers, scaling, patching, and security automatically.
Pay Per Execution
You are charged precisely based on the number of requests your functions handle, not for idle server time.
Automatic Scaling
AWS Lambda automatically scales from zero to thousands of concurrent executions based on incoming traffic.
Event-Driven Architecture
Functions only run when invoked by events like HTTP requests, file uploads, or database changes.
What Is Serverless?
To understand what serverless is, we first need to understand what servers are and how they have evolved over time.
When we build software systems, we usually build them for people. For that reason, we make these applications reachable on the internet. Making an application reachable typically entails uploading it to a special computer that runs 24/7 and is very fast. That special computer is called a server.
The Evolution of Servers
Two decades ago, when companies wanted to deploy a piece of software they had built, they would have to purchase a physical computer, configure it, and then deploy their application to that computer.
In cases where they needed to host several applications, they would have to buy and set up multiple servers, too. Everything was done on-premise.
But it did not take long for people to notice the many problems that came with approaching servers that way.
There is the problem of developer productivity: a developer's attention is divided between writing code and dealing with the infrastructure that serves the code. This could be addressed by hiring other people to deal with the infrastructure, but that leads to a second problem: cost. Those people handling the infrastructure concerns would have to be paid, right? In fact, just purchasing a server, even for a seemingly basic test application, is in itself a costly affair.
Furthermore, if we begin to factor in other elements like scaling a server's computing capability when there is a spike in traffic or simply just updating the server's OS and drivers over time...well, you will begin to see how exhausting it is to keep an in-house server. People craved something better. There was a need.
Amazon responded to that need when it launched Amazon Web Services (AWS) in 2006. AWS famously disrupted the software infrastructure domain; it was a revolutionary shift away from traditional servers. AWS removed the need for organizations to set up their own in-house servers. Instead, organizations, and even individuals, could simply upload their applications to Amazon's computers over the internet for a fee. This server-as-a-service model marked the start of cloud computing.
What Is Cloud Computing?
Cloud Computing is fundamentally about storing files or executing code (sometimes both) on someone else's computer, usually, over a network.
Cloud computing platforms like AWS, Microsoft Azure, Google Cloud Platform, Heroku, and others exist to save people the stress of having to set up and maintain their own servers.
What is unique about cloud computing is that you could be in Nigeria and rent a computer that is in the US. You can then access that rented computer and work with it over the internet. Essentially, cloud vendors provide us with compute environments in which to upload and run our custom software.
The environment we get from these providers is something that has progressed, and now exists in different forms. In cloud computing parlance, we use the term cloud computing models to refer to the different environments most cloud vendors offer. Each new model in the cloud computing scene is usually created to enhance developer productivity and shrink infrastructure and labor costs.
Infrastructure as a Service (IaaS)
For example, when AWS first launched, it offered the Elastic Compute Cloud (EC2) service. An EC2 instance is essentially a bare-bones machine: people who pay for the EC2 service have to do a lot of configuration, such as installing an OS and a database, and then maintain those things for as long as they use the service.
While EC2 offers lots of flexibility (for example, you can install whatever OS you want), it also requires a lot of effort to work with. EC2 and similar services from other cloud providers fall under the cloud computing model called Infrastructure as a Service (IaaS).
Platform as a Service (PaaS)
The major problem with IaaS is having to configure and maintain so much yourself: OS installation, patch upgrades, and so on.
In the PaaS model, all of that is abstracted away. A machine in the PaaS model comes pre-installed with an OS, and patch upgrades, among other things, are the vendor's responsibility. Developers only deploy their applications, and the cloud provider handles the low-level details. While this model is easier to work with, it also implies less flexibility. AWS Elastic Beanstalk and Heroku are examples of offerings in this model.
The PaaS model undeniably took away most of the dreary configuration and maintenance tasks. But beyond just the tasks that make our software available on the internet, people began to recognize some of the limitations of the PaaS model too.
For example, in both the IaaS and PaaS models, developers had to manually scale a server's computing capability up or down. Additionally, on both IaaS and PaaS platforms, most vendors charge a flat fee for their services rather than billing by usage; where the fee is usage-based, it is usually not very precise. Lastly, in both models, applications are long-lived (always running even when no requests are coming in), which results in inefficient use of server resources.
The Rise of Serverless Computing
Just like IaaS and PaaS, Serverless is a cloud computing model. It is the most recent evolution of the cloud just after PaaS. Like IaaS and PaaS, with serverless, you do not have to buy physical computers.
Furthermore, just as in the PaaS model, you do not have to significantly configure and maintain servers. The serverless model then went a step further: it took away the need to manage long-lived application instances and to manually scale server resources up or down based on traffic. Payment is precisely based on usage, and security concerns are also abstracted away.
In the serverless model, you no longer have to worry about anything infrastructure-related because the cloud providers handle all that. And this is exactly what the serverless model is all about: allowing developers and whole organizations to focus on the dynamic aspects of their project while leaving all the infrastructure concerns to the cloud provider.
Literally all the infrastructure concerns are handled, from setting up and maintaining the server to automatically scaling its resources (down to zero and back up) and securing it. In fact, one of the killer features of the serverless model is that users are charged precisely based on the number of requests the software they deployed has handled.
So we can now say that serverless is the term we use for any cloud solution that takes away all the infrastructure concerns we would normally have to worry about, for example AWS Lambda, Azure Functions, and others.
| Cloud Model | What You Manage | What Vendor Manages | Examples |
|---|---|---|---|
| On-Premise | Everything: hardware, OS, apps, data | Nothing | Physical servers in office |
| IaaS | OS, apps, data, runtime | Hardware, virtualization, networking | AWS EC2, Google Compute Engine |
| PaaS | Apps, data | OS, runtime, middleware, hardware | Heroku, AWS Elastic Beanstalk |
| Serverless | Apps, data, functions | Everything else including auto-scaling | AWS Lambda, Azure Functions |
Function as a Service vs Backend as a Service
All serverless solutions belong to one of two categories:
- Function as a Service (FaaS)
- Backend as a Service (BaaS)
A cloud offering is considered a BaaS and by extension serverless if it replaces certain components of our application that we would normally code or manage ourselves. For example, when you use Google's Firebase authentication service or Amazon's Cognito service to handle user authentication in your project, then you have leveraged a BaaS offering.
A cloud offering is considered a FaaS and by extension serverless if it takes away the need to deploy our applications as single instances that are then run as processes within a host. Instead, we break down our application into granular functions (with each function ideally encapsulating the logic of a single operation). Each function is then deployed to the FaaS platform.
FaaS platforms offer an entirely different way of deploying applications. There is nothing quite like it among the earlier models: there are no hosts and no long-running application processes. As a result, we do not have code that is constantly running and listening for requests.
Instead, we have functions that only run when invoked, and they are being torn down as soon as they are done processing the task they were called upon to perform.
If these functions are not always running and listening for requests, how then are they invoked, you might ask?
All FaaS platforms are event-driven. Essentially, every function we deploy is mapped to some event. And when that event occurs, the function is triggered.
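To make the event-to-function mapping concrete, here is a minimal sketch of a FaaS-style handler in Python. The event shape (a dict with a `body` key) mirrors what AWS Lambda passes for an HTTP trigger; the `name` field and the greeting logic are just illustrative assumptions.

```python
import json

def lambda_handler(event, context):
    # The platform invokes this function once per event; it is not a
    # long-running process listening on a port.
    name = json.loads(event.get("body") or "{}").get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Simulating one invocation locally with a sample HTTP-style event:
response = lambda_handler({"body": '{"name": "Ada"}'}, None)
print(response["body"])  # {"message": "Hello, Ada!"}
```

Once the function returns, the execution environment can be torn down; the next event triggers a fresh invocation.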
In summary, we use the term serverless to describe a Function as a Service or Backend as a Service cloud solution where:
- We do not have to manage long lived application instances or hosts for applications we deploy.
- We do not have to manually scale computing resources up or down with traffic, because the platform automatically does that for us.
- The pricing is precisely based on usage.
Serverless Example Project
Here, we will be setting up a minimal serverless REST API with AWS Lambda and API Gateway.
Both are serverless cloud solutions. Remember, we stated that all serverless solutions belong to one of two categories: BaaS and FaaS. AWS Lambda is a Function as a Service platform, and API Gateway is a Backend as a Service solution.
How is API Gateway a Backend as a Service platform, you might ask? Well, normally we implement routing in our applications ourselves. With API Gateway, we do not have to do that; instead, we cede the routing task to API Gateway.
The connection is simple. AWS Lambda is where we will be deploying our actual application code. But because AWS Lambda is a FaaS platform, we are going to break our application into granular functions, with each function handling a single operation. We will then deploy each function to AWS Lambda.
AWS Lambda, like all other FaaS platforms, is event-driven. That means when you deploy a function to the platform, the function only does something when an event it is tied to occurs. An event could be anything from an HTTP request to a file being uploaded to S3.
In our case, we will be deploying a minimal REST API backend. Because we are going serverless and more specifically the FaaS way, we are going to break down our REST backend into independent functions. Each function will be tied to some HTTP request.
API Gateway is the tool we will use to tie a request to a function we have deployed, so that when that particular request comes in, the function is invoked. Think of API Gateway as a routing-as-a-service tool.
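To give a feel for what API Gateway hands to a function, here is a trimmed sketch of the event a Lambda proxy integration produces for a POST request, along with the extraction step a handler would perform. The field values are illustrative; real events carry many more fields.

```python
import json

# A trimmed example of the event API Gateway (Lambda proxy integration)
# passes to a function for POST /paraphrase.
sample_event = {
    "resource": "/paraphrase",
    "httpMethod": "POST",
    "headers": {"content-type": "application/json"},
    "body": '{"sourceText": "Some text to paraphrase."}',
}

def extract_source_text(event):
    # The HTTP request body arrives as a JSON string, not a parsed object,
    # so the handler must decode it itself.
    payload = json.loads(event["body"])
    return payload["sourceText"]

print(extract_source_text(sample_event))  # Some text to paraphrase.
```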
Step 1: Create an AWS Account
To create an account on AWS, follow the steps in module 1 of this AWS setup guide.
Step 2: Code Our Lambda Function on AWS
Remember, we are going to need a function that does the actual text paraphrasing, and this is where we write it. Follow the steps below to create the Lambda function:
Navigate to Lambda Service
Login to your AWS account using the credentials from Step 1. In the search field, input 'lambda', and then select Lambda from the list of services displayed.
Create a New Function
Click the Create function button on the Lambda page. Keep the default Author from scratch card selected. Enter paraphrase_text in the Name field.
Select Runtime and Create
Select Python 3.9 for the Runtime. Leave all the other default settings as they are and click on create function.
Add the Lambda Function Code
Scroll down to the Function code section and replace the existing code in the lambda_function.py code editor with the code below.
import http.client

def lambda_handler(event, context):
    # Forward the incoming request body to the paraphrasing API.
    conn = http.client.HTTPSConnection("paraphrasing-tool1.p.rapidapi.com")
    payload = event['body']
    headers = {
        'content-type': "application/json",
        'x-rapidapi-host': "paraphrasing-tool1.p.rapidapi.com",
        'x-rapidapi-key': "your api key here"
    }
    conn.request("POST", "/api/rewrite", payload, headers)
    res = conn.getresponse()
    # Decode to a string: API Gateway expects the response body to be a
    # JSON-serializable string, not raw bytes.
    data = res.read().decode('utf-8')
    return {
        'statusCode': 200,
        'body': data
    }
We are using this API for the paraphrasing functionality. Head over to that page, subscribe to the basic plan, and grab the API key (it is free).
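Before wiring everything up in the console, you can sanity-check the handler's control flow locally without a live RapidAPI key by stubbing the HTTP connection. The handler below is a simplified stand-in for the tutorial's function, and the faked response body is an assumption made purely for this offline check.

```python
import http.client
from unittest import mock

# A simplified stand-in for the tutorial's handler (same host, path, and
# header names; the API key is a placeholder).
def lambda_handler(event, context):
    conn = http.client.HTTPSConnection("paraphrasing-tool1.p.rapidapi.com")
    conn.request("POST", "/api/rewrite", event['body'], {
        'content-type': "application/json",
        'x-rapidapi-host': "paraphrasing-tool1.p.rapidapi.com",
        'x-rapidapi-key': "your api key here",
    })
    data = conn.getresponse().read().decode('utf-8')
    return {'statusCode': 200, 'body': data}

# Stub out the network so the handler can be exercised offline.
fake_conn = mock.Mock()
fake_conn.getresponse.return_value.read.return_value = b'{"newText": "stubbed"}'
with mock.patch("http.client.HTTPSConnection", return_value=fake_conn):
    result = lambda_handler({'body': '{"sourceText": "hello"}'}, None)

print(result['body'])  # {"newText": "stubbed"}
```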
Step 3: Test Our Lambda Function on AWS
Here, we will test our lambda function with a sample input to see that it produces the expected behavior: paraphrasing whatever text it is passed.
Configure Test Event
From the main edit screen for your function, select Configure test event from the Test dropdown. Keep Create new test event selected. Enter TestRequestEvent in the Event name field.
Add Test Payload
Copy and paste the test event JSON into the editor. You can replace the body text with whatever content you want.
Run the Test
Click create. On the main function edit screen, click Test with TestRequestEvent selected in the dropdown. Scroll to the top of the page and expand the Details section of the Execution result section.
Verify Results
Verify that the execution succeeded and that the function result shows a statusCode: 200 with the paraphrased text in the body.
{
  "path": "/paraphrase",
  "httpMethod": "POST",
  "headers": {
    "Accept": "*/*",
    "content-type": "application/json; charset=UTF-8"
  },
  "queryStringParameters": null,
  "pathParameters": null,
  "body": "{\"sourceText\": \"The bone of contention right now is how to make plenty money.\"}"
}
Response
{
  "statusCode": 200,
  "body": "{\"newText\":\"The stumbling block right now is how to make big bucks.\"}"
}
As seen above, the original text we passed to our lambda function has been paraphrased.
Step 4: Exposing Our Lambda Function Via the API Gateway
Now that we have coded our lambda function and it works, here, we will expose the function through a REST endpoint that accepts a POST request. Once a request is sent to that endpoint, our lambda function will be called.
Create the API
Navigate to API Gateway
In the search field, search and select API Gateway.
Create REST API
On the API Gateway page, there are four cards under the choose an API type heading. Go to the REST API card and click build.
Configure and Create
Next, provide all the required information. For endpoint type, select Edge optimized. Click Create API.
Create the Resource and Method
In the steps above we created an API. But an API usually has one or more endpoints, and an endpoint specifies a path plus the HTTP method it supports, for example GET /get-user. In API Gateway terms, the path is called a resource, and the HTTP verb tied to a path is called a method. Thus, resource + method = REST endpoint.
Here, we are going to create one REST endpoint that will allow users to pass a block of text to our lambda function for paraphrasing. Follow the steps below to accomplish that:
Create Resource
First, from the Actions dropdown, select Create Resource. Next, fill in the input fields, tick the check-box to enable CORS, and click Create Resource. You can replace the resource name and resource path with anything you see fit.
Create POST Method
With the newly created /paraphrase resource selected, select Create Method from the Actions dropdown. Select POST from the new dropdown that appears, then click the checkmark.
Link to Lambda Function
Provide all the other info and select the Lambda function we created in one of the previous steps. Click Save.
Deploy the API
Deploy to Production Stage
In the Actions drop-down list, select Deploy API. Select [New Stage] in the Deployment stage drop-down list. Enter production or whatever you wish for the Stage Name. Choose Deploy.
Note the Invoke URL
Note the invoke URL is your API's base URL. It should look something like: https://wrl34unbe0.execute-api.eu-central-1.amazonaws.com/production
Test Your Endpoint
To test your endpoint, you can use Postman or curl. Append the path to your endpoint to the end of the invoke URL like so: https://your-api-id.execute-api.region.amazonaws.com/production/paraphrase. The request method should be POST.
Send Test Payload
When testing, also add the expected payload to the request: {"sourceText": "The bone of contention right now is how to make plenty money."}
{
  "sourceText": "The bone of contention right now is how to make plenty money."
}
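If you prefer to test from Python rather than Postman or curl, the standard library is enough. A minimal sketch, assuming a helper like the one below; the host and stage are placeholders you should replace with the components of your own invoke URL.

```python
import http.client
import json

def call_paraphrase(host, stage, text):
    # Build and send the POST request to the deployed endpoint;
    # return the HTTP status and the decoded response body.
    payload = json.dumps({"sourceText": text})
    conn = http.client.HTTPSConnection(host)
    conn.request("POST", f"/{stage}/paraphrase", payload,
                 {"content-type": "application/json"})
    resp = conn.getresponse()
    return resp.status, resp.read().decode()

# Example usage (substitute your own invoke URL host):
# status, body = call_paraphrase(
#     "your-api-id.execute-api.eu-central-1.amazonaws.com",
#     "production",
#     "The bone of contention right now is how to make plenty money.")
# print(status, body)
```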
Key Serverless Principles Demonstrated:
- Event-Driven: Lambda function only runs when API Gateway receives a POST request
- No Server Management: AWS handles all infrastructure, scaling, and security
- Pay Per Use: You only pay for the milliseconds your function executes
- Auto-Scaling: AWS Lambda automatically handles traffic spikes without manual intervention
Wrapping Things Up
Well, that is it. First we learned all about the term serverless, and then we went on to set up a lightweight serverless REST API with AWS Lambda and API Gateway.
Serverless does not imply the total absence of servers, though. It is essentially about having a deployment flow where you do not have to worry about servers. The servers are still present, but they are being taken care of by the cloud provider.
We are going serverless whenever we build significant components of our application on top of BaaS technologies, whenever we structure our application to run on a FaaS platform, or when we do both.
Need Help Building Serverless Applications?
Braincuber's cloud experts can help you architect, build, and deploy serverless applications on AWS Lambda, API Gateway, and more. 500+ successful cloud projects delivered.
Frequently Asked Questions
What is serverless computing?
Serverless is a cloud computing model where you do not have to manage servers, scale resources, or handle infrastructure. Cloud providers like AWS handle everything while you focus on writing code. You pay only for actual usage.
What is the difference between FaaS and BaaS?
FaaS (Function as a Service) lets you deploy individual functions that run when triggered by events. BaaS (Backend as a Service) replaces backend components you would normally build yourself, like authentication or databases.
How does AWS Lambda pricing work?
AWS Lambda charges based on the number of requests and the duration your code runs, measured in milliseconds. There is also a generous free tier that includes 1 million requests and 400,000 GB-seconds of compute time per month.
What is AWS API Gateway used for?
API Gateway acts as a routing-as-a-service that connects HTTP requests to your Lambda functions. It handles request routing, CORS, authentication, rate limiting, and API versioning without requiring you to manage any servers.
When should I use serverless vs traditional servers?
Use serverless for event-driven workloads, APIs with variable traffic, and when you want to minimize infrastructure management. Use traditional servers for long-running processes, applications requiring specific OS configurations, or when you need predictable performance.
