How to Schedule Ad-Hoc Tasks Using DynamoDB TTL and Lambda
By Braincuber Team
Published on May 8, 2026
CloudWatch Events lets you create cron jobs with Lambda, but it is not designed for running millions of ad-hoc tasks. The default limit of 100 rules per region makes it impractical for use cases like tournament timers, event reminders, or to-do notifications. This guide shows how to combine DynamoDB Time-To-Live, DynamoDB Streams, and Lambda to build a highly scalable ad-hoc task scheduler that can handle millions of pending tasks, backed by benchmark data from Yan Cui's experiments across multiple AWS regions.
What You Will Learn:
- Why CloudWatch Events is unsuitable for ad-hoc task scheduling at scale
- How DynamoDB TTL and Streams can be repurposed as a scheduling mechanism
- How to implement the scheduler Lambda function that writes tasks with TTL
- How to implement the executor Lambda that processes expired items via DynamoDB Streams
- Real precision benchmarks from experiments across multiple AWS regions
- When this approach is appropriate and what its limitations are
Prerequisites
| Requirement | Details |
|---|---|
| AWS Account | Active account with DynamoDB, Lambda, and IAM access |
| AWS CLI | Configured with appropriate credentials |
| Node.js | Node.js 12+ for Lambda function code |
| DynamoDB Knowledge | Basic understanding of tables, TTL, and DynamoDB Streams |
Understanding the Problem
CloudWatch Events is designed for executing recurring tasks. Its default limit of 100 rules per region per account makes it unsuitable for use cases requiring millions of ad-hoc tasks, each scheduled to execute once at a specific time. While you can request a limit increase, the architecture itself is not built for this workload pattern.
There are many real-world scenarios that need such a service:
Tournament Systems
Games need to execute business logic when tournaments start and finish. Each tournament has unique start and end times requiring individual scheduled tasks.
Event Reminders
Platforms like Eventbrite or Meetup need timely reminders to attendees before events start. Each event generates unique reminder schedules for potentially millions of attendees.
To-Do Reminders
Task trackers like Wunderlist need to send notifications when tasks become due. Each user can have dozens of individually scheduled reminders.
Scheduled Notifications
Marketing campaigns, payment reminders, and subscription renewals all need precise timing. A scheduling service abstracts away the complexity of managing individual timers.
System Architecture Overview
The DynamoDB TTL scheduling approach uses three key AWS services working together. A DynamoDB table stores all scheduled tasks. A scheduler Lambda writes tasks with a TTL set to the execution time. When DynamoDB deletes expired items, it publishes REMOVE events to a DynamoDB Stream, which triggers an executor Lambda function that processes each expired task.
Scheduled Items Table
DynamoDB table that holds all scheduled tasks. Each item has a TTL attribute set to the epoch timestamp when the task should execute. Scales to millions of open tasks.
Scheduler Function
Lambda function that writes new tasks into the scheduled_items table. Sets the TTL attribute to the epoch timestamp of the desired execution time. Can handle thousands of writes per second.
Executor Function
Lambda function subscribed to the DynamoDB Stream for the scheduled_items table. Reacts to REMOVE events, which indicate items have been deleted by the TTL process. Processes the expired task payload.
Step-by-Step Implementation
Create the DynamoDB Table with TTL Enabled
Create a DynamoDB table named scheduled_items with a primary key. Enable TTL on the table and set the TTL attribute to ttl. This attribute will store the epoch timestamp when each item should expire. Enable DynamoDB Streams with NEW_AND_OLD_IMAGES to capture both the item data before deletion and the deletion event itself. The Stream ARN will be used to trigger the executor Lambda.
```bash
aws dynamodb create-table \
  --table-name scheduled_items \
  --attribute-definitions AttributeName=taskId,AttributeType=S \
  --key-schema AttributeName=taskId,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES

aws dynamodb update-time-to-live \
  --table-name scheduled_items \
  --time-to-live-specification "Enabled=true,AttributeName=ttl"
```
Implement the Scheduler Lambda Function
The scheduler function receives a scheduling request and writes an item to the scheduled_items table. The item includes a ttl attribute set to the epoch timestamp when the task should execute, along with a payload containing the task type and any parameters needed by the executor. The function should return the taskId so the caller can reference the scheduled task.
```javascript
const AWS = require('aws-sdk')
const { v4: uuidv4 } = require('uuid')

const dynamo = new AWS.DynamoDB.DocumentClient()

module.exports.scheduler = async (event) => {
  const { taskType, scheduledTime, payload } = JSON.parse(event.body)
  const taskId = uuidv4()

  // TTL must be an epoch timestamp in seconds, not milliseconds
  const epochSeconds = Math.floor(new Date(scheduledTime).getTime() / 1000)

  await dynamo.put({
    TableName: 'scheduled_items',
    Item: {
      taskId,
      ttl: epochSeconds, // DynamoDB deletes the item some time after this moment
      taskType,
      payload,
      createdAt: new Date().toISOString()
    }
  }).promise()

  // Return the taskId so the caller can reference (or cancel) the task
  return {
    statusCode: 200,
    body: JSON.stringify({ taskId, scheduledTime })
  }
}
```
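For illustration, a hypothetical scheduling request for the handler above might look like this (the field names match the handler's parsing; the values are made up):

```javascript
// Example request body for the scheduler function (illustrative values)
const request = {
  taskType: 'send_reminder',
  scheduledTime: '2026-05-08T15:30:00Z', // ISO 8601 execution time
  payload: { userId: 'u-123', message: 'Event starts in 1 hour' }
}

// The handler converts scheduledTime to epoch seconds for the ttl attribute:
const epochSeconds = Math.floor(new Date(request.scheduledTime).getTime() / 1000)
console.log(epochSeconds)
```

Note the division by 1000: DynamoDB TTL expects epoch seconds, and passing JavaScript's native millisecond timestamps is a common mistake that pushes expiry thousands of years into the future.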
Implement the Executor Lambda Function
The executor function is triggered by DynamoDB Stream events. It filters for REMOVE events, which indicate deleted items. Each event contains the old image of the deleted item, which includes the taskType and payload. The function dispatches the task to the appropriate handler based on taskType. Note that manual deletes also emit REMOVE events; genuine TTL deletions can be identified by the record's userIdentity field, whose principalId is dynamodb.amazonaws.com. DynamoDB Streams delivers events in order within each shard, so tasks are processed sequentially.
```javascript
const AWS = require('aws-sdk')

// sendReminder, startTournament, and processDueTask are the application's
// own task handlers, defined elsewhere in the module.
module.exports.executeOnSchedule = async (event) => {
  for (const record of event.Records) {
    // Only REMOVE events represent deletions; INSERT/MODIFY are ignored
    if (record.eventName !== 'REMOVE') continue

    // The old image carries the item as it was before TTL deleted it
    const task = AWS.DynamoDB.Converter.unmarshall(record.dynamodb.OldImage)
    console.log('Executing task:', task.taskId, task.taskType)

    switch (task.taskType) {
      case 'send_reminder':
        await sendReminder(task.payload)
        break
      case 'start_tournament':
        await startTournament(task.payload)
        break
      case 'process_due_task':
        await processDueTask(task.payload)
        break
      default:
        console.log('Unknown task type:', task.taskType)
    }
  }
}
```
Configure the DynamoDB Stream Trigger
Create an event source mapping between the DynamoDB Stream of your scheduled_items table and the executor Lambda function. Use the TRIM_HORIZON starting position to process all existing stream records. Set the batch size based on your processing needs. The executor Lambda needs IAM permissions for dynamodb:DescribeStream, dynamodb:GetRecords, dynamodb:GetShardIterator, and dynamodb:ListStreams on the scheduled_items table stream.
```bash
STREAM_ARN=$(aws dynamodb describe-table \
  --table-name scheduled_items \
  --query 'Table.LatestStreamArn' --output text)

aws lambda create-event-source-mapping \
  --function-name execute-on-schedule \
  --event-source-arn $STREAM_ARN \
  --starting-position TRIM_HORIZON \
  --batch-size 10
```
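The stream-read permissions mentioned above could be granted with a policy along these lines (a sketch; the account ID, region, and wildcard stream label are placeholders to adapt to your environment):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeStream",
        "dynamodb:GetRecords",
        "dynamodb:GetShardIterator",
        "dynamodb:ListStreams"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/scheduled_items/stream/*"
    }
  ]
}
```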
Scalability Analysis
This approach scales exceptionally well for the number of open tasks. Since every pending scheduled task is simply an item in a DynamoDB table, the system can handle millions of concurrent open tasks with no architecture changes. DynamoDB handles thousands of writes per second, so the scheduler can accept new tasks at high velocity.
For hotspot scenarios where many tasks expire simultaneously, DynamoDB Streams auto-scales the number of shards as throughput increases. However, events within each shard are processed sequentially. If millions of tasks expire at the same time, the executor Lambda must process them one by one per shard, which introduces latency proportional to the queue depth and per-event processing time. For extreme hotspots such as a Super Bowl kickoff with millions of reminders firing simultaneously, the system will eventually process all events but cannot guarantee on-time execution.
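The latency claim above can be sketched with a back-of-envelope calculation (the numbers are illustrative, not from the article's benchmarks):

```javascript
// Rough estimate of how long it takes to drain a hotspot. Events within a
// shard are processed in order, so drain time scales with events-per-shard
// multiplied by per-event processing latency.
function drainTimeSeconds(totalEvents, shardCount, perEventMs) {
  const eventsPerShard = Math.ceil(totalEvents / shardCount)
  return (eventsPerShard * perEventMs) / 1000
}

// 1,000,000 simultaneous expirations across 10 shards at 50 ms per task:
console.log(drainTimeSeconds(1_000_000, 10, 50)) // seconds to drain
```

Under these assumed figures the backlog takes over an hour to clear, which is why hotspot scale is listed separately from open-task scale in the comparison later in this article.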
Precision Benchmarks
According to the official AWS documentation, DynamoDB TTL typically deletes expired items within 48 hours of expiration. This wide margin of error makes precision a critical concern. Yan Cui conducted automated experiments using a Step Functions state machine that created items with TTLs between 1 and 10 minutes, tracked scheduled versus actual execution times, and waited for all items to be deleted.
| AWS Region | Avg Delay (mins) | Observation |
|---|---|---|
| US-EAST-1 | 11+ | Consistent delay regardless of item count |
| EU-WEST-1 | 4-6 | Better performance, likely due to lower load |
| AP-SOUTHEAST-1 | 5-7 | Similar to EU, better than US-EAST-1 |
| EU-WEST-2 | 6-8 | Slightly higher variance than EU-WEST-1 |
The results show that on average, tasks execute over 11 minutes after their scheduled time in US-EAST-1. Other regions showed better performance, possibly because the US-EAST-1 TTL process had been warmed by ongoing experiments. The delay was consistent regardless of the number of items in the table, suggesting it is a property of the TTL deletion process itself rather than a scaling issue.
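A similar delay measurement could be reproduced inside the executor by comparing the item's ttl attribute (the scheduled time) with the stream record's ApproximateCreationDateTime, which is close to when DynamoDB actually removed the item. A sketch following the benchmark methodology:

```javascript
// Delay between scheduled expiry and actual TTL deletion, in seconds.
// Both inputs are epoch seconds, matching the ttl attribute and the
// ApproximateCreationDateTime field on DynamoDB Stream records.
function deletionDelaySeconds(scheduledEpoch, deletedAtEpoch) {
  return deletedAtEpoch - scheduledEpoch
}

// Inside the executor loop this might be wired up as:
// const delay = deletionDelaySeconds(
//   Number(task.ttl),
//   record.dynamodb.ApproximateCreationDateTime
// )
// console.log('TTL deletion delay (s):', delay)
```

Logging this metric per region is a cheap way to verify whether the benchmark figures above still hold for your account before committing to this architecture.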
Precision Limitations
Based on experimental data, DynamoDB TTL-based scheduling cannot guarantee execution within minutes of the scheduled time. This makes it unsuitable for time-sensitive use cases where precision matters. However, for workloads that can tolerate delays of 5 to 15 minutes such as sending non-urgent notification digests or batch processing, this approach offers excellent scalability at minimal cost.
Alternative Approaches Compared
Yan Cui experimented with several alternative approaches before settling on the DynamoDB TTL pattern for analysis. Each approach has different trade-offs across precision, scalability, and operational complexity.
| Approach | Precision | Open Task Scale | Hotspot Scale |
|---|---|---|---|
| CloudWatch Events | Seconds | 100 rules limit | Limited |
| .NET Timer Class | Milliseconds | Memory-bound | Memory-bound |
| SQS Visibility Timeout | Minutes | High | High |
| DynamoDB TTL | 5-15 min | Millions | Very high |
When to Use This Approach
The DynamoDB TTL scheduling pattern is best suited for use cases where scalability matters more than precision. If your application can tolerate tasks executing 5 to 15 minutes late and you need to schedule millions of unique tasks, this is an excellent serverless solution. It excels at workloads like sending non-critical notification batches, processing delayed analytics, or running nightly maintenance tasks that only need approximate timing.
For use cases requiring second-level precision such as real-time payment processing, critical infrastructure monitoring, or time-sensitive user notifications, consider Amazon EventBridge Scheduler (EventBridge is the successor to CloudWatch Events, and its Scheduler supports millions of one-time schedules) or Step Functions with Wait states, both of which provide far tighter scheduling guarantees.
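For comparison, a one-time EventBridge Scheduler schedule can be created with an at() expression; the sketch below uses placeholder ARNs and names:

```shell
# One-shot schedule that invokes a Lambda at a specific time
# (function ARN, role ARN, and schedule name are placeholders).
aws scheduler create-schedule \
  --name reminder-12345 \
  --schedule-expression "at(2026-05-08T15:30:00)" \
  --flexible-time-window Mode=OFF \
  --target '{"Arn":"arn:aws:lambda:us-east-1:123456789012:function:send-reminder","RoleArn":"arn:aws:iam::123456789012:role/scheduler-invoke-role","Input":"{\"userId\":\"u-123\"}"}'
```

The trade-off is a different quota and pricing model per schedule, whereas the DynamoDB TTL pattern prices each task as a single table write.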
Frequently Asked Questions
Can I improve DynamoDB TTL deletion precision?
No, the TTL deletion process is controlled internally by DynamoDB and there are no configuration options to tune it. AWS documentation states expired items are deleted within 48 hours. The observed 5 to 15 minute delay is an empirical result and not guaranteed by any SLA. You cannot force TTL deletion to happen faster.
How much does the DynamoDB TTL scheduler cost?
The cost is primarily DynamoDB write and read capacity for storing scheduled items plus Lambda invocation costs. With on-demand billing, you pay per request. A million scheduled tasks cost roughly $1.25 in DynamoDB writes plus Lambda execution time. The DynamoDB Stream and TTL deletion operations do not incur additional read costs.
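That estimate can be sketched as a quick calculation (pricing assumption: $1.25 per million on-demand write request units, the us-east-1 rate at the time of writing; verify current rates for your region):

```javascript
// DynamoDB on-demand write cost for scheduling N tasks, assuming one
// write request unit per task (items under 1 KB).
function dynamoWriteCostUSD(taskCount, pricePerMillionWRU = 1.25) {
  return (taskCount / 1_000_000) * pricePerMillionWRU
}

console.log(dynamoWriteCostUSD(1_000_000)) // 1.25
```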
What happens if the executor Lambda fails?
DynamoDB Streams uses at-least-once delivery. If the executor Lambda throws an error, the stream records are retried based on the Lambda event source mapping retry policy. After exhausting retries, failed records can be sent to a dead-letter queue (DLQ) for manual inspection or reprocessing. The stream processing continues from the last successful record.
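As a sketch, the retry policy and an on-failure SQS destination can be configured on the event source mapping itself (the mapping UUID and queue ARN below are placeholders):

```shell
# Cap retries, split failing batches to isolate poison records, and route
# exhausted records to an SQS dead-letter queue.
aws lambda update-event-source-mapping \
  --uuid <mapping-uuid> \
  --maximum-retry-attempts 3 \
  --bisect-batch-on-function-error \
  --destination-config '{"OnFailure":{"Destination":"arn:aws:sqs:us-east-1:123456789012:scheduler-dlq"}}'
```

Without an explicit cap, a record that always throws can block its shard until it expires from the stream, delaying every task behind it.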
Does the scheduler work across AWS regions?
Yes, but with different precision characteristics. Benchmarks showed US-EAST-1 averaged over 11 minutes delay while EU-WEST-1 averaged 4 to 6 minutes. Cross-region scheduling would require additional latency for the scheduler Lambda call. The scheduling table and executor Lambda must be in the same region since DynamoDB Streams cannot cross regions.
Can I cancel a scheduled task before it executes?
Yes, simply delete the item from the scheduled_items table using its taskId. If TTL has already deleted the item, it is too late to cancel and the executor Lambda will process it. For finer-grained control, add a status attribute to each item and have the executor check it before processing. This allows soft-cancellation by marking items as cancelled in a separate update before TTL deletion.
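Both cancellation styles can be sketched as pure request builders against the scheduled_items schema used throughout this article (pass the results to the DocumentClient's delete and update calls):

```javascript
// Hard cancel: build the delete request that removes the item before TTL fires.
function cancelParams(taskId) {
  return { TableName: 'scheduled_items', Key: { taskId } }
}

// Soft cancel: build the update request that marks the item as cancelled so
// the executor can skip it ('status' is a reserved word, hence the alias).
function softCancelParams(taskId) {
  return {
    TableName: 'scheduled_items',
    Key: { taskId },
    UpdateExpression: 'SET #s = :c',
    ExpressionAttributeNames: { '#s': 'status' },
    ExpressionAttributeValues: { ':c': 'cancelled' }
  }
}

// Usage with the DocumentClient from the scheduler example:
// await dynamo.delete(cancelParams(taskId)).promise()
// await dynamo.update(softCancelParams(taskId)).promise()
```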
Need Help Building AWS Serverless Schedulers?
Our cloud experts can help you design and implement scalable scheduling systems on AWS, from DynamoDB TTL patterns to EventBridge and Step Functions orchestration.
