How to Extend AWS Infrastructure with Direct Connect: Terraform Guide
By Braincuber Team
Published on April 10, 2026
What You'll Learn:
- What AWS Direct Connect is and how BGP routing works
- Why Transit VPC architecture is recommended for Direct Connect
- How to configure Transit VPC using Terraform
- Setting up Direct Connect Gateway and private virtual interfaces
- VPC peering between main and transit VPCs
- Deploying an HAProxy router service for network routing
AWS Direct Connect allows you to establish dedicated network connections from your premises to Amazon VPC. This can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience compared to internet-based VPN connections.
This beginner guide walks you through connecting your existing AWS infrastructure to a large private network using Direct Connect, managed entirely through Terraform as infrastructure as code.
Understanding the Problem
The challenge: your services within a VPC need to communicate with services in a separate private network. The network provider offers an AWS hosted connection as part of a signed contract to grant access via AWS Direct Connect.
Key questions to answer:
- How do you integrate Direct Connect into an existing Terraform-managed infrastructure?
- What are the best practices for routing configuration?
- How do you handle IP range limitations from the network provider?
What is AWS Direct Connect?
AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to Amazon VPC. Network providers maintain equipment in shared data centers that also house AWS Direct Connect facilities, which allows you to establish a physical connection between your AWS network and theirs.
How Direct Connect Works
The basic implementation involves:
Configure Direct Connections
Create (or, for hosted connections, accept) one or two Direct Connect connections in the AWS console, then create a Direct Connect Gateway for them.
Attach Private VIF
Attach a private Virtual Interface (VIF) to the gateway for each connection.
Exchange Routing Policies
Coordinate with provider network engineers to exchange BGP routing policies and configure allowed prefixes.
Critical Limitation: IP Range Restrictions
Important Discovery
AWS allocates link-local IPs in the 169.254.x.x range for BGP sessions and advertises the entire VPC CIDR block over BGP. Your advertised IP range is limited to a /30 at most, and you cannot advertise individual subnets.
If your VPC CIDR is too large for the network provider to accept, you will need to create a separate VPC with a smaller CIDR block. This is where the Transit VPC pattern comes in.
The Transit VPC Solution
AWS recommends using a Transit VPC pattern for integrating Direct Connect connections. The Transit VPC acts as a bridge between your main VPC and the external network, using a unique IP range provided by your network provider.
Isolated CIDR
Use a smaller, provider-approved IP range that meets their routing policy requirements.
VPC Peering
Connect your main VPC to the Transit VPC using VPC peering for seamless communication.
Main VPC Terraform Configuration
First, define your main VPC using the Terraform AWS VPC module. This is the existing VPC you want to extend with Direct Connect access.
variable "main_vpc_name" {
description = "Name of your main VPC"
}
variable "main_vpc_cidr" {
description = "CIDR of your main VPC, e.g. 10.1.0.0/16"
}
variable "private_app_subnet" {
description = "Private subnet of your main VPC, e.g. 10.1.2.0/24"
}
variable "main_vpc_key_name" {
default = "main-vpc-key"
description = "Name of SSH key of your main VPC"
}
variable "aws_availability_zone" {
description = "Your AWS AZ of your main VPC"
}
terraform {
backend "s3" {
bucket = "your-terraform-states-bucket"
key = "terraform.tfstate"
}
}
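Besides the backend, the examples assume an AWS provider configuration; a minimal sketch (the region shown is a placeholder — use your own):

```hcl
# AWS provider configuration; the region value is an assumption
provider "aws" {
  region = "us-east-1"
}
```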
module "vpc" {
version = "~> 2.0"
source = "terraform-aws-modules/vpc/aws"
name = var.main_vpc_name
cidr = var.main_vpc_cidr
azs = [var.aws_availability_zone]
private_subnets = [var.private_app_subnet]
single_nat_gateway = true
one_nat_gateway_per_az = false
enable_nat_gateway = true
enable_vpn_gateway = false
tags = {
Terraform = "true"
}
}
Export the necessary values from the main VPC module that will be used by the Transit VPC:
output "main_vpc_id" {
value = module.vpc.vpc_id
}
output "main_vpc_range" {
value = module.vpc.vpc_cidr_block
}
output "main_vpc_az" {
value = module.vpc.azs[0]
}
output "main_vpc_key_name" {
value = var.main_vpc_key_name
}
output "main_private_routing_table_id" {
value = module.vpc.private_route_table_ids[0]
}
output "main_public_routing_table_id" {
value = module.vpc.public_route_table_ids[0]
}
Transit VPC Terraform Configuration
Create a separate Terraform state for the Transit VPC. Import the outputs from the main VPC as remote state:
locals {
main_private_routing_table = data.terraform_remote_state.main.outputs.main_private_routing_table_id
main_public_routing_table = data.terraform_remote_state.main.outputs.main_public_routing_table_id
main_vpc_id = data.terraform_remote_state.main.outputs.main_vpc_id
main_vpc_range = data.terraform_remote_state.main.outputs.main_vpc_range
main_vpc_az = data.terraform_remote_state.main.outputs.main_vpc_az
main_vpc_key_name = data.terraform_remote_state.main.outputs.main_vpc_key_name
}
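The locals above reference a `terraform_remote_state` data source that must be declared in the Transit VPC state; a minimal sketch, assuming the main VPC state lives in the S3 bucket shown earlier (bucket, key, and region are assumptions):

```hcl
# Read outputs from the main VPC's Terraform state
data "terraform_remote_state" "main" {
  backend = "s3"
  config = {
    bucket = "your-terraform-states-bucket"
    key    = "terraform.tfstate"
    region = "us-east-1" # assumption — match your backend region
  }
}
```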
Define the Transit VPC variables:
| Variable | Description |
|---|---|
| transit_vpc_cidr | Your unique IP range in the network (e.g., 10.10.14.0/24) |
| transit_private_subnet | Private subnet for Transit VPC (e.g., 10.10.14.0/25) |
| transit_public_subnet | Public subnet for NAT gateway (e.g., 10.10.14.128/25) |
| network_dns_server | Primary DNS server IP from your network provider |
| network_dns_server_2 | Secondary DNS server IP from your network provider |
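The module below also references a few variables not listed in the table (`transit_vpc_name`, `dhcp_options_domain_name`, and later `router_private_ip`). Declaring them all might look like this sketch (example values are assumptions):

```hcl
variable "transit_vpc_name" {
  description = "Name of the Transit VPC"
}
variable "transit_vpc_cidr" {
  description = "Unique IP range assigned by the network provider, e.g. 10.10.14.0/24"
}
variable "transit_private_subnet" {
  description = "Private subnet of the Transit VPC, e.g. 10.10.14.0/25"
}
variable "transit_public_subnet" {
  description = "Public subnet for the NAT gateway, e.g. 10.10.14.128/25"
}
variable "network_dns_server" {
  description = "Primary DNS server IP from the network provider"
}
variable "network_dns_server_2" {
  description = "Secondary DNS server IP from the network provider"
}
variable "dhcp_options_domain_name" {
  description = "Search domain for the DHCP options set"
}
variable "router_private_ip" {
  description = "Static private IP for the router instance, inside the transit private subnet"
}
```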
module "transit-vpc" {
version = "~> 2.0"
source = "terraform-aws-modules/vpc/aws"
name = var.transit_vpc_name
cidr = var.transit_vpc_cidr
azs = [local.main_vpc_az]
private_subnets = [var.transit_private_subnet]
public_subnets = [var.transit_public_subnet]
single_nat_gateway = true
one_nat_gateway_per_az = false
enable_nat_gateway = true
enable_vpn_gateway = false
enable_dhcp_options = true
dhcp_options_domain_name = var.dhcp_options_domain_name
dhcp_options_domain_name_servers = [
var.network_dns_server,
var.network_dns_server_2
]
tags = {
Terraform = "true"
}
}
Direct Connect Terraform Configuration
Configure Direct Connect resources including the gateway and private virtual interfaces. These values are provided by your network provider:
| Variable | Description |
|---|---|
| bgp_provider_asn | BGP autonomous system number of the provider |
| provider_vln_id | 802.1Q VLAN ID assigned by the provider |
| primary/secondary_bgp_key | BGP MD5 authentication keys for the primary/secondary VIFs |
| primary/secondary_connection_id | Connection IDs from AWS console |
| primary/secondary_amazon_address | Amazon-side BGP IP addresses |
| primary/secondary_customer_address | Customer-side BGP IP addresses |
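Because the provider delivers hosted connections, each connection must be accepted before a VIF can be attached to it. This can also be managed in Terraform; a sketch using the `aws_dx_connection_confirmation` resource:

```hcl
# Accept the hosted connections offered by the network provider
resource "aws_dx_connection_confirmation" "primary" {
  connection_id = var.primary_connection_id
}

resource "aws_dx_connection_confirmation" "secondary" {
  connection_id = var.secondary_connection_id
}
```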
# Direct Connect Gateway
resource "aws_dx_gateway" "provider-gateway" {
name = "provider-dc-gateway"
amazon_side_asn = "64512" # Default value
}
# Associate gateway with Transit VPC VPN gateway
resource "aws_dx_gateway_association" "transit" {
dx_gateway_id = aws_dx_gateway.provider-gateway.id
associated_gateway_id = aws_vpn_gateway.transit_vpn_gw.id
allowed_prefixes = [var.transit_vpc_cidr]
}
# Primary Virtual Interface
resource "aws_dx_private_virtual_interface" "primary" {
connection_id = var.primary_connection_id
name = "provider-vif-primary"
vlan = var.provider_vln_id
address_family = "ipv4"
bgp_asn = var.bgp_provider_asn
amazon_address = var.primary_amazon_address
customer_address = var.primary_customer_address
dx_gateway_id = aws_dx_gateway.provider-gateway.id
bgp_auth_key = var.primary_bgp_key
}
# Secondary Virtual Interface (for redundancy)
resource "aws_dx_private_virtual_interface" "secondary" {
connection_id = var.secondary_connection_id
name = "provider-vif-secondary"
vlan = var.provider_vln_id
address_family = "ipv4"
bgp_asn = var.bgp_provider_asn
amazon_address = var.secondary_amazon_address
customer_address = var.secondary_customer_address
dx_gateway_id = aws_dx_gateway.provider-gateway.id
bgp_auth_key = var.secondary_bgp_key
}
VPC Peering Configuration
Connect your main VPC to the Transit VPC using VPC peering. This allows services in both VPCs to communicate:
# Create VPC Peering Connection
resource "aws_vpc_peering_connection" "main-to-transit" {
peer_vpc_id = module.transit-vpc.vpc_id
vpc_id = local.main_vpc_id
auto_accept = true
tags = {
Name = "VPC Peering between main and transit VPC"
}
}
# Route from main private subnet to transit VPC
resource "aws_route" "from-main-to-transit" {
route_table_id = local.main_private_routing_table
destination_cidr_block = var.transit_vpc_cidr
vpc_peering_connection_id = aws_vpc_peering_connection.main-to-transit.id
}
# Route from main public subnet to transit VPC
resource "aws_route" "from-main-public-to-transit" {
route_table_id = local.main_public_routing_table
destination_cidr_block = var.transit_vpc_cidr
vpc_peering_connection_id = aws_vpc_peering_connection.main-to-transit.id
}
# Route from transit VPC to main VPC
resource "aws_route" "from-transit-to-main" {
route_table_id = module.transit-vpc.private_route_table_ids[0]
destination_cidr_block = local.main_vpc_range
vpc_peering_connection_id = aws_vpc_peering_connection.main-to-transit.id
}
Security Group for Transit VPC
Configure the security group to allow HTTP traffic from the main VPC:
resource "aws_security_group" "transit_vpc_sg" {
name = "transit-vpc-sg"
description = "Transit VPC SG"
vpc_id = module.transit-vpc.vpc_id
ingress {
description = "Allow HTTP from main VPC"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = [local.main_vpc_range]
}
ingress {
description = "Allow SSH from main VPC"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [local.main_vpc_range]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = { Name = "transit-vpc" }
}
Router Service with HAProxy
Deploy a router service using Docker and HAProxy within the Transit VPC to route traffic to the external network. This approach follows the open-closed principle: new routes can be added to the HAProxy configuration without modifying existing services.
Create VPN Gateway
Attach a VPN gateway to the Transit VPC; it is the gateway the Direct Connect gateway associates with, and it propagates BGP routes into the Transit VPC route tables.
Deploy EC2 Router Instance
Launch an EC2 instance with Docker installed, running HAproxy in a container.
Configure DNS Resolution
Add router private IP to hosts file on services that need to access the external network.
# VPN Gateway for Transit VPC
resource "aws_vpn_gateway" "transit_vpn_gw" {
tags = { Name = "transit-vpn-gw" }
}
resource "aws_vpn_gateway_attachment" "vpn_attachment" {
vpc_id = module.transit-vpc.vpc_id
vpn_gateway_id = aws_vpn_gateway.transit_vpn_gw.id
}
resource "aws_vpn_gateway_route_propagation" "transit" {
vpn_gateway_id = aws_vpn_gateway.transit_vpn_gw.id
route_table_id = module.transit-vpc.private_route_table_ids[0]
}
# Router Instance
resource "aws_instance" "router" {
ami = "ami-0eb89db7593b5d434" # Ubuntu 20.04 LTS (AMI IDs are region-specific)
instance_type = "t2.micro"
availability_zone = local.main_vpc_az
key_name = local.main_vpc_key_name
subnet_id = module.transit-vpc.private_subnets[0]
private_ip = var.router_private_ip
vpc_security_group_ids = [aws_security_group.transit_vpc_sg.id]
user_data = file("router_init.sh")
associate_public_ip_address = false
tags = {
Name = "transit-vpc-router"
Managed = "terraform"
}
}
The router_init.sh user data script installs Docker and writes the HAProxy configuration:
#!/bin/bash
# Install Docker from the Ubuntu repositories
apt-get update && apt-get install -y docker.io
# Create HAproxy configuration
cat > /home/ubuntu/haproxy.cfg <<- "EOF"
global
log stdout local0
daemon
maxconn 4000
defaults
log global
mode http
option httplog
timeout connect 5s
timeout check 5s
timeout client 60s
timeout server 60s
frontend http-in
bind *:80
acl domain1_acl hdr(host) -i domain-name-1.internal.com
acl domain2_acl hdr(host) -i domain-name-2.internal.com
use_backend domain1 if domain1_acl
use_backend domain2 if domain2_acl
backend domain1
mode http
option forwardfor
http-request replace-header Host .* domain-name-1.internal.com
server domain1 domain-name-1.internal.com:443 ssl verify none
backend domain2
mode http
option forwardfor
http-request replace-header Host .* domain-name-2.internal.com
server domain2 domain-name-2.internal.com:443 ssl verify none
EOF
# Launch HAproxy container
docker run -d --restart always --name haproxy --net=host -v /home/ubuntu:/usr/local/etc/haproxy:ro haproxy:2.1-alpine
Final Configuration Steps
After deploying the infrastructure, complete these final steps:
Configure OpenVPN Routing
Add both Transit VPC CIDR and main VPC CIDR to the OpenVPN server routing settings under VPN Settings.
Update Hosts File
Add the router private IP and the external domain names to /etc/hosts on instances in the main VPC that need external network access, one line per domain (e.g. `<router_private_ip> domain-name-1.internal.com`).
Test Connectivity
Make HTTP requests to the external domain names and verify that the traffic is routed through the Transit VPC router.
Frequently Asked Questions
What is the Transit VPC pattern in AWS?
Transit VPC is an AWS-recommended architecture where a separate VPC acts as a hub for connecting Direct Connect, VPN, and VPC peering connections. It isolates routing complexity from your main VPC.
Why can I only advertise a /30 CIDR over Direct Connect?
AWS allocates link-local IPs in the 169.254.x.x range for BGP sessions and advertises the VPC CIDR block. Network providers often enforce routing policy limits that only accept small CIDR blocks, which is why a Transit VPC with a unique provider-assigned range is necessary.
What is the difference between primary and secondary VIFs?
Primary and secondary virtual interfaces provide redundancy for Direct Connect. If the primary connection fails, BGP automatically fails over to the secondary connection, ensuring continuous network availability.
Why use HAProxy instead of migrating services to the Transit VPC?
HAProxy routing follows the open-closed principle, allowing you to add new routes without modifying existing services. It also avoids the complexity of updating database connections, log storage, and other dependencies when migrating services.
How do I manage Terraform state for multiple VPCs?
Use separate Terraform workspaces or backends for each VPC. Use terraform_remote_state data source to import outputs from one state into another, enabling clean separation while maintaining necessary dependencies.
Need Help with AWS Infrastructure?
Our AWS experts can help you design and implement Direct Connect solutions and complex VPC architectures for your enterprise needs.
