AWS to Azure Connectivity

Several years ago I ran through an exercise of connecting AWS to Azure. At the time, I used an Amazon Linux EC2 instance running Openswan on the AWS side, connected to a virtual network gateway on the Azure side. This time I wanted to use only the managed VPN services on both sides and get a rough idea of the throughput I could expect between the two.

At a high level the diagram looks like this.

Azure Setup

I don’t live in Azure-land very often, so there was a bit of fumbling through some of the steps required on this side. The desired setup is to create a virtual network whose address space does not overlap my AWS VPC CIDR.

I’ll basically perform these steps:

  1. Create the virtual network on the Azure side
  2. Add the Gateway Subnet
  3. Add the rest of the subnets to sort of … mirror my AWS network
  4. Create a new public IPv4 Address
  5. Create a new virtual gateway using the public IPv4 Address above
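The Azure-side steps above can be sketched with the Azure CLI roughly like this. All resource names and CIDRs below are placeholders for illustration, not the values I actually used, and I’m assuming a resource group already exists:

```shell
# Assumption: resource group "vpn-lab" already exists; names/CIDRs are illustrative.

# 1. Create the virtual network with its address space
az network vnet create \
  --resource-group vpn-lab \
  --name azure-vnet \
  --address-prefix 10.1.0.0/16

# 2. Add the gateway subnet (must be named exactly "GatewaySubnet")
az network vnet subnet create \
  --resource-group vpn-lab \
  --vnet-name azure-vnet \
  --name GatewaySubnet \
  --address-prefix 10.1.255.0/27

# 4. Create a public IPv4 address for the gateway
az network public-ip create \
  --resource-group vpn-lab \
  --name azure-vpngw-ip \
  --sku Standard \
  --allocation-method Static

# 5. Create a route-based VPN gateway using that public IP
#    (provisioning can take 30+ minutes)
az network vnet-gateway create \
  --resource-group vpn-lab \
  --name azure-vpngw \
  --vnet azure-vnet \
  --public-ip-address azure-vpngw-ip \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1
```

The remaining workload subnets (step 3) are just additional `az network vnet subnet create` calls with non-overlapping prefixes.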



AWS Configuration

We’ll jump over to AWS to begin configuration on that side. I won’t go through the initial steps, but we have a simple VPC spread across two availability zones with a public/private subnet pair in each.

The VPC CIDR / Subnet configuration looks like this:


Public AZ-A :
Public AZ-B :
Private AZ-A :
Private AZ-B :

Create a new virtual private gateway and attach it to the VPC:
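A minimal AWS CLI sketch of those two actions; the IDs below are placeholders you’d take from the command output and your own VPC:

```shell
# Create the virtual private gateway (VGW)
aws ec2 create-vpn-gateway --type ipsec.1

# Attach it to the VPC (placeholder IDs)
aws ec2 attach-vpn-gateway \
  --vpn-gateway-id vgw-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0
```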

Create a new customer gateway using the public IPv4 address from Azure.

Create the VPN connection on the AWS side and add the static IP prefixes for the Azure VNET.
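The customer gateway and static-route VPN connection might look like this with the AWS CLI. The IDs, the Azure public IP, and the VNET CIDR are all placeholders:

```shell
# Customer gateway pointing at the Azure VPN gateway's public IPv4 address
# (static routing still requires a BGP ASN value; 65000 is a common default)
aws ec2 create-customer-gateway \
  --type ipsec.1 \
  --public-ip 203.0.113.10 \
  --bgp-asn 65000

# Static-route VPN connection between the VGW and the customer gateway
aws ec2 create-vpn-connection \
  --type ipsec.1 \
  --vpn-gateway-id vgw-0123456789abcdef0 \
  --customer-gateway-id cgw-0123456789abcdef0 \
  --options StaticRoutesOnly=true

# Static prefix covering the Azure VNET address space
aws ec2 create-vpn-connection-route \
  --vpn-connection-id vpn-0123456789abcdef0 \
  --destination-cidr-block 10.1.0.0/16
```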


Next, download the configuration using the “Generic” option.

Within this file you will find the pre-shared key as well as the virtual private gateway IP address.
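If you’d rather skip the console download, the same details are embedded in the XML blob that `describe-vpn-connections` returns (the connection ID below is a placeholder):

```shell
# The CustomerGatewayConfiguration field holds the generic XML config,
# including the pre-shared keys and the tunnel outside IP addresses
aws ec2 describe-vpn-connections \
  --vpn-connection-ids vpn-0123456789abcdef0 \
  --query 'VpnConnections[0].CustomerGatewayConfiguration' \
  --output text
```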

Now do these steps:

  1. Create a local network gateway on the Azure side using the VPG Public IPv4 Address and the VPC CIDR from AWS.
  2. Add a connection to the gateway using the pre-shared key from the configuration file.
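Back on the Azure side, those two steps might look like this with the Azure CLI. The resource names, the AWS tunnel IP, the VPC CIDR, and the key are placeholders:

```shell
# 1. Local network gateway representing the AWS side:
#    the AWS tunnel's outside IP plus the VPC CIDR
az network local-gateway create \
  --resource-group vpn-lab \
  --name aws-vgw \
  --gateway-ip-address 198.51.100.20 \
  --local-address-prefixes 192.168.0.0/16

# 2. IPsec connection using the pre-shared key from the AWS config file
az network vpn-connection create \
  --resource-group vpn-lab \
  --name azure-to-aws \
  --vnet-gateway1 azure-vpngw \
  --local-gateway2 aws-vgw \
  --shared-key "PSK_FROM_AWS_CONFIG"
```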



At this point, both sides should show the connection as established.




Just rinse and repeat if you want to set up the second tunnel.

Now add the route to the Azure CIDR into the AWS Route table.
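Adding that route with the AWS CLI might look like this (the route table ID, VGW ID, and Azure CIDR are placeholders). Alternatively, enabling route propagation lets the VGW inject its routes into the table for you:

```shell
# Static route for the Azure VNET via the virtual private gateway
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 10.1.0.0/16 \
  --gateway-id vgw-0123456789abcdef0

# Or: let the VGW propagate routes into the table automatically
aws ec2 enable-vgw-route-propagation \
  --route-table-id rtb-0123456789abcdef0 \
  --gateway-id vgw-0123456789abcdef0
```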

We should be able to ping from both sides now.

AWS to Azure


[ec2-user@ip-192-168-9-54 ~]$ ping

PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=13.3 ms
64 bytes from icmp_seq=2 ttl=64 time=14.6 ms
64 bytes from icmp_seq=3 ttl=64 time=12.7 ms
64 bytes from icmp_seq=4 ttl=64 time=12.9 ms
64 bytes from icmp_seq=5 ttl=64 time=13.0 ms
64 bytes from icmp_seq=6 ttl=64 time=13.0 ms

Azure to AWS

azvm-user@lnxhost2:~$ ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=39 ttl=254 time=13.1 ms
64 bytes from icmp_seq=40 ttl=254 time=14.3 ms
64 bytes from icmp_seq=41 ttl=254 time=12.8 ms
64 bytes from icmp_seq=42 ttl=254 time=13.3 ms
64 bytes from icmp_seq=43 ttl=254 time=13.0 ms
64 bytes from icmp_seq=44 ttl=254 time=12.9 ms

The pings show around 13 ms of latency between the two, with both environments in the us-west-2 region. That’s not too bad, so I decided to run a few iperf3 tests between nodes to see what sort of bandwidth I could get.
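A typical iperf3 invocation for a test like this looks as follows. The server IP is a placeholder, and multiple parallel streams often help fill a single IPsec tunnel:

```shell
# On the receiving node (e.g. the Azure VM): run the iperf3 server
iperf3 -s

# On the sending node (e.g. the EC2 instance):
# 4 parallel streams for 30 seconds against the server's private IP
iperf3 -c 10.1.1.4 -P 4 -t 30
```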

Not too bad! Probably not a setup you would use in a production environment, but for POC testing it’s definitely doable. Hope this is helpful.

Cloud on….
