In this ATC Insight

Introduction

This article provides a basic understanding of Cisco Viptela vManage Cloud OnRamp for Multicloud when connecting to Google Cloud Platform (GCP) and the business challenges it addresses. It also lays out the steps an organization could follow to deploy Cisco OnRamp for GCP, drawn from an ATC architect's experience with the product. Cisco OnRamp also supports AWS (Amazon Web Services) and Microsoft Azure integration in the Multicloud OnRamp solution.

Today, many organizations run cloud-based applications. The cloud allows for quick deployment in an agile environment, and in many cases development teams can consume cloud resources without involving other IT teams. This is great for the development teams; however, what they build must eventually be reached by other groups, locations, or even customers. This is where the networking and security teams are typically brought in.

Networking teams must then take the time to map out how the application in the cloud will be linked to the network infrastructure. This may require deploying several cloud-based network and security resources and configuring policies, which can be time-consuming and difficult to manage and scale. Cisco's Multicloud OnRamp streamlines the process of adding or removing cloud resources from the network. Within just a few minutes, a new cloud region can be stood up and connected to the resources housed in that region. At the same time, policies built with a simple grid design can limit which parts of the network can reach each resource.

ATC Test Environment

The Global Solutions Development - Connectivity team created a test environment to deploy Cisco OnRamp with GCP, using two remote points of presence: one in London, UK, and the other in Sydney, AU. A Google project was created with three VPCs, each in a different region. A compute instance was deployed in each region and connected to the local VPC to simulate an application or service.
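
For readers who want to stand up a similar lab, the sketch below shows roughly how one of the three region/VPC/instance sets could be built with the gcloud CLI. The project ID, names, region, zone, machine type, and address range are illustrative assumptions, not the exact values used in the ATC environment, and the same steps would be repeated for the other two regions (for example, australia-southeast1 for Sydney and a US region).

# One VPC, subnet, and test instance per region (illustrative values)
gcloud config set project my-onramp-lab
gcloud compute networks create vpc-london --subnet-mode=custom
gcloud compute networks subnets create subnet-london --network=vpc-london --region=europe-west2 --range=10.20.1.0/24
gcloud compute instances create app-london --zone=europe-west2-b --machine-type=e2-small --subnet=subnet-london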


As a comparison, manual tunnels were first created between three branch locations and Google. This process was not overly complicated, but it took time to complete and required planning an IP addressing scheme, and there were multiple points in planning and deployment where human error could be introduced. After testing and gathering baseline data, which is presented later, the tunnels were removed and the configuration of Cisco OnRamp to GCP commenced.
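
To give a sense of where that manual time went, the GCP side of a single classic VPN tunnel alone looks roughly like the sketch below. This is an assumed workflow rather than the exact lab configuration: names and addresses are placeholders, the branch-side cEdge configuration is not shown, and the flags should be checked against current gcloud documentation. It also has to be repeated for every branch and VPC pairing.

# Classic VPN toward one branch, built by hand (placeholders in angle brackets)
gcloud compute target-vpn-gateways create vpn-gw-london --network=vpc-london --region=europe-west2
gcloud compute addresses create vpn-ip-london --region=europe-west2
# Three forwarding rules (ESP, UDP/500, UDP/4500) also need to point at the gateway
gcloud compute vpn-tunnels create tun-london-branch --region=europe-west2 \
  --target-vpn-gateway=vpn-gw-london --peer-address=<branch public IP> \
  --shared-secret=<pre-shared key> --ike-version=2 \
  --local-traffic-selector=0.0.0.0/0 --remote-traffic-selector=0.0.0.0/0
gcloud compute routes create route-london-branch --network=vpc-london \
  --destination-range=<branch prefix> --next-hop-vpn-tunnel=tun-london-branch \
  --next-hop-vpn-tunnel-region=europe-west2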

A few tasks were required within Google to allow Cisco's OnRamp to work: a service account was created, the permissions listed below were granted to it, and the required APIs were enabled. A gcloud sketch of these steps follows the list.

Google Cloud Platform Permissions for Service Account:

  • Compute Image User
  • Compute Instance Admin
  • Compute Network Admin
  • Compute Public IP Admin
  • Compute Security Admin
  • Service Account User
  • Hub & Spoke Admin
  • Spoke Admin
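
A rough gcloud sketch of that setup is shown here. The role IDs below are our best mapping of the console names in the list above (the last two correspond to Network Connectivity Center roles), and the project and account names are placeholders; verify the exact roles and required APIs against Cisco's current OnRamp documentation.

# Create the service account and bind the roles listed above
gcloud iam service-accounts create onramp-sa --display-name="Cisco OnRamp"
for ROLE in roles/compute.imageUser roles/compute.instanceAdmin.v1 \
            roles/compute.networkAdmin roles/compute.publicIpAdmin \
            roles/compute.securityAdmin roles/iam.serviceAccountUser \
            roles/networkconnectivity.hubAdmin roles/networkconnectivity.spokeAdmin; do
  gcloud projects add-iam-policy-binding my-onramp-lab \
    --member="serviceAccount:onramp-sa@my-onramp-lab.iam.gserviceaccount.com" \
    --role="$ROLE"
done
# Enable the APIs OnRamp calls and export a key that can be supplied to vManage
gcloud services enable compute.googleapis.com networkconnectivity.googleapis.com
gcloud iam service-accounts keys create onramp-key.json \
  --iam-account=onramp-sa@my-onramp-lab.iam.gserviceaccount.com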

 

The Cisco OnRamp menu inside Viptela vManage was simple and straightforward, depicting the whole process.


We provided the service account information to the SD-WAN fabric using the Associate Cloud Account form in this section, then moved to the Cloud Global Settings page. Several of these settings could later be changed as needed when a gateway is deployed. First, we provided a Cloud Gateway BGP Autonomous System Number (ASN) Offset, which allows vManage to configure BGP on the gateways: starting at the offset, vManage reserves ten ASNs for the cloud gateways, and by securing these ASNs, each gateway can act as a hop within BGP. Next, an IP subnet pool was created with a /16 (the pool can be anywhere from a /16 to a /21), and vManage allocates a /27 from that pool for each transit VPC it builds. The last major decision was to use IPsec tunnels rather than GRE for site-to-site communication, based on performance and security considerations. Later in the review of the OnRamp feature, GRE was also tested; however, no performance difference was observed with the simple traffic used during testing.
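
As a quick sanity check on the pool sizing (our own arithmetic, not a value taken from vManage): a /16 pool contains 2^(27-16) = 2048 possible /27 blocks of 32 addresses each, and even the smallest allowed pool, a /21, still yields 2^(27-21) = 64 /27 allocations, far more than the handful of transit VPC subnets a deployment of this size consumes.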

At this point, vManage could see the GCP VPCs that were already set up and began building a performance measurement baseline. Each VPC could then be assigned a tag, used later for segmentation. Tags can also be applied as new VPCs are created to maintain configuration standards over time.

The last step in configuring vManage for OnRamp is to create policies. In this lab, no special policies were created, so this step is not covered in detail here beyond noting that each gateway was set to act as a hub for its region, similar to how the manual tunnels were built earlier.

Deploying Gateways

Now that the configuration was complete, gateway deployment could begin. The first step was to assign a template to the two virtual cEdges that OnRamp would deploy. The next step was to go back into OnRamp to create a cloud gateway.

Creating the gateways was nothing more than filling out a handful of fields on a single GUI screen.

  • The Google Cloud provider was chosen.
  • A name was assigned to the gateway.
  • The name of the service account was chosen from a dropdown.
  • The region in which the gateway would be deployed was chosen, again from a simple dropdown.
  • The version of the software image from the Google Marketplace was chosen.
  • Two GCP instance types were presented as options to run the cEdge software. The 4-vCPU n1-standard-4 was chosen over the 8-vCPU n1-standard-8 because only lab test traffic would utilize this SD-WAN fabric.
  • The Network Service Tier of Premium was chosen over Standard to get the best performance for traffic crossing the GCP backbone between regions.
  • The last step was to choose which two UUIDs were to be used for the deployment.

After filling in the information above, all that was left was to click the Add button, and within 15 minutes, the gateway was ready.

As a side note, it was even easier to remove a gateway and all of its GCP components. You could choose the gateway you wanted to remove, and within five to ten minutes the gateway was gone, and everything created for it within GCP was also removed.

 

The exact process was repeated two more times to create the topology below. 

 

Test Paths

Site to Cloud

The first test sent traffic from a location in each region to the instance in the matching GCP region. Traffic flows were identical to those used during the baseline tests, with no difference in performance noted. What stood out was the difference in deployment time between OnRamp and the manual baseline: within fifteen minutes, everything was deployed with dual cEdges and fully operational, compared to the hour required to deploy a single manual tunnel.

 

Site to Site

Next, a site-to-site test compared traffic carried across the GCP backbone against traffic carried over traditional tunnels with internet connectivity. Traffic was sent for multiple days to gather a large sample size and to allow for different GCP backbone conditions. Site-to-site traffic was then reconfigured to connect via traditional tunnels over the internet for the comparison.

At the conclusion of the tests, a surprising result emerged: the Google backbone was expected to perform better but was slightly slower than going direct over the internet. After multiple reviews of the tests, it was concluded that the traffic being sent was very basic and did not fully utilize the available bandwidth. The remote branches were in the ATC data center and the ATC Equinix location, both of which have high-capacity internet connections. Given those two factors, the minor performance penalty introduced by the additional hops through GCP could not be made up. With heavier data flows, using the GCP backbone is expected to be an advantage.

Test Results

Testing used a program from Microsoft called Ethr, run with the command line below.

"ethr -c <Server address> -p tcp -t l -d 0 -I 10 -T <log title>"

The -c option made Ethr run in client mode. The -t l option ran a latency test. The -d 0 option kept the test running until manually stopped. The -I 10 option set Ethr to do 10 round trip measurements for each log entry. The -T option added a log title to each entry. The command was executed in 5 separate SSH sessions at each endpoint to gather data from that point to every other endpoint in the lab.
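
One detail worth noting, since it is standard Ethr usage rather than something spelled out above: each target endpoint also has to be running Ethr in server mode before the client command will connect, which is simply:

"ethr -s"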

 

The results to the left of the black line use the Google backbone. The numbers to the right are the baseline using the manual VPN tunnels directly to each VPC, with all traffic flowing across the internet except for the cells labeled "Policy Off." The Policy Off results were captured after the gateways were deployed and the SD-WAN was fully meshed.

Red cells show where latency was noticeably lower when the internet was used to carry the test data. As discussed earlier, these results may differ with different traffic loads.

Yellow cells show nearly identical results between the two models.

One of the backbone results, US Branch to London cloud, showed a 5 ms delta between the average and the minimum, which was abnormal compared to the other examples. The median was calculated to help determine what might be causing this delta. In these cases, the median ran close to the average, which is consistent with an abnormally low minimum rather than a larger cluster of higher results that would require further investigation.

Segmentation

Intent management is the last piece. When the VPCs were discovered, tags were assigned to them. vManage uses these tags to segment the cloud resources, and a simple grid design shows what can communicate with what. With this ability, vManage can handle segmentation of both the global SD-WAN branches and the workloads within GCP from a single pane of glass.
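
As an illustration only (the tag and VPN names here are hypothetical, not the ones used in this lab), the grid is essentially a connectivity matrix between SD-WAN VPNs and VPC tags:

                  branch-vpn10  gcp-prod-tag  gcp-dev-tag
  branch-vpn10        --           allow         deny
  gcp-prod-tag       allow          --           deny
  gcp-dev-tag         deny         deny           --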


Overall Experience

Cisco's OnRamp solution makes connecting a Viptela SD-WAN fabric to GCP faster and easier than connecting the two manually. This feature is helpful if an organization has multiple cloud locations or needs to deploy and decommission gateways rapidly. Some of the automated configuration is hidden in vManage. This is by design, so the automation will not fail if an operator manually changes pieces of the configuration.

 

What's Next

You can explore everything within the WWT Advanced Technology Center; an excellent place to start is the links on the right side of the page, or you can request a briefing with WWT.
