
Speed, Performance and Pie: Using Equinix Fabric for Agile Connectivity

Customers and end users expect instant gratification for any services you deliver, because slow is the new down. Learn how to utilize Equinix Fabric for agile connectivity.



Think about how your own life has changed.  

It has become routine for consumers to order a piece of their favorite pecan pie from their neighborhood restaurant via a few touches on their preferred phone-based food delivery app. The pie is then delivered to their doorstep in minutes with minimal fuss or delay.  

Not that many years ago, each of these interactions would have been far more difficult. Ordering anything online from a mobile device demanded a real commitment of time. Now, this capability is nearly instantaneous. Most of the business disruption we see today is architected by companies that successfully meet this insatiable demand for instant gratification. This cultural shift toward high-speed responsiveness is changing how your customers and employees view the services you deliver. The bar has been set incredibly high. You are no longer measured solely on availability and uptime. You are measured on the immediacy of your service: how quickly you can deliver the desired result. Not meeting these expectations has significant negative consequences. 

Every additional second of latency costs.

Akamai estimates that for every additional second it takes for your web page or service to load, you lose up to 7 percent of conversions. Amazon estimates that every tenth of a second of additional latency costs them 1 percent in sales. One percent doesn’t sound like a lot, but it adds up quickly. For Amazon, that equals $1.7 million every hour, and who wants to explain the loss of sales because of slow infrastructure? 
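The math behind that kind of estimate is simple enough to script yourself. The sketch below is purely illustrative: the hourly revenue, loss-per-100-ms rate and added latency are assumed placeholder values, not Amazon's or Akamai's actual figures. Plug in your own numbers to see what an extra tenth of a second costs your business.

```python
# Back-of-envelope cost of added latency.
# All inputs are illustrative assumptions; substitute your own figures.

hourly_revenue = 1_000_000   # assumed baseline revenue per hour (USD)
loss_per_100ms = 0.01        # assumed 1% revenue loss per extra 100 ms of latency
added_latency_ms = 300       # extra latency introduced by a slow path, in ms

loss_fraction = (added_latency_ms / 100) * loss_per_100ms
hourly_cost = hourly_revenue * loss_fraction

print(f"Extra latency: {added_latency_ms} ms")
print(f"Estimated revenue lost per hour: ${hourly_cost:,.0f}")
print(f"Estimated revenue lost per year: ${hourly_cost * 24 * 365:,.0f}")
```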

Now consider your workforce, their satisfaction and productivity. The expectations set in their consumer experiences don’t disappear when your employees come to work. Up to 25 percent of employees have considered changing jobs because of poorly chosen or poorly performing applications. How much does turnover cost your organization? For most organizations, the impact of key players leaving for greener pastures isn’t trivial. Now add the loss of productivity caused by slow services. How do turnover and lost productivity impact your bottom line?


There’s not much question at all. Slow is the new down.  

The good news is that we’ve never had a more robust toolset to solve the challenge of slow application performance than we do right now. Cloud-native infrastructure has made highly resilient, highly distributed and highly performant application deployments accessible to every organization. Building such environments requires a thorough understanding of the challenges at hand, so in this article, we’re going to take a look at the components of your infrastructure that impact performance the most. 


Where does the slowness originate? 

You can break down the sources of slowness into two distinct categories: 

  • Data access/processing  
  • Data delivery 

Access and processing covers physical server operations, virtualization, operating systems, application specifics, database performance, storage speed and distance. While all of these play a contributing role in the performance of your application, squeezing performance out of a specific application requires application-specific solutions. So while understanding how to make the organization's compute infrastructure performant is essential, we’re not focusing on that here. 

We want to look at the other pillar in application performance, which is data delivery. 

How do you ensure that the path between your users and your applications isn’t introducing unnecessary slowness? 

To assess this, we need to look at what causes slowness in connectivity and individually address these components. 

  • Reliability: One of the traditional methods of measuring the effectiveness of your connectivity is reliability. The circuits between your users and your applications need to be online and passing traffic cleanly, but that isn’t always under your control. Once traffic leaves your perimeter, you have little influence on the reliable delivery of that traffic. Even a minimal amount of packet loss will significantly impact the performance of your traffic, especially on high-speed/low-latency connections.  
  • Capacity: Another traditional metric of network performance is capacity. The concept is simple: if the pipe is full, new traffic entering the link has to wait its turn to get passed along. If demand far exceeds the pipe’s capacity, traffic may be dropped altogether, which causes it to be resent and compounds the issue. 
  • Latency: Latency is the silent killer of application performance. Latency is the measure of time elapsed between the moment you issue a request and the time you receive a response, and it includes the processing time in the responding server. What may be less obvious is how latency impacts effective throughput. The nature of TCP creates an inverse relationship between the amount of latency on a link and the speed of data transfer possible in a single flow: as latency rises, the transfer speed of a single flow falls (see the sketch after this list).  
  • Agility: When considering performance characteristics, agility is often overlooked. Just because your connectivity strategy meets today’s needs doesn’t mean it will meet tomorrow’s needs. The more agile and adaptable you are to changing network demands, the more likely it is that you’ll be able to quickly and efficiently respond to changing business requirements.
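To make the latency point concrete, here is a rough sketch of the ceiling a single TCP flow hits as round-trip time grows. It uses the classic window-size-over-RTT bound with an assumed fixed 64 KB receive window (no window scaling); real stacks can do better, but the inverse relationship between latency and per-flow throughput holds regardless of how fat the pipe is.

```python
# Rough upper bound on single-flow TCP throughput: window size / round-trip time.
# Assumes a fixed 64 KB window (no window scaling) purely for illustration.

WINDOW_BYTES = 64 * 1024  # assumed TCP receive window

def max_throughput_mbps(rtt_ms: float) -> float:
    """Approximate best-case throughput for one flow at a given RTT."""
    rtt_s = rtt_ms / 1000
    return (WINDOW_BYTES * 8) / rtt_s / 1_000_000  # bits per second -> Mbps

for rtt in (1, 5, 20, 80, 150):
    print(f"RTT {rtt:>4} ms -> ~{max_throughput_mbps(rtt):8.1f} Mbps per flow")
```

At 1 ms of RTT that assumed window supports roughly 500 Mbps per flow; at 80 ms it drops below 7 Mbps, no matter how much capacity the link itself has.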


What are modern approaches to addressing slowness?  

Make light faster. It’s that simple. Find a method for faster-than-light communication (you know, wormholes) and your latency issues will go away overnight. Unfortunately, none of us has figured that out yet, so let’s try the other recommendations below.

Make transit more reliable

SD-WAN has helped considerably in this area. Traditional routing protocols are excellent at detecting outages and routing around them, but they are less effective at detecting links with low to moderate performance degradation. Using sensors and analytics, SD-WAN allows you to identify links that are performing poorly and route around them until you can resolve the issue. SD-WAN also allows you to take advantage of multiple paths, matching traffic to the links best suited to provide the required performance.
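As a conceptual illustration only, not any vendor's actual algorithm, the selection logic boils down to scoring each candidate link against the SLA a traffic class needs and steering flows to the best-qualifying path. The link names, metrics and thresholds below are hypothetical.

```python
# Conceptual sketch of SD-WAN-style path selection; not a vendor implementation.
# Each link carries measured health metrics; traffic classes define the SLA they need.

from dataclasses import dataclass

@dataclass
class LinkMetrics:
    name: str
    latency_ms: float   # measured round-trip latency
    loss_pct: float     # measured packet loss
    jitter_ms: float    # measured jitter

@dataclass
class SlaPolicy:
    max_latency_ms: float
    max_loss_pct: float
    max_jitter_ms: float

def pick_path(links: list[LinkMetrics], sla: SlaPolicy) -> LinkMetrics:
    """Prefer links meeting the SLA; among those, choose the lowest latency."""
    healthy = [l for l in links
               if l.latency_ms <= sla.max_latency_ms
               and l.loss_pct <= sla.max_loss_pct
               and l.jitter_ms <= sla.max_jitter_ms]
    candidates = healthy or links  # fall back to best effort if nothing qualifies
    return min(candidates, key=lambda l: l.latency_ms)

links = [
    LinkMetrics("mpls",      latency_ms=28, loss_pct=0.0, jitter_ms=2),
    LinkMetrics("broadband", latency_ms=18, loss_pct=1.5, jitter_ms=9),  # brownout: lossy but up
    LinkMetrics("lte",       latency_ms=55, loss_pct=0.2, jitter_ms=6),
]
voice_sla = SlaPolicy(max_latency_ms=150, max_loss_pct=1.0, max_jitter_ms=30)
print("Voice traffic steered to:", pick_path(links, voice_sla).name)  # -> mpls
```

Note that the lossy broadband link is still technically "up," which is exactly the condition a traditional routing protocol would fail to route around.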

Make applications more adjacent

The less distance a user has to traverse to access their data and applications, the less likely latency will impact application performance (a quick sketch of the math follows the list below). Sounds easy, right? SaaS is the easy button for application distribution, but not all applications are SaaS offerings. For those non-SaaS applications, you need to consider what your connectivity strategy looks like. Some of those considerations are: 

  1. Build hierarchical connectivity models by using a backbone network and regional connectivity points into that backbone.
  2. Utilize robust interconnectivity fabrics to connect to business partners efficiently.
  3. Utilize cloud (or multicloud) strategies to place applications as close to users as possible.
  4. Diversify your cloud connectivity by regionalizing direct connections from your connectivity fabric.
  5. Do all of the above without introducing unmanageable complexity into your operations. 
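The distance argument is easy to quantify. Light in fiber covers roughly 200 km per millisecond, so a back-of-envelope sketch like the one below (the path lengths are hypothetical) shows why regionalizing applications and cloud on-ramps pays off before you even account for congestion or routing detours.

```python
# Rough round-trip propagation delay over fiber.
# Ignores queuing, routing detours and processing; ~200,000 km/s in fiber.

FIBER_KM_PER_MS = 200  # approximate one-way distance light covers per millisecond in fiber

def fiber_rtt_ms(path_km: float) -> float:
    """Best-case round-trip time for a given one-way fiber path length."""
    return 2 * path_km / FIBER_KM_PER_MS

# Hypothetical user-to-application path lengths
scenarios = {
    "Same metro (50 km)":            50,
    "Same region (500 km)":          500,
    "Cross-country (4,000 km)":      4_000,
    "Intercontinental (10,000 km)":  10_000,
}

for label, km in scenarios.items():
    print(f"{label:32s} ~{fiber_rtt_ms(km):6.1f} ms RTT floor")
```

Those are floors, not averages: a cross-country backhaul starts 40 ms behind a regional deployment before a single packet is queued or inspected.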

Change how you think about security

Traditional security models don’t work when users and applications are distributed. “The old way” of doing security, where we route a user’s traffic to a centralized security stack, is no longer adequate. With distributed users and distributed applications, some traffic flows never touch our internal networks. Cloud-based security platforms and SASE are exciting new ways of distributing the security stack to meet these needs. 

Improve network agility

Cloud has proven that companies that respond quickly to change will succeed. So how do you “cloudify” network infrastructure and connectivity? Cloud connectivity fabrics, such as Equinix Fabric, can help you be more agile in your connectivity strategies. Paired with capabilities like Network Edge, where you can deploy VNF instances of routers, SD-WAN edges and security devices on demand, you can dramatically reduce the time it takes to respond to new connectivity needs.


Final thoughts  

The nature of connectivity is changing. As a business, there is a whole new world of requirements and expectations to navigate. And while vendors will provide much of the technology required to implement these new strategies, no one product will get you where you need to go. 

That’s where we come in. WWT and partners such as Equinix have the resources to help you navigate this quickly evolving landscape, with expert-led briefings, workshops, strategy sessions, consultative services and expert delivery of technology solutions. To find out more, schedule some time with us.

Visit with us to discuss connectivity options.