Primer Series: What Is Edge Computing?
This article is part of our WWT Primer Series, a collection of content focused on the fundamental understanding of complex technologies and solutions.
Have you ever been sitting in a meeting when the conversation turns to a technology with which you are unfamiliar? Suddenly, many acronyms are being thrown around, and you do not know what they mean, while everyone else is nodding their heads and seems to know what is being discussed.
We’ve all been there, and to help our valued customers, the engineers at WWT set out to write a series of ‘primer articles’ to provide basic information on various products and technologies. This article will cover the basics of edge computing.
What is edge computing?
Edge computing means putting computing or processing resources near the source of the data. What does this mean, though? Perhaps the best way to explain is to take a brief look at the history of computer technology. The first computers, or servers, were called mainframes. These behemoths were specialized environments designed to handle a single workflow. Users completed data entry and data retrieval via terminals that had no computing resources of their own and were simply a keyboard and a monitor. At that time, network technology was just taking shape, so users had to be physically close to the mainframe to access it, usually in the same building or at least on the same campus.

The next generation of technology introduced the client/server model. As computer chips became smaller and more affordable, the model shifted to placing a personal computer (PC) on each user's desk and distributing processing power away from a single mainframe to several smaller servers. During this same period, advancements in networking technology allowed users to be much farther from the server location.
As these solutions grew in popularity, storing large amounts of data became a requirement. This need drove the adoption of both SAN (Storage Area Network) and NAS (Network Attached Storage) technology. Most companies had to build large data centers (or multiple data centers) to house the growing number of servers and storage systems being provisioned.
Fast forward to the present day, and we have seen a predominant shift to running workloads in the cloud. Cloud providers, such as AWS, Azure, and GCP (Google Cloud), operate multiple data centers across the globe. Cloud computing and the advancement of networking technology allow a user to be on a different continent and still have access to information stored on a server running in the cloud. But because of the cost, latency, high data volumes, limited bandwidth or spare capacity, data sovereignty requirements, and compliance obligations associated with cloud computing, the need for edge computing arose.
Some may challenge the edge as not offering the same dynamics as cloud computing. This challenge can be overcome with composable disaggregated infrastructure (CDI). Consider a low-latency actionable-intelligence solution for inferencing: a camera mounted above a scanner at a self-checkout lane needs quick confirmation that the item whose bar code was scanned is the item placed in the shopper's bag. If the video stream of each scan had to be shipped to an inferencing solution in the cloud, and approval of the scanned item sent back to the station before the sale could complete, the time to approve each transaction could lead to a poor customer experience. A CDI solution could not only support a more performant edge inferencing solution with appropriately scalable infrastructure, but also support model training, allowing the environment to do both inferencing and training on the same hardware. A further bonus is that the data ingested for inferencing, and needed for training, would already be in place and would not need to be migrated to a cloud or on-premises environment for model training.
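The self-checkout example above comes down to a latency budget. The sketch below uses purely illustrative numbers (all three constants are assumptions, not measurements) to show why keeping inference local to the lane can beat a cloud round trip even when the cloud model itself runs faster:

```python
# Back-of-the-envelope latency budget for the self-checkout example.
# All numbers are illustrative assumptions, not measurements.

EDGE_INFER_MS = 30    # model running on edge hardware at the store (assumed)
CLOUD_RTT_MS = 120    # WAN round trip to a distant cloud region (assumed)
CLOUD_INFER_MS = 20   # faster accelerators in the cloud (assumed)

edge_total = EDGE_INFER_MS                   # no WAN hop at all
cloud_total = CLOUD_RTT_MS + CLOUD_INFER_MS  # network dominates the budget

print(edge_total, cloud_total)  # 30 vs 140 ms per scanned item
```

Even granting the cloud a faster model, the wide-area round trip dominates, which is the core argument for edge inferencing in latency-sensitive use cases.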
One of the biggest drivers of edge computing has been the growth of IoT (Internet of Things). IoT refers to devices that receive and transfer data over networks without human intervention; a motion-activated security camera is a familiar example. IoT devices usually have sensors that record information and send data to a central location. In 2019, Gartner released a study predicting that by 2022, 75% of all data would need analysis and action delivered by edge technology, a shift from a model in which 91% of data was created and processed within centralized data centers. Another interesting prediction is that by 2025 there will be 175 zettabytes of data generated, a tenfold increase from 2016 levels, with IoT devices generating 90 zettabytes of that total.
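The pattern described above, sensors streaming readings and the edge analyzing them locally so that only meaningful events cross the network, can be sketched in a few lines. This is a minimal toy model; the names (`SensorReading`, `EdgeGateway`) and the threshold value are illustrative assumptions, not part of any specific product:

```python
# Sketch: an edge gateway that analyzes raw sensor readings locally and
# forwards only noteworthy events upstream, reducing WAN traffic.

from dataclasses import dataclass

THRESHOLD = 30.0  # trigger level for a "noteworthy" reading (assumed)

@dataclass
class SensorReading:
    sensor_id: str
    value: float

class EdgeGateway:
    """Processes readings at the edge; only events cross the network."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.forwarded = []  # stands in for an uplink to a central site
        self.seen = 0

    def ingest(self, reading: SensorReading) -> None:
        self.seen += 1
        if reading.value > self.threshold:  # analyze locally
            self.forwarded.append(reading)  # ship only the event

gw = EdgeGateway(THRESHOLD)
for v in [12.0, 28.5, 31.2, 45.0, 19.9]:
    gw.ingest(SensorReading("cam-01", v))

print(gw.seen, len(gw.forwarded))  # 5 readings seen, only 2 forwarded
```

Filtering at the edge like this is one concrete way the "75% of data analyzed at the edge" prediction plays out: most raw readings never need to leave the site.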
Edge computing models
Edge computing refers to a location: the outer boundary of a network, possibly hundreds of miles from the nearest cloud data center but as close to the data source as possible. The edge is where data collection and low-latency, real-time decision-making take place. There are two edge computing models: far edge and near edge.
Far edge solutions are micro data center infrastructure environments placed farthest from any cloud or centralized data center, supporting specific applications associated with the location. Examples would be infrastructure placed at the base of a cell phone tower or in a retail store. Applications that run on the far edge require extremely low latency, high scalability, and high throughput; live video streaming is an example.
Near edge places infrastructure between the far edge and cloud or centralized data centers. The near edge supports a more extensive set of applications and can be thought of as a scaled-down version of a traditional data center. A Content Delivery Network (CDN) is a service that runs in the near edge. A CDN is a group of geographically distributed servers that speed up the delivery of content; watching a movie on one's favorite streaming service is an excellent example. World Wide Technology recently helped a customer develop distributed data centers at various locations in place of on-premises data centers.
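The core idea behind a CDN edge node is caching: serve a nearby copy while it is fresh, and go back to the distant origin only on a miss. The toy model below illustrates just that idea (the class names and TTL are illustrative assumptions; real CDNs add invalidation, tiered caches, and geographic routing):

```python
# Sketch: CDN-style edge caching. A local copy is served while fresh;
# the origin server is contacted only on a cache miss or expiry.

import time

class EdgeCache:
    def __init__(self, origin_fetch, ttl_seconds: float):
        self.origin_fetch = origin_fetch
        self.ttl = ttl_seconds
        self.store = {}       # url -> (content, fetched_at)
        self.origin_hits = 0  # how often we paid the long round trip

    def get(self, url: str) -> str:
        entry = self.store.get(url)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0]                  # served from the edge
        content = self.origin_fetch(url)     # miss: fetch from origin
        self.origin_hits += 1
        self.store[url] = (content, time.monotonic())
        return content

def origin(url):  # stands in for the distant origin server
    return f"payload for {url}"

cdn = EdgeCache(origin, ttl_seconds=60.0)
a = cdn.get("/movie/segment-1")  # first viewer: goes to origin
b = cdn.get("/movie/segment-1")  # second viewer: no origin round trip
print(cdn.origin_hits)  # 1
```

This is why streaming a popular movie feels fast: after the first viewer in a region, subsequent viewers are served from the nearby edge copy rather than the origin.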
The flexibility of edge computing allows for many use cases. Let’s cover a few definitions now.
Device edge — Customers deploy devices to perform specific functions like shop floor motors, X-ray machines, and vending machines. Data from this equipment can be collected and analyzed to ensure safe and seamless operations and to predict maintenance needs in advance. Computing resources are deployed closer to the devices to process these workloads and deliver low latency responses. Small form factor appliances and gateways are commonly used to provide both computing and physical connections to legacy interfaces.
Router edge — The primary function of a router is to forward packets between networks. Routers act as the demarcation point between external systems and internal networks. Some enterprise routers provide built-in compute, or accept additional computing modules, and can be used to host applications. In this model, a single router can perform packet routing functions and provide infrastructure to host edge applications.
Branch edge — A branch is a location other than the main office designated to perform a set of functions. Each branch uses various applications to perform its daily operations; in a retail store, it might be a Point-of-Sale system, while in a health clinic, it may be an Electronic Medical Record system. Such business-critical applications are hosted on edge computing at the branch to provide users with low-latency access and business continuity. The edge computing appliances typically have more capacity than the device- and router-edge options above and can host multiple virtual network functions and applications on the same hardware. Sometimes, "branch edge" and "Local Area Network edge" are used interchangeably.
Enterprise edge — In a distributed enterprise environment with many branch locations, computing resources can be shared to drive economies of scale and simplify management. In this model, instead of deploying edge computing instances in each place, the edge computing resources can be implemented in a shared site connected to the enterprise network. In this model, the capacity and capabilities are much higher and can be used for applications that require more processing power and resources.
Datacenter edge — As customers migrate to the cloud from their existing data centers, smaller variants of data centers have emerged to address rapid deployment and portability for special events and disaster management. These can be deployed closer to the customer. The form factors typically vary from suitcase size to shipping container size.
Cloud edge — Cloud service providers have placed purpose-built services closer to users to optimize particular functions such as content delivery. Some refer loosely to Content Delivery Networks (CDN) and caching services as a cloud edge; however, they were not built to host general-purpose workloads. While the initial attempts focused on caching and content delivery, newer services such as local zones are redefining the cloud edge. In addition, cloud service providers have created many edge solutions that fit into some of the previous models discussed.
Mobile edge — Wireless service providers deliver nationwide service using a distributed network, with service locations closer to the customer than cloud data centers are. When these locations are multi-purposed to offer wireless services and host edge computing services, it becomes a unique model for edge computing with distinct advantages. In the mobile edge computing model, computing resources are deployed in service access point (SAP) locations or other locations in the core. Applications running on these edge computing servers can be accessed through 4G or 5G connections from mobile endpoints.
Challenges of edge computing
One of the biggest challenges of edge computing is remote device management. Most devices in a data center can be remotely accessed using "out-of-band" solutions, such as iLO or iDRAC. iLO (HPE) and iDRAC (Dell) are names given to remote access technology integrated into a server for lights-out management. What about equipment that does not have this capability? WWT recently published a lab that allows users to gain hands-on experience with an Avocent Advanced Console server. A console server is a great fit for managing devices in a remote/branch office or a data center: an administrator can log into a device console without traveling to the device location, which allows for faster incident resolution times.
Another challenge is the operating conditions for some devices. Because of the flexibility of edge computing, some environments require devices to be placed in less than desirable conditions. A farmer using IoT sensors to monitor crops needs a device that can withstand the elements and will likely need to work over poor network connections. A device placed on a factory floor must be tough enough to handle those conditions.
Security should always be a top concern in all computing environments, and the edge is no exception. Devices must be hardened to stop unauthorized access. Encryption should protect the data as it travels from the edge device to the data center. An example of security being critical is a self-driving car. A security breach could be catastrophic in this use case.
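To make the encryption-in-transit point concrete, the sketch below shows what a sensible default looks like for an edge device's uplink using Python's standard-library `ssl` module. No connection is made here; the hostname in the comment is illustrative. The key point is that a default client context already enforces certificate validation and hostname verification, which an edge uplink should not disable:

```python
# Sketch: TLS settings an edge device could use for its uplink to the
# data center. create_default_context() enables certificate checking
# and hostname verification out of the box.

import ssl

def make_uplink_context() -> ssl.SSLContext:
    """Return a TLS context suitable for wrapping an uplink socket."""
    ctx = ssl.create_default_context()  # CERT_REQUIRED + hostname check
    return ctx

ctx = make_uplink_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED, ctx.check_hostname)
# With this context, ctx.wrap_socket(sock, server_hostname="dc.example.com")
# would encrypt and authenticate traffic back to the data center.
```

Verifying the server's identity matters as much as the encryption itself: without it, an attacker between the edge device and the data center could impersonate the central endpoint.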
Hopefully, this article has been beneficial in understanding edge computing. If you’d like more information, I recommend this video by WWT’s Joe Wojtal. If you’d like to speak to an expert, we would love to hear from you, so contact us.