The Flexential Lone Mountain data center near Las Vegas. (Photo: Flexential)
In this edition of Voices of the Industry, Jason Carolan, Chief Innovation Officer at Flexential, explores why networking from the edge to the core (and back to the edge) will be essential for IoT, AI, automation and other data-intensive applications.
Edge computing and edge networking are growing in popularity. The emergence of millions of sensors and IoT devices is producing huge amounts of critical data, pushing companies to move processing and analytics closer to where the data is generated. The ability to capture more data is made possible by technological developments such as 5G, Wi-Fi 6 and the expansion of national fiber networks.
Workloads are increasingly distributed to support next-generation applications and wider use of machine learning and AI to help make decisions in real time. Ultimately, these services connect to larger-scale data centers, but the immediate data capture and connectivity occur very close to the end sensor or application. According to IDC, by 2023, more than 50% of new enterprise IT infrastructure deployed will be at the edge. Data at the edge can be lost forever, making resilient architecture, and the advanced technologies it enables, important considerations.
The edge is critical to local data management and immediate controls, but delivery from the edge to the core (and back to the edge) will be just as critical. Network connectivity will need to scale to levels that were unthinkable three to five years ago. Moving analytical data and content between large primary data centers and the near or far edge will require 100 Gbps, if not Tbps, of backbone capacity.
It’s harder than it looks. In fact, it is genuinely complicated and confusing for the average business. Add in the challenges of moving data from the edge to the core and back, and you will find that a complex ecosystem of multiple actors is required for a successful edge deployment. The impact of latency on applications must also be considered: what counts as close enough? And if only 2-3% of edge data needs to be stored, how do you decide which data is important?
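To make the triage question concrete, here is a minimal, purely illustrative sketch (not any vendor's actual implementation) of one common policy: keep readings that deviate from an expected baseline, plus a sparse periodic sample for trend analysis, so only a few percent of raw sensor data is forwarded from the edge to the core. The baseline, threshold, and sampling rate are hypothetical parameters.

```python
def should_store(reading: float, baseline: float, threshold: float,
                 index: int, sample_every: int = 50) -> bool:
    """Keep readings that deviate from baseline, plus a periodic sample."""
    is_anomaly = abs(reading - baseline) > threshold
    is_sample = index % sample_every == 0
    return is_anomaly or is_sample

def triage(readings, baseline=20.0, threshold=5.0):
    """Return only the readings worth sending from edge to core."""
    return [r for i, r in enumerate(readings)
            if should_store(r, baseline, threshold, i)]

# Example: 1,000 sensor readings near 20.0 with two spikes.
data = [20.0] * 1000
data[137] = 31.5   # anomaly
data[642] = 8.2    # anomaly
kept = triage(data)
# 20 periodic samples plus 2 anomalies are retained: ~2% of the raw data.
print(len(kept), "of", len(data), "readings stored")
```

Real deployments replace the fixed threshold with statistical or ML-based anomaly detection, but the economics are the same: decide locally, ship selectively.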
Emerging edge applications are highly sensitive to latency, and some decisions must be made locally to save time: shutting down manufacturing equipment over a safety issue before an employee is injured, or processing audio data in a city to detect a gunshot and quickly direct video cameras and emergency responders to the scene, collecting critical data as events unfold. Or a manufacturing or logistics operation may build its own private local 5G network to enable its IoT services, without the added expense of paying national public carriers for 5G.
On top of these complexities, edge data centers aren’t meant to be the final solution, just the last part of it. A robust connection to a large, highly scalable facility that supports high-density services, whether a major data center, hyperscaler, or carrier hotel, is essential for the two-way movement of data between the edge and the core. There is also the question of how edge data centers should be distributed to collect and manage data at the edge as efficiently as possible. Many edge applications will ultimately need AI or machine learning processing, at the edge and beyond. Effective edge applications will be trained on massive amounts of data (the definition of “data gravity”) and computing power, hosted in large-scale, high-density, well-networked facilities such as core and regional data centers. This helps sort out what is important and what is not at the edge.
Because the edge is so disparate, partnerships and collaboration are key enablers for edge computing and edge networking. It’s not something any one company or vendor can solve on its own; a holistic solution takes a combination of multiple partnerships. It will take an open ecosystem that allows vendors to work with technology platforms, other data centers, and service providers so that businesses and governments can easily deploy edge services.
It goes without saying that large colocation data centers will not be built in every Tier II or Tier III market, due to costs and local demand. Instead, smaller edge data centers with substantial power capabilities are emerging to drive edge deployments, relying on federated connectivity to larger colocation environments through content delivery and distribution networks.
Flexible and dynamic bandwidth will be needed to absorb large bursts of traffic coming from the edge, and to adapt to unforeseen growth trends, such as the growth of the gaming, precision agriculture, and in-car entertainment industries. Connectivity to hyperscalers, efficient undersea cables (for international connectivity), and robust metro networks all work hand in hand with advanced computing and backend connectivity.
Jason Carolan is Chief Innovation Officer at Flexential. Flexential provides the managed network services, as well as the relationships, to facilitate secure and scalable edge deployments. The FlexAnywhere™ platform provides access to other edge and cloud locations, as well as data centers around the world. Capturing more data at the edge generates more value within Flexential’s partner and service ecosystem, creating a flywheel that attracts more workloads, connected by FlexAnywhere™ to make deployment easy and secure, regardless of where “the edge” is.