Editor’s Note: Today, Vapor IO and four other partners, LinkedIn, Flex, GE Digital, and HP Enterprise, announced their collaboration to establish the Open19 Foundation, an open community that will define data centers of the future.
The past decade has been dominated by mega-scale data centers, operated in centralized and regional locations by the likes of Google, Facebook, Amazon, and Microsoft. Sites for these data centers were selected based on ready access to low cost power and land, more so than proximity to users or data.
But that’s all changing.
There’s a new class of applications—including IoT, virtual reality, autonomous and connected vehicles, and smart cities—where the existing model of large, centralized datacenters just won’t work. These applications need compute and storage to be located more closely to the device or application. The round trip back to a centralized data center takes too long and the amount of data that needs to be transferred is too large. We are constrained by the speed of light!
The cloud, in response, is becoming much more decentralized. Thousands of small, regional and micro-regional data centers are emerging and will be added to the cloud. The cloud is becoming a large, global fabric of compute with the capability of pushing workloads all the way out to within yards of the end consumer, whether that’s a sensor on an autonomous vehicle or an AR application on an iPhone.
Building the Edge at the Base of Cell Towers
Look around you. Odds are there are more cell towers within a 10-mile radius of your desk than Amazon has regions around the entire world.
That’s right. By the most generous count, Amazon has a mere 42 availability zones within 16 geographic regions to service the entire globe. The multibillion-dollar success that is Amazon Web Services is a testament to how powerful and useful these facilities are, but the next generation of Amazon (or any cloud, for that matter) will need thousands of availability zones and regions.
These thousands of edge regions won’t be in large, centralized data centers. They will be located in micro datacenters deeply embedded in the urban fabric. Many of them will be built and operated by Vapor IO at the base of cell towers. This is what we call Vapor Edge Computing, and it is the future.
The industry won’t (and shouldn’t) stop building mega data centers in places like Council Bluffs, Iowa, 425 miles from the nearest top-10 US city. However, we will need to augment these centralized data centers with edge locations at the base of cell towers, at the edge of the wireless network, connected back to centralized and regional centers with high-speed fiber. Vapor IO’s Vapor Edge Computing, powered by the Vapor Chamber, Vapor software and Open19, will offer a tier of compute capacity that is zero hops away from the real action on the ground, and it will be tightly integrated into the existing fabric of the cloud. Vapor Edge Computing is how we will power our autonomous cars, our wireless AR goggles, and the billions of IoT devices with low-latency services and artificial intelligence.
How Open19 Will Drive Vapor Edge Computing
At the base of each tower will be one or more Vapor Chambers. The Vapor Chamber is the world’s first highly-automated and energy-efficient rack and enclosure system built specifically for edge environments. It’s completely self-contained and is twenty-five percent more space efficient than a typical data center rack. Vapor Chambers thrive in harsh conditions, such as in cell tower and rooftop installations, and are optimized for remote “lights out” operation.
Check out these Open19 Resources
- Visit the Open19 web site
- Read the official LinkedIn blog post
- Follow Open19 on Twitter: @open19in
- Search Twitter for the #open19 hashtag
- Watch the Open19 Launch Livestream
- Follow the official VaporIO (@VaporIO) Twitter account and its CEO Cole Crawford (@ColeInTheCloud) on Twitter
- Follow Vapor IO CMO Matt Trifiro (@mtrifiro), a.k.a “Dr Edge,” as he live tweets the Open19 launch
- Read the official Open19 Foundation launch press release
- Read the other four Open19 founding company blog posts:
  - LinkedIn: Taking Open19 from Concept to Industry Standard
  - HP Enterprise: Open19: Data Centers Made Easy
  - Flex: Flex Unveils First Server and Rack System for Open19 Foundation
  - GE Digital: Opening the Power of the Industrial Internet
Building and operating thousands of edge data centers in a Vapor Edge Computing rollout presents a new set of challenges. Open19 will help solve many of these challenges, from repeatable delivery and setup of the data centers to self-driving applications.
Challenge #1: Standardized Truck Roll
There are approximately 100,000 cell phone towers in the US and, depending on the metro density, they can often be miles apart. Building and servicing micro datacenters in thousands of locations requires field technicians in trucks. There’s no way around it. Truck rolls are expensive. These trucks can stock only a limited amount of equipment, and the amount of time spent at each location determines your economics. You need to be able to dispatch a truck, get it to the location, then install or replace rack equipment in minutes—and not just some vanilla server, but often very customer-specific power units, storage equipment, and networking gear for a multitude of co-location tenants. The only way to match the efficiency of a mega data center is to stock these trucks with modular parts that field technicians can install in minutes.
Open19, at its core, is about modular parts. The first technical contribution, which will land later this year, is a modular chassis that was designed and perfected by LinkedIn. These chassis turn IT equipment—servers, networking gear, storage, and so on—into neatly packaged Lego bricks that can snap into standard 19” racks in minutes. A technician can show up for breakfast at one of our Vapor Edge locations, install 150 kilowatts of highly customized customer equipment (multi-core servers, storage appliances, GPUs, TPUs—you name it), and be on to the next site in time for lunch. This is unprecedented, and it was never possible before Open19.
Challenge #2: Lights Out Operations and Sensorization
Large hyperscale companies have comprehensive network operations that are staggering in scale, but they are ill-prepared to manage the edge cloud. While their teams might collectively manage millions of servers, these servers are concentrated within a few dozen stadium-sized buildings, with teams of people, inventories of parts, and make-ready rooms nearby. Some of the buildings are a quarter mile long, and the employees use kick scooters to get around.
Once compute has moved to the edge, where it’s dispersed across thousands of locations as it is with Vapor Edge Computing, network operations need to function in an entirely different way. Instead of 30 to 40 locations to manage, you now have to manage 30,000 to 40,000 locations—maybe as many as 100,000 if you max out the entire US tower footprint. When operating this kind of edge at scale, scooters aren’t fast enough and you’ll never have enough technicians to staff the locations in person.
The modularity of Open19 will make the occasional and unavoidable truck rolls more efficient, but they’re still expensive. The least expensive truck roll is the one you never need.
Fortunately, the vision of the Open19 Foundation encompasses a lot more than modular rack hardware. It also extends to software that helps bring mega datacenter efficiencies to more compact and edge environments. The kinds of software that will work with Open19 will use “smarts, not parts” to remotely mitigate failures and maintain SLAs without people, trucks or scooters.
For example, Vapor IO has been building an open source system, OpenDCRE, which was built for data centers of all types, but is especially pertinent for lights out operations. OpenDCRE connects to all of the devices and sensors on remote IT equipment (servers, power management, networking), as well as environmental sensors (temperature, pressure, humidity, vibration, and so on) and streams out all of the available sensor data via a standard internet RESTful API. Anybody can build tools—open source and commercial—that query against this API for remote “lights out” management of any data center.
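To give a feel for what building against such an API looks like, here is a minimal sketch of a monitoring script that polls temperature sensors over REST. The host name and URL paths below are illustrative assumptions, not OpenDCRE’s published routes, so treat them as a shape rather than a reference.

```python
import requests

# Illustrative base URL; the real OpenDCRE endpoint layout may differ.
OPENDCRE_BASE = "http://edge-site-01.example.com:5000/opendcre/1.3"

def read_temperatures():
    """Poll every temperature sensor the service reports and return readings."""
    # Hypothetical 'scan' route that enumerates racks, boards, and devices.
    inventory = requests.get(f"{OPENDCRE_BASE}/scan", timeout=5).json()

    readings = []
    for rack in inventory.get("racks", []):
        for board in rack.get("boards", []):
            for device in board.get("devices", []):
                if device.get("device_type") != "temperature":
                    continue
                # Hypothetical 'read' route for a single sensor.
                url = (f"{OPENDCRE_BASE}/read/temperature/"
                       f"{rack['rack_id']}/{board['board_id']}/{device['device_id']}")
                reading = requests.get(url, timeout=5).json()
                readings.append((rack["rack_id"], board["board_id"],
                                 device["device_id"], reading.get("temperature_c")))
    return readings

if __name__ == "__main__":
    for rack_id, board_id, device_id, temp_c in read_temperatures():
        print(f"{rack_id}/{board_id}/{device_id}: {temp_c} °C")
```

A real integration would add authentication, retries, and error handling, but the enumerate-then-read pattern stays the same.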
From anywhere in the world, OpenDCRE will give you a physical, as well as a logical, view of your IT equipment and its environment. This makes it possible to know exactly, in near real time, how every piece of equipment is behaving, right down to CPU temperature and fan speed. Moreover, you can also control your equipment remotely with the same API, making your infrastructure fully programmatic. OpenDCRE and Open19 will soon let you strap on a pair of VR Goggles in your living room and “walk around” and control a Vapor Edge data center at the base of a cell tower in a remote city.
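The control side can follow the same pattern. Here is an equally hedged sketch of issuing a remote power command over HTTP; the route and function name are assumptions for illustration, not the documented OpenDCRE interface.

```python
import requests

# Illustrative base URL; see the note above about assumed routes.
OPENDCRE_BASE = "http://edge-site-01.example.com:5000/opendcre/1.3"

def power_cycle(rack_id: str, board_id: str, device_id: str) -> dict:
    """Ask the management service to power-cycle one piece of equipment.

    The '/power/.../cycle' route is a hypothetical example of a remote
    control action exposed over HTTP; a production deployment would also
    wrap calls like this with authentication and an audit log.
    """
    url = f"{OPENDCRE_BASE}/power/{rack_id}/{board_id}/{device_id}/cycle"
    response = requests.post(url, timeout=10)
    response.raise_for_status()
    return response.json()

# Example: remotely bounce a wedged server at an unstaffed edge site.
# power_cycle("rack-1", "board-0a", "server-03")
```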
Vapor IO and its partners have been adapting and optimizing OpenDCRE for Open19 environments and we will be contributing those adaptations, along with some exciting improvements, to the foundation in the near future.
Challenge #3: Self-Driving Applications
In the world of Vapor Edge Computing, latency matters—and reliable latency matters more than anything. Take, for example, autonomous driving. A car going 60 MPH covers 88 feet every second, roughly a third of a football field. In this environment, human lives and property depend on decisions that must be made in microseconds, and the only way to do that is in an edge environment that is highly reliable and responsive.
If there is an environmental change, a hardware failure, or even simple network congestion, there won’t be enough time for a human to detect the situation and intervene. Applications must have autonomy. They must be environmentally aware, capable of ingesting situational data in real time, making lightning-quick assessments, and then acting on those decisions without any human intervention.
Instead of building massive redundancy into the edge, you have to use software and artificial intelligence to deliver high availability. For example, if a cooling system starts to fail and your system detects an inlet temperature of 110 degrees, you can be pretty sure the applications running on those servers are in bad shape; they’re about to burn up. In a self-driving application, the operational system for that app would detect the high inlet temperature, recognize that the servers are going to shut down to avoid a fire, and move those workloads before the catastrophe, maintaining a high SLA with intelligent software. More smarts, not parts.
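As a rough illustration of that idea, the sketch below shows the shape of a control loop that watches inlet temperature and drains a site before a thermal shutdown. The 110-degree threshold is taken from the example above (assumed Fahrenheit), and read_inlet_temperature_f and migrate_workloads are placeholders for whatever sensor API and orchestrator a given stack uses.

```python
import time

# Assumed threshold, in Fahrenheit, matching the example above.
INLET_TEMP_LIMIT_F = 110.0
POLL_INTERVAL_S = 10

def read_inlet_temperature_f(site: str) -> float:
    """Placeholder for a sensor read, e.g. via an OpenDCRE-style REST call."""
    raise NotImplementedError

def migrate_workloads(site: str, destination: str) -> None:
    """Placeholder for whatever mechanism the orchestrator uses to drain a site."""
    raise NotImplementedError

def thermal_watchdog(site: str, failover_site: str) -> None:
    """Evacuate workloads as soon as inlet temperature crosses the limit."""
    while True:
        temp_f = read_inlet_temperature_f(site)
        if temp_f >= INLET_TEMP_LIMIT_F:
            # Act before the servers thermally shut down; no human in the loop.
            migrate_workloads(site, failover_site)
            break
        time.sleep(POLL_INTERVAL_S)
```

A real system would do far more, correlating multiple sensors and predicting failures rather than reacting to them, but even this simple loop captures the trade: software makes the availability decision instead of redundant hardware.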
Much of the work we’re doing with OpenDCRE will open up these possibilities in Open19 environments. Open19 with OpenDCRE will make applications on the edge intelligent, reliable and responsive in real time. We will be working with the Open19 community to provide software and tools that make it possible for applications to understand both the physical view as well as the logical view of the data center. It’s an absolute requirement if we are going to be making automated decisions using real time AI in order to achieve a service level agreement or operational level agreement at the edge.
Community Matters
At Vapor IO, we’re proud to be a founding partner of Open19, but we’re even more proud to be working within a community of experts and collaborators who have a shared goal of bringing to life a set of open technologies and standards that will define the next generation of data center. Join us.