I was having a great discussion with another Gartner analyst, Chris Laske, about some new research he’s working on with Mrudula Bangera on the growing challenge for ITSM in on-prem, multicloud and edge environments. This analogy came up, and it’s too good not to post.
Where does edge computing take place? Is it at the edge of the cloud, a regional data center, the edge of the network, in a micro data center at the top of the building, in a data center in a closet, in an edge server, a gateway, a smart device? What's part of IT service delivery in an edge computing world?
It’s turtles all the way down.
IoT began as connected, digitized devices. They didn't need a lot of smarts – just a connection to the Internet, so you could get the data to the cloud, or control them from the cloud or data center. And you didn't need "edge computing." Life was simple, but not quite transformed, yet.
What's driving edge computing isn't 5G, or some new "edge computing" technology. It's the growth of those devices, and the use cases that spring up from devices that can share data digitally, and can be controlled digitally. Some of that is between an endpoint and the cloud, but much, much more is about digital interactions between things and each other, or things and people, at the edge. Latency is not simply about the lag time from an endpoint to the cloud and back – it's about systems of things and people at the edge, interacting digitally. Systems multiply the interactions, and magnify the latency. Digital interactions and digital systems have gravity that pulls compute to the edge.
Where do you draw the line for service management when it’s turtles all the way down?
Life was simple when endpoints were simply PCs, then PCs and laptops, but then we added smartphones, and with IoT, we add everything else. Where do you draw the line for unified endpoint management when everything is a digital endpoint?
Likewise, more digital data flowing at the edge means more and more noise, and more and more local, ephemeral data. Some of the data has training value. Much of it simply needs to be filtered, or turned into metadata. Data at the edge is all about sifting sand for gold effectively, destroying the sand as quickly and efficiently as possible, and passing on to higher levels of the hierarchy only the data that needs to be processed elsewhere. Data at the edge has gravity, and that also pulls compute to the edge. But data will exist…everywhere. Where do you draw the line on data management when data isn't in the data center, but everywhere?
And compute is growing at all layers. Endpoint devices – things – are highly diverse, with different capabilities and requirements, producing or asking for very distinctive types of data and commands. As computing gets cheaper, these endpoints become part of the edge computing architecture, adding their own processing to the topology. Edge computing on location isn't just the ruggedized edge server or gateway, it's the cameras, and light bulbs, and industrial drives. Where do you draw the line on application management when applications and parts of applications are everywhere?
It’s turtles all the way down. And that’s why it’s hard.
As I discussed in The Stubborn Immaturity of Edge Computing, the variety of turtles at the edge is tremendous. So while there are turtles everywhere, at all layers, and they are growing like…um, turtles, edge computing continues to be an exercise in customization and first-of-a-kinds, with a lot of vendors and products vying to dominate one layer of turtles – leaving it to enterprises and systems integrators to get all the turtles to balance on each other.
This doesn't mean you wait until the perfect turtle-balancing tools and frameworks are ready. Many enterprises can't afford not to capture these business moments, or can't afford not to improve automation or quality control. For a while, we are all going to have to pick and choose the use cases, and work on our turtle-balancing skills.
Source: Gartner Hybrid Cloud