"Edge" is a term as vague as "cloud"---really, it's all about moving data processing back on device (or at least physically closer to the end user) to benefit from lower latency connections, all while maintaining central control.

What is Edge Computing?

Edge computing is a new paradigm that shifts processing closer to users. Currently, we're firmly in the cloud computing era, where applications live on servers in a centralized datacenter. We still have personal computers, but they're really all just running Chrome to connect to the cloud and access services like Gmail, Office 365, Slack, Dropbox, and countless others.

And while running SaaS platforms is very profitable, there's one major issue: latency. Nothing can move faster than light, and even with direct fiber optic cables, there's still additional processing on top of the raw travel time to connect your computer to the cloud. On the user's end, this delay is perceived as lag, which can make web applications feel sluggish compared to their native counterparts.
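
To put rough numbers on that, here's a back-of-the-envelope sketch in TypeScript. The distances and the speed of light in fiber are approximations for illustration, not measured values.

    // Rough one-way propagation delay over fiber, ignoring routing,
    // processing, and TCP/TLS handshakes (which all add more delay).
    // Light in fiber covers roughly 200,000 km/s, or 200 km per millisecond.
    const FIBER_KM_PER_MS = 200;

    function propagationDelayMs(distanceKm: number): number {
      return distanceKm / FIBER_KM_PER_MS;
    }

    // Approximate distances, purely illustrative:
    console.log(propagationDelayMs(16_000)); // New York to Sydney: ~80 ms one way, ~160 ms round trip
    console.log(propagationDelayMs(50));     // nearby edge server: ~0.25 ms one way

Even in this idealized case, every round trip to a distant datacenter costs user-visible time, and real-world overhead only makes it worse.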

On top of the delay, it costs money to use those pipes. Sending tons of data to be processed server-side is wasteful if that processing could happen on the device instead. And while data collection in itself is very profitable, it pays to be selective about what you actually send.

For example, Internet-of-Things devices generate a lot of data. Wi-Fi-enabled security cameras in particular can use up a ton of bandwidth; streaming all of it to the cloud is expensive, to say nothing of storing it long term. Instead, your camera may be smart enough to only transmit footage when it detects motion, saving the cost of sending hours of identical footage every day.
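
As a simplified sketch of that gating logic in TypeScript: the frame format, the thresholds, and the uploadClip function are all hypothetical stand-ins for a real camera's pipeline, which would use far more robust motion detection.

    // Compare two grayscale frames pixel by pixel and only "upload"
    // when enough of the image has changed between them.
    function motionDetected(prev: Uint8Array, curr: Uint8Array): boolean {
      const PIXEL_DELTA = 25;    // how much a pixel must change to count as changed
      const MOTION_RATIO = 0.02; // fraction of changed pixels that counts as motion
      let changed = 0;
      for (let i = 0; i < curr.length; i++) {
        if (Math.abs(curr[i] - prev[i]) > PIXEL_DELTA) changed++;
      }
      return changed / curr.length > MOTION_RATIO;
    }

    // Hypothetical upload hook; a real device would send video to the cloud here.
    function uploadClip(frame: Uint8Array): void {
      console.log(`Motion detected, uploading ${frame.length} bytes`);
    }

    function onNewFrame(prev: Uint8Array, curr: Uint8Array): void {
      if (motionDetected(prev, curr)) {
        uploadClip(curr); // bandwidth is only spent when something happened
      }
    }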

This is what edge computing is about: the computing is moved as close as possible to the user. Whether that means on-device processing or edge servers like CDN nodes, the edge computing model reduces the strain on central cloud servers.

But you'll still need those central servers: the application provider is still doing the processing, and it's all still interconnected. Before cloud computing, you'd download an application like Excel or GarageBand and use it for whatever you needed; now, services are sold as evergreen subscriptions with regular updates, with most of the processing happening in the cloud. Edge computing moves the processing back to your device to save some money and fix a few of the problems the cloud brings along. But central control is maintained; the service provider is just using your device to run their code. In a way, your computer is just part of their "cloud" now.

What Are The Benefits Of Edge Computing?

Edge computing reduces bandwidth use and improves the latency of applications that make regular connections to cloud servers. If you can cut down on those round trips by moving processing onto the device, you'll save some money on your monthly AWS bill.

Money isn't the only factor, though: many services see significant speed improvements, and some wouldn't be possible at all without on-device processing. Technologies like self-driving cars simply can't get a connection reliable enough to depend on cloud processing, and voice assistants would be significantly slower without on-device processing. Making products like these snappy and responsive is essential.

Edge computing also has privacy implications. After all, if your data is being processed on the device, there's less for the provider to collect about you. For example, Face ID and Apple's other security features are processed on device with the help of coprocessors like the Neural Engine, which means that high-res scan of your face never has to travel any further into Apple's network. That doesn't mean you should trust edge computing completely, but it's a step in the right direction.

Should You Use Edge Computing?

If you're developing applications, it's generally a good idea to process as much as you can on the device rather than in the cloud, since it can save you money and speed up your app. And if you're working in a field involving IoT devices, edge computing may be a necessity for your network's infrastructure. After all, if the device is smart, why not put it to work?

The term "Edge" is also used to refer to services that run at the edge of the cloud, usually with the help of a content delivery network like Cloudflare. The nature of CDN requires many servers to be located all across the world, which makes their network a perfect candidate for running an edge computing platform without having to process on device. This is exactly what they offer with Cloudflare Workers. AWS has their own CDN, which is used as the backbone of Lambda@Edge.

These services let you make use of a distributed computing platform while still keeping your servers under your control in the cloud. Rather than running a monolithic, always-online server, you instead run small "microservices" that process data without the need for a central server. This Function-as-a-Service (FaaS) model used by Lambda and Cloudflare Workers pairs perfectly with networks that have many points of presence. If your application benefits from being broken down into microservices, you can run them at the edge to reduce latency and the strain on any single server. This model is sometimes called "fog computing."

If you don't require on-device processing, your website can still benefit from a CDN: a service similar to edge computing that serves your website from multiple points of presence around the globe, lowering latency for end users. Origin-pull CDNs periodically cache your website and serve future requests from the edge cache rather than bothering the origin web server. This can dramatically speed up load times, especially on optimized sites where every millisecond counts.
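
As an example of how an origin signals this, the standard Cache-Control header tells an origin-pull CDN how long edge caches may keep a copy. This Node.js sketch and its one-hour lifetime are illustrative, not a recommendation.

    import { createServer } from "node:http";

    // A bare-bones origin server. An origin-pull CDN in front of it can
    // honor Cache-Control and answer repeat requests from its edge caches
    // for up to an hour before coming back here.
    const server = createServer((req, res) => {
      res.setHeader("Cache-Control", "public, max-age=3600");
      res.setHeader("Content-Type", "text/plain; charset=utf-8");
      res.end("Hello from the origin (or an edge cache near you)");
    });

    server.listen(8080, () => console.log("Origin listening on :8080"));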