Article

Edge computing: What it is and its impact on organizations

November 17, 2021

“Edge computing” has multiple meanings, depending on who you ask. For some, “edge computing” means running code on Internet of Things (IoT) devices. Others associate it with environments provided by companies like Fastly, which offer Content Delivery Network (CDN)-style capabilities for code rather than just static files.

Organizations choose edge computing for many reasons, including optimizing user experience, improving response time, maintaining privacy or minimizing the monetary cost or power consumption of the computations involved.

The main benefit that edge computing provides is the ability to deploy software where it needs to be. “Needs to be,” however, depends upon context. 

 

Reducing latency

If the goal is a better user experience when interacting with a service, reducing the latency (delay) of a round trip to the cloud is a necessity. For example, financial traders need market information in real time to make the best decisions. Another example: the teenager in your basement playing video games may be unhappy with a “laggy” experience. The best games have real-time responsiveness.

By moving computation closer to where users are, edge computing lets data travel from one location to another with minimal delay.

 

IP concerns

If the software has intellectual property concerns around it (you don’t want to run it in the user’s browser, where they could inspect the code), then edge computing lets you get closer to the user without putting the code on the device itself.

 

AI inference capabilities

If you do need to be on a device at a particular location, combining 5G with edge devices such as Microsoft’s Azure Sphere or Nvidia’s Jetson Nano pushes AI inference capabilities out to where the sensors are. Hardware acceleration on these devices (GPUs, TPUs, FPGAs, ASICs, etc.) keeps inference fast, and the data never has to be extracted from the device.
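
As a rough illustration of what this looks like in practice, here is a minimal sketch of on-device inference using ONNX Runtime in Python, one common option on hardware like the Jetson Nano. The model file (“anomaly.onnx”) and the input shape are hypothetical placeholders rather than anything from this article; the point is simply that the raw sensor data stays local, and only a small result would ever leave the device.

    # Minimal on-device inference sketch. The model file and input shape
    # are hypothetical placeholders; the raw reading never leaves the device.
    import numpy as np
    import onnxruntime as ort

    # Load a model stored locally on the edge device.
    session = ort.InferenceSession("anomaly.onnx")
    input_name = session.get_inputs()[0].name

    # Stand-in for a sensor reading captured on the device.
    reading = np.random.rand(1, 16).astype(np.float32)

    # Inference runs locally; only the small result (a score) might be reported upstream.
    outputs = session.run(None, {input_name: reading})
    print("anomaly score:", outputs[0])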



Edge computing and 5G, AI and the cloud 

5G will provide network connectivity everywhere, enabling devices to get online in environments that might not otherwise have quality access.

Imagine, for example, a phone app that detects the onset of dementia or evidence of a stroke. A cloud-based AI would require transferring data off users’ devices. In contrast, edge computing allows AI systems to leave potentially sensitive data on the device while still detecting medical conditions. Being able to leverage AI on mobile and edge devices can help reduce the regulatory burden.

However, before organizations can utilize this technology, they will need agile architectures that can “float” between environments. Tools like LLVM, WebAssembly and WASI will facilitate this, as will Docker, Kubernetes (K8s) and similar platforms.
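
To make the idea of architectures that “float” between environments more concrete, here is a minimal sketch assuming the wasmtime Python bindings are installed (pip install wasmtime). The tiny module below is written by hand, but it could just as well have been produced by an LLVM-based toolchain from C or Rust; the same module can run unchanged in the cloud, on a CDN edge node or on a device, wherever a WebAssembly runtime is available.

    # Minimal portability sketch: load and run a WebAssembly module with wasmtime.
    from wasmtime import Store, Module, Instance

    store = Store()

    # A tiny hand-written module exporting an `add` function. In practice this
    # would be a .wasm file compiled from C, Rust, etc. via LLVM.
    module = Module(store.engine, """
      (module
        (func (export "add") (param i32 i32) (result i32)
          local.get 0
          local.get 1
          i32.add))
    """)

    instance = Instance(store, module, [])
    add = instance.exports(store)["add"]
    print(add(store, 2, 3))  # -> 5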

There is a certain amount of “you-must-be-this-tall-to-ride” before organizations can leverage edge computing. An organization that hasn’t already automated its deployments, broken up its monoliths and modernized its code will have a harder time realizing the benefits of edge computing.

But if they have, they will be able to locate computation where it needs to run for privacy, user experience, intellectual property protection or low-cost/low-power reasons.

After all, edge computing is about bringing computation to users in low-latency, geographically distributed ways.

 

Submitted by:

Brian Sletten, President of Bosatsu Consulting and Senior Instructor of Edge Computing at DevelopIntelligence

https://www.developintelligence.com/