What Are Edge Technologies? Diminishing Cloud Computing’s Latency

Meet the Expert: Abhishek Chandra, PhD

Dr. Abhishek Chandra is a professor in the Department of Computer Science and Engineering at the University of Minnesota. He co-leads the Distributed Computing Systems Group Lab at the University. He received his MS and PhD in computer science from the University of Massachusetts, Amherst.

Dr. Chandra’s research interests are in the areas of operating systems and distributed systems. His research focuses on resource management and performance in large-scale distributed systems to achieve reliability, scalability, and manageability. His recent focus has been on designing and optimizing systems for data-intensive computing applications in cloud and edge computing platforms, especially for data generated near end-users (e.g., mobile and IoT data).

Dr. Chandra is a lifetime member of the Association for Computing Machinery (ACM) and a member of the IEEE. He is a recipient of the National Science Foundation’s CAREER Award, the IBM Faculty Award, the Cisco Research Award, and several best paper awards and nominations. He currently serves on the steering committees of the IEEE IC2E (as co-chair) and ACM HPDC conferences. He has served as an associate editor for the IEEE Transactions on Cloud Computing and IEEE Transactions on Computers, as general co-chair of ACM HPDC 2021, and as program co-chair of IEEE ICDCS 2021, ACM HPDC 2018, and IC2E 2018. In addition, he has served on the organizing and program committees of several conferences in his research area.

The Evolution of Edge Computing

The seeds of edge computing date back to the early days of the internet. The questions it seeks to answer may sound simple: where is data stored, and where is it processed? But as internet and digital services have grown exponentially, creating a torrent of data and a ravenous appetite for both content and computation, the answers to those simple questions have changed.

“In the 1990s, content distribution networks (CDNs) came into the picture,” Dr. Chandra says. “With CDNs, news sites like CNN could have servers in several different locations that all served content very quickly to users whenever they were trying to access it. This whole idea of pushing data to the edge really started with CDNs.”

Today, many companies still use some version of a CDN: Netflix and YouTube, for example, rely on geographically distributed CDNs for smooth streaming delivery. And while CDNs primarily focus on pushing data to the edge, not computation, they laid the conceptual and technical frameworks for what would evolve into edge computing.
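
To make the CDN idea concrete, here is a minimal sketch of an edge cache: a request is served from a nearby cache when possible and only falls back to the distant origin server on a miss. The latency figures and content paths below are illustrative assumptions, not measurements from any real CDN.

```python
import time

# Hypothetical latencies (seconds): the edge cache sits close to the user,
# the origin server is far away. Figures are illustrative only.
EDGE_LOOKUP_DELAY = 0.005
ORIGIN_FETCH_DELAY = 0.120

edge_cache = {}  # content already replicated near the user
origin_store = {"/news/top-story": "<html>...today's headline...</html>"}

def fetch(path: str) -> str:
    """Serve content from the nearby edge cache if possible,
    otherwise fetch it from the distant origin and cache it locally."""
    if path in edge_cache:
        time.sleep(EDGE_LOOKUP_DELAY)   # short hop to the edge server
        return edge_cache[path]
    time.sleep(ORIGIN_FETCH_DELAY)      # long hop to the origin
    content = origin_store[path]
    edge_cache[path] = content          # later requests hit the edge
    return content

# The first request pays the origin round trip; repeat requests are local.
for i in range(2):
    start = time.perf_counter()
    fetch("/news/top-story")
    print(f"request {i + 1}: {time.perf_counter() - start:.3f}s")
```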

In the early 2000s, with the proliferation of cellphones and Internet-of-Things (IoT) devices, the focus shifted to cloud computing, which pulled data back into centralized data centers for storage and computation. Centralization has its benefits, but it also comes with drawbacks: cloud computing struggles with latency and bandwidth constraints.

Edge computing can offer an attractive alternative. Just as caching content closer to the end user reduces latency and improves user experience, so does performing computation closer to the end user.

“We’re seeing computation come back towards the user,” Dr. Chandra says. “The idea is that you can start deploying some computational and storage resources right where the user is or right where the data is.”
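
A back-of-the-envelope model helps show why proximity matters. The sketch below compares the total response time of a request handled in a distant cloud region versus on a nearby edge node; the round-trip and compute times are assumed figures for illustration, not benchmarks.

```python
# Toy response-time model: total = network round trip + compute time.
# All figures are illustrative assumptions, not measurements.

def response_time_ms(rtt_ms: float, compute_ms: float) -> float:
    """Total time the user waits for one request, in milliseconds."""
    return rtt_ms + compute_ms

# A distant cloud data center: long round trip, fast servers.
cloud = response_time_ms(rtt_ms=80.0, compute_ms=10.0)

# A nearby edge node: short round trip, somewhat slower hardware.
edge = response_time_ms(rtt_ms=5.0, compute_ms=25.0)

print(f"cloud: {cloud:.0f} ms, edge: {edge:.0f} ms")
# Even with weaker hardware, the edge node wins here because the network
# round trip dominates for latency-sensitive, interactive requests.
```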

The Challenges of Edge Computing

Edge computing comes with several benefits. It can reduce latency, save bandwidth, improve security, and offer added reliability and resilience. It unlocks applications that cloud computing can’t: consider the data-hungry, computation-heavy self-driving car, where low latency isn’t just a benefit but a necessity.

But edge computing also faces significant challenges. As data sources continue to proliferate, from cameras to phones to cars to IoT devices, that data needs to be aggregated in order to be operationalized. Training machine learning algorithms and large language models (LLMs) requires vast amounts of data and enormous computational power. Those challenges seem to necessitate some use of centralized warehousing and computation.

“Data volume and velocity is a big challenge in edge computing,” Dr. Chandra says. “Another challenge is the issue of privacy. How is data being stored, processed, transmitted? We have to ensure that privacy is maintained.”

Privacy issues are a central part of many data discussions, especially as more health data is stored and transmitted via wearables. Government regulations, particularly those in the EU, may require that certain user protections be in place. Privacy and efficiency are often at odds, especially in enterprise deployments, so applications using edge technologies need to strike a careful balance.
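
One common way edge applications try to strike that balance is to process raw readings on or near the device and send only a coarse summary upstream, so less raw (and potentially sensitive) data ever leaves the edge. The sketch below illustrates the idea with hypothetical wearable heart-rate samples; the summarization scheme is an assumption for illustration, not a prescribed design.

```python
from statistics import mean

# Hypothetical raw heart-rate samples collected by a wearable over a minute.
raw_samples = [72, 74, 71, 90, 95, 93, 76, 75]

def summarize_on_device(samples: list[int]) -> dict:
    """Reduce raw readings to a small, less sensitive summary at the edge."""
    return {
        "count": len(samples),
        "mean_bpm": round(mean(samples), 1),
        "max_bpm": max(samples),
    }

summary = summarize_on_device(raw_samples)

# Only the summary crosses the network: a handful of numbers instead of the
# full raw stream, which saves bandwidth and limits exposure of raw data.
print("sent upstream:", summary)
```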

Environmental issues are also a concern. Centralized data centers already have a significant environmental footprint, and bringing more computational power closer to the user could mean deploying many smaller, less energy-efficient systems. While the initial expectation might have been that edge systems could help reduce some of the environmental impact of data centers, it is now expected that edge systems will grow alongside existing cloud infrastructure rather than replace it.

“The question is now, what is the environmental impact of all this computing and storage we’re putting all around us?” Dr. Chandra says. “We have to think about that.”

The Future of Edge Computing

The future of edge computing is closely tied to the future of technology as a whole. Especially as more IoT devices are deployed, edge computing will continue to make applications more efficient and effective. Dr. Chandra is particularly optimistic about integrating VR and AR devices with edge computing, which can help reduce their latency and improve users’ privacy.

Edge technology will also be used to enable entirely new applications—ones not yet fully developed. Thanks to the proliferation of wearable tech, responding to healthcare events could become quicker and more precise. And, even in areas with less technological density, new edge deployments can benefit manufacturing, farming, and the connectivity of entire communities.

“In many developing countries, in remote and rural areas, a lot of people now have phones, but they don’t have great connectivity,” Dr. Chandra says. “If you created some very small edge infrastructure there, just small edge servers or clusters, it could go a long way to serving a large community without relying on big companies.”

The future of computing is hybridized and multi-lane. In addition to edge computing, some applications and data will still need to be centralized and stored in the cloud. And some applications may harness what’s known as fog computing, which distributes computation across the continuum between the cloud and the end devices themselves. Today, transitions between those silos are still bumpy and sometimes inefficient, but the future should see them become smoother.

“Edge is going to continue evolving, and I think part of it may also become more specialized based on the application you’re trying to run,” Dr. Chandra says. “But eventually, there will be a merger of this whole continuum, with applications spanning both edge and cloud. We probably won’t even talk about cloud computing separately from edge computing. They’ll be working in conjunction with one another.”
