Microsoft is doing it. Google is doing it. And so are IBM and Amazon. They are all introducing technologies, devices and platforms that allow you and me to implement our own artificial intelligence. They do this by embedding machine learning engines in everyday devices ‒ many of which are in our pockets right now. This is the “edge.”
Machine learning (ML) ‒ the capability of machines to learn from data and react the way humans do ‒ is not some abstract process that occurs only in rooms with rows upon rows of servers. Building strong artificial intelligence at the edge is the future of this field.
What Is the Edge?
The edge includes a wide array of system endpoints, including actuators, sensors and mobile devices that communicate real-time data with enterprises, as well as everyday smart products and services that you and I use.
By 2020, projections put the number of “things” connected to the IoT anywhere from 25 to 50 billion ‒ at the high end, roughly seven connected “things” for every person on this planet. These billions of connected things will generate far more data than can possibly be processed and analyzed in the cloud alone, due to constraints such as limited bandwidth and network latency.
How Does the Edge Work?
When we hear the terms artificial intelligence and machine learning, we often think of self-driving cars, facial recognition, natural language processing and robotic control. But AI simply refers to intelligence exhibited by machines ‒ and that intelligence no longer needs to reside in a data center alone.
The advent of 5G gives us the opportunity to deploy virtualized compute workloads almost anywhere, from mobile devices to stationary sensor arrays. Most of these devices now run on processing units capable of handling the data they collect.
Why Do We Need the Edge?
While consumers generate more and more terabytes of data each day, companies are learning that they need to minimize network latency. This can be accomplished by pushing machine learning and artificial intelligence to the edge.
In addition, since AI is becoming a more crucial part of our daily activities, independent software vendors (ISVs) especially need the ability to update their services daily ‒ if not several times a day. This effectively brings artificial intelligence and machine learning closer to the consumer by allowing real-time improvements and updates at the most remote parts of the web… or the edge.
Large cloud providers have seen the edge effect and are aligning themselves with wireless carriers across the globe. Just over two years ago, Alibaba and Tencent invested in Chinese wireless telecom carriers precisely to address this push to the edge, where the consumer lives. Other companies, such as Apple, Amazon and Facebook, are spending billions of dollars to advance their machine learning technologies.
How the Edge Benefits AI and ML
In the past, if a company needed machine learning capabilities, it deployed them on servers in a standard data center: to detect a pattern, data had to travel from the end-point device to the cloud and back before a result could be generated. Today, that pattern recognition can run on the device itself, using a comparison mechanism called an inference engine.
An inference engine is the component an AI system uses to output its signal of recognition ‒ or not. Today, you cannot have artificial intelligence without first running machine learning. Consider the toddler and the hot stove: she touches it once, her pain receptors send the information to her brain, and it is stored there alongside the visual cues ‒ the red burner, the temperature difference, the stove itself ‒ and the feeling of pain. She has now programmed her own “inference engine.”
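The toddler’s “learn once, infer forever” loop can be sketched in a few lines: a model whose weights were produced by an offline training phase, answering hot-or-not entirely on the device with no cloud round trip. The feature names and weight values below are purely illustrative, not taken from any real product.

```python
import math

# Weights an offline "training" phase would have produced (illustrative
# values only) -- the end-point device stores nothing but these numbers.
WEIGHTS = {"surface_is_red": 2.5, "temp_above_ambient_c": 0.08}
BIAS = -4.0

def infer(features: dict) -> bool:
    """Toy inference engine: a logistic model that runs entirely on-device."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    probability = 1.0 / (1.0 + math.exp(-z))
    return probability > 0.5  # True means "hot -- don't touch"

print(infer({"surface_is_red": 1.0, "temp_above_ambient_c": 150.0}))  # True
print(infer({"surface_is_red": 0.0, "temp_above_ambient_c": 2.0}))    # False
```

The point of the sketch is the division of labor: training (the expensive part) happens elsewhere, while inference is just cheap arithmetic the edge device can repeat millions of times.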
By putting data intelligence on an end-point device, your customer can enjoy the following benefits:
- Minimal latency is achieved, because AI and ML functions aren’t dependent on an internet connection.
- Data communication can be prioritized, filtered or summarized based on communication constraints.
- Data privacy and security can be enhanced by keeping sensitive data on the end-point device.
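The “filtered or summarized” benefit above can be sketched as follows: instead of streaming every raw reading to the cloud, the device ships a compact summary plus only the readings that cross an alert threshold. The field names and threshold are illustrative assumptions, not a real protocol.

```python
def summarize_readings(readings, alert_threshold=80.0):
    """Edge-side pre-processing: build a compact payload to send upstream
    instead of transmitting the full raw stream."""
    anomalies = [r for r in readings if r > alert_threshold]
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
        "anomalies": anomalies,  # only these raw values travel in full
    }

payload = summarize_readings([21.5, 22.0, 95.3, 21.8])
print(payload["count"], payload["anomalies"])  # 4 [95.3]
```

Four readings collapse into one small payload, and only the anomalous value is sent verbatim ‒ exactly the bandwidth and latency savings the list above describes.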
Arrow and the Edge
We here at Arrow work with all the major hardware manufacturers and cloud service providers to bring our partners the best set of end-to-end AI and ML solutions available today. We even have NVIDIA and HPE edge devices in our Solutions Lab, where you can provide your customers with proofs of concept, demonstrations of smart city capabilities and more.
If you are ready to help your customers live on the edge, contact us today!
TJ Kilgore | email@example.com
Cloud Solution Architect
Arrow Electronics, Inc.
Last modified: September 27, 2019