Yadhu Gopalan is the cofounder and CEO of Esper. Esper provides next-gen device management for company-managed hardware.
AI is more than just a buzzword—it's a driving force behind major technological advancements. For businesses, early AI adoption is critical, and proper execution today means success tomorrow. As demand increases, though, cloud deployments introduce issues: latency stifles real-time decision-making, and data throughput plus computational load drive rapidly increasing costs. The solution? Run your AI models where your devices are—on the edge.
Why AI At The Edge Is The Future
Traditional AI is cloud-native: models run, data is processed and results are formulated in the cloud. This works well for data- and resource-heavy AI processing scenarios where latency and cost aren't an issue. Edge AI, however, brings that computation to where the data is gathered—on-location edge devices like smartphones, tablets, kiosks, point-of-sale systems and IoT sensors. There are several compelling advantages to running AI at the edge versus in the cloud:
• Lower Latency: Because data is created and processed in the same location, nothing has to travel to and from the cloud, so decisions happen in real time. This can dramatically reduce latency, which is critical for applications like autonomous vehicles or automated quality assurance. (A minimal on-device inference sketch follows this list.)
• Decreased Costs: This is a twofold issue: bandwidth and computing costs. As data is transmitted to the cloud (and, in some cases, back), bandwidth usage increases. And when you run AI models in the cloud, you essentially rent resources. Cloud service providers understand the value of computing power, so this rental comes at a huge premium. When you run models on the edge, you're using compute power you already own, so you avoid the rental premium and significantly reduce bandwidth costs.
• Network Optimizations: As with the bandwidth costs above, reduced data transmission alleviates the strain on network infrastructure.
• Enhanced Privacy: Transmitting sensitive data always poses at least a small risk, so keeping that data on a single device or restricted to a local network reduces the risk of exposure during transit.
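To make the latency point concrete, here is a minimal sketch of on-device inference using ONNX Runtime, one common edge runtime. It is an illustration rather than a prescription: the model file, input name and tensor shape are hypothetical placeholders.

```python
# Minimal on-device inference sketch. Assumptions (hypothetical): a model file
# "model.onnx" already deployed to the device, with a single float32 input
# named "input" of shape (1, 3, 224, 224).
import numpy as np
import onnxruntime as ort

# Load once at startup; the weights stay in the device's local memory.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

def classify(frame: np.ndarray) -> np.ndarray:
    """Run one inference entirely on the device -- no network round trip."""
    return session.run(None, {"input": frame})[0]

# A dummy frame stands in for a real camera capture.
scores = classify(np.random.rand(1, 3, 224, 224).astype(np.float32))
print("top class:", int(scores.argmax()))
```

Because the entire call happens in local memory, the latency budget depends on the device's compute, not on network conditions.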
For all of the benefits of running AI on the edge, however, operationalizing it can present challenges. The most significant issue comes with AI model deployment. Allow me to explain.
The AI Model Deployment Challenge
Content delivery of all types—files, applications and system updates, for example—is a struggle for many organizations, and AI model deployments only exacerbate this issue. There are several reasons for this.
• Configuration Management: Controlling the environment in which models run at the edge is complex, and you need tooling designed to ensure the application, system and model are configured appropriately. Additionally, having the appropriate runtime for models—and the ability to update that runtime for the hardware—is key. (A minimal compatibility check is sketched after this list.)
• Hardware Diversity: When you have a variety of devices in the field with different computational capabilities and physical locations, AI model deployment at scale is difficult to manage.
• Model Update Frequency: AI models are updated much more frequently than other types of edge content. If updating monthly or even weekly is already a struggle, daily or hourly updates are simply out of the question.
• Limited Resources: Given the hardware constraints of most edge devices (at least relative to cloud processing), developing AI models lean enough for local processing without sacrificing reliability is problematic.
• Reliable Network Infrastructure: Repeatable, scalable software delivery hinges on network reliability, which is challenging for some industries—especially those operating in rural areas.
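As one illustration of the configuration management point above, a deployment pipeline might gate each model push on whether a target device's installed runtime and resources actually satisfy the model's declared needs. Here is a minimal sketch; the manifest and device-profile fields are hypothetical conventions, not an existing API.

```python
# Sketch of a pre-deployment compatibility gate: only ship a model to devices
# whose installed runtime and free memory satisfy the model's declared needs.
# All field names here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class ModelManifest:
    name: str
    min_runtime: tuple    # e.g. (1, 15) means runtime >= 1.15
    min_free_mem_mb: int  # working memory the model needs at inference time

@dataclass
class DeviceProfile:
    device_id: str
    runtime_version: tuple  # runtime actually installed on the device
    free_mem_mb: int

def is_compatible(model: ModelManifest, device: DeviceProfile) -> bool:
    """Return True only if the device can actually run this model."""
    return (device.runtime_version >= model.min_runtime
            and device.free_mem_mb >= model.min_free_mem_mb)

fleet = [
    DeviceProfile("kiosk-001", runtime_version=(1, 17), free_mem_mb=512),
    DeviceProfile("kiosk-002", runtime_version=(1, 12), free_mem_mb=256),
]
model = ModelManifest("defect-detector", min_runtime=(1, 15), min_free_mem_mb=384)

targets = [d.device_id for d in fleet if is_compatible(model, d)]
print("eligible devices:", targets)  # kiosk-002 is skipped: runtime too old
```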
To overcome these challenges, organizations need a comprehensive strategy that encompasses the entire AI life cycle, starting with the devices.
The Path Forward Starts With The Hardware
Just as AI will continue to impact the way devices are used, your strategy around AI model, app and content distribution also has to evolve. Fortunately, a solution already exists in the world of software development: DevOps.
You might be asking yourself what DevOps has to do with device management. DevOps practices are about alignment between development and ops teams, and extending that concept beyond software development to the edge is where the magic happens. With a DevOps philosophy applied to device management, your development and IT teams can work together to build, test, apply and iterate AI models (or any other type of content).
Thanks to the modern tools and technology that forward-thinking device management solutions provide, this isn't a theoretical conversation, either. With distribution pipelines, testing environments and staged software updates (a rollout sketch follows below), AI model distribution can become a non-issue. This frees your development team to work on future updates, your IT team to move with agility and your business to focus on what's important.
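For a sense of what staged updates can look like, here is a sketch of a canary-style rollout loop. The fleet, cohort sizes, soak period and health check are simulated stand-ins for whatever your device management platform actually exposes.

```python
# Sketch of a staged (canary-style) model rollout: deploy to a small cohort,
# verify health, then widen. The fleet and checks below are simulated.
import random
import time

STAGES = [0.01, 0.10, 0.50, 1.00]  # fraction of the fleet updated per stage
FLEET = [f"device-{i:03d}" for i in range(200)]  # simulated fleet

def deploy_to(fraction: float, model_version: str) -> list[str]:
    # Simulated stand-in for a device management API call.
    count = int(len(FLEET) * fraction)
    print(f"deploying {model_version} to {count} devices")
    return FLEET[:count]

def healthy(devices: list[str]) -> bool:
    # Simulated health check; in practice this would inspect crash reports
    # and inference error rates from the updated cohort.
    return random.random() > 0.05

def rollout(model_version: str) -> bool:
    for fraction in STAGES:
        cohort = deploy_to(fraction, model_version)
        time.sleep(1)  # stand-in for a real soak period between stages
        if not healthy(cohort):
            print(f"halting rollout at {fraction:.0%} of the fleet")
            return False  # earlier stages can then be rolled back
    return True  # full fleet updated

rollout("defect-detector-2.4.0")
```

The design point is that each stage acts as a checkpoint: a bad model is caught while it is on 1% of the fleet, not 100%.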