Dell Partners with Hugging Face & Meta to Ease Enterprise AI Pain

Generative AI (GenAI) sits at the center of the next wave of digital and data transformation. Enterprises increasingly want to harness generative AI, a shift that often requires training models on proprietary data. This raises natural concerns about data sovereignty, security, and cost.

In response, on-premises deployments using open-source large language models such as Llama 2 are gaining popularity. They offer predictable costs, control over data, and reduced risk of security breaches and IP leakage, while helping ensure regulatory compliance. Dell is partnering with Meta to give enterprises better support for Llama 2.

At the same time, the transition from AI proof-of-concept to production can be complex and challenging for many organizations. This is where Dell’s recently announced collaboration with Hugging Face comes into play.

Dell and Hugging Face are working together to simplify this process by providing a platform where businesses can easily select, deploy, and fine-tune AI models for their specific use cases using Dell’s infrastructure.

Hugging Face Collaboration
Dell is working with Hugging Face to simplify creating, fine-tuning, and deploying open-source generative AI models for enterprises. The partnership combines Dell’s leading infrastructure products and services with the resources of the Hugging Face community.

A critical component of this collaboration is a new Dell portal on the Hugging Face platform. This portal will allow Dell customers to easily select from a library of open-source AI models optimized for performance and specific use cases.

The portal features custom containers and scripts for deploying these models on Dell servers and data storage systems. The portal also offers access to datasets, libraries, and tutorials for training generative AI models, along with templates and models for specific outcomes. Enterprises can integrate these with their proprietary data and fine-tune them to their needs.
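
To make that concrete, here is a minimal sketch, assuming the standard open-source Hugging Face client libraries (huggingface_hub and datasets), of how a team might stage an open model and a public dataset locally before packaging them for on-premises deployment. The model and dataset names are illustrative placeholders, not specific contents of the Dell portal.

```python
# A minimal sketch of staging portal-style assets locally (illustrative names,
# not specific Dell portal contents).
from huggingface_hub import snapshot_download
from datasets import load_dataset

# Download the model weights once; the local path can then be baked into a
# container image or staged on local storage for an air-gapped deployment.
# (Llama 2 repos are gated; Meta's license must be accepted on the Hub first.)
model_path = snapshot_download("meta-llama/Llama-2-7b-chat-hf")

# Pull a public instruction dataset to blend with proprietary enterprise data.
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

print(f"Model staged at {model_path}; {len(dataset)} training examples loaded.")
```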

Customizing a model to enterprise data can be complex and time-consuming, whether through retrieval-augmented generation (RAG) or fine-tuning techniques such as LoRA and QLoRA. Dell aims to simplify this work with containerized tools.
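
For illustration, the sketch below shows what a QLoRA-style fine-tuning run looks like using the open-source Hugging Face transformers, peft, and bitsandbytes libraries. The base model, dataset file, and hyperparameters are assumptions made for the example, not a recipe published by Dell or Hugging Face.

```python
# A minimal QLoRA fine-tuning sketch; names and hyperparameters are placeholders.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder base model

# Load the frozen base model in 4-bit precision (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                                bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto")
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters; only these weights are updated.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

# Tokenize a hypothetical proprietary text dataset (one JSON record per line).
dataset = load_dataset("json", data_files="enterprise_docs.jsonl", split="train")
dataset = dataset.map(lambda rows: tokenizer(rows["text"], truncation=True,
                                             max_length=512), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-qlora", num_train_epochs=1,
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           learning_rate=2e-4, bf16=True, logging_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama2-qlora-adapter")  # saves only the small adapter
```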

Customers will initially have access to a range of Dell PowerEdge servers designed for AI workloads. Support for Dell workstations and Dell’s APEX service will be coming soon.

This collaboration represents a significant step in making open-source AI more accessible and secure for enterprise-level applications, combining the strengths of Dell’s infrastructure and Hugging Face’s AI resources.

Meta’s Llama in Dell’s Validated Design for GenAI
Dell also recently unveiled its Dell Validated Design for Generative AI, providing pre-tested hardware and software specifically for GenAI projects. Dell has since extended that validated design through a collaboration with Meta, making it easy to deploy Meta’s Llama 2 AI models on-premises on Dell infrastructure.

The Dell Validated Design for Generative AI with Llama 2 offers a pre-tested and proven infrastructure, streamlining the deployment and management of on-premises GenAI projects. Llama 2, free for research and commercial use, is tested and verified on this Dell design, offering guidance for deployment and configuration. This enables quick setup and predictable operation of Llama 2, especially for fine-tuning on Dell platforms.

The Dell PowerEdge XE9680 server, ideal for deploying Llama 2, is notable for being the first to ship with eight NVIDIA H100 GPUs and NVIDIA AI software. Llama 2 can also run on other Dell infrastructure, such as the PowerEdge R760xa.

Dell has also explored model customization for GenAI, demonstrating how customization techniques apply to Llama 2 models. This includes deploying the Llama 2 70B model on a PowerEdge XE9680 with Ubuntu 22.04 and NVIDIA GPUs, resulting in efficient and precise AI outputs.
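
As a rough illustration of what that kind of deployment looks like in code, the sketch below loads a large Llama 2 checkpoint across all visible GPUs using the Hugging Face transformers and accelerate libraries; the checkpoint name and prompt are placeholders, and this is not Dell’s published configuration.

```python
# A minimal multi-GPU inference sketch for a large Llama 2 checkpoint
# (placeholder names; not Dell's published configuration).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-70b-chat-hf"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" shards the model's layers across every visible GPU,
# which is how a 70B-parameter model fits on an eight-GPU server.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Summarize our Q3 infrastructure capacity plan in three bullet points."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```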

Additionally, Dell integrates Llama 2 models into its internal tools to help customers select the right solutions for their AI needs. This comprehensive approach provides organizations with reliable GenAI solutions, from desktops to data centers and clouds, ensuring effective deployment and management of on-premises GenAI projects.

Analyst’s Take
The relationship with Meta is an important one. Meta’s vice president of AI partnerships, Sy Choudhury, was at a recent Dell AI-focused analyst event in Austin. When asked what was in the relationship for Meta, he replied, “We need to ensure that there’s an open path to generative AI.”

This mirrors how Meta uses its heavy participation in the Open Compute Project (in which Dell also participates) to foster an open ecosystem and smooth over bumps in the supply chain for its infrastructure.

The Meta relationship also makes sense for Dell. Nearly every other provider of LLMs competes with Dell in some way. Meta’s partner-friendly approach will ensure an open ecosystem around large language models. Dell’s Hugging Face relationship follows the same path.

Dell’s competitors each take a different approach. Hewlett Packard Enterprise, with its relentless focus on GreenLake as-a-service offerings, delivers an LLM-as-a-Service model with GreenLake for LLMs. Lenovo is building full-stack AI reference designs around its ThinkSystem offerings, but the company has, thus far, gone it alone, with no announced GenAI model partnerships (apart from its relationship with NVIDIA, which nearly every OEM maintains).

Generative AI is already transforming enterprise IT, but it’s a complex problem. Having a technology partner like Dell provide not just tuned and validated hardware solution stacks but also the right set of foundation models and optimized tooling eases the pain of GenAI adoption for businesses of all sizes.

Most of today’s Generative AI efforts occur in the cloud. That makes sense, given the early, experimental stage of many GenAI projects and the scarcity and expense of the GPUs and accelerators required for training and inference. As Generative AI matures, many of these workloads will naturally move on-premises. That is the now-normal application lifecycle of the modern cloud era, and it is where Dell is ready.

Dell’s collaborations with Meta and Hugging Face, along with its recent organizational changes to support enterprise AI, tell us that Dell is serious about helping its customers quickly and painlessly embrace generative AI. Everything Dell is doing around GenAI will immediately benefit Dell’s customers, and anything that simplifies life for an IT practitioner is goodness.

Disclosure: Steve McDowell is an industry analyst, and NAND Research an industry analyst firm, that engages in, or has engaged in, research, analysis, and advisory services with many technology companies, which may include those mentioned in this article. Mr. McDowell does not hold any equity positions with any company mentioned in this article.

