A Stepping Stone Year For Generative AI: Core Challenges For 2024

Naveen Zutshi is the CIO of Databricks.


In the year since generative artificial intelligence (AI) found its footing in broader public conversation, its application across enterprise technology has only grown—and so has enthusiasm among leaders.

The current period of innovation is only just beginning, and the work the technology industry does now will shape how AI affects our lives and work in the long term. Leaders are currently ironing out their short-term priorities with generative AI, which will bring about several trends and challenges in 2024, including:

Leaders will need to weigh costs and operational considerations.
Before anything material can happen with generative AI at their companies, leaders will need to decide: Buy or build? Large or small model? Open source or proprietary?

Some will stick with large models. Others will likely opt to invest in smaller models fine-tuned on internal data. Some will find they do need large language models (LLMs) with billions of parameters but will still want to fine-tune those models with their own data to get the most out of their investment. Some companies will opt for both, and some will take a wait-and-see approach, watching from the sidelines to see what proves most successful.

On the vendor side, an important, still-debated question revolves around pricing. There’s no standard or baseline for pricing AI features, and some vendors are looking to tack on sky-high per-user, per-month pricing. As some vendors pass rising GPU and research and development costs on to their customers, CIOs are still being tasked with cutting spend and will be forced to reckon with ancillary AI charges on top of the core products they’re accustomed to.

To address this concern, more CIOs may opt to build their own models. The market is so nascent that purpose-built models for specific functions or industries may not even be available yet, and security and data protection are key concerns that persuade companies to build or fine-tune their own models, at least in the short term.

Companies will face data quality, model reliability, accessibility and governance bottlenecks.
Many companies are experimenting with LLMs, but few of those solutions have gone into production. Leaders want to bring data to the masses but haven’t quite figured out how yet. Many software companies—including my company, Databricks, as well as Microsoft, Glean, Replit and GitHub, among others—are enhancing their products with generative AI capabilities. We are also seeing a growing number of vertical industries implementing generative AI offerings tailored to their industry use cases.

A core bottleneck to “doing generative AI” revolves around applying the technology in a safe, effective and reliable manner. Leaders are grappling with data issues, specifically access to, and governance of, an organization’s corpus of data.

In a Workday survey of more than 2,300 executives globally, for example, just 4% of respondents described their data as fully accessible—demonstrating a massive gap between what many companies want to do with generative AI and what they’re actually equipped to do.

The accuracy and reliability of what LLMs generate also present an important research and industry challenge. Departments that require a high degree of accuracy—for example, legal, finance and compliance—will adopt generative AI tools more slowly than other functions but will still see benefits. However, models must become more reliable to bring these teams true value—consider the degree of accuracy needed for a nontechnical attorney or financial analyst to do their work well with the aid of generative AI.

While prompt engineering, retrieval-augmented generation (RAG) and fine-tuning can reduce hallucinations, they don’t eliminate them—so I expect more innovation will materialize to address this core concern.

In the coming year, data leaders will be laser-focused on overcoming these foundational challenges. We’ll likely see more companies consolidating their data tools and doubling down on efforts to improve data quality and controls. A recent MIT Tech Review study produced in partnership with Databricks showed that nearly three-quarters of senior technology executives have adopted a Lakehouse architecture, and many others plan to do so.

Where We Go From Here
The work—some experimental, and some successful—teams are doing right now will inform where companies are going to get the most immediate value from generative AI and, subsequently, where they will invest the most money and time. But none of this will be possible in the near term without a newfound focus on data integrity, quality and governance.

Millions of knowledge workers—both technical and business users—will be using various forms of AI-powered assistants in 2024 and beyond. The sheer number of people using these tools gives the technology industry a large sample size to glean insights about how, and to what extent, AI can successfully aid different functions. This will ultimately help inform future approaches, factors for success, and use cases for generative AI.

Like all great technologies, generative AI will evolve (and even stumble) before it reaches its full potential. We still have a long way to go with generative AI, from boosting its efficiency and accuracy to reconciling necessary regulations with innovation. One thing is for certain: The coming year will be integral in defining what the AI-driven future looks like and how it will ultimately change the way organizations operate.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.

{Author}Naveen Zutshi, CommunityVoice{/Author}
{Source}Forbes – Innovation{/Source}
