TechInvest News

When consumer GenAI platforms won’t cut it for businesses


By Adam Beavis, Vice President and Country Manager ANZ, Databricks

Since ChatGPT's launch in 2022, it and the fleet of other generative artificial intelligence (GenAI) platforms that have made their way into the mainstream have transformed how businesses and individuals alike approach certain tasks.

To say that uptake of GenAI and the technology behind it has been healthy would be an understatement. Some 43% of employees surveyed by Deloitte are using GenAI for work purposes, and according to Databricks research, 11 times more AI models were put into production in the year ending 31 March 2024 compared to the previous year.

For everyday consumers like you and me, tapping GenAI for day-to-day life hacks is a simple affair: type a prompt into a publicly available platform like ChatGPT, Gemini or Claude, and wait a few seconds for a response. For enterprises, however, the work required to tap into GenAI is far more involved.

The complexities of adopting AI for corporate purposes are even more notable when it comes to data intelligence. Unlike GenAI, whose models might use existing information to generate something like a travel itinerary or a brief written history of World War I, data intelligence uses AI to extract accurate, relevant and unique insights from proprietary data.

Data intelligence is frequently used by businesses to address operational challenges and create a competitive advantage, whether by identifying new revenue streams, making employees more productive or simply running things more efficiently. But the information needed to power data intelligence is often trapped in applications and systems across the business.

Barriers to enterprise AI

Business data is often sensitive, as is customer data, which is governed by privacy laws. Such data needs to be handled carefully to ensure it doesn't fall into the wrong hands. This becomes a problem when companies want to use publicly available AI models: data submitted in prompts can become training data for the model, putting it at risk of being exposed publicly.

At the same time, insufficient visibility into how AI models behave, and the consequences of their outputs, has become a prevailing challenge. Organisations can grapple with a lack of trust in AI models’ reliability to consistently deliver outcomes that are safe and fair for their users.

Moreover, businesses frequently deploy separate data and AI platforms, creating governance silos that result in limited visibility and explainability of AI models. This leads to a disjointed approach that can result in inadequate cataloguing, monitoring and auditing of AI models, impeding the ability to guarantee their appropriate use.

Protect that data

For enterprises wanting to adopt GenAI technology for business purposes, tapping into a consumer-facing AI chatbot typically won’t cut it in terms of data privacy requirements. Generative AI is particularly problematic, as a lack of security safeguards can allow applications like chatbots to reveal sensitive data and proprietary intellectual property.

One of the key ways enterprises can avoid such risks is by deploying AI models in-house, using infrastructure that is not public. And many are: 71% of CIOs surveyed in an MIT Tech Review report are planning to build their own custom large language models (LLMs) or other GenAI models. But many organisations still lack the tools needed to effectively develop models trained on their own data.

One key tool for managing the data needed to power something like an internal GenAI LLM is a data intelligence platform. Such platforms enable organisations to use their own unique enterprise data to build custom GenAI solutions, giving them complete ownership over both the models and the data they're using.
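
To make the idea concrete, the sketch below shows, in heavily simplified form, how a GenAI prompt can be grounded in proprietary data before it reaches a model. It is an illustration only, not any particular platform's API: the document store, retrieval logic and helper functions are hypothetical stand-ins for the vector search, governed catalogues and model serving a real data intelligence platform would provide.

```python
# Illustrative sketch only: grounding a GenAI prompt in proprietary data
# via simple keyword retrieval. The document store and helpers below are
# hypothetical stand-ins, not any vendor's actual API.

internal_docs = {
    "refund-policy": "Refunds are issued within 14 days of purchase ...",
    "q3-sales-summary": "Q3 revenue grew 8% quarter on quarter ...",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(
        internal_docs.items(),
        key=lambda item: len(q_terms & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(question: str) -> str:
    """Combine retrieved enterprise context with the user's question."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("What is our refund policy?"))
# The prompt would then be sent to a privately hosted model, so the
# proprietary data never leaves infrastructure the business controls.
```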

Aiming for accuracy

One of the main challenges for all AI models is the risk of ‘hallucination’, where they generate plausible-sounding output that isn't grounded in reality. While this might be tolerable in some consumer-facing models, for enterprises and their AI use cases it isn't acceptable. Enterprises need models grounded in facts.

Compounding this, businesses often lack clear insight into how their models function and the potential impacts of their decisions. Without that visibility, it is difficult to trust a model to consistently deliver outcomes that are safe and fair for its users.

Monitoring models in production is crucial for ensuring ongoing quality and reliability. This includes monitoring for fairness and bias in sensitive AI applications like classification models. To gain the visibility needed, enterprises should employ a data intelligence platform that can monitor data and track model prediction quality and drift.
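
As a rough illustration of what drift tracking involves, the sketch below compares a model's recent prediction scores against its training-time baseline using the Population Stability Index, a common drift measure. The data, threshold and function are hypothetical; a data intelligence platform would automate this kind of check across features and predictions rather than leave it to hand-written scripts.

```python
# Illustrative sketch only: a simple drift check comparing a model's
# production score distribution against its training baseline using the
# Population Stability Index (PSI). Data and thresholds are hypothetical.
import numpy as np

def population_stability_index(baseline, current, bins: int = 10) -> float:
    """PSI across shared bins; higher values indicate more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Avoid division by zero / log(0) in sparsely populated bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
training_scores = rng.normal(0.60, 0.10, 5_000)    # baseline predictions
production_scores = rng.normal(0.68, 0.12, 5_000)  # recent predictions

psi = population_stability_index(training_scores, production_scores)
if psi > 0.2:  # a commonly used rule of thumb, not a universal standard
    print(f"PSI={psi:.3f}: significant drift, review the model")
```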

Committing to customisation

Enterprises need something more tailored to their specific purposes. That's why model evaluation is a critical component of the AI modelling lifecycle at the core of GenAI applications, and it is highly relevant to meeting applicable AI regulatory obligations. Tools for testing and comparing LLM responses, such as those found in data intelligence platforms, can help organisations determine which foundation model works best for a specific environment and use case.
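
The sketch below illustrates the basic shape of such an evaluation: run the same questions through candidate models and score their answers against references. The `call_model` stub, the canned answers and the crude containment score are placeholders, not a real evaluation suite; production tooling would add richer metrics such as relevance, groundedness and toxicity.

```python
# Illustrative sketch only: comparing candidate foundation models against
# a small set of reference answers. `call_model` is a hypothetical stub
# for however each model is actually invoked.

eval_set = [
    {"question": "When are refunds issued?", "reference": "within 14 days"},
    {"question": "Who approves discounts over 20%?", "reference": "regional manager"},
]

def call_model(model_name: str, question: str) -> str:
    """Hypothetical stand-in for calling a hosted foundation model."""
    canned = {
        "model-a": "Refunds are issued within 14 days of purchase.",
        "model-b": "Refunds are handled by the finance team.",
    }
    return canned.get(model_name, "")

def score(answer: str, reference: str) -> float:
    """Crude containment check; real scoring would be far richer."""
    return 1.0 if reference.lower() in answer.lower() else 0.0

def evaluate(model_name: str) -> float:
    results = [
        score(call_model(model_name, row["question"]), row["reference"])
        for row in eval_set
    ]
    return sum(results) / len(results)

for model in ("model-a", "model-b"):
    print(f"{model}: accuracy {evaluate(model):.2f}")
```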

The ‘features’ an AI model is trained on are central to its quality, accuracy and reliability, and they directly affect risk, making them critical when seeking to meet AI regulatory obligations. To ensure model accuracy, businesses should track feature lineage and facilitate collaboration across the teams managing AI models in production.

Defining data mastery

Ultimately, the advancement of AI in the enterprise relies on building trust in intelligent applications by following responsible practices in the development and use of AI. That requires every organisation to have ownership and control over its data and AI models, with comprehensive monitoring, privacy controls and governance throughout the AI lifecycle.

Good data governance can help here, as it ensures the ethical and effective use of data and AI models. This can be accomplished through strict policies that manage who can access data and machine learning (ML) models, measures that protect individuals' data rights, and mechanisms that track data and model provenance.
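
As a toy illustration of what those governance mechanisms look like in practice, the sketch below expresses an access policy and a provenance log in a few lines of code. All asset and group names are hypothetical, and an enterprise platform would enforce these rules centrally in a governed catalogue rather than in application code.

```python
# Illustrative sketch only: a toy access policy and provenance log showing
# the shape of the governance controls described above. All names here are
# hypothetical; real platforms enforce this centrally.
from datetime import datetime, timezone

ACCESS_POLICY = {
    "sales.customer_pii": {"privacy-office", "crm-engineering"},
    "models.churn_predictor": {"data-science", "ml-platform"},
}

provenance_log: list[dict] = []

def can_access(group: str, asset: str) -> bool:
    """Allow access only to groups explicitly granted on an asset."""
    return group in ACCESS_POLICY.get(asset, set())

def record_lineage(asset: str, derived_from: list[str], actor: str) -> None:
    """Track where a dataset or model came from, and who produced it."""
    provenance_log.append({
        "asset": asset,
        "derived_from": derived_from,
        "actor": actor,
        "at": datetime.now(timezone.utc).isoformat(),
    })

assert not can_access("marketing", "sales.customer_pii")
record_lineage("models.churn_predictor",
               derived_from=["sales.customer_pii"],
               actor="data-science")
```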


These approaches, and the tools that support them, can empower organisations to go beyond the limitations of consumer-facing AI platforms, meeting responsible AI objectives that deliver model quality, provide more secure applications and help maintain the standards enterprises need.