
The Promise and Peril of Generative AI for Enterprises


In the early days of customer relationship management (CRM), enterprises faced a choice. They could try to cobble together a "roll-your-own" solution out of spreadsheets, databases, and email utilities. Or they could turn to early entrants like Siebel Systems, which offered on-premises CRM. Siebel took early market share, but its software was highly rigid and difficult to adapt to existing workflows.

A final option was to adopt an integrated platform like Salesforce, which offered flexibility, could expand with the organization, and could be tuned to existing workflows.

We all know how that story ended. Enterprises flocked to Salesforce because they urgently needed a holistic CRM platform optimized for overall safety, efficiency, and user experience, not a jumble of parts.

Today, we stand at a similar crossroads with generative AI. The technology has burst onto the scene with startling speed and seemingly limitless potential. Large language models like GPT-4 can engage in human-like dialogue, answer questions, and generate creative content. Executives across industries are racing to harness this transformative technology and gain an edge.

But as enterprises rush to adopt generative AI, many are realizing that throwing an open-source model on a server is a recipe for inefficiency at best, and disaster at worst. Moreover, many early GenAI solutions require organizations to move their data and processes into the vendor's application, leading to a proliferation of siloed GenAI apps with limited data and weak ties to the actual workflows of the business.

Enterprise search powered by conversational AI can significantly improve the efficiency and accuracy of information retrieval, ensuring that enterprise data is leveraged to its fullest potential. But successfully deploying generative AI at scale demands much more than a powerful language model or a siloed solution. It requires an entire platform with robust capabilities around security, data integration, access control, analytics, and model management.

Data security and compliance represent perhaps the biggest challenges. Enterprises have extremely sensitive data, from confidential documents to customer PII. An AI system trained on that data absolutely cannot be allowed to spill its knowledge to unauthorized parties. But preventing data leakage in these models is a hugely complex undertaking.

The unfortunate reality is that generative AI is so new and evolves so rapidly that even expert teams struggle mightily to build secure and compliant systems from scratch. They need the help of a purpose-built AI platform.

Even if an enterprise AI system doesn't crash and burn in a security fiasco, jury-rigged deployments will almost inevitably run off the rails. A model needs to integrate seamlessly with enterprise data stores, content management systems, identity providers, and applications. It needs granular access controls to respect data confidentiality. It needs to gracefully handle huge query volumes without breaking the bank on compute costs. And it needs an intuitive user experience. In short, it requires a massive amount of specialized software wrapped around it.
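
To make the access-control requirement concrete, here is a minimal sketch, in Python, of permission-aware retrieval in a retrieval-augmented pipeline: documents carry group permissions, and only documents the requesting user is authorized to read are ever handed to the model as context. The `Document` class, the `search_index` object, and the `llm` interface are hypothetical assumptions for illustration, not any particular product's API.

```python
# Hypothetical sketch: permission-aware retrieval in a RAG-style pipeline.
# All names here are illustrative, not taken from any specific product.
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set[str]  # groups permitted to read this document


def retrieve_for_user(query: str, user_groups: set[str],
                      search_index, top_k: int = 5) -> list[Document]:
    """Return only documents the requesting user is authorized to see."""
    # Over-fetch candidates, then drop anything the user cannot access.
    candidates = search_index.search(query, limit=top_k * 4)
    permitted = [d for d in candidates if d.allowed_groups & user_groups]
    return permitted[:top_k]


def answer_with_context(llm, query: str, docs: list[Document]) -> str:
    """Ground the model's answer in authorized context only."""
    context = "\n\n".join(d.text for d in docs)
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
    return llm.generate(prompt)
```

The same filter-before-generate pattern extends naturally to finer-grained policies pulled from an identity provider, which is exactly the kind of plumbing a platform should supply rather than each team rebuilding it.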

Moreover, generative AI can't be a "set it and forget it" affair. Enterprises need visibility via analytics into how models are being used, where they're working, and where they're not. They need MLOps capabilities to continuously monitor, improve, and efficiently serve their AI. And they need the agility to experiment with and adopt new models as the field rapidly progresses. A flexible platform provides this crucial maneuverability.
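
As one illustration of that visibility, the hedged sketch below logs a structured usage event for every model call so downstream analytics can show which users, models, and workflows are actually delivering value. The `log_generation_event` helper and its field names are assumptions made for this example, not a reference to any specific MLOps tool.

```python
# Hypothetical sketch: recording per-query usage events as JSON lines so
# analytics can aggregate them later. Field names are illustrative.
import json
import time
import uuid


def log_generation_event(log_path: str, user_id: str, model_name: str,
                         prompt_tokens: int, completion_tokens: int,
                         latency_ms: float, feedback: str | None = None) -> None:
    """Append one structured usage record for a single model call."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_id": user_id,
        "model": model_name,              # lets you compare models over time
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "latency_ms": latency_ms,
        "feedback": feedback,             # e.g. thumbs up/down captured in the UI
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(event) + "\n")
```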

I hear the protests already: What about all the amazing open-source software out there? Can't we assemble the best components into our own bespoke platform?

While I'm a huge believer in open source, the unfortunate reality is that for something as complex as enterprise AI, piecing together a jumble of open-source projects simply doesn't work. You inevitably end up with a fragile Frankenstein's monster, perpetually behind on integrations and security patches. Supporting it is a nightmare, especially when it’s not a core function of your business.

Moreover, the breakneck pace of generative AI innovation presents a unique challenge. Let's face it — if you have technical folks in your organization, they are almost certainly experimenting with these tools already. This can lead to an explosion of one-off projects that may gain traction and suddenly need support. 

I've seen this firsthand. A handful of years ago, when the “New Data Stack” was all the rage, eager engineers on my teams stitched together a solution from several open-source components. Their efforts yielded something that worked in a limited demo environment, but most of those solutions failed before they hit production. The ones that did reach business-critical scenarios were exceedingly fragile and required deep expertise to manage. As the sunk-cost investment grew, the piecemeal solution became incredibly difficult to upgrade, pigeonholing the company and some great engineers into maintaining something that was not core to the business, usually at a much higher cost than purchasing the capability from a vendor.

On the other end of the spectrum, the siloed approach of Siebel will also fail. Highly walled-off solutions that push users out of their native workflows will not drive the same productivity benefits, and multiple one-off solutions create a large surface area of risk and cost.

Alternatively, a well-designed platform, with APIs and user-friendly interfaces, allows these engineers to experiment in a safe, manageable way, with guardrails in place. By providing a secure, performant, and user-friendly vehicle, such a platform lets organizations focus on their destination, all the transformative applications of AI, rather than reinventing wheels. We're in the early days of a profound technological shift, but getting there demands the right foundations. And that means a powerful, unified platform, not a pile of spare parts.

The stakes could not be higher. Generative AI will reshape entire industries and redefine the nature of knowledge work. But realizing that potential, at enterprise scale, is an immense challenge. Those who understand the critical necessity of a comprehensive platform will be positioned to lead this revolution. Those who don't may find themselves left in the dust, wondering what went wrong. In the age of AI, fortune will favor not just the bold, but the prepared.

Written by Jason Schnitzer, CTO and Founder