Generative AI: the pros, cons and considerations

Since the launch of ChatGPT, a lot has happened. Large language models (LLMs) have inspired a wave of new tools and technologies that build on and expand the capabilities of AI assistants. In this article, we go over some of the pros, cons and question marks surrounding this emerging technology.

Let’s talk about the pros

Modern LLMs excel at understanding context and intent without extensive training. This capability is slowly but surely replacing the need for predetermined training phrases and rigid intent structures that characterize conventional chatbot systems. They can produce high-quality text in various styles and formats, from technical documentation to creative content, assisting in everything from drafting responses to content creation.
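To make this concrete, here is a minimal sketch of zero-shot intent recognition using the OpenAI Python client. The model name, the intent labels and the example utterance are placeholders, and any chat-capable LLM provider could be used along the same lines. The point is that, instead of maintaining lists of training phrases, you simply ask the model to map a free-form utterance onto one of your known intents.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Intents a conventional chatbot would need curated training phrases for.
INTENTS = ["track_order", "cancel_order", "speak_to_agent", "other"]

def classify_intent(utterance: str) -> str:
    """Ask the model to map a user utterance onto one of the known intents."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the user's message into exactly one of these "
                    f"intents: {', '.join(INTENTS)}. Reply with the intent only."
                ),
            },
            {"role": "user", "content": utterance},
        ],
        temperature=0,  # keep the label deterministic
    )
    return response.choices[0].message.content.strip()

print(classify_intent("Where is my package? I ordered it a week ago."))
# Expected output: track_order
```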

These models also excel at extracting relevant information from large documents and databases. This is also known as semantic search or, when leveraged in an AI assistant, as retrieval-augmented generation (RAG). Performance can be enhanced even further through fine-tuning and prompt engineering. As a result, we are seeing the emergence of all kinds of hybrid solutions that combine the power of curated content with the information retrieval capabilities of LLMs.
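As a rough illustration of how such a RAG setup fits together, the sketch below embeds a handful of documents, retrieves the most relevant one for a question via cosine similarity, and passes it to the model as context. It assumes the OpenAI Python client; the model names, the toy documents and the helper names are placeholders, not a prescription.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A toy knowledge base; real systems chunk documents and store vectors in a database.
DOCS = [
    "Our support desk is open Monday to Friday, 9:00-17:00 CET.",
    "Orders can be returned within 30 days with the original receipt.",
    "Premium subscribers get a dedicated account manager.",
]

def embed(texts: list[str]) -> np.ndarray:
    """Embed a list of texts into vectors."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

doc_vectors = embed(DOCS)

def answer(question: str) -> str:
    """Retrieve the most similar document and let the LLM answer from it."""
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every document.
    scores = doc_vectors @ q_vec / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    context = DOCS[int(scores.argmax())]  # top-1 retrieval for simplicity
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Answer using only the provided context. "
                           "If the context is insufficient, say so.",
            },
            {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("When can I reach customer support?"))
```

In production, the retrieval step would typically run against a vector store and return several chunks rather than a single document, but the overall flow (embed, retrieve, then generate with the retrieved context) stays the same.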

What about the cons

Despite their capabilities, LLMs come with notable limitations. They require human oversight of critical output, as they can generate plausible but incorrect information (often called hallucinations). We see hesitation to use generative capabilities in customer-facing applications, because reliability and control are the top requirements for enterprises.

Perhaps the biggest difference between conventional conversational AI (CAI) solutions and large language models is the lack of explainability: it’s hard to see what’s going on under the hood. When the model produces an unsatisfactory answer, it’s virtually impossible to find out why and how that happened. It’s basically a black box, which might raise compliance issues for highly regulated industries like healthcare, financial services, and insurance.

LLMs provide us with a great deal of flexibility, but we have less control over their outputs. That is great for creative purposes, less so when accuracy is paramount. And while LLMs represent a genuinely new approach to building chatbots, projects tend to grow in complexity quickly. Latency, cost, scaling challenges, and (cyber)security are other common pain points. Mitigation strategies exist, but the solutions to these challenges are not always clear-cut.

Future considerations

Uncertainties remain around LLM implementation. Data privacy continues to be a key consideration, especially regarding GDPR compliance. While many providers now offer enhanced security features and data processing agreements, organizations must carefully evaluate their risk tolerance and compliance requirements.

At the end of the day, organizations need a clear value proposition and must validate their return on investment (ROI). If you struggle to do so, an outside perspective can help. This is why many organizations choose to work with CDI to (re)assess their conversational AI strategies and programs.

Success depends on understanding both the potential and the limitations of LLMs. Organizations should focus on developing clear guidelines for responsible AI usage while maintaining a balanced approach to implementation. Ultimately, the key lies in starting with specific use cases where LLMs can add immediate value, and only then gradually expanding once you have demonstrated success.