Conversational AI discussed by CTO Rene Altena
Conversational AI is still on the periphery of our emerging-technologies radar. We see a lot of potential for applying this technology to your business data, but no such product is currently available on the Dutch market. In this article, CTO Rene Altena of Conclusion MBS discusses the emerging technology everyone is talking about.
August 18th, 2023 | Blog | By: Conclusion
What is conversational AI?
Most people probably have two associations with the term conversational AI: chatbots, which are widely used in customer contact, and ChatGPT developed by OpenAI (and also used by Microsoft). While many chatbots rely on a back-end decision tree, ChatGPT uses a generative language model to produce answers. As a result, conversations feel much more natural than discussions with chatbots like those found on many websites.
Conversational AI has four components. The first is NLU: Natural Language Understanding. This is based on a neural network trained on large volumes of text, so that its understanding keeps improving. The second step is to extract intent: what exactly do you mean by a particular question? Chatbots based on decision trees are not very good at this, and it was also a challenge for first-generation conversational AI. But this is where ChatGPT has made a huge leap forward.
The third component is adding context to the conversation, in other words, the chatbot should be able to infer context from previous questions. For example, if you start by saying ‘I’m cold’ and then ask ‘What temperature is the thermostat set at?’, a contextually trained algorithm can infer from the context that the thermostat probably needs to be turned up a notch (interpreting intent). The final component is NLG: Natural Language Generation – generating answers in a language that is so well articulated that it is as though they were given by a human being.
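The four components above can be sketched as a tiny pipeline. This is purely illustrative: the function names, rules, and templates below are stand-ins of our own invention, where a real system would use trained neural models at each step.

```python
# A toy sketch of the four conversational-AI components: NLU, intent
# extraction, context tracking, and NLG. All rules and names here are
# illustrative; real systems use trained neural models instead.

from dataclasses import dataclass, field

@dataclass
class Conversation:
    history: list = field(default_factory=list)  # the context component

def understand(text: str) -> set:
    """NLU: reduce an utterance to lowercase tokens (stand-in for a neural encoder)."""
    return set(text.lower().replace("?", "").replace("'", " ").split())

def extract_intent(tokens: set, history: list) -> str:
    """Intent extraction, using earlier turns as context."""
    if "cold" in tokens:
        return "report_discomfort"
    if "thermostat" in tokens or "temperature" in tokens:
        # Context: a prior 'I'm cold' turns a factual question into a request.
        if any("cold" in h for h in history):
            return "raise_temperature"
        return "query_temperature"
    return "unknown"

def generate(intent: str) -> str:
    """NLG: map the intent to a fluent answer (templates stand in for a generative model)."""
    templates = {
        "report_discomfort": "Sorry to hear that. I can check the thermostat for you.",
        "raise_temperature": "It is set to 18 °C - shall I turn it up a notch?",
        "query_temperature": "The thermostat is set to 18 °C.",
        "unknown": "Could you rephrase that?",
    }
    return templates[intent]

def reply(conv: Conversation, text: str) -> str:
    """Run one user turn through all four components."""
    tokens = understand(text)
    intent = extract_intent(tokens, conv.history)
    conv.history.append(tokens)
    return generate(intent)
```

With the thermostat example from the text, a first turn of "I'm cold" puts that discomfort into the conversation history, so the follow-up question about the thermostat is interpreted as a request to turn it up rather than a plain factual query.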
What are the benefits?
Whereas ChatGPT uses the Internet as its source, leaving you uncertain as to how it generates its answers (it sometimes even uses the wrong data to generate an answer), Microsoft has developed a service that runs OpenAI's model on your own company data. You can decide which sources you want to make accessible to the program, for example the ERP system, the knowledge database, the product database, and the CRM software. With access to these four sources, the service would be able to answer almost all customer queries. In the rare case that it cannot, the bot can forward the question to a specialist in the company, who can provide the correct answer immediately in the chat or later by e-mail. The beta version of this service has been released in the USA. At the time of writing, it is not yet available on the Dutch market, so we have only been able to see an American demo here in the Netherlands.
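The pattern described here can be sketched in a few lines: answer only from the company sources you have opened up, and escalate to a human specialist when nothing matches. The sources and the matching logic below are hypothetical stand-ins, not the real Microsoft/OpenAI service.

```python
# Illustrative sketch of 'answer from your own sources': look the question
# up in approved company data, and escalate to a specialist rather than
# guess when no source covers it. Sources and entries are made up.

SOURCES = {
    "product database": {"delivery time": "Standard delivery takes 3-5 working days."},
    "knowledge base": {"return policy": "Items can be returned within 30 days."},
}

def answer(question: str) -> str:
    q = question.lower()
    for source, entries in SOURCES.items():
        for topic, text in entries.items():
            if topic in q:
                # Grounded answer: we can name the source it came from.
                return f"{text} (source: {source})"
    # No grounded answer available: forward instead of inventing one.
    return "Forwarded to a specialist; you will receive an answer by e-mail."
```

The key design choice is the fallback branch: because every answer must trace back to a named source, the bot cannot invent a reply, which is exactly what makes this approach more reliable than an open-ended model.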
What makes it so complex?
The main difference between conversational AI and algorithms that rely on a decision tree in the background is that conversational AI will interpret a question and then provide an answer accordingly. After all, AI is simply the application of pattern recognition – in this case, patterns in language. That is where the interpretation aspect comes in, and then the systems get creative: they start to ‘hallucinate’ when giving an answer. They look for patterns of words that ‘logically’ belong together, which can lead to hilarious slip-ups. For some applications, this is not too much of a problem – and can even be a good thing. For example, if you ask: ‘Write a poem for this person, based on these hobbies, to go with this present.’ But for other applications it can be disastrous; for example, when making a medical diagnosis. So the application determines whether you use conversational AI or whether you prefer a decision tree that leaves no room for interpretation. A common safeguard is to have a human check the answers generated by conversational AI; this is widely used in the healthcare sector.
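The human-check safeguard mentioned above amounts to a simple routing rule: low-stakes answers go straight to the user, high-stakes ones pass a reviewer first. The risk keywords and the reviewer hook in this sketch are illustrative assumptions.

```python
# Minimal human-in-the-loop sketch: route high-stakes answers past a human
# reviewer before they reach the user. The keyword list and the `review`
# callback are illustrative assumptions, not a real product interface.

HIGH_STAKES = {"medical", "diagnosis", "legal"}

def deliver(question: str, generated_answer: str, review) -> str:
    """Return the answer directly, or via the `review` callback for risky topics."""
    if any(word in question.lower() for word in HIGH_STAKES):
        return review(generated_answer)  # human approves, edits, or rejects
    return generated_answer

# Usage: a reviewer callback that simply stamps its approval.
checked = deliver("Is this a medical diagnosis?", "Possibly flu.",
                  review=lambda a: f"[reviewed] {a}")
```

In a poem-writing application the review branch would never trigger; in a healthcare setting every answer would pass through it.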
The decision on which type to use largely depends on advances in technology; developments are occurring at a breathtaking speed. As mentioned before, Microsoft will soon be releasing a version of OpenAI's service that can be deployed on any data source you choose. The system will then no longer have the freedom to invent its own answers, significantly increasing their reliability. A key factor here is that this also places high demands on the quality of the data. After all: garbage in, garbage out.
Conversational AI interprets a question and then ‘hallucinates’ when giving an answer. You have to be acutely aware of the situations in which this shouldn’t happen.
Director Strategy & Innovation at Conclusion MBS