Large Language Models (LLMs) are most often associated with the chatbot-style interface popularised by ChatGPT. This involves a user prompting a model with questions or instructions and receiving responses in a conversational, back-and-forth manner.
It's very limiting, however, to think of this as the only interaction model. One powerful alternative is to use an LLM to passively monitor data or documents in the background. The LLM could process the data or documents as soon as they become available, without any human intervention. Only when a situation of interest is detected within the content would an employee be alerted.
This approach could be used to monitor unstructured content such as customer reviews, contact centre call transcripts, media stories or contracts in real time. It could also be used to analyse more structured data where we would like to rely on the LLM for reasoning, or to piece together a complex situation made up of many factors.
We could use this approach to passively process thousands of data items per day. If a hard-to-detect situation of interest, such as a damaging customer review or a vulnerable customer, is identified amongst a very large volume of content, a human could be alerted and advised on what to do. This allows you to search for needles in haystacks that could represent opportunity or risk to your business.
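To make this concrete, here is a minimal sketch (in Python) of what such a passive monitoring loop could look like. The `call_llm` and `alert_employee` helpers, and the prompt wording, are hypothetical placeholders rather than any specific product or API.

```python
# A minimal sketch of the passive-monitoring loop, with hypothetical helper names.
from dataclasses import dataclass
from typing import Iterable, Optional, Tuple

@dataclass
class Finding:
    item_id: str
    situation: str   # e.g. "vulnerable customer" or "damaging review"
    reasoning: str   # the LLM's explanation, kept for the human reviewer

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call via whichever provider SDK you use."""
    raise NotImplementedError

def screen_item(item_id: str, text: str) -> Optional[Finding]:
    """Ask the LLM whether this piece of content contains a situation of interest."""
    prompt = (
        "You passively monitor incoming business content. If the text below "
        "describes a situation of interest (for example a vulnerable customer "
        "or a damaging review), reply with the situation on the first line and "
        "your reasoning underneath. Otherwise reply NONE.\n\n" + text
    )
    response = call_llm(prompt).strip()
    if response.upper().startswith("NONE"):
        return None
    situation, _, reasoning = response.partition("\n")
    return Finding(item_id, situation.strip(), reasoning.strip())

def alert_employee(finding: Finding) -> None:
    """Placeholder alert channel (email, Slack or a task queue in practice)."""
    print(f"[ALERT] {finding.item_id}: {finding.situation} - {finding.reasoning}")

def monitor(stream: Iterable[Tuple[str, str]]) -> None:
    """Process items as they arrive; only involve a human when something is found."""
    for item_id, text in stream:
        finding = screen_item(item_id, text)
        if finding is not None:
            alert_employee(finding)
```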
An interesting question is what happens after that processing has taken place.
We could allow the AI to trigger specific actions, such as removing a piece of content, placing an order or terminating an account, when the situation is detected. Depending on the risk associated with the action and the business's tolerance for that risk, it may be comfortable letting the AI do this today.
However, in the short term, it is perhaps safer and more realistic to keep a human in the loop to review the findings and approve or reject a change, or action the response. This could happen via an existing channel such as an email or a Slack message, or possibly a customised application.
In this video we demonstrate this concept. A collection of conversation transcripts from a contact centre is uploaded to the system (manually in this case, but this could also happen via API integration). These are processed by an LLM. Within seconds, the vulnerable customers have been identified and a task has been routed to a specific named individual in the business. The task includes the call transcript and the LLM's analysis as to why the customer is vulnerable.
The task could equally be enhanced with more context about the case and suggestions as to the employee's next best action, based on the business's policies.
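As a rough illustration of this routing step, a detected case could be packaged into a task and delivered to a named individual along the following lines. The `Task` structure, `ROUTING_RULES` and `send_task` are assumptions for the sketch, not the actual implementation behind the demo.

```python
# A sketch of packaging a detected case into a task for a named individual.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class Task:
    assignee: str
    transcript: str
    llm_analysis: str       # why the LLM believes the customer is vulnerable
    suggested_action: str   # next best action, derived from business policy
    due_by: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc) + timedelta(hours=4)
    )

# Hypothetical routing table: which named individual handles which situation.
ROUTING_RULES = {
    "vulnerable customer": "jane.smith",
    "damaging review": "brand.team",
}

def route_finding(situation: str, transcript: str, analysis: str, suggestion: str) -> Task:
    """Create a task for the detected situation and deliver it to the right person."""
    assignee = ROUTING_RULES.get(situation, "duty.manager")
    task = Task(assignee, transcript, analysis, suggestion)
    send_task(task)   # via email, Slack, or a customised review application
    return task

def send_task(task: Task) -> None:
    """Placeholder delivery channel; swap in a real email/Slack integration."""
    print(f"Task for {task.assignee}, due {task.due_by:%H:%M}: {task.suggested_action}")
```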
We are particularly interested in this intersection of LLM based analysis and human workflow. With the right setup, decisions, analysis and work could ping between employees and the AI across an entire business process.
At each step, the AI would present the employee with all pertinent information about the situation and suggest the employee's next best action. With a single click, the employee could approve or deny the action before the business process moves forward.
If the employee does not action the change in time, it can be escalated to a manager for intervention. All of this activity would be logged and audited to ensure that nothing "slips through the net". Over time, we will build more confidence in AI and allow it to make more decisions and take actions directly on our behalf, but in the short term these back-and-forth interactions between humans and AI are a safer and more realistic deployment model.
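The approval, escalation and audit behaviour described above could be sketched as follows, again with assumed names (`AUDIT_LOG`, `handle_decision`, `check_overdue`) rather than a real workflow product's API.

```python
# A sketch of single-click approval, timeout escalation and audit logging.
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []   # in practice, a durable and queryable store

def record(event: str, **details) -> None:
    """Append an auditable record so that nothing 'slips through the net'."""
    AUDIT_LOG.append(
        {"at": datetime.now(timezone.utc).isoformat(), "event": event, **details}
    )

def handle_decision(task, decision: str, actor: str) -> None:
    """Called when the employee clicks approve or deny on a task."""
    record("decision", assignee=task.assignee, actor=actor, decision=decision)
    if decision == "approve":
        execute_action(task)
    # a denial halts the action but is still logged for later review

def check_overdue(task, now: datetime) -> None:
    """If the task has not been actioned in time, escalate it to a manager."""
    if now > task.due_by:
        record("escalated", assignee=task.assignee, escalated_to="line.manager")

def execute_action(task) -> None:
    """Carry out the approved action (e.g. apply the change the AI suggested)."""
    record("action_executed", suggested_action=task.suggested_action)
```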
This type of intelligent automation could really help a business level up its efficiency, and tee up employees to make the right decision every time. It is a huge transformational opportunity if we get this right.
Sticking with the same vulnerable customers example, businesses are exposed to huge regulatory and reputational risk if they do not identify and treat such customers fairly. They will therefore invest time, effort and money in monitoring call transcripts and other communications to identify them as part of their call centre operations.
However, even with the high cost and manual effort of doing this, it is still likely that vulnerable customers slip through the net, or that there are delays or inconsistencies in identifying them. We hear about such situations all of the time in the media.
Using AI, we have the opportunity to transform this process. We can join up subtle indicators and identify the situation of interest seconds after it first emerges. We can use AI to tee up the employee for success and advise them on the best decision. By combining the AI with workflow tools, as we have discussed here, we can ensure that no situation "slips through the net", whilst capturing a full audit trail and metrics about how the situation was dealt with. We also capture a log of how the AI is being used (explainable AI) and keep a human in the loop constantly.
And this is just one of hundreds of business processes which could benefit from this type of automation.
At Ensemble AI, we believe that this deployment model is much more valuable and impactful than the chatbot-style interaction that businesses default to when they think about how to leverage LLMs and AI.