Integrating Large Language Models With Real Time Analytics (Video)

Benjamin Wootton


In this video, we demonstrate how large language models can be integrated with a real-time analytical database, in this instance ClickHouse Cloud.

The technique is referred to as retrieval augmented generation, whereby we take the language model and augment it with information retrieved from a third-party database.

Video Transcript

In this demo, I wanted to walk through integrating a large language model with a real-time analytical database, in this case, ClickHouse Cloud.

We are going to demonstrate a scenario where we have data which is being captured in real-time about transactions, and that is being streamed into our database.

Then immediately we are going to ask questions using a large language model chat interface to understand the state of the world.

We are then going to demonstrate how we can use Generative AI to make employees and the business overall more efficient.

To demonstrate this, we are going to use a financial crime scenario.

Imagine that this business has an analyst who is responsible for monitoring inbound and outbound payments in real time.

What we need to do is ask questions about the transactions and their customers in plain English in order to understand the risk and decide whether this is a transaction which we need to block.

Then what I would also like to do is explain how they could potentially make use of Generative AI to support their work and be more efficient and effective.

My aim here is to demonstrate how a modern data stack based on real-time streaming data, combined with LLMs, could have a very significant business impact.

In terms of the technologies we are going to use for this demo, we are going to have ClickHouse Cloud at the centre of that, which is a fully managed, cloud-hosted database that is very well suited to workloads of this type.

I am going to use ChatGPT and AWS Bedrock for the natural language processing and the Generative AI components.

We will make use of a library called LlamaIndex, which is a framework for connecting these LLMs with data sources, in our case ClickHouse Cloud.

The GUI and user interface you are going to see is built using something called Ensembled.js, which we use to build interactive real-time applications.

There is also a piece of technology called Cube, which is a middleware that sits between the front-end and ClickHouse Cloud to glue those two together.
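
To make that wiring a little more concrete, here is a minimal sketch of how the text-to-SQL piece could be put together with LlamaIndex over a SQLAlchemy connection to ClickHouse Cloud. The host, credentials, dialect and table name are illustrative assumptions rather than the exact configuration used in the video, and the example assumes LlamaIndex can pick up an OpenAI API key for its default LLM.

```python
# Minimal sketch only: imports assume a recent llama-index release and the
# clickhouse-sqlalchemy dialect; host, credentials and table name are
# hypothetical placeholders, not the demo's real configuration.
from sqlalchemy import create_engine
from llama_index.core import SQLDatabase
from llama_index.core.query_engine import NLSQLTableQueryEngine

# Connect to ClickHouse Cloud over the HTTPS interface.
engine = create_engine(
    "clickhouse+http://user:password@example.clickhouse.cloud:8443/default?protocol=https"
)

# Expose only the transactions table so the generated SQL stays focused.
sql_database = SQLDatabase(engine, include_tables=["transactions"])

# The query engine translates a plain-English question into SQL, runs it
# against ClickHouse, and synthesises a natural-language answer.
query_engine = NLSQLTableQueryEngine(sql_database=sql_database, tables=["transactions"])

response = query_engine.query("How many transactions were sent to Russia?")
print(response)
```

Because each question is answered by querying the live table rather than a pre-built index, the answers reflect rows ingested only seconds earlier, which is the behaviour the rest of the demo relies on.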

So now getting into the demo.

Here we are looking at the application which we have built for our financial crime analyst.

You can see that approximately 100 transactions have taken place so far, of which 9 have been blocked because they met fraud indicators.

If we refresh we are now at 130 transactions with a total value of $13 million.

I am going to just refresh, and it has now jumped up to $15 million.

Now if I go and check the database we can see we are now up to $17 million, and if we look at the number of transactions we are now up to 202.

And that is because in the background we are generating test data and pushing it into our transactions table to demonstrate a real-time concept.
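
The background load is nothing more sophisticated than a small generator script. A hedged sketch of what that could look like with the clickhouse-connect client is below; the connection details, column names and value ranges are hypothetical and simply chosen to match the fields discussed in the demo.

```python
# Illustrative test-data generator: connection details and the transactions
# schema are assumptions, not the demo's real configuration.
import random
import time
import uuid
from datetime import datetime

import clickhouse_connect

client = clickhouse_connect.get_client(
    host="example.clickhouse.cloud", username="default", password="password", secure=True
)

countries = ["US", "GB", "RU", "IL", "DE"]

while True:
    row = [
        str(uuid.uuid4()),                        # transaction_id
        datetime.utcnow(),                        # event_time
        random.choice(countries),                 # beneficiary_country
        round(random.uniform(100, 250_000), 2),   # amount in dollars
        random.random() < 0.05,                   # sanctioned flag
    ]
    client.insert(
        "transactions",
        [row],
        column_names=[
            "transaction_id",
            "event_time",
            "beneficiary_country",
            "amount",
            "sanctioned",
        ],
    )
    time.sleep(1)  # roughly one synthetic transaction per second
```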

You can see we are now up to $20 million of transaction value.

To demonstrate this just a little bit more, you can see that our most popular beneficiary countries are the US, the UK and Russia, with approximately 20 to 30 transactions in each.

And if we do the same aggregation we can see that the US, the UK and Russia are at the top of the list.

So hopefully that has demonstrated that this dashboard is connected to ClickHouse and we have a very real-time view of the data as it is being ingested.

Now the main part of our demo is to demonstrate this kind of large language model and what we call RAG, or retrieval augmented generation.

So I am going to start by asking a relatively simple question about the dataset which is how many transactions were sent to Russia.

We are asking that in plain conversational English, and that has gone away, interrogated ClickHouse, and responded to tell us that 28 transactions have been sent to Russia.

I can then go a little deeper and I am going to say what was the highest value transaction that was sent to Russia.

And this is effectively ordering by transaction value, and we can see that in this case a $198,000 transaction was the highest one which has been sent to Russia.

I am then going to validate that the LLM is telling us the correct answers.

So sure enough here is the transaction for $198,000.

It is in Russia and it was sent to a customer called Marisa Garcia.

So it appears to be working correctly against very real-time data immediately as it is ingested.

If you imagine another situation here, so maybe we have a sudden uptick of transactions which are going to Israel and we feel that that is an anomaly.

What we can do is just ask questions in plain English.

So here I am saying what percentage of transactions went to Israel.

The answer is approximately 6.5%.

And maybe that is something which we need to look at more closely or maybe that is typical.

Or maybe we are just monitoring kind of really high value transactions.

So here I am asking a question of the whole population.

How many beneficiaries have sent transactions over $200,000?

And the LLM answered one.

Now again I want to validate that.

So I am going to go back to ClickHouse Cloud.

I am going to query my transactions table for all of the transactions where the value is more than $200,000.

So I execute that query.
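
In the video this check is run in the ClickHouse Cloud SQL console; a roughly equivalent check from Python, reusing the same hypothetical clickhouse-connect connection details as in the generator sketch above, would be:

```python
# Sanity-check query: counts transactions over $200,000 straight from the
# table the dashboard and the LLM are both reading.
import clickhouse_connect

client = clickhouse_connect.get_client(
    host="example.clickhouse.cloud", username="default", password="password", secure=True
)

result = client.query("SELECT count() FROM transactions WHERE amount > 200000")
print(result.result_rows[0][0])  # number of transactions over $200,000
```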

And now we can see that we are actually up to two.

And that is because a new event has just been created.

So what I am going to do is ask the same question again.

And this time the LLM is going to tell us that there are now two transactions with a value of over $200,000.

Which shows just how real time this is.

This is data which has just been ingested in the last few seconds.

I can then kind of challenge the LLM a little bit more by asking the question, what are the names of the beneficiaries who sent transactions over $200,000?

And again you can see that we are successful here in deriving the correct query.

So here we actually get three records now.

So it was Robert, Kristen and Mackenzie.

And again this is because new test data is being generated all of the time in the background.

And we can see that this kind of conversation is very real time.

Really, nothing is being cached.

We are not retraining the model.

We are literally querying the database and synthesizing the responses.

To get a little bit deeper into the financial crime world: something which you have to do if you are involved in payments is check that you are not sending money to individuals who have been sanctioned for whatever reason.

Here we have a list of transactions and we can see that 94 of these transactions have been sent to sanctioned individuals.

And if you imagine we have some alerting system which has identified that, these have been flagged to our agent, who needs to go away and investigate the situation.

So I am going to start by asking some questions about the kind of high level population.

How many transactions are there where sanctioned is true?

I am being a little bit kind there in terms of helping the LLM build the correct SQL query.

But you don't necessarily have to do that.
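
Whichever way you phrase the question, it is worth looking at the SQL the model actually produced before trusting the answer. As a sketch, recent LlamaIndex releases surface it in the response metadata, although the exact key should be treated as an assumption and checked against the version you are running:

```python
# Ask the question via the query engine from the earlier sketch and inspect
# the generated SQL; the "sql_query" metadata key may differ between versions.
response = query_engine.query(
    "How many transactions are there where sanctioned is true?"
)
print(response)                            # natural-language answer
print(response.metadata.get("sql_query"))  # the SQL that was executed
```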

We can see that there are 108 transactions which have been sent to sanctioned individuals.

And then I am going to delve one level deeper, so I am going to say: of the transactions where sanctioned is true, which beneficiary name had the largest transaction?

So I want to know which transaction should I focus on, which individual should I be most concerned about.

After a moment it is going to respond, and we can see that there was a transaction by a Mr Sean Cole of $200,000, and he is a sanctioned individual.

So this should be the top of our list from a financial crime perspective.

So I am just going to go and look into that transaction very quickly.

So this is my list of sanctioned transactions and here is Sean Cole for $200,000.

And that appears to be our highest value transaction amongst that subset of people.

So now I am going to demonstrate a kind of slightly different spin and here we are going to use Generative AI to support the agent in their job.

So I am going to use the AWS Bedrock service, which is backed by the Anthropic foundation model.

And I am going to say can you please write me a letter to Sean Cole asking him about his recent high value transaction.
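
As a hedged sketch, that kind of request can be made against the Bedrock runtime with boto3. The region, model ID and prompt format below are assumptions that depend on which Anthropic model is enabled in your account, rather than the exact setup used in the video.

```python
# Draft a customer letter with an Anthropic model on AWS Bedrock.
# Region, model ID and prompt format are assumptions for illustration.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": (
        "\n\nHuman: Please write me a letter to Sean Cole asking him about "
        "his recent high value transaction.\n\nAssistant:"
    ),
    "max_tokens_to_sample": 500,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",
    body=body,
    contentType="application/json",
)
letter = json.loads(response["body"].read())["completion"]
print(letter)
```

A follow-up instruction, such as the "more formal, more angry" revision later in the demo, would simply be appended as a further Human turn in the same prompt, with the previous completion included as context.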

And if you imagine maybe this is something which our analyst has to do all the time as part of their job, they have to send emails, trigger letters, and maybe it takes a lot of time to fill them out.

Here what we are demonstrating is how we are using Generative AI to make that process much more efficient and effective to allow the agent to achieve more.

And what I am going to do is kind of go back and challenge the agent.

And I am going to say can you make the letter more formal, more angry, and explain that we have a regulatory duty to investigate this matter.

Give him seven days to respond else we will terminate his account.

That is a very aggressive stance, but he is sending lots of money and exposing us to lots of risk, so maybe it is justified.

So again, you know, the benefit of doing stuff like this is we have that very real-time view of what is happening in the business.

We are using the LLM to understand what is happening in the business in that very natural back and forth manner.

And then we are using Generative AI to make the business process and the people within our business much more efficient.

So this combines to be a really kind of powerful tool set, I think, where you have got that really real-time view.

It is very interactive and it is really supporting a kind of high-impact business outcome.

So just to summarise those benefits again, I have touched on them throughout, but firstly this whole model and this technology stack is very real-time.

A lot of these kinds of fraud systems, and financial systems generally, are often based on batch processing, whereas here we are streaming in data, immediately identifying situations of interest, and making them available to our agents.

We have the potential to use things like machine learning, whereas a lot of systems, particularly in this financial crime space, are very much based on business rules.

Here we are using kind of machine learning and AI to detect those situations of interest and respond to them, which is a very powerful idea.

Altogether this helps our employees be much more efficient, so we can do more with less, or maybe we don't need quite so many financial crime analysts because we are relying on technology and enabling the ones that we do have to do more.

And ultimately this could really move the needle and help businesses be much more effective in this case in fighting financial crime.

That has a very real and immediate cost saving and it will help with things like improved regulatory compliance.

So hopefully that all made sense.

What we were trying to do is demonstrate the kind of art of the possible when we combine modern real-time streaming analytics with LLMs.

This is just one particular example; even within the financial crime domain, you could take a very different approach and look at different problems and opportunities.

And then obviously this expands to other industries, be that kind of travel, logistics, manufacturing.

It's more about demonstrating this technique.

If you would like to learn about how you could achieve similar outcomes and deploy similar systems in your business, please do reach out to us.

This is very much the area where we focus: this intersection of real-time advanced analytics and AI.

And we'd be very pleased to hear about your particular business.

Thanks.
