AI Chat App with Next.js and a Vector DB
In this post we are going to build an AI RAG (retrieval-augmented generation) application: a chat app that answers questions about the PDF files you upload. For the vector database, we will be using Qdrant.
This post was created by following this awesome YouTube video from Piyush Garg. First, from a terminal, we created a pdf-rag folder, and inside it a client folder.
In this app we will be using pnpm instead of npm, so run the command pnpm dlx create-next-app@latest to create the Next.js app. Choose the options shown in the screenshot.
Now, run the Next.js app from the client folder with the pnpm dev command.
We will be using Clerk for authentication, so run pnpm add @clerk/nextjs to install it.
To use Clerk, we need to create a file called middleware.ts inside the client folder with the code below.
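The middleware from the video isn't reproduced here; a minimal sketch based on Clerk's documented `clerkMiddleware` setup would look like this (the matcher is Clerk's standard one, and the exact version in the video may differ):

```typescript
// client/middleware.ts — a minimal sketch based on Clerk's documented setup.
import { clerkMiddleware } from "@clerk/nextjs/server";

export default clerkMiddleware();

export const config = {
  matcher: [
    // Skip Next.js internals and all static files, unless found in search params
    "/((?!_next|[^?]*\\.(?:html?|css|js(?!on)|jpe?g|webp|png|gif|svg|ttf|woff2?|ico|csv|docx?|xlsx?|zip|webmanifest)).*)",
    // Always run for API routes
    "/(api|trpc)(.*)",
  ],
};
```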
Now, we also need to create a new app from the Clerk dashboard. Since I already have a Clerk application, I have to click on Create application.
On the next screen, give the application a name, select Google as a sign-in option, and click Create application.
The next screen shows the Clerk API keys. Just copy them.
Next, we added a .env file with the above keys inside the client folder, and updated the layout.tsx file to use Clerk.
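The .env entries follow Clerk's standard variable names (the actual values come from your dashboard):

```
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=pk_test_...
CLERK_SECRET_KEY=sk_test_...
```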
Now, install lucide-react from the terminal; we will use it for our icons.
Next, create a components folder inside the app folder, and in it a FileUploadComponent.tsx file. For now it renders a simple Upload icon in its return statement, which calls a handleFileUploadButtonClick function when clicked.
The handleFileUploadButtonClick function takes the selected file and calls an API endpoint, which we will create later.
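A sketch of that handler, assuming the server will run at http://localhost:8000 and expect a multipart field named "pdf" (both are assumptions to match against the server code later):

```typescript
// Sketch of the client upload flow. The server URL (localhost:8000) and the
// field name ("pdf") are assumptions — they must match the Express server.
export function buildUploadRequest(file: Blob, filename: string) {
  const formData = new FormData();
  formData.append("pdf", file, filename); // must match upload.single("pdf") on the server
  return {
    url: "http://localhost:8000/upload/pdf",
    options: { method: "POST", body: formData },
  };
}

export function handleFileUploadButtonClick() {
  // Create a hidden file input, let the user pick a PDF, then POST it.
  const el = document.createElement("input");
  el.setAttribute("type", "file");
  el.setAttribute("accept", "application/pdf");
  el.addEventListener("change", async () => {
    const file = el.files?.item(0);
    if (!file) return;
    const { url, options } = buildUploadRequest(file, file.name);
    await fetch(url, options);
  });
  el.click();
}
```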
Now, we will render the FileUploadComponent in the main page.tsx file.
At http://localhost:3000/ we can now see the Upload button with its icon.
Now, we will create the server component. Inside a server folder, run the pnpm init command.
We will add some packages to the server, such as express and the Express and Node type definitions.
Inside server, create an index.js file that contains a simple Express app for now. We have also installed cors.
In package.json we set the type to module and added a dev command, then ran pnpm dev in the terminal.
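The relevant part of server/package.json would look roughly like this (the exact dev command is an assumption; the video may use a watcher such as node --watch):

```json
{
  "name": "server",
  "type": "module",
  "scripts": {
    "dev": "node index.js"
  }
}
```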
Next, we created an uploads folder inside server and added the multer package along with its types. Then we imported multer in the index.js file.
Then we created a POST endpoint that accepts a single PDF file and, for now, only returns a message.
Next, we added a multer storage configuration that saves each uploaded file into the uploads folder under a unique name.
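The unique-name logic is the piece that matters here; a sketch of the common timestamp-plus-random pattern (the exact format in the video may differ):

```typescript
import { randomBytes } from "node:crypto";

// Sketch of the unique-name logic used in multer's diskStorage filename callback.
export function uniqueFileName(originalName: string): string {
  const suffix = `${Date.now()}-${randomBytes(4).toString("hex")}`;
  return `${suffix}-${originalName}`;
}

// In index.js this would be wired up roughly as:
//   const storage = multer.diskStorage({
//     destination: (req, file, cb) => cb(null, "uploads/"),
//     filename: (req, file, cb) => cb(null, uniqueFileName(file.originalname)),
//   });
//   const upload = multer({ storage });
//   app.post("/upload/pdf", upload.single("pdf"), handler);
```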
Now, upload a PDF at http://localhost:3000/ and we can see a successful POST call in the console.
We can also see the PDF file in the uploads folder, saved under a unique name.
Now, we will use a queuing service called BullMQ.
We will also use Valkey (a Redis-compatible store that BullMQ can connect to), which we will run in a Docker container.
Now, create a docker-compose.yml file in the root directory. In it we will use the valkey image and map it to port 6379.
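A minimal version of that compose file (the image tag is an assumption):

```yaml
services:
  valkey:
    image: valkey/valkey
    ports:
      - "6379:6379"
```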
I am using a lightweight Docker runtime called Colima, but you can also use Docker Desktop. After starting Colima, I ran docker-compose up -d to start the services defined in docker-compose.yml.
Back in the index.js file, we import Queue from bullmq and write the code to create a new Queue.
Now, inside the POST handler for /upload/pdf, we add a job to the queue, passing the uploaded file's details.
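A sketch of the payload that gets enqueued; the field names (filename, destination, path) are assumptions — the worker only needs enough information to locate the saved PDF on disk:

```typescript
// Sketch of the job data enqueued from the /upload/pdf handler. BullMQ job
// data must be serializable, so it is passed as a JSON string here.
export function buildJobPayload(file: {
  filename: string;
  destination: string;
  path: string;
}): string {
  return JSON.stringify({
    filename: file.filename,
    destination: file.destination,
    path: file.path,
  });
}

// In index.js this would be used roughly as:
//   await queue.add("file-ready", buildJobPayload(req.file));
```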
Now, we will create a worker.js file inside the server folder. Here, we import Worker from bullmq and create a new Worker that listens on the same file-upload-queue name as the queue created earlier.
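A sketch of that file, based on BullMQ's documented Worker API; the connection settings assume the Valkey container from docker-compose.yml:

```typescript
// server/worker.js — sketch of the BullMQ worker. The connection options
// point at the local Valkey container; adjust if your setup differs.
import { Worker } from "bullmq";

const worker = new Worker(
  "file-upload-queue", // must match the queue name used in index.js
  async (job) => {
    // The job data is the JSON payload enqueued by the /upload/pdf endpoint.
    const data = JSON.parse(job.data);
    console.log("Processing file:", data.path);
    // ...loading the PDF, embedding it, and storing it in Qdrant comes later...
  },
  { connection: { host: "localhost", port: 6379 } }
);
```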
Next, add a command to the package.json file to run the worker, then start it in a new terminal.
Next, we will add the dependencies we need for the rest of the project to the server's package.json file, including langchain and openai.
In worker.js, we import the required functions from langchain, openai, and qdrant.
We will also add Qdrant to the docker-compose.yml file and re-run the docker-compose command.
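After that change, the compose file would look roughly like this (image tags are assumptions):

```yaml
services:
  valkey:
    image: valkey/valkey
    ports:
      - "6379:6379"
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"
```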
We added a .env file in the server folder and installed the dotenv package.
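Its contents would be along these lines — OPENAI_API_KEY is the standard variable the OpenAI SDK reads by default, while QDRANT_URL is an assumed name for the local Qdrant endpoint:

```
OPENAI_API_KEY=sk-...
QDRANT_URL=http://localhost:6333
```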
Now, in the worker.js file we load the PDF data and create embeddings with OpenAIEmbeddings, which requires an OpenAI API key. We then use these embeddings to store the data in the Qdrant vector DB.
Now, when we upload a PDF again, we get the expected console log from the worker.js file.
We can also see the data stored in Qdrant by opening the dashboard at http://localhost:6333/dashboard.
Here, we can see all of the data from the PDF file broken into small chunks.
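Under the hood, the text splitter does something similar in spirit to this simplified sketch (fixed-size chunks with overlap; the chunk size and overlap values are assumptions, and langchain's real splitters are smarter about breaking on separators):

```typescript
// Simplified sketch of fixed-size chunking with overlap, to show why the
// dashboard displays many small pieces of the PDF text.
export function chunkText(text: string, chunkSize = 300, overlap = 50): string[] {
  if (chunkSize <= overlap) throw new Error("chunkSize must exceed overlap");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```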
Back in the index.js file, we create an OpenAI client.
Now, we will create a new GET endpoint in index.js. It takes the user's message from the chat and searches the collection in Qdrant for relevant chunks.
Then we have a SYSTEM_PROMPT, which tells OpenAI to answer from the retrieved context. In chatResult, we call a model, passing the SYSTEM_PROMPT and the userQuery as messages, and finally return the message from chatResult.
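A sketch of how those messages could be assembled; the SYSTEM_PROMPT wording, the shape of the retrieved context, and the model name in the comment are all assumptions:

```typescript
// Sketch of the message assembly inside the GET /chat endpoint.
type ChatMessage = { role: "system" | "user"; content: string };

export function buildMessages(context: string[], userQuery: string): ChatMessage[] {
  const SYSTEM_PROMPT = `You are a helpful AI assistant who answers the user's query using only the following context from a PDF file.
Context:
${JSON.stringify(context)}`;
  return [
    { role: "system", content: SYSTEM_PROMPT },
    { role: "user", content: userQuery },
  ];
}

// In index.js this would feed the chat completion roughly as:
//   const chatResult = await client.chat.completions.create({
//     model: "gpt-4o", // model name is an assumption
//     messages: buildMessages(docs.map((d) => d.pageContent), userQuery),
//   });
```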
In the client's package.json we add some packages related to styling.
Now, create a components folder in the client folder, and inside it a ui folder with two files, button.tsx and input.tsx. The code for these can be taken from the GitHub repo linked at the end of this post.
Now, create a lib folder inside client, and in it a utils.ts file.
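utils.ts in shadcn-style setups usually exports a `cn` class-name helper built on clsx and tailwind-merge; since the repo code isn't reproduced here, a simplified dependency-free sketch:

```typescript
// Simplified sketch of the usual `cn` helper from lib/utils.ts. The real
// version (see the repo linked at the end) typically combines clsx with
// tailwind-merge; this one only joins truthy class names.
export function cn(...classes: Array<string | false | null | undefined>): string {
  return classes.filter(Boolean).join(" ");
}
```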
Now, create a ChatComponent.tsx file inside the components folder. First we import the required components and define two interfaces.
Next, inside the ChatComponent function, an Input field takes the user's message, and clicking the Button calls the GET /chat endpoint created earlier, sending the message along.
When the response comes back, the messages are added to an array in state.
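A sketch of that state update; the Message shape and the response shape ({ message: ... }) are assumptions to match the /chat endpoint:

```typescript
interface Message {
  role: "user" | "assistant";
  content: string;
}

// Pure helper: append the user's query and the assistant's reply immutably,
// the way a React setMessages(prev => ...) updater would.
export function appendExchange(prev: Message[], userQuery: string, reply: string): Message[] {
  return [
    ...prev,
    { role: "user", content: userQuery },
    { role: "assistant", content: reply },
  ];
}

// Inside ChatComponent the button handler would do roughly:
//   const res = await fetch(`http://localhost:8000/chat?message=${encodeURIComponent(message)}`);
//   const data = await res.json();
//   setMessages((prev) => appendExchange(prev, message, data.message));
```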
Now, we will also render the ChatComponent from the page.tsx file.
Back in the ChatComponent.tsx file, we loop through the messages array and display each message.
Since we uploaded a PDF about dog training, we asked “Commands for dog” and got back a result.
We added some more styles so the chat app looks better.
You can find the code for this app here.