Hey dev, long time no see! I’m excited to share a new article about building an AI chatbot. We’ll use Ollama to fetch an AI model, set it up, and generate responses, then wrap it all in a React UI, just like your favorite ChatGPT.
This is a great starting point if you want to create your own bot or even deploy it for others to use.
If you’d like the source code, you can find it here.
So, let's get started!
Setting Up Ollama and Pulling the Model
First, we want to start by installing Ollama on our computer.
Ollama is a platform that makes it easy to get and interact with Large Language Models locally, without using cloud services.
You can download it from the official website and select the version specifically for your operating system, or you can use the terminal. I am going to install it using my terminal, and I will paste the following command in PowerShell as administrator:
$ irm https://ollama.com/install.ps1 | iex
To make sure it was successfully installed, run:
$ ollama --version
If you see the version number, then you are good to go and ready to pull a model. I am going to download the Llama 3 model. To get it, run the following command in the terminal:
$ ollama pull llama3
Once the model finishes downloading, we are ready to start our development.
Create API with Flask
Now we are going to set up our project. I am going to create a new folder named AIChatBot and create two folders inside it: one for the front end and one for the back end.
$ mkdir AIChatBot
$ cd AIChatBot
$ mkdir client
$ mkdir server
Let's start with our backend first. Move to the server folder and create a new file named server.py.
$ cd server
$ touch server.py
We also need to install the Flask framework for quick API prototyping. In addition, we will install flask-cors so our front end can connect to the server and send requests.
Finally, we need to install the ollama library, which allows us to interact with our model using Python.
Make sure you have Python installed before installing Flask!
$ python -m pip install flask flask-cors ollama
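If you want the install to be reproducible, you can also pin these dependencies in a requirements.txt file. The version numbers below are illustrative, not required:

```text
flask>=3.0
flask-cors>=4.0
ollama>=0.3
```

Then anyone can recreate the environment with python -m pip install -r requirements.txt.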
After installing all the dependencies, we can create our API and add the logic to call our Ollama model. Inside server.py, add the following code:
# server.py
from flask import Flask, request, jsonify
from flask_cors import CORS
import ollama

app = Flask(__name__)
CORS(app)  # applying CORS

# just a GET endpoint to make sure everything works
@app.route("/chat", methods=["GET"])
def get_chat():
    return jsonify({"reply": "Hi, I am AI ChatBot"})

@app.route("/chat", methods=["POST"])
def chat():
    data = request.json  # get our request from a client
    message = data["message"]  # get message property from the payload
    print("Received: " + message)
    response = ollama.chat(
        model="llama3",
        messages=[
            {"role": "user", "content": message}
        ]
    )
    reply = response["message"]["content"]
    return jsonify({"reply": reply})

# run our server
if __name__ == "__main__":
    app.run(port=8000, debug=True)
Let's walk through the code. First, we import all our dependencies and create a variable named app. This creates a new web application instance that will receive our web requests and define routes.
The CORS line acts as middleware so we can access our backend from the client.
Next, we define two routes for GET and POST requests. The first route simply returns "Hi, I am AI ChatBot" if we go to http://localhost:8000/chat in our browser.
The POST route accepts client data and extracts the message property from the request payload. It then creates a response generated by our Ollama model and sends it back to the client.
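Since the route body is mostly dictionary plumbing, you can sketch the payload handling in isolation, without Flask or a running model. In this sketch the fake_response dict mimics the shape that ollama.chat() returns:

```python
import json

def extract_message(raw_body: bytes) -> str:
    # Same idea as request.json["message"] in the Flask route
    return json.loads(raw_body)["message"]

def extract_reply(ollama_response: dict) -> str:
    # ollama.chat() returns {"message": {"role": ..., "content": ...}, ...}
    return ollama_response["message"]["content"]

# Simulate an incoming request body and an Ollama-style response:
body = json.dumps({"message": "Hello, bot!"}).encode()
print(extract_message(body))  # Hello, bot!

fake_response = {"message": {"role": "assistant", "content": "Hi there"}}
print(extract_reply(fake_response))  # Hi there
```

This is only a stand-in for testing the data flow; in the real route, the response comes from the model itself.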
Finally, we run our application and make it listen on port 8000.
Now, we are ready to create our client side.
Create UI and Call Our Backend
Set Up Our React Project
As usual, our first step is to create a new React application. For that, we will use Vite. Navigate to the client folder and run the following command:
$ npm create vite@latest
I used the React + TypeScript configuration, along with Tailwind for easier styling.
Additionally, we are going to use Material UI components, so we need to install those dependencies as well.
To do that, run the following command:
$ npm install @mui/icons-material @mui/material @emotion/styled @emotion/react
After we have everything installed, let's clean up App.tsx so that it doesn't contain anything except the empty fragment tags <></>. It should look like this:
/* App.tsx */
import "./App.css";

function App() {
  return (
    <>
    </>
  );
}

export default App;
Set Variables and Call API
Now, we are going to create two React hooks: one to capture the user's query (basically, the question for our chat) and another to store the chat history. This way, we can scroll up and down to see previous messages, just like in a real chat.
We also want to add a section for the initial screen, shown when no messages have been sent yet, with an icon, a title, and an input field.
/* App.tsx */
// Importing icons and components
import AutoAwesomeIcon from "@mui/icons-material/AutoAwesome";
import { useState } from "react";
import { TextField, Button } from "@mui/material";
import SendIcon from "@mui/icons-material/Send";
import "./App.css";

function App() {
  const [chatHistory, setChatHistory] = useState<string[]>([]);
  const [message, setMessage] = useState("");

  return (
    <>
      <section className="my-10 main">
        <AutoAwesomeIcon fontSize="large" color="warning" />
        <h1 className="mb-10">Ask me anything</h1>
      </section>
      <section className="flex justify-center gap-10">
        <TextField
          id="standard-basic"
          label="Type your question here"
          variant="standard"
          className="my-10 w-2xl"
          onChange={(e) => {
            setMessage(e.target.value);
          }}
          value={message}
        />
        <Button
          variant="contained"
          className="w-42.5"
          color="primary"
          endIcon={<SendIcon />}
        >
          Ask
        </Button>
      </section>
    </>
  );
}

export default App;
If we run our app with npm run dev, it should look like this:
We can see our TextField component and Button from the Material UI library. We also applied an onChange event listener to our input, so our message variable will be updated with the user's input.
Let's go ahead and create a new function that will be responsible for sending the message to our service, and call it sendMessage. Additionally, I want to display an error message if the input is too short or empty.
import AutoAwesomeIcon from "@mui/icons-material/AutoAwesome";
import SendIcon from "@mui/icons-material/Send";
import "./App.css";
import { TextField, Button } from "@mui/material";
import { useState } from "react";

function App() {
  const [chatHistory, setChatHistory] = useState<string[]>([]);
  const [message, setMessage] = useState("");
  const [error, setError] = useState("");

  /* Function that sends our input to the backend */
  const sendMessage = async (e: React.MouseEvent<HTMLButtonElement>) => {
    e.preventDefault();
    // Check if it's an empty string
    if (message && message.trim().length > 0) {
      // Add it to our chat history array
      setChatHistory((prev) => [...prev, message]);
      // Send to our API
      const userMessage = {
        message: message,
      };
      const response = await fetch("http://localhost:8000/chat", {
        headers: {
          "Content-Type": "application/json",
        },
        body: JSON.stringify(userMessage),
        method: "POST",
      });
      // Reset the message variable back to an empty string
      setMessage("");
      const data = await response.json();
      // After we get a reply, add it to our chat array
      setChatHistory((prev) => [...prev, data.reply]);
    } else {
      setError("You have not typed anything...");
    }
  };

  return (
    <>
      <section className="my-10 main">
        <AutoAwesomeIcon fontSize="large" color="warning" />
        <h1 className="mb-10">Ask me anything</h1>
      </section>
      <section className="flex justify-center gap-10">
        <TextField
          id="standard-basic"
          label="Type your question here"
          variant="standard"
          className="my-10 w-2xl"
          onChange={(e) => {
            setMessage(e.target.value);
          }}
          value={message}
        />
        <Button
          variant="contained"
          className="w-42.5"
          color="primary"
          endIcon={<SendIcon />}
          onClick={sendMessage}
        >
          Ask
        </Button>
      </section>
    </>
  );
}

export default App;
Our sendMessage function is asynchronous. It first grabs the message variable and performs a short validation to ensure we don't send an empty string. Then, we add the user's message to our chat history.
Next, we construct an object that will be converted to JSON and sent to our backend. After that, we reset the message variable to an empty string to clear the input. When we receive a reply from the server, we add it to our chat as well. If the input is invalid, we display an error message.
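The round trip itself is language-agnostic, so here is the same validate-post-append flow as a minimal Python sketch. The fake_post function is a hypothetical stand-in for fetch() and the Flask endpoint, used only so the flow can be run without a server:

```python
import json

def send_message(message: str, history: list, post) -> list:
    """Mirror of the React sendMessage flow: validate the input, append the
    user's message, POST the JSON payload, then append the server's reply."""
    if not message or not message.strip():
        raise ValueError("You have not typed anything...")
    history = history + [message]                       # add user message
    reply_json = post(json.dumps({"message": message})) # stand-in for fetch()
    return history + [json.loads(reply_json)["reply"]]  # add server reply

# A fake endpoint that echoes the message, standing in for the Flask server:
def fake_post(body: str) -> str:
    return json.dumps({"reply": "Echo: " + json.loads(body)["message"]})

print(send_message("Hi", [], fake_post))  # ['Hi', 'Echo: Hi']
```

Swapping fake_post for a real HTTP call gives you exactly the behavior of the React handler: history always alternates user message, bot reply.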
Display the Chat
Now we can update our return and display our chat info. So, our App.tsx will look like this:
/* Other code is the same */
  return (
    <>
      <header>
        <div className="text-left m-5 font-bold">AI ChatBot</div>
      </header>
      {chatHistory.length > 0 ? (
        <section className="chatHistorySection">
          {chatHistory.map((message, index) => (
            <p key={index} className={(index + 1) % 2 === 0 ? "response" : "query"}>
              {message}
            </p>
          ))}
        </section>
      ) : (
        <section className="my-10 main">
          <AutoAwesomeIcon fontSize="large" color="warning" />
          <h1 className="mb-10">Ask me anything</h1>
        </section>
      )}
      <section className="flex justify-center gap-10">
        <TextField
          id="standard-basic"
          label="Type your question here"
          variant="standard"
          className="my-10 w-2xl"
          onChange={(e) => {
            setError("");
            setMessage(e.target.value);
          }}
          value={error ? error : message}
        />
        <Button
          variant="contained"
          className="w-42.5"
          color="primary"
          endIcon={<SendIcon />}
          onClick={sendMessage}
        >
          Ask
        </Button>
      </section>
    </>
  );
}
Here, we added a header and created conditional rendering.
If our chatHistory array is empty (meaning the user hasn't sent a message and the chat hasn't replied), we render the greeting screen. Otherwise, we display the chat.
We also want to style user messages and chat messages separately, like in iMessage. To do this, we assign each message a class based on its index: since the user always sends first and messages strictly alternate, user messages land at even indices (0, 2, 4, …) and get the class query, while bot replies land at odd indices and get the class response.
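The class assignment is easy to verify in isolation. Here is the same index check as a small Python sketch (indices are 0-based, matching the .map() call):

```python
def css_class(index: int) -> str:
    # (index + 1) % 2 == 0 is true for odd indices, so bot replies get
    # "response", while user messages at even indices get "query".
    return "response" if (index + 1) % 2 == 0 else "query"

history = ["What is Flask?", "A micro web framework.", "Thanks!"]
print([css_class(i) for i in range(len(history))])
# ['query', 'response', 'query']
```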
With this, we can update our styles so that App.css will look like this:
.main {
  margin-top: 10em;
}

.main-btn {
  background-color: blueviolet;
}

.chatHistorySection {
  height: 750px;
  overflow-y: scroll;
  overflow-x: hidden;
  scrollbar-width: none;
  width: 1000px;
  margin: 0 auto;
}

.query {
  background-color: rgba(47, 47, 241, 0.819);
  color: white;
  padding: 20px;
  margin: 1em;
  width: max-content;
  max-width: 350px;
  border-radius: 30px;
  text-align: left;
  margin-left: 650px;
}

.response {
  background-color: rgba(220, 220, 220, 0.325);
  color: rgb(47, 47, 47);
  width: 50%;
  padding: 20px;
  margin: 1em;
  border-radius: 30px;
  text-align: left;
  margin-right: 800px;
}
Let's run our app, and our chat history result should look like this:
Congratulations, you made it through 😎✨
Thank you so much for reading!
Before I let you go, I want to share my YouTube channel where I post video tutorials, as well as a Telegram channel where I’m building a community to learn programming together.
I hope to see you soon and for now, Happy coding!🚀