This project is a local AI chatbot built with Retrieval-Augmented Generation (RAG) using Ollama.
It allows you to ask questions and get answers based on your own data (for example, your CV stored in cv.json).
Everything runs entirely on your computer; no external API calls are required.
✨ Key Features
- 100% local AI using Ollama
- Retrieval-Augmented Generation (RAG) from your own JSON data
- Terminal-based chatbot
- Web UI using Streamlit
- Easy setup and fully customizable
How It Works
Here's the step-by-step workflow of the chatbot:
1. Prepare Your Data
Your knowledge base is stored in a JSON file, cv.json.
This can contain your CV, notes, FAQs, or any text you want the AI to reference. The chatbot will use this data to answer your questions.
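For example, cv.json might be structured like this (the exact schema is up to you; the field names below are just an illustration, not the repo's required format):

```json
{
  "entries": [
    {
      "section": "Experience",
      "text": "Software engineer at Acme Corp (2020-2024), built data pipelines in Python."
    },
    {
      "section": "Skills",
      "text": "Python, SQL, machine learning, Docker."
    }
  ]
}
```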
2. Vectorize the Data
Computers can't understand plain text directly, so we convert your data into vector embeddings. These embeddings are numeric representations of text that capture its meaning. When you ask a question, the AI can search these vectors to find the most relevant information.
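As a rough illustration, here is how a single piece of text could be embedded with Ollama. This is a minimal sketch, assuming the ollama Python package and a pulled embedding model (the model name nomic-embed-text is an example, not necessarily what this repo uses):

```python
# Minimal embedding sketch; the model name is an example, not the repo's exact choice.
import ollama

response = ollama.embeddings(
    model="nomic-embed-text",
    prompt="Python developer with 5 years of experience.",
)
vector = response["embedding"]   # a list of floats capturing the text's meaning
print(len(vector), vector[:5])   # vector dimensionality and the first few values
```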
3. Ollama AI
Ollama is a local AI framework that runs large language models on your computer. It receives your question and the relevant context from the vector search and generates a natural, grounded response.
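In code, that step could look roughly like this (a sketch assuming the ollama Python package and an example chat model, llama3; the repo's actual model and prompt may differ):

```python
# Sketch: send the question plus retrieved context to a local model.
import ollama

question = "Where did I work in 2021?"
context = "Software engineer at Acme Corp (2020-2024)."  # from the vector search

response = ollama.chat(
    model="llama3",  # example model; use whichever chat model you have pulled
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response["message"]["content"])
```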
4. Terminal Chatbot
You can chat with the AI directly in your terminal using main.py:
python main.py
Steps behind the scenes (see the sketch after this list):
- Your question is converted into a vector.
- The chatbot searches the knowledge base vectors for relevant context.
- Ollama receives the question + context and generates an answer.
- The answer is displayed in your terminal.
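Putting those four steps together, a stripped-down version of the loop could look like this (a sketch, not the repo's exact main.py; the model names and the in-memory NumPy search are illustrative):

```python
# Simplified terminal RAG loop: embed, search, generate, print.
import ollama
import numpy as np

def embed(text: str) -> np.ndarray:
    return np.array(ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"])

# In practice these would be loaded from cv.json / prebuilt vectors.
docs = ["Software engineer at Acme Corp (2020-2024).", "Skills: Python, SQL, Docker."]
doc_vecs = [embed(d) for d in docs]

while True:
    question = input("You: ")
    if question.lower() in {"exit", "quit"}:
        break
    q_vec = embed(question)                                  # 1. question -> vector
    scores = [np.dot(q_vec, v) / (np.linalg.norm(q_vec) * np.linalg.norm(v))
              for v in doc_vecs]                             # 2. cosine search
    context = docs[int(np.argmax(scores))]
    answer = ollama.chat(                                    # 3. generate with context
        model="llama3",
        messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}],
    )["message"]["content"]
    print(f"AI: {answer}")                                   # 4. display the answer
```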
5. Streamlit Web Interface
For a graphical interface, run the web app with app.py:
streamlit run app.py
This opens a web interface where you can type questions and receive answers in a chat-like environment. The RAG logic works the same way as the terminal chatbot.
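As an illustration, the core of such a Streamlit app could look like this (a minimal sketch, not the actual app.py; the answer_question helper is a simplified stand-in for the retrieval-plus-generation logic shown above):

```python
# Minimal Streamlit chat sketch; answer_question() is a simplified stand-in.
import streamlit as st
import ollama

def answer_question(question: str) -> str:
    # In the real app this would first retrieve context from the vector store.
    response = ollama.chat(model="llama3",
                           messages=[{"role": "user", "content": question}])
    return response["message"]["content"]

st.title("Local RAG Chatbot")

if "messages" not in st.session_state:
    st.session_state.messages = []   # chat history survives Streamlit reruns

for msg in st.session_state.messages:
    st.chat_message(msg["role"]).write(msg["content"])

if question := st.chat_input("Ask a question about your data"):
    st.session_state.messages.append({"role": "user", "content": question})
    st.chat_message("user").write(question)
    answer = answer_question(question)
    st.session_state.messages.append({"role": "assistant", "content": answer})
    st.chat_message("assistant").write(answer)
```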
6. Customize Your Data
Replace cv.json with your own documents, notes, or FAQs.
After updating, rebuild the vectors to ensure the AI uses the new data:
python vector.py
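A rebuild script along these lines would read the JSON, embed each entry, and persist the vectors (a sketch of the idea only; the real vector.py may use a different schema or a vector database, and the output file names here are hypothetical):

```python
# Sketch of a rebuild step: JSON in, embeddings out. File names are hypothetical.
import json
import numpy as np
import ollama

with open("cv.json", encoding="utf-8") as f:
    data = json.load(f)

# Assumes entries with a "text" field, as in the earlier example schema.
texts = [entry["text"] for entry in data["entries"]]
vectors = np.array([
    ollama.embeddings(model="nomic-embed-text", prompt=t)["embedding"]
    for t in texts
])

np.save("vectors.npy", vectors)  # vectors, row-aligned with texts
with open("texts.json", "w", encoding="utf-8") as f:
    json.dump(texts, f)
print(f"Embedded {len(texts)} entries.")
```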
Project Structure
- main.py: Terminal chatbot
- vector.py: Converts JSON data into vector embeddings
- app.py: Streamlit web application
- cv.json: Knowledge base (your data)
- requirements.txt: Python dependencies
Setup Requirements
- Python 3.9 or higher
- Ollama installed locally (https://ollama.com)
Getting Started
Clone the repository and install dependencies:
git clone https://github.com/rustamdurdyyev/LocalRAG-Chatbot
cd LocalRAG-Chatbot
pip install -r requirements.txt
Why This Project is Exciting
This project demonstrates how AI can run entirely locally, preserve privacy, and answer questions from your own knowledge base. By combining vector search with Ollama's language generation, it stays fast, flexible, and fully under your control.
✨ Now you can have your own AI assistant that knows your data and runs on your machine!