Lightweight Retrieval‑Augmented Generation (RAG) demo with Streamlit UI. Ingest PDF/TXT docs, chunk & embed with Google GenAI or other providers, store in Pinecone, and query via chat. Optional MongoDB logging. Simple backend + front‑end for fast document search, retrieval, and generative answers.
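The ingestion step this kind of demo performs — splitting documents into overlapping chunks before embedding — can be sketched in pure Python. This is an illustrative sketch, not the repo's actual code; the `chunk_text` helper, chunk size, and overlap values are assumptions:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # advance by this much each window
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break  # last window already reached the end of the text
    return chunks

doc = "word " * 100  # 500 characters of toy content
chunks = chunk_text(doc, chunk_size=200, overlap=50)
```

Each chunk would then be embedded (e.g. via Google GenAI) and upserted into Pinecone; the overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk.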
Multilingual campus chatbot that grounds Gemini responses in uploaded PDFs, built with React, FastAPI, Express, MongoDB, LangChain, ChromaDB, and Ollama.
Local AI question-answering bot built with Ollama and LangChain using a RAG (Retrieval-Augmented Generation) pipeline. Answers queries from local PDF/text documents with fast, private inference.
Fully local Retrieval-Augmented Generation (RAG) chatbot powered by FAISS vector search and Ollama LLMs. Supports PDF, TXT, and Markdown ingestion, fast similarity search, model switching, and intelligent document-aware Q&A. Everything runs locally with zero external API calls, wrapped in a clean Streamlit interface.
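The retrieval core these local RAG chatbots share — embed the query, rank stored chunks by cosine similarity, pass the top k to the LLM — can be sketched with a toy bag-of-words embedding (a real app would use FAISS and an embedding model served by Ollama; `embed`, `cosine`, and `top_k` here are illustrative stand-ins):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; stands in for a dense vector model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "FAISS provides fast similarity search over dense vectors",
    "Ollama runs large language models locally",
    "Streamlit builds simple data apps in Python",
]
print(top_k("local language models", docs, k=1))
```

In the actual projects, FAISS replaces the linear `sorted` scan with an approximate-nearest-neighbor index, and the retrieved chunks are stuffed into the LLM prompt to ground the generated answer.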