Interactive chat application that allows users to ask questions about corporate documents and receive precise, source-based answers. All AI processing runs locally.
Technical Highlights
- Local LLM (Ollama): all inference runs against a locally hosted model, with no external API calls, giving full data sovereignty and GDPR compliance (streaming call sketched after this list)
- RAG: documents are indexed as embeddings and the most relevant passages are retrieved as context for the LLM (retrieval step sketched below)
- Blazor WASM: reactive chat interface with real-time token streaming
- Fluxor: state management for chat history and UI state (reducer sketched below)
- MediatR: pipeline for document upload, embedding generation, and chat queries (handler sketched below)
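
A minimal sketch of the streaming call behind the Ollama and Blazor bullets, assuming Ollama's default `/api/chat` endpoint on `localhost:11434`. The class and model name are placeholders, not the project's actual code:

```csharp
using System.Net.Http;
using System.Runtime.CompilerServices;
using System.Text;
using System.Text.Json;

// Streams token chunks from a locally hosted Ollama model (no external calls).
public sealed class OllamaChatClient(HttpClient http)
{
    public async IAsyncEnumerable<string> StreamAsync(
        string prompt, [EnumeratorCancellation] CancellationToken ct = default)
    {
        var payload = JsonSerializer.Serialize(new
        {
            model = "llama3", // placeholder: any locally pulled model
            messages = new[] { new { role = "user", content = prompt } },
            stream = true
        });

        using var request = new HttpRequestMessage(
            HttpMethod.Post, "http://localhost:11434/api/chat")
        {
            Content = new StringContent(payload, Encoding.UTF8, "application/json")
        };
        // In Blazor WASM, also call request.SetBrowserResponseStreamingEnabled(true)
        // so the browser fetch does not buffer the whole response.

        using var response = await http.SendAsync(
            request, HttpCompletionOption.ResponseHeadersRead, ct);
        response.EnsureSuccessStatusCode();

        using var reader = new StreamReader(await response.Content.ReadAsStreamAsync(ct));

        // Ollama emits one JSON object per line, each carrying a token chunk.
        while (await reader.ReadLineAsync(ct) is { } line)
        {
            using var doc = JsonDocument.Parse(line);
            if (doc.RootElement.TryGetProperty("message", out var msg))
                yield return msg.GetProperty("content").GetString() ?? "";
            if (doc.RootElement.TryGetProperty("done", out var done) && done.GetBoolean())
                yield break;
        }
    }
}
```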
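The retrieval side of the RAG bullet could look like the following sketch, assuming document chunks were embedded at upload time (e.g. via Ollama's `/api/embeddings`) and held in an in-memory index; `Chunk` and `Retriever` are illustrative names, and a vector database would replace the linear scan at scale:

```csharp
using System.Linq;

// Illustrative chunk record: source file, raw text, and its embedding vector.
public record Chunk(string SourceFile, string Text, float[] Embedding);

public static class Retriever
{
    // Rank all chunks by cosine similarity to the query embedding, keep the top k.
    public static IReadOnlyList<Chunk> TopK(
        float[] query, IReadOnlyList<Chunk> index, int k = 4) =>
        index.OrderByDescending(c => Cosine(query, c.Embedding)).Take(k).ToList();

    // Cosine similarity: dot(a, b) / (|a| * |b|).
    static double Cosine(float[] a, float[] b)
    {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.Sqrt(na * nb) + 1e-10);
    }
}
```

Keeping `SourceFile` on each chunk is what enables the source-based answers: the chat handler can cite which document a retrieved passage came from.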
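For the Fluxor bullet, the chat state might be modeled roughly as below, using Fluxor's `[FeatureState]` and `[ReducerMethod]` attributes; the state shape and action names are assumptions, not the project's real ones:

```csharp
using System.Linq;
using Fluxor;

// Immutable chat state; Fluxor needs a parameterless constructor for [FeatureState].
[FeatureState]
public record ChatState(IReadOnlyList<string> Messages, bool IsStreaming)
{
    private ChatState() : this(Array.Empty<string>(), false) { }
}

// Dispatched for each token chunk arriving from the streaming LLM response.
public record ChunkReceivedAction(string Chunk);

public static class ChatReducers
{
    // Pure reducer: returns a new state instance, which re-renders any
    // component subscribed via IState<ChatState>.
    [ReducerMethod]
    public static ChatState OnChunkReceived(ChatState state, ChunkReceivedAction action)
    {
        // Simplified: append the chunk to the message currently being streamed.
        var msgs = state.Messages.ToList();
        if (msgs.Count == 0) msgs.Add("");
        msgs[^1] += action.Chunk;
        return state with { Messages = msgs, IsStreaming = true };
    }
}
```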
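And a sketch of the MediatR chat-query step, composing the two sketches above; the request and handler names are illustrative, and the injected `embed` delegate stands in for an embedding client (e.g. a thin wrapper over Ollama's `/api/embeddings`, not shown):

```csharp
using System.Linq;
using System.Text;
using MediatR;

public record AskQuestionQuery(string Question) : IRequest<string>;

public sealed class AskQuestionHandler(
    OllamaChatClient llm,
    Func<string, Task<float[]>> embed,   // hypothetical embedding call
    IReadOnlyList<Chunk> index)
    : IRequestHandler<AskQuestionQuery, string>
{
    public async Task<string> Handle(AskQuestionQuery request, CancellationToken ct)
    {
        // 1. Embed the question and retrieve the most relevant chunks.
        var query = await embed(request.Question);
        var hits = Retriever.TopK(query, index);

        // 2. Build a grounded prompt that cites each chunk's source file.
        var context = string.Join("\n---\n",
            hits.Select(c => $"[{c.SourceFile}] {c.Text}"));
        var prompt =
            $"Answer using only these excerpts:\n{context}\n\nQuestion: {request.Question}";

        // 3. Stream the local LLM's answer and collect it for the caller.
        var answer = new StringBuilder();
        await foreach (var chunk in llm.StreamAsync(prompt, ct))
            answer.Append(chunk);
        return answer.ToString();
    }
}
```

In the live UI the component would dispatch each streamed chunk as a `ChunkReceivedAction` rather than collecting the full string; the collected version keeps the sketch short.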
Tech Stack
Blazor WASM · Ollama · Fluxor · MediatR · LLM · RAG
What this project demonstrates
Practical experience with LLMs, RAG architectures, vector databases, and privacy-compliant AI implementation.