currently breaking things — intentionally
Backend · Applied AI · Node.js · TypeScript · Python
SDE-3 focused on the backend–AI intersection. I build things like RAG pipelines, chatbot systems, hybrid decision engines, and API infrastructure — then I take them apart to understand why they work (or don't).
I care about applied AI, not theoretical AI. If it doesn't run on real data with real edge cases, I'm not that interested. I also teach as I build — workshops, demos, writing — because explaining something badly is usually a sign you don't understand it yet.
Over 5 years in. Still wrong about things often enough to keep it interesting.
- RAG systems that don't hallucinate on your specific domain data
- Hybrid architectures — decision trees + LLMs, where determinism matters
- API infrastructure experiments: Kong plugins, mock Direct Line, gateway patterns
- OCR pipelines that handle real document messiness (Tesseract + fuzzy search via Fuse.js)
- Teaching by building — live demos where the code is written in front of people, bugs and all
RAG Pipeline Visualizer
A visual tool to understand how RAG pipelines actually behave — chunking, embeddings, retrieval, and response generation — exposing where things break in real scenarios.
Python LLM RAG Visualization
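The stages the visualizer walks through can be sketched in a few lines. This is a deliberately toy version so it runs offline: the "embedding" is a bag-of-words counter standing in for a real embedding model, and the chunker is naive fixed-size splitting (real pipelines use overlap and semantic boundaries):

```python
import math
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Naive fixed-size chunking by word count."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline calls an
    embedding model here."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query, return the top-k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

Most of the interesting failure modes live in exactly these seams: chunk boundaries that split an answer in half, embeddings that rank the wrong chunk first, a `k` that's too small for the question.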
NLP Expense Tracker (Telegram AI)
Telegram-based AI expense tracker that converts natural language inputs into structured financial data using LLM parsing and automation workflows.
Node.js LLM Telegram API Automation
Hybrid Chatbot — Decision Tree + AI
A chatbot that routes structured, predictable queries through deterministic logic and falls back to an LLM only when needed. Cheaper, faster, and more auditable than pure LLM for constrained domains.
Node.js TypeScript Azure Bot Service Direct Line
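The routing idea fits in one function: deterministic rules answer what they can, and the LLM only gets called (and billed) when nothing matches. A sketch in Python with the rules and replies invented for illustration, and the LLM stubbed so it runs offline:

```python
import re
from typing import Callable

# Deterministic intents: cheap, fast, auditable. Patterns are illustrative.
RULES: list[tuple[re.Pattern, str]] = [
    (re.compile(r"\b(hours|open|close)\b", re.I), "We're open 9am-6pm, Mon-Fri."),
    (re.compile(r"\b(refund|return)\b", re.I), "Refunds are processed within 5 business days."),
]

def route(message: str, llm_fallback: Callable[[str], str]) -> str:
    """Try deterministic rules first; only pay for an LLM call
    when no rule matches."""
    for pattern, reply in RULES:
        if pattern.search(message):
            return reply
    return llm_fallback(message)

# Stubbed LLM so the sketch is self-contained.
print(route("What are your hours?", llm_fallback=lambda m: f"LLM answer for: {m}"))
```

Every rule hit is a logged, reviewable decision, which is what "more auditable" means in practice: you can diff the rule table, but you can't diff a model's mood.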
Achievement Journal
A minimal system to log and reflect on achievements — designed to combat recency bias and track real progress over time.
Node.js Backend Personal Systems
AI Demystified Starter Kit
A hands-on starter kit for understanding applied AI systems — focused on building intuition through working examples instead of theory-heavy explanations.
Python LLM Learning
- Build first, clean up second. A working prototype beats a clean plan that never ships.
- Break things on purpose. If I don't know where the system fails, I don't know what I built.
- Prefer boring solutions. The clever approach is usually the one you regret maintaining.
- Teach through demos, not slides. Writing code in front of people is the fastest way to find out what you actually understand.
- Comfort with "I don't know" is a feature, not a bug. The interesting work lives past that line.
The gap between demo and production is where the actual engineering is.
I write at whoisnp.me — it's called "brain overflow buffer" for a reason. Mostly experiments, breakdowns of systems I'm working on, and things I figured out the hard way.
- Live workshop demos — code written in the room, no pre-baked scripts
- Learning in public: what I tried, what broke, what I'd do differently
- Occasional deep dives into backend + AI system patterns
No pitches. If you're building something in the AI + backend space, or want to talk systems — reach out.