A drill of scientific methods, processes, algorithms, and systems to build stories & models. An in-depth learning resource for humans.
This is a drill for people who aim to be in the top 1% of Data and AI experts.
You can do the drill through video sessions or text content.
I recommend the video sessions, with the text content as go-to notes.
You fall into one of the following categories:
- you work in a leadership position
- you work as a professional
- you are a student
No matter which position you currently hold, you need to put in the same amount of effort to reach the top 1%.
Spoiler alert: there are NO shortcuts in the tech field.
This is for all humans who want to improve in the field and are courageous enough to take action.
You will find every topic explained here, along with whatever is needed to understand it completely.
The drill is all action-oriented.
To be the authority/best in the AI field, I created a routine that includes:
4 hours of deep work sessions every day
Deep work session rules:
no phone/notifications
no talking to anyone
coffee/chai allowed
2 hours of shallow work sessions every day
Shallow work session rules:
phone allowed
talking allowed
include sharing your work online
You can customize the learning sessions according to your time availability.
The Path
Prep
Foundations of AI Engineering
Mastering Large Language Models (LLMs)
Retrieval-Augmented Generation (RAG)
Fine-Tuning LLMs
Reinforcement Learning and Ethical AI
Agentic Workflows
Career Acceleration
Bonus
Module 1: Foundations of AI Engineering (Weeks 1-2)
Week 1: Python & Core Software Engineering
Live Sessions
Tools/Frameworks
Assignment / Hands-On
Session 1 (Wed): Python for AI
Deep dive into Python constructs essential for building complex applications.
- Functions & Higher-Order Functions: mastering map, filter, and lambda expressions.
- Object-Oriented Programming (OOP): understanding classes, objects, inheritance, and encapsulation.
- Modules, Packages & Libraries: structuring code for reusability.
Python, VS Code
[Hands-On] Implement classes for a simple inventory system, use higher-order functions (map, filter), and organize code into modules.
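As a minimal sketch of what this hands-on could look like (the `InventoryItem` and `Inventory` names are illustrative, not prescribed by the course), combining classes with the map/filter higher-order functions:

```python
# A tiny inventory system: classes for structure, map/filter for queries.

class InventoryItem:
    """One product in stock."""
    def __init__(self, name, price, quantity):
        self.name = name
        self.price = price
        self.quantity = quantity

    def total_value(self):
        return self.price * self.quantity


class Inventory:
    """A collection of InventoryItem objects."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def in_stock(self):
        # filter: keep only items with quantity > 0
        return list(filter(lambda i: i.quantity > 0, self.items))

    def names(self):
        # map: project each item to its name
        return list(map(lambda i: i.name, self.items))


inv = Inventory()
inv.add(InventoryItem("keyboard", 50.0, 4))
inv.add(InventoryItem("mouse", 20.0, 0))
print(inv.names())                       # ['keyboard', 'mouse']
print([i.name for i in inv.in_stock()])  # ['keyboard']
```

In a real assignment, the classes would move into their own module and be imported where needed.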
Session 2 (Thu): SE Toolkit & Data
Learn the industry-standard tools for version control, data querying, and testing.
- Version Control: using Git and GitHub for collaborative development.
- SQL for AI Engineers: querying and managing data in relational databases.
- Testing with Pytest: writing automated tests to ensure code quality.
- Data Manipulation: introduction to NumPy and Pandas for data handling.
Git, GitHub, SQL (SQLite), Pytest, NumPy, Pandas
[Hands-On] Create a GitHub repo, practice the git workflow, write basic SQL queries, and perform data cleaning on a CSV with Pandas.
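A self-contained sketch of the SQL portion, using Python's built-in sqlite3 module so it runs without any external database (the `users` table and its rows are made up for illustration):

```python
import sqlite3

# An in-memory SQLite database: create, insert, and query.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
cur.executemany(
    "INSERT INTO users (name, age) VALUES (?, ?)",
    [("Asha", 29), ("Ravi", 35), ("Meera", 22)],
)
conn.commit()

# A typical query: filter with WHERE, then sort with ORDER BY.
cur.execute("SELECT name FROM users WHERE age > 25 ORDER BY age DESC")
rows = [r[0] for r in cur.fetchall()]
print(rows)  # ['Ravi', 'Asha']
conn.close()
```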
Office Hours (30m) Friday Q&A on Python concepts, Git issues, and setting up the development environment.
-
Get personalized help with your setup and assignments.
Extra Session (60m) Sunday Guest Session: "A Day in the Life of a Senior AI Engineer" - Understanding the role and responsibilities.
-
Discussion on the technology stack of a modern AI Engineer.
Week 2: Building & Containerizing Applications
Live Sessions
Tools/Frameworks
Assignment / Hands-On
Session 3 (Wed): API Development
Learn to build and serve models and business logic via a robust API.
- API Fundamentals: understanding REST principles for building web services.
- FastAPI for Production: building high-performance APIs with Python.
- Containerization with Docker: packaging an application into a portable Docker container.
Python, FastAPI, Docker
[Hands-On] Build a simple API endpoint with FastAPI and write a Dockerfile to containerize the application.
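A hedged sketch of the Dockerfile side of this hands-on, assuming the FastAPI app lives in a `main.py` exposing an `app` object and dependencies are listed in `requirements.txt` (both file names are illustrative):

```dockerfile
# Containerize a FastAPI app served by uvicorn.
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and run the server.
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Build and run with `docker build -t my-api .` followed by `docker run -p 8000:8000 my-api`.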
Session 4 (Thu): Container Orchestration & AI Landscape
Manage multi-service applications and understand the ecosystem you'll be working in.
- Multi-Service Applications: using Docker Compose to run applications with multiple containers.
- The AI Ecosystem: exploring Hugging Face, major cloud providers, and open-source models.
- AI-as-a-Service: interacting with commercial APIs from OpenAI and Anthropic.
Docker Compose, Hugging Face, OpenAI API
[Hands-On] Use Docker Compose to run your FastAPI app. Explore the Hugging Face Hub and make your first call to the OpenAI API.
Office Hours (30m) Friday Q&A on FastAPI, Docker, Docker Compose, and Project Lab 01.
-
Get help debugging containerization issues and API logic.
Extra Session (60m) Sunday Vibe Coding Session: Live coding a simple application from scratch and containerizing it, step-by-step.
-
Follow along, ask questions in real-time, and build alongside the instructor.
Module 2: Machine Learning & MLOps (Weeks 3-4)
Week 3: Math & Classical Machine Learning
Live Sessions
Tools/Frameworks
Assignment / Hands-On
Session 5 (Wed): Math for ML
An intuitive, code-first approach to the math that powers machine learning.
- Linear Algebra: working with vectors, matrices, and tensors using NumPy.
- Calculus: visually understanding Gradient Descent.
- Statistics & Probability: reviewing key concepts for data analysis.
Python, NumPy, Matplotlib
[Hands-On] Manipulate vectors & matrices with NumPy. Visually plot and implement Gradient Descent from scratch.
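A dependency-free sketch of the gradient descent part, minimizing the toy function f(x) = (x - 3)², whose gradient is f'(x) = 2(x - 3) and whose minimum sits at x = 3 (the function, learning rate, and step count are all arbitrary choices for illustration):

```python
# Gradient descent from scratch on f(x) = (x - 3)^2.

def grad(x):
    """Analytic gradient of f(x) = (x - 3)^2."""
    return 2.0 * (x - 3.0)

def gradient_descent(x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step against the gradient
    return x

x_min = gradient_descent(x0=0.0)
print(round(x_min, 4))  # converges to ~3.0
```

Each step shrinks the distance to the minimum by a constant factor (1 - 2·lr), which is why convergence is fast for this convex function.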
Session 6 (Thu): Building ML Pipelines
From raw data to a trained model: learn the end-to-end workflow for classical ML.
- ML Problem Framing: classification, regression, and clustering.
- Feature Engineering & Selection: transforming raw data into useful features.
- ML Pipelines: using Scikit-learn to chain preprocessing and modeling steps.
- Evaluation Metrics: interpreting Precision, Recall, F1-Score, ROC-AUC, MAE, MSE, and R².
Scikit-learn, Pandas
[Hands-On] Build a complete pipeline including preprocessing, scaling, and training a classification model. Evaluate it using Precision, Recall, and F1-score.
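To make the evaluation metrics concrete, here is a small sketch that computes Precision, Recall, and F1 directly from label lists; it mirrors what scikit-learn's metric functions report, but uses no libraries (the example labels are invented):

```python
# Precision, Recall, and F1 from first principles.

def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# 2 true positives, 1 false positive, 1 false negative:
p, r, f1 = precision_recall_f1([1, 0, 1, 1, 0], [1, 1, 1, 0, 0])
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.667 0.667 0.667
```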
Office Hours (30m) Friday Q&A on Gradient Descent, Scikit-learn pipelines, and feature engineering.
-
Discuss model selection choices and metric interpretation.
Extra Session (60m) Sunday Career Session: Building Your Personal Brand & Optimizing Your GitHub Profile for Recruiters.
-
Get live feedback and peer reviews on GitHub profiles.
Week 4: Deep Learning & MLOps Automation
Live Sessions
Tools/Frameworks
Assignment / Hands-On
Session 7 (Wed): Deep Learning with PyTorch
Understand the building blocks of neural networks and train your first model from scratch.
- Neural Network Basics: neurons, layers, and activation functions.
- PyTorch Fundamentals: tensors, Autograd, and nn.Module.
- The Training Loop: forward pass, loss calculation, backward pass, and optimization.
PyTorch
[Hands-On] Build and train a neural network for image classification on the MNIST or CIFAR-10 dataset.
Session 8 (Thu): MLOps Fundamentals
Version your data and automate your model training pipelines for reproducibility.
- The MLOps Lifecycle: why MLOps is more than DevOps for ML.
- Data Version Control (DVC): tracking datasets and models with Git-like semantics.
- Continuous Integration for ML (CI/ML): using GitHub Actions to automate model testing.
DVC (Data Version Control), GitHub Actions
[Hands-On] Use DVC to track a dataset. Set up a GitHub Actions workflow to automatically retrain and test your model on every push to your repo.
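A hedged sketch of what the GitHub Actions side could look like; `train.py`, `tests/`, and `requirements.txt` are placeholders for your repo's actual files:

```yaml
# .github/workflows/train.yml -- retrain and test on every push.
name: retrain-and-test
on: [push]
jobs:
  train:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: python train.py   # retrain the model
      - run: pytest tests/     # run the automated tests
```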
Office Hours (30m) Friday Q&A on PyTorch training loops, DVC setup, and GitHub Actions syntax.
-
Get help debugging your MLOps pipelines and neural network code.
Extra Session (60m) Sunday Demo Day: Students present Project Lab 01 (Resume Analyzer API) to the cohort.
-
Showcase your project, articulate your design choices, and receive constructive feedback.
Module 3: Mastering Large Language Models (LLMs) (Weeks 5-6)
Week 5: LLM Foundations & Prompt Engineering
Live Sessions
Tools/Frameworks
Assignment / Hands-On
Session 9 (Wed): LLM Architecture & Ecosystem
Understand what makes LLMs tick and how to access them.
- Transformer Architecture: a high-level overview of the model that powers modern LLMs.
- LLM Players: commercial APIs (OpenAI), open-source models (LLaMA), and cloud platforms.
- Local LLMs: setting up and running powerful open-source models on your machine.
Hugging Face Transformers, Ollama, OpenAI API
[Hands-On] Set up and run Mistral 7B locally using Ollama. Make API calls to OpenAI to compare outputs.
Session 10 (Thu): Advanced Prompting with LangChain
Engineer prompts that deliver reliable, structured results.
- Prompt Engineering Techniques: zero-shot, few-shot, and chain-of-thought.
- LangChain Fundamentals: schemas, models, and chains.
- Structured Outputs: using output parsers to force LLMs to generate JSON or XML.
LangChain
[Hands-On] Build your first chain with LangChain. Implement Zero-shot vs. Few-shot prompting and use output parsers to get structured JSON.
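A framework-free sketch of the two ideas in this hands-on: assembling a few-shot prompt and parsing a structured (JSON) reply. LangChain's prompt templates and output parsers automate both steps; the field name `sentiment` and the example texts are invented:

```python
import json

def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt that demands JSON output."""
    lines = ['Extract the sentiment as JSON like {"sentiment": "positive"}.', ""]
    for text, label in examples:
        lines.append(f"Text: {text}")
        lines.append(json.dumps({"sentiment": label}))
    lines.append(f"Text: {query}")
    return "\n".join(lines)

def parse_structured_output(raw):
    """Parse the model's reply, failing loudly on malformed JSON."""
    return json.loads(raw)

prompt = build_few_shot_prompt(
    [("I love this!", "positive"), ("Terrible service.", "negative")],
    "Best purchase ever.",
)
print(prompt)

# Pretend this string came back from the LLM:
reply = '{"sentiment": "positive"}'
print(parse_structured_output(reply)["sentiment"])  # positive
```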
Office Hours (30m) Friday Q&A on running local LLMs, LangChain concepts, and prompt design.
-
Get help debugging LangChain code and optimizing your prompts for better results.
Extra Session (60m) Sunday Research Paper Discussion: "Attention Is All You Need" - A guided walkthrough of the paper that introduced the Transformer.
-
Participate in a group discussion to break down the core concepts of the architecture.
Week 6: Deploying LLM Applications to the Cloud
Live Sessions
Tools/Frameworks
Assignment / Hands-On
Session 11 (Wed): Production Deployment Strategy
Learn the workflow for taking a containerized application to a global audience.
- Container Registries: Docker Hub vs. cloud-native registries (ECR, GCR, ACR).
- Deployment Workflow: the end-to-end process of building, tagging, and pushing a Docker image.
Docker, Docker Hub, AWS ECR / GCP GCR
[Hands-On] Package your LangChain application into a Docker container and push the image to a public Docker Hub repository.
Session 12 (Thu): Hands-On Cloud Deployment
Deploy your application to a scalable, cost-effective serverless platform.
- Platform-as-a-Service (PaaS): the benefits of using managed platforms for deployment.
- Serverless Containers: deploying to Google Cloud Run or AWS Elastic Beanstalk.
Google Cloud Run (or AWS Elastic Beanstalk)
[Hands-On] Deploy your container from Docker Hub to Google Cloud Run, making your API publicly accessible via a URL.
Office Hours (30m) Friday Q&A on Docker Hub, cloud IAM permissions, and Cloud Run configurations.
-
Get help troubleshooting common cloud deployment errors.
Extra Session (60m) Sunday Career Session: Portfolio Building & How to Effectively Share Your Work Online (LinkedIn, X/Twitter, Blogging).
-
Workshop on creating compelling project descriptions and short video demos.
Module 4: Retrieval-Augmented Generation (RAG) in Production (Weeks 7-8)
Week 7: Building RAG Pipelines
Live Sessions
Tools/Frameworks
Assignment / Hands-On
Session 13 (Wed): RAG Fundamentals
Learn the complete RAG workflow, from loading documents to retrieving relevant context.
- What is RAG?: understanding the RAG pattern and its core workflows, indexing and inference.
- The RAG Pipeline Components: loaders, text splitters, embedders, vector stores, and retrievers.
LangChain, PyPDF
[Hands-On] Build a complete RAG pipeline using LangChain that can answer questions from a PDF document you provide.
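To illustrate the text-splitter stage of that pipeline, here is a minimal fixed-size splitter with overlap, written from scratch (LangChain's character-based splitters follow the same idea; the sizes below are arbitrary):

```python
# A fixed-size text splitter with overlapping chunks.

def split_text(text, chunk_size=100, overlap=20):
    """Split text into chunk_size pieces, each sharing `overlap`
    characters with the previous chunk so context isn't cut mid-idea."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

doc = "".join(str(i % 10) for i in range(250))
chunks = split_text(doc, chunk_size=100, overlap=20)
print([len(c) for c in chunks])  # [100, 100, 90, 10]
```

The overlap means the tail of one chunk reappears at the head of the next, which improves retrieval when a relevant passage straddles a chunk boundary.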
Session 14 (Thu): Embeddings & Vector Databases
Deep dive into how text is converted to vectors and where to store them.
- Vector Embeddings: how models like Word2Vec and BERT create numerical representations of text.
- Vector Databases: comparing local options (ChromaDB, FAISS) with managed cloud services.
ChromaDB, FAISS
[Hands-On] Load a 10-K financial report, chunk it, generate embeddings, and store them in a local ChromaDB instance for querying.
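Under the hood, a vector store ranks documents by similarity to the query embedding. This sketch does that with cosine similarity over toy 3-dimensional "embeddings" (real embeddings have hundreds of dimensions; the texts and vectors here are invented):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# A toy "vector store": text mapped to its embedding.
store = {
    "revenue grew 10%": [0.9, 0.1, 0.0],
    "the office moved": [0.1, 0.9, 0.2],
    "profit margins improved": [0.8, 0.2, 0.1],
}

def retrieve(query_vec, k=2):
    """Return the k texts whose embeddings are closest to the query."""
    ranked = sorted(store.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# A query vector pointing in the "finance" direction:
print(retrieve([1.0, 0.0, 0.0]))
```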
Office Hours (30m) Friday Q&A on text splitting strategies, embedding models, and ChromaDB usage.
-
Discuss the trade-offs between different vector stores and chunking methods.
Extra Session (60m) Sunday Demo Day: Students present Project Lab 02 (Price Predictor) & 03 (HR Bot).
-
Combined demo day to showcase recent work and get community feedback.
Week 8: Orchestration & Monitoring for RAG
Live Sessions
Tools/Frameworks
Assignment / Hands-On
Session 15 (Wed): Orchestrating Data Pipelines
Automate the data ingestion and indexing part of your RAG system.
- The Need for Automation: why manual data updates fail in production.
- Apache Airflow Core Concepts: DAGs, operators, tasks, and scheduling.
Apache Airflow, Docker Compose
[Hands-On] Set up Airflow using Docker Compose. Create a DAG that periodically fetches new documents and updates the vector database.
Session 16 (Thu): Monitoring AI Systems
Learn to monitor the cost, latency, and quality of your AI application.
- Observability for AI: key metrics to track, such as cost, latency, and response quality.
- The Monitoring Stack: using Prometheus for data collection and Grafana for visualization.
Prometheus, Grafana, FastAPI
[Hands-On] Add a Prometheus exporter to your RAG API. Build a Grafana dashboard to visualize API latency, request rate, and error count.
Office Hours (30m) Friday Q&A on Airflow DAGs, Prometheus metrics, and Grafana dashboards.
-
Get help debugging your Airflow pipelines and monitoring setup.
Extra Session (60m) Sunday Guest Session: An MLOps Engineer on "The Challenges of Maintaining RAG Systems in Production."
-
Q&A on real-world problems like data drift, evaluation, and cost management.
Module 5: Fine-Tuning LLMs (Weeks 9-10)
Week 9: Fine-Tuning Fundamentals
Live Sessions
Tools/Frameworks
Assignment / Hands-On
Session 17 (Wed): The Art of Fine-Tuning
Understand the "why" and "how" of fine-tuning.
- Why Fine-Tune?: understanding when to use fine-tuning versus RAG or prompt engineering.
- Data Preparation: learning the standard formats (e.g., JSONL) for instruction-following datasets.
Pandas, JSONL
[Hands-On] Prepare a custom dataset in the required JSONL format for instruction fine-tuning a model.
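A sketch of the JSONL conversion: one JSON object per line, each holding an instruction/response pair. Field names vary between fine-tuning frameworks; `instruction` and `output` below are common but not universal, and the example pairs are invented:

```python
import json

pairs = [
    ("Translate to French: hello", "bonjour"),
    ("What is 2 + 2?", "4"),
]

def to_jsonl(pairs):
    """Serialize (instruction, response) pairs as JSONL text."""
    lines = [json.dumps({"instruction": ins, "output": out})
             for ins, out in pairs]
    return "\n".join(lines)

jsonl = to_jsonl(pairs)
print(jsonl)

# Each line round-trips back to a dict:
first = json.loads(jsonl.splitlines()[0])
print(first["output"])  # bonjour
```

In practice you would write this string to a `.jsonl` file and point the training script at it.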
Session 18 (Thu): Efficient Fine-Tuning with LoRA
Learn Parameter-Efficient Fine-Tuning (PEFT) to adapt large models on consumer hardware.
- Parameter-Efficient Fine-Tuning (PEFT): the theory behind techniques like LoRA.
- QLoRA: combining LoRA with quantization for maximum memory efficiency during training.
Hugging Face TRL, LoRA, bitsandbytes, Accelerate
[Hands-On] Fine-tune a Llama or Mistral model with LoRA for a specific task, such as generating SQL queries from natural language.
Office Hours (30m) Friday Q&A on dataset formatting, fine-tuning hyperparameters, and LoRA concepts.
-
Get help troubleshooting training scripts and evaluating fine-tuned models.
Extra Session (60m) Sunday Career Session: Resume & Interview Prep (Focus on AI/ML roles), with live resume reviews.
-
Practice answering common AI/ML interview questions and get feedback.
Week 10: Model Optimization & Infrastructure as Code
Live Sessions
Tools/Frameworks
Assignment / Hands-On
Session 19 (Wed): Optimization for Production
Learn techniques to make your fine-tuned models smaller, faster, and cheaper to run.
- The LLM Inference Challenge: why serving LLMs is difficult (memory, latency, cost).
- Model Quantization: reducing model size with bitsandbytes.
- High-Throughput Serving: using tools like vLLM to serve models efficiently.
bitsandbytes (Quantization), vLLM
[Hands-On] Quantize your fine-tuned model to 4-bit precision. Serve the optimized model locally using vLLM's OpenAI-compatible server.
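The core idea behind quantization can be shown in a few lines: map float weights onto a small integer grid plus a scale factor, trading precision for memory. This is a toy symmetric 4-bit scheme; real libraries such as bitsandbytes use far more sophisticated formats (blockwise scales, NF4), but the principle is the same:

```python
# Toy symmetric 4-bit quantization: ints in [-8, 7] plus one scale.

def quantize_4bit(weights):
    """Map floats to 4-bit integers; return (ints, scale)."""
    scale = max(abs(w) for w in weights) / 7  # use the symmetric range -7..7
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the 4-bit representation."""
    return [qi * scale for qi in q]

w = [0.12, -0.7, 0.33, 0.05]
q, s = quantize_4bit(w)
w_hat = dequantize(q, s)
print(q)
print([round(x, 3) for x in w_hat])
```

Each weight now needs 4 bits instead of 32, at the cost of a rounding error of at most half the scale.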
Session 20 (Thu): Infrastructure as Code (IaC)
Manage and provision your cloud infrastructure programmatically.
- Benefits of IaC: why managing cloud resources with code is a best practice.
- Terraform Fundamentals: declarative syntax, providers, resources, and the init, plan, apply workflow.
Terraform, AWS/GCP
[Hands-On] Write a Terraform script to provision a GPU-enabled cloud instance (e.g., AWS EC2 or GCP Compute Engine) for your ML workloads.
Office Hours (30m) Friday Q&A on quantization, vLLM, and Terraform syntax.
-
Get help debugging Terraform plans and cloud provisioning issues.
Extra Session (60m) Sunday Demo Day: Students present Project Lab 04 (Monitored RAG System).
-
Showcase your automated and monitored RAG pipeline and explain your dashboard.
Module 6: Agentic Workflows (Weeks 11-12)
Week 11: Building Your First AI Agent
Live Sessions
Tools/Frameworks
Assignment / Hands-On
Session 21 (Wed): From Models to Agents
Understand the core components of an agent and popular frameworks.
- Agentic Workflows: the shift from single-shot generation to autonomous systems.
- Core Agent Components: the roles of the planner, memory, and tools.
- The ReAct Framework: understanding the foundational Reasoning + Acting loop.
LangChain Agents, CrewAI (overview)
[Hands-On] Build a simple agent with LangChain that can use a calculator tool to solve math problems, demonstrating the ReAct framework.
Session 22 (Thu): Tool Use & Memory
Empower agents to interact with the world and remember past interactions.
- Tool Use & Function Calling: how modern LLMs can interact with external APIs.
- Memory for Agents: implementing short-term conversational buffers and long-term memory.
LangChain Tools, SerpAPI (Google Search)
[Hands-On] Build a research agent that can search the web. Implement conversational memory in a chatbot that remembers your name.
Office Hours (30m) Friday Q&A on creating custom tools, managing agent state, and memory types.
-
Get help debugging agent loops and tool integration issues.
Extra Session (60m) Sunday Vibe Coding Session: Live-building a creative AI agent from scratch, such as a simple "code writer" agent.
-
Collective brainstorming on agent design and live implementation.
Week 12: Multi-Agent Systems & Responsible AI
Live Sessions
Tools/Frameworks
Assignment / Hands-On
Session 23 (Wed): Multi-Agent Collaboration
Learn how to make specialized agents work together to solve complex problems.
- Why Multi-Agent Systems?: using specialized agents for tasks where one agent is not enough.
- Role-Based Collaboration: designing systems with frameworks like CrewAI.
- State Machines for Agents: a brief look at frameworks like LangGraph for cyclical workflows.
CrewAI
[Hands-On] Build a two-agent system with CrewAI where a "Researcher" and a "Writer" collaborate to create a blog post on a given topic.
Session 24 (Thu): Responsible Agent Design & Capstone Kick-off
Discuss the risks of autonomous systems and how to build guardrails.
- Risks of Autonomy: addressing hallucinations, harmful actions, and security vulnerabilities.
- Building Guardrails: constraining agent actions and implementing Human-in-the-Loop (HITL).
Model Cards
[Hands-On] Create a comprehensive Model Card for your multi-agent system. Brainstorm and scope your capstone project MVP.
Office Hours (30m) Friday Q&A on CrewAI, agent orchestration, and brainstorming capstone project ideas.
-
Get feedback on the feasibility and scope of your proposed capstone project.
Extra Session (60m) Sunday Demo Day & Capstone Launch: Present Project Lab 05 & 06. Session on "Scoping an MVP & System Design for Your Capstone".
-
Final project lab demos and a dedicated workshop to plan the final two weeks.
Module 7: Capstone & Career Sprint (Weeks 13-14)
Week 13: Capstone Project - Build Week
Live Sessions
Tools/Frameworks
Assignment / Hands-On
Structured Co-Working Session (90m) Wednesday A dedicated time for focused work on your capstone project with instructors available for immediate help.
Your Project Stack
[Hands-On] Focus on the core implementation of your capstone project's MVP.
1:1 Mentorship Check-ins Thursday Scheduled 1:1 sessions with mentors to review progress and unblock issues.
-
Get personalized guidance and strategic advice on your project.
Extended Office Hours (60m) Friday Open forum for technical questions and deep-dive problem-solving for capstone projects.
-
Resolve any final technical hurdles before the polish week.
Extra Session (60m) Sunday Guest Session: "From Project to Product: How to Monetize Your Portfolio."
-
Learn strategies for turning personal projects into SaaS products or freelance opportunities.
Week 14: Capstone Project - Polish & Demo Week
Live Sessions
Tools/Frameworks
Assignment / Hands-On
Presentation Dry-Runs (90m) Wednesday Practice your final demo presentation and get constructive feedback from peers and instructors.
-
Refine your project story, demo flow, and presentation skills.
Career Strategy Session (90m) Thursday Final session on job applications, freelancing pitches, and mastering the AI engineering interview.
-
[Hands-On] Craft a winning freelancing pitch or cold outreach message.
Final Office Hours (30m) Friday Last-minute technical support and Q&A before the final demo day.
-
Ensure your deployed application is ready for the final presentation.
Extra Session (120m) Sunday GRAND FINALE DEMO DAY: Present your capstone project to the entire cohort and invited industry guests.
-
Celebrate your 100-day journey from concepts to a deployed, production-ready application!