Artificial Intelligence: Reinforcement Learning in Python

Complete guide to artificial intelligence and machine learning, prep for deep reinforcement learning

Generative AI
4.7/5
$54.99
$219.99
75% OFF!
  • All levels
  • 179 Lectures
  • 19h 48m
  • English
  • Lifetime access, certificate of completion (shareable on LinkedIn, Facebook, and Twitter), Q&A forum, subtitles in English

Course Description

When people talk about artificial intelligence, they usually don’t mean supervised and unsupervised machine learning.

These tasks are pretty trivial compared to what we imagine AI doing - playing chess and Go, driving cars, and beating video games at a superhuman level.

Reinforcement learning has recently become popular for doing all of that and more.

Much like deep learning, much of the theory was discovered in the 70s and 80s, but it wasn't until recently that we were able to observe firsthand the amazing results that are possible.

In 2016, we saw Google's AlphaGo beat the world champion in Go.

We saw AIs playing video games like Doom and Super Mario.

Self-driving cars have started driving on real roads with other drivers and even carrying passengers (Uber), all without human assistance.

If that sounds amazing, brace yourself for the future, because the law of accelerating returns dictates that this progress will only continue to accelerate.

Learning about supervised and unsupervised machine learning is no small feat. To date I have over TWENTY FIVE (25!) courses just on those topics alone.

And yet reinforcement learning opens up a whole new world. As you’ll learn in this course, the reinforcement learning paradigm is vastly different from both supervised and unsupervised learning.

It’s led to new and amazing insights both in behavioral psychology and neuroscience. As you’ll learn in this course, there are many analogous processes when it comes to teaching an agent and teaching an animal or even a human. It’s the closest thing we have so far to a true general artificial intelligence.

What’s covered in this course?

  • The multi-armed bandit problem and the explore-exploit dilemma
  • Ways to calculate means and moving averages and their relationship to stochastic gradient descent
  • Markov Decision Processes (MDPs)
  • Dynamic Programming
  • Monte Carlo
  • Temporal Difference (TD) Learning
  • Approximation Methods (i.e., how to plug a deep neural network or other differentiable model into your RL algorithm)
  • How to use OpenAI Gym, with zero code changes
  • Project: Apply Q-Learning to build a stock trading bot
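To give a taste of the first two topics above, here is a minimal sketch of the epsilon-greedy strategy for the multi-armed bandit problem, using the incremental sample-mean update. The function name, parameters, and reward setup are illustrative, not taken from the course materials:

```python
import numpy as np

def run_experiment(true_probs=(0.2, 0.5, 0.75), eps=0.1, n_trials=10_000, seed=0):
    """Epsilon-greedy on Bernoulli bandit arms with incremental sample means."""
    rng = np.random.default_rng(seed)
    n_arms = len(true_probs)
    estimates = np.zeros(n_arms)          # running sample mean per arm
    counts = np.zeros(n_arms, dtype=int)  # number of pulls per arm
    total_reward = 0.0
    for _ in range(n_trials):
        if rng.random() < eps:            # explore: pick a random arm
            j = int(rng.integers(n_arms))
        else:                             # exploit: pick the best estimate so far
            j = int(np.argmax(estimates))
        x = float(rng.random() < true_probs[j])  # Bernoulli reward
        counts[j] += 1
        # Incremental mean: new_mean = old_mean + (x - old_mean) / N,
        # so no past rewards need to be stored
        estimates[j] += (x - estimates[j]) / counts[j]
        total_reward += x
    return estimates, counts, total_reward / n_trials
```

The update `estimates[j] += (x - estimates[j]) / counts[j]` is a gradient-descent step on the squared error with learning rate 1/N, which is the connection between running averages and stochastic gradient descent mentioned above; replacing 1/N with a small constant gives the exponentially weighted moving average used for nonstationary bandits.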


If you’re ready to take on a brand new challenge, and learn about AI techniques that you’ve never seen before in traditional supervised machine learning, unsupervised machine learning, or even deep learning, then this course is for you.

See you in class!



Suggested Prerequisites:

  • calculus
  • object-oriented programming
  • probability
  • Python coding: if/else, loops, lists, dicts, sets
  • Numpy coding: matrix and vector operations, loading a CSV file
  • linear regression
  • gradient descent

Lectures

  • 22 sections
  • 179 lectures
  • 19h 48m total length
Introduction
Preview
03:14
Course Outline and Big Picture
08:53
Where to get the Code
04:36
How to Succeed in this Course
03:04
Warmup
15:36
Section Introduction: The Explore-Exploit Dilemma
10:17
Applications of the Explore-Exploit Dilemma
08:00
Epsilon-Greedy Theory
07:04
Calculating a Sample Mean (pt 1)
05:56
Epsilon-Greedy Beginner's Exercise Prompt
05:05
Designing Your Bandit Program
04:09
Epsilon-Greedy in Code
07:12
Comparing Different Epsilons
06:02
Optimistic Initial Values Theory
05:40
Optimistic Initial Values Beginner's Exercise Prompt
02:26
Optimistic Initial Values Code
04:18
UCB1 Theory
14:32
UCB1 Beginner's Exercise Prompt
02:14
UCB1 Code
03:28
Bayesian Bandits / Thompson Sampling Theory (pt 1)
12:43
Bayesian Bandits / Thompson Sampling Theory (pt 2)
17:35
Thompson Sampling Beginner's Exercise Prompt
02:50
Thompson Sampling Code
05:03
Thompson Sampling With Gaussian Reward Theory
11:24
Thompson Sampling With Gaussian Reward Code
06:18
Exercise on Gaussian Rewards
01:21
Why don't we just use a library?
05:40
Nonstationary Bandits
07:11
Bandit Summary, Real Data, and Online Learning
06:30
(Optional) Alternative Bandit Designs
10:05
Suggestion Box
03:10
What is Reinforcement Learning?
08:09
On Unusual or Unexpected Strategies of RL
06:10
From Bandits to Full Reinforcement Learning
08:42
Naive Solution to Tic-Tac-Toe
03:51
Components of a Reinforcement Learning System
08:01
Notes on Assigning Rewards
02:42
The Value Function and Your First Reinforcement Learning Algorithm
16:34
Tic Tac Toe Code: Outline
03:17
Tic Tac Toe Code: Representing States
02:57
Tic Tac Toe Code: Enumerating States Recursively
06:15
Tic Tac Toe Code: The Environment
06:37
Tic Tac Toe Code: The Agent
05:49
Tic Tac Toe Code: Main Loop and Demo
06:03
Tic Tac Toe Summary
05:26
Tic Tac Toe: Exercise
03:21
MDP Section Introduction
06:19
Gridworld
12:35
Choosing Rewards
03:58
The Markov Property
06:12
Markov Decision Processes (MDPs)
14:42
Future Rewards
09:34
Value Functions
05:07
The Bellman Equation (pt 1)
08:46
The Bellman Equation (pt 2)
06:42
The Bellman Equation (pt 3)
06:09
Bellman Examples
22:25
Optimal Policy and Optimal Value Function (pt 1)
09:17
Optimal Policy and Optimal Value Function (pt 2)
04:36
MDP Summary
02:58
Dynamic Programming Section Introduction
08:59
Iterative Policy Evaluation
15:36
Designing Your RL Program
05:00
Gridworld in Code
11:37
Iterative Policy Evaluation in Code
12:17
Windy Gridworld in Code
07:47
Iterative Policy Evaluation for Windy Gridworld in Code
07:14
Policy Improvement
11:23
Policy Iteration
07:57
Policy Iteration in Code
08:27
Policy Iteration in Windy Gridworld
08:50
Value Iteration
07:39
Value Iteration in Code
06:36
Dynamic Programming Summary
04:57
Monte Carlo Intro
09:21
Monte Carlo Policy Evaluation
10:52
Monte Carlo Policy Evaluation in Code
07:52
Monte Carlo Control
09:00
Monte Carlo Control in Code
08:51
Monte Carlo Control without Exploring Starts
04:41
Monte Carlo Control without Exploring Starts in Code
05:40
Monte Carlo Summary
01:53
Temporal Difference Introduction
03:55
TD(0) Prediction
05:24
TD(0) Prediction in Code
04:54
SARSA
04:36
SARSA in Code
06:20
Q Learning
04:55
Q Learning in Code
05:02
TD Learning Section Summary
02:27
Approximation Methods Section Introduction
04:19
Linear Models for Reinforcement Learning
08:32
Feature Engineering
10:16
Approximation Methods for Prediction
09:55
Approximation Methods for Prediction Code
08:26
Approximation Methods for Control
04:41
Approximation Methods for Control Code
08:54
CartPole
05:34
CartPole Code
06:00
Approximation Methods Exercise
04:07
Approximation Methods Section Summary
03:05
This Course vs. RL Book: What's the Difference?
07:11
Beginners, halt! Stop here if you skipped ahead
14:10
Stock Trading Project Section Introduction
05:15
Data and Environment
12:23
How to Model Q for Q-Learning
09:38
Design of the Program
06:46
Code pt 1
08:00
Code pt 2
09:41
Code pt 3
04:29
Code pt 4
07:17
Stock Trading Project Discussion
03:39
Problem Setup and The Explore-Exploit Dilemma
03:56
Epsilon-Greedy
01:49
Updating a Sample Mean
01:23
Comparing Different Epsilons
04:07
Optimistic Initial Values
02:57
UCB1
04:57
Bayesian / Thompson Sampling
09:53
Thompson Sampling vs. Epsilon-Greedy vs. Optimistic Initial Values vs. UCB1
05:12
Nonstationary Bandits
04:52
Defining Some Terms
07:02
Gridworld
02:14
The Markov Property
04:37
Defining and Formalizing the MDP
04:11
Future Rewards
03:17
Value Function Introduction
12:04
Value Functions
09:16
Optimal Policy and Optimal Value Function
04:10
MDP Summary
01:36
Intro to Dynamic Programming and Iterative Policy Evaluation
03:07
Gridworld in Code
05:48
Iterative Policy Evaluation in Code
06:25
Policy Improvement
02:52
Policy Iteration
02:01
Policy Iteration in Code
03:47
Policy Iteration in Windy Gridworld
04:58
Value Iteration
03:59
Value Iteration in Code
02:15
Dynamic Programming Summary
05:15
Monte Carlo Intro
03:11
Monte Carlo Policy Evaluation
05:46
Monte Carlo Policy Evaluation in Code
03:36
Policy Evaluation in Windy Gridworld
03:39
Monte Carlo Control
06:00
Monte Carlo Control in Code
04:05
Monte Carlo Control without Exploring Starts
02:59
Monte Carlo Control without Exploring Starts in Code
02:52
Monte Carlo Summary
03:43
Temporal Difference Intro
01:43
TD(0) Prediction
03:47
TD(0) Prediction in Code
02:28
SARSA
05:16
SARSA in Code
03:39
Q Learning
03:06
Q Learning in Code
02:14
TD Summary
02:35
Approximation Intro
04:12
Linear Models for Reinforcement Learning
04:17
Features
04:03
Monte Carlo Prediction with Approximation
01:55
Monte Carlo Prediction with Approximation in Code
02:59
TD(0) Semi-Gradient Prediction
04:23
Semi-Gradient SARSA
03:09
Semi-Gradient SARSA in Code
04:09
Course Summary and Next Steps
08:39
What is the Appendix?
03:47
Pre-Installation Check
04:13
Anaconda Environment Setup
20:21
How to install Numpy, Scipy, Matplotlib, Pandas, PyTorch, and TensorFlow
17:33
How to Code Yourself (part 1)
15:55
How to Code Yourself (part 2)
09:24
Proof that using Jupyter Notebook is the same as not using it
12:29
Python 2 vs Python 3
04:38
How to Succeed in this Course (Long Version)
10:25
Is this for Beginners or Experts? Academic or Practical? Fast or slow-paced?
22:05
What order should I take your courses in? (part 1)
11:19
What order should I take your courses in? (part 2)
16:07
Where to get discount coupons and FREE AI tutorials
05:49

Reviews

4.7

38 reviews for this course

5 Stars
(57%)
4 Stars
(35%)
3 Stars
(6%)
2 Stars
(1%)
1 Star
(1%)

Testimonials and Success Stories


H. Z.

Machine Learning Research Scientist
United States

“I am one of your students. Yesterday, I presented my paper at ICCV 2019. You have a significant part in this, so I want to sincerely thank you for your in-depth guidance to the puzzle of deep learning. Please keep making awesome courses that teach us!”

5.0

Wade J.

Data Scientist
United States

“I just watched your short video on “Predicting Stock Prices with LSTMs: One Mistake Everyone Makes.” Giggled with delight.

You probably already know this, but some of us really and truly appreciate you. BTW, I spent a reasonable amount of time making a learning roadmap based on your courses and have started the journey.

Looking forward to your new stuff.”

5.0

Kris M.

Data Scientist
United States

“Thank you for doing this! I wish everyone who calls themselves a Data Scientist would take the time to do this, either as a refresher or to learn the material. I have had to work with so many people in prior roles who wanted to jump right into machine learning on my teams and didn't even understand the first thing about the basics you have in here!!

I am signing up so that I have an easy refresher when needed and can see what you consider important, as well as to support your great work. Thank you.”

5.0

Steve M.

Machine Learning Research Scientist
United States

“I have been intending to send you an email expressing my gratitude for the work that you have done to create all of these data science courses in Machine Learning and Artificial Intelligence. I have been looking long and hard for courses that have mathematical rigor relative to the application of the ML & AI algorithms, as opposed to just exhibiting some 'canned routine' and then, voilà, here is your neural network or logistic regression.

Your courses are just what I have been seeking. I am a retired mathematician, statistician and Supply Chain executive from a large Fortune 500 company in Ohio. I also taught mathematics, statistics and operations research courses at a couple of universities in Northern Ohio.

I have taken many courses and have enjoyed the journey, and I am not going to be critical of any of the organizations from whom I have taken courses. However, when I read a review about one of your courses in which the student was complaining that one would need a PhD in Mathematics to understand it, I knew this was the course (or series of courses) that I wanted. (Having advanced degrees in mathematics, I knew that it was highly unlikely that a PhD would actually be required.)”

5.0

Saurabh W.

Data Scientist
India

“Hi Sir, I am a student from India. I've been wanting to write a note to thank you for the courses that you've made, because they have changed my career. I wanted to work in the field of data science but I didn't have proper guidance. Then I stumbled upon your "Logistic Regression" course in March and since then, there's been no looking back. I learned ANNs, CNNs, RNNs, Tensorflow, NLP and whatnot by going through your lectures. The knowledge that I gained enabled me to get a job as a Business Technology Analyst at one of my dream firms even in the midst of this pandemic. For that, I shall always be grateful to you. Please keep making more courses with the level of detail that you do, in low-level libraries like Theano.”

5.0

David P.

Financial Analyst
United States

“I just wanted to reach out and thank you for your most excellent course that I am nearing finishing.

And, I couldn't agree more with some of your "rants", and found myself nodding vigorously!

You are an excellent teacher, and a rare breed.

And, your courses are frankly, more digestible and teach a student far more than some of the top-tier courses from ivy leagues I have taken in the past.

(I plan to go through many more courses, one by one!)

I know you must be deluged with complaints in spite of having the best content around. That's just human nature.

Also, satisfied people rarely take the time to write, so I thought I will write in for a change. :)”

5.0

P. C.

Deep Learning Research Scientist
China

“Hello, Lazy Programmer!

In the process of completing my Master’s at Hunan University, China, I am writing this feedback to you in order to express my deep gratitude for all the knowledge and skills I have obtained studying your courses and following your recommendations.

The first course of yours I took was on Convolutional Neural Networks (“Deep Learning p.5”, as far as I remember). Answering one of my questions on the Q&A board, you suggested I should start from the beginning – the Linear and Logistic Regression courses. Although I assumed I already knew many basic things at that time, I overcame my “pride” and decided to start my journey in Deep Learning from scratch.

Course by course, I was renewing the basics and the prerequisites. Thus, in several months, after every day studying under your guidance, I was able to gain enough intuitions and practical skills in order to begin progressing in my research. Having a solid background, it was just a pleasure to read all the relevant papers in the field as well as to make all the experiments needed for achieving my goal – creating a high-performance CNN for offline HCCR.

I believe, the professionalism of any teacher can be estimated by the feedback received from their students, and it’s of the utmost importance for me to thank you, Lazy Programmer!

I want you to know that, even though we have never actually met and you haven't taught me privately, I consider you one of my greatest Teachers.

The most important things I have learned from you (some the hard way, though), beside many exciting modern Deep Learning/AI techniques and algorithms, are:

1) If one doesn’t know how to program something, one doesn’t understand it completely.

2) If one is not honest with oneself about one’s prior knowledge, one will never succeed in studying more advanced things.

3) Developing skills in BOTH Math and Programming is what makes one a good student of this major.

I am still studying your courses, and am certain I will ask you more than just a few technical questions regarding their content, but I already would like to say that I will remember your contribution to my adventure in the Deep Learning field, and consider it as great as that of such scientists as Andrew Ng, Geoffrey Hinton, and my supervisor.

Thank you, Lazy Programmer! 非常感谢您,Lazy 老师!

If you are interested, you can find my first paper’s preprint here:

https://arxiv.org/abs/xxx”

5.0

Dima K.

Data Scientist
Ukraine

“By the way, in case you are interested to hear: I used the HMM classification, as it was in your course (95% of the script; I made small adjustments), for the Customer Care department of a big, well-known fintech company, to predict who will call them, so they can call the customer before the rush hours and improve the service. Instead of a poem, I had a sequence of the customer's last 24 hours of events, like "Loaded money", "Usage in the food service", "Entering the app", "Trying to change the password", etc. The label was called or didn't call. The outcome was great. They use it for their VIP customers. Our data science department and I got a lot of praise.”

5.0

Andres Lopez C.

Data Engineer
United States

“This course is exactly what I was looking for. The instructor does an impressive job making students understand they need to work hard in order to learn. The examples are clear, and the explanations of the theory are very interesting.”

5.0

Mohammed K.

Machine Learning Engineer
Germany

“Thank you, I think you have opened my eyes. I was using APIs to implement deep learning algorithms, and each time I felt I was missing out on some things. So thank you very much.”

5.0

Tom P.

Machine Learning Engineer
United States

“I have now taken a few classes from some well-known AI profs at Stanford (Andrew Ng, Christopher Manning, …) with an overall average mark in the mid-90s. Just so you know, you are as good as any of them. But I hope that you already know that.

I wish you a happy and safe holiday season. I am glad you chose to share your knowledge with the rest of us.”

5.0
Start learning today

Join the 30-day bootcamp for free

4.7/5 from 600k+ learners