Reinforcement Learning in the OpenAI Gym (Tutorial) – Monte Carlo w/o exploring starts


#OpenAIGym #ReinforcementLearning

If you had to bet your life savings on a game of blackjack, would you end up homeless?

In today’s installment of reinforcement learning in the OpenAI Gym, we’re going to use Monte Carlo control without exploring starts to teach an artificial intelligence to play the game of blackjack.

It works reasonably well, though probably not as well as Q-learning or even SARSA would. Nevertheless, Monte Carlo methods are an important part of reinforcement learning, and understanding them is essential for a full picture of artificial intelligence.
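For reference, here is a minimal sketch of the approach described above: on-policy first-visit Monte Carlo control with an epsilon-soft (epsilon-greedy) policy, which is what removes the need for exploring starts. This is not the video's exact code; it assumes the classic Gym API ('Blackjack-v0', with step returning four values), and the EPS, GAMMA, and N_EPISODES values are illustrative rather than taken from the video.

```python
import gym
import numpy as np

env = gym.make('Blackjack-v0')
EPS = 0.05          # exploration rate (illustrative value)
GAMMA = 1.0         # blackjack episodes are short, so no discounting
N_EPISODES = 500_000

Q = {}              # action values keyed by (state, action)
visits = {}         # visit counts for incremental averaging

def choose_action(state):
    # Epsilon-soft policy: explore with probability EPS, otherwise act greedily.
    if np.random.random() < EPS:
        return env.action_space.sample()
    values = np.array([Q.get((state, a), 0.0) for a in range(env.action_space.n)])
    best = np.flatnonzero(values == values.max())
    return int(np.random.choice(best))   # break ties randomly

for _ in range(N_EPISODES):
    state = env.reset()
    done = False
    episode = []
    while not done:
        action = choose_action(state)
        next_state, reward, done, _ = env.step(action)
        episode.append((state, action, reward))
        state = next_state

    # First-visit Monte Carlo update: walk backwards accumulating the return.
    # Overwriting the dict entry means the value stored for each (state, action)
    # pair is the return from its first visit in the episode.
    G = 0.0
    first_visit_return = {}
    for state, action, reward in reversed(episode):
        G = GAMMA * G + reward
        first_visit_return[(state, action)] = G

    for sa, G in first_visit_return.items():
        visits[sa] = visits.get(sa, 0) + 1
        Q[sa] = Q.get(sa, 0.0) + (G - Q.get(sa, 0.0)) / visits[sa]
```

After training, the greedy policy can be read out of Q by taking the highest-valued action at each state.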

Learn how to turn deep reinforcement learning papers into code:

Deep Q Learning:
https://www.udemy.com/course/deep-q-learning-from-paper-to-code/?couponCode=DQN-JUN-20-1

Actor Critic Methods:
https://www.udemy.com/course/actor-critic-methods-from-paper-to-code-with-pytorch/?couponCode=AC-JUN-1

Reinforcement Learning Fundamentals:
https://www.manning.com/livevideo/reinforcement-learning-in-motion

Come hang out on Discord here:
https://discord.gg/Zr4VCdv

Website: https://www.neuralnet.ai
Github: https://github.com/philtabor
Twitter: https://twitter.com/MLWithPhil



4 thoughts on “Reinforcement Learning in the OpenAI Gym (Tutorial) – Monte Carlo w/o exploring starts”

  1. For lines 16–26, would it be possible to use numpy.zeros_like or numpy.zeros somehow? I really liked what you did with recreating your own argmax to randomly break a tie rather than just using the existing argmax function.
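For anyone wondering what that tie-breaking argmax might look like: numpy's argmax always returns the first maximal index, so a common workaround is to collect every maximal index and pick one uniformly at random. The sketch below illustrates that idea, along with a numpy.zeros initialization of a dense Q table; the function name random_argmax and the table dimensions are illustrative guesses, not the video's actual code.

```python
import numpy as np

def random_argmax(values):
    # np.argmax always returns the first maximum; instead, gather every
    # index that attains the maximum and choose among them uniformly.
    values = np.asarray(values)
    best = np.flatnonzero(values == values.max())
    return int(np.random.choice(best))

# np.zeros can pre-allocate a dense Q table over Blackjack's small state
# space: player sum (indices 0-31), dealer card shown (0-10), usable ace
# (0-1), and two actions (stick/hit). Dimensions here are illustrative.
Q = np.zeros((32, 11, 2, 2))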
