Path: blob/master/examples/rl/deep_q_network_breakout.py
"""1Title: Deep Q-Learning for Atari Breakout2Author: [Jacob Chapman](https://twitter.com/jacoblchapman) and [Mathias Lechner](https://twitter.com/MLech20)3Date created: 2020/05/234Last modified: 2024/03/175Description: Play Atari Breakout with a Deep Q-Network.6Accelerator: None7"""89"""10## Introduction1112This script shows an implementation of Deep Q-Learning on the13`BreakoutNoFrameskip-v4` environment.1415### Deep Q-Learning1617As an agent takes actions and moves through an environment, it learns to map18the observed state of the environment to an action. An agent will choose an action19in a given state based on a "Q-value", which is a weighted reward based on the20expected highest long-term reward. A Q-Learning Agent learns to perform its21task such that the recommended action maximizes the potential future rewards.22This method is considered an "Off-Policy" method,23meaning its Q values are updated assuming that the best action was chosen, even24if the best action was not chosen.2526### Atari Breakout2728In this environment, a board moves along the bottom of the screen returning a ball that29will destroy blocks at the top of the screen.30The aim of the game is to remove all blocks and breakout of the31level. The agent must learn to control the board by moving left and right, returning the32ball and removing all the blocks without the ball passing the board.3334### Note3536The Deepmind paper trained for "a total of 50 million frames (that is, around 38 days of37game experience in total)". However this script will give good results at around 1038million frames which are processed in less than 24 hours on a modern machine.3940You can control the number of episodes by setting the `max_episodes` variable41to a value greater than 0.4243### References4445- [Q-Learning](https://link.springer.com/content/pdf/10.1007/BF00992698.pdf)46- [Deep Q-Learning](https://www.semanticscholar.org/paper/Human-level-control-through-deep-reinforcement-Mnih-Kavukcuoglu/340f48901f72278f6bf78a04ee5b01df208cc508)47"""48"""49## Setup50"""5152import os5354os.environ["KERAS_BACKEND"] = "tensorflow"5556import keras57from keras import layers5859import gymnasium as gym60from gymnasium.wrappers import AtariPreprocessing, FrameStack61import numpy as np62import tensorflow as tf6364# Configuration parameters for the whole setup65seed = 4266gamma = 0.99 # Discount factor for past rewards67epsilon = 1.0 # Epsilon greedy parameter68epsilon_min = 0.1 # Minimum epsilon greedy parameter69epsilon_max = 1.0 # Maximum epsilon greedy parameter70epsilon_interval = (71epsilon_max - epsilon_min72) # Rate at which to reduce chance of random action being taken73batch_size = 32 # Size of batch taken from replay buffer74max_steps_per_episode = 1000075max_episodes = 10 # Limit training episodes, will run until solved if smaller than 17677# Use the Atari environment78# Specify the `render_mode` parameter to show the attempts of the agent in a pop up window.79env = gym.make("BreakoutNoFrameskip-v4") # , render_mode="human")80# Environment preprocessing81env = AtariPreprocessing(env)82# Stack four frames83env = FrameStack(env, 4)84env.seed(seed)85"""86## Implement the Deep Q-Network8788This network learns an approximation of the Q-table, which is a mapping between89the states and actions that an agent will take. For every state we'll have four90actions, that can be taken. 

### Atari Breakout

In this environment, a board moves along the bottom of the screen, returning a ball that
will destroy blocks at the top of the screen.
The aim of the game is to remove all blocks and break out of the
level. The agent must learn to control the board by moving left and right, returning the
ball and removing all the blocks without the ball passing the board.

### Note

The Deepmind paper trained for "a total of 50 million frames (that is, around 38 days of
game experience in total)". However, this script will give good results at around 10
million frames, which are processed in less than 24 hours on a modern machine.

You can control the number of episodes by setting the `max_episodes` variable
to a value greater than 0.

### References

- [Q-Learning](https://link.springer.com/content/pdf/10.1007/BF00992698.pdf)
- [Deep Q-Learning](https://www.semanticscholar.org/paper/Human-level-control-through-deep-reinforcement-Mnih-Kavukcuoglu/340f48901f72278f6bf78a04ee5b01df208cc508)
"""

"""
## Setup
"""

import os

os.environ["KERAS_BACKEND"] = "tensorflow"

import keras
from keras import layers

import gymnasium as gym
from gymnasium.wrappers import AtariPreprocessing, FrameStack
import numpy as np
import tensorflow as tf

# Configuration parameters for the whole setup
seed = 42
gamma = 0.99  # Discount factor for past rewards
epsilon = 1.0  # Epsilon greedy parameter
epsilon_min = 0.1  # Minimum epsilon greedy parameter
epsilon_max = 1.0  # Maximum epsilon greedy parameter
epsilon_interval = (
    epsilon_max - epsilon_min
)  # Rate at which to reduce chance of random action being taken
batch_size = 32  # Size of batch taken from replay buffer
max_steps_per_episode = 10000
max_episodes = 10  # Limit training episodes, will run until solved if smaller than 1

# Use the Atari environment
# Specify the `render_mode` parameter to show the attempts of the agent in a pop-up window.
env = gym.make("BreakoutNoFrameskip-v4")  # , render_mode="human")
# Environment preprocessing
env = AtariPreprocessing(env)
# Stack four frames
env = FrameStack(env, 4)
env.seed(seed)

"""
## Implement the Deep Q-Network

This network learns an approximation of the Q-table, which is a mapping between
the states and actions that an agent will take. For every state we'll have four
actions that can be taken. The environment provides the state, and the action
is chosen by selecting the largest of the four Q-values predicted in the output layer.
"""

num_actions = 4


def create_q_model():
    # Network defined by the Deepmind paper
    return keras.Sequential(
        [
            # The stacked frames arrive channels-first (4, 84, 84); transpose to channels-last.
            layers.Lambda(
                lambda tensor: keras.ops.transpose(tensor, [0, 2, 3, 1]),
                output_shape=(84, 84, 4),
                input_shape=(4, 84, 84),
            ),
            # Convolutions on the frames on the screen
            layers.Conv2D(32, 8, strides=4, activation="relu"),
            layers.Conv2D(64, 4, strides=2, activation="relu"),
            layers.Conv2D(64, 3, strides=1, activation="relu"),
            layers.Flatten(),
            layers.Dense(512, activation="relu"),
            layers.Dense(num_actions, activation="linear"),
        ]
    )


# The first model makes the predictions for Q-values which are used to
# make an action.
model = create_q_model()
# Build a target model for the prediction of future rewards.
# The weights of a target model get updated every 10000 steps thus when the
# loss between the Q-values is calculated the target Q-value is stable.
model_target = create_q_model()
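
"""
As a quick, optional sanity check (a minimal sketch assuming the shapes configured
above), a batch containing one stacked state of four 84x84 frames should map to one
Q-value per action:
"""

dummy_state = np.zeros((1, 4, 84, 84), dtype=np.float32)  # batch of one stacked state
print(model(dummy_state).shape)  # expected: (1, num_actions), i.e. (1, 4)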

"""
## Train
"""
# In the Deepmind paper they use RMSProp; however, the Adam optimizer
# improves training time
optimizer = keras.optimizers.Adam(learning_rate=0.00025, clipnorm=1.0)

# Experience replay buffers
action_history = []
state_history = []
state_next_history = []
rewards_history = []
done_history = []
episode_reward_history = []
running_reward = 0
episode_count = 0
frame_count = 0
# Number of frames to take random action and observe output
epsilon_random_frames = 50000
# Number of frames for exploration
epsilon_greedy_frames = 1000000.0
# Maximum replay length
# Note: The Deepmind paper suggests 1000000; however, this causes memory issues
max_memory_length = 100000
# Train the model after 4 actions
update_after_actions = 4
# How often to update the target network
update_target_network = 10000
# Using huber loss for stability
loss_function = keras.losses.Huber()

while True:
    observation, _ = env.reset()
    state = np.array(observation)
    episode_reward = 0

    for timestep in range(1, max_steps_per_episode):
        frame_count += 1

        # Use epsilon-greedy for exploration
        if frame_count < epsilon_random_frames or epsilon > np.random.rand(1)[0]:
            # Take random action
            action = np.random.choice(num_actions)
        else:
            # Predict action Q-values
            # From environment state
            state_tensor = keras.ops.convert_to_tensor(state)
            state_tensor = keras.ops.expand_dims(state_tensor, 0)
            action_probs = model(state_tensor, training=False)
            # Take best action
            action = keras.ops.argmax(action_probs[0]).numpy()

        # Decay probability of taking random action
        epsilon -= epsilon_interval / epsilon_greedy_frames
        epsilon = max(epsilon, epsilon_min)

        # Apply the sampled action in our environment
        state_next, reward, done, _, _ = env.step(action)
        state_next = np.array(state_next)

        episode_reward += reward

        # Save actions and states in replay buffer
        action_history.append(action)
        state_history.append(state)
        state_next_history.append(state_next)
        done_history.append(done)
        rewards_history.append(reward)
        state = state_next

        # Update every fourth frame and once the batch size is over 32
        if frame_count % update_after_actions == 0 and len(done_history) > batch_size:
            # Get indices of samples for replay buffers
            indices = np.random.choice(range(len(done_history)), size=batch_size)

            # Using list comprehension to sample from replay buffer
            state_sample = np.array([state_history[i] for i in indices])
            state_next_sample = np.array([state_next_history[i] for i in indices])
            rewards_sample = [rewards_history[i] for i in indices]
            action_sample = [action_history[i] for i in indices]
            done_sample = keras.ops.convert_to_tensor(
                [float(done_history[i]) for i in indices]
            )

            # Build the updated Q-values for the sampled future states
            # Use the target model for stability
            future_rewards = model_target.predict(state_next_sample)
            # Q value = reward + discount factor * expected future reward
            updated_q_values = rewards_sample + gamma * keras.ops.amax(
                future_rewards, axis=1
            )

            # If final frame set the last value to -1
            updated_q_values = updated_q_values * (1 - done_sample) - done_sample

            # Create a mask so we only calculate loss on the updated Q-values
            masks = keras.ops.one_hot(action_sample, num_actions)

            with tf.GradientTape() as tape:
                # Train the model on the states and updated Q-values
                q_values = model(state_sample)

                # Apply the masks to the Q-values to get the Q-value for action taken
                q_action = keras.ops.sum(keras.ops.multiply(q_values, masks), axis=1)
                # Calculate loss between new Q-value and old Q-value
                loss = loss_function(updated_q_values, q_action)

            # Backpropagation
            grads = tape.gradient(loss, model.trainable_variables)
            optimizer.apply_gradients(zip(grads, model.trainable_variables))

        if frame_count % update_target_network == 0:
            # Update the target network with new weights
            model_target.set_weights(model.get_weights())
            # Log details
            template = "running reward: {:.2f} at episode {}, frame count {}"
            print(template.format(running_reward, episode_count, frame_count))

        # Limit the state and reward history
        if len(rewards_history) > max_memory_length:
            del rewards_history[:1]
            del state_history[:1]
            del state_next_history[:1]
            del action_history[:1]
            del done_history[:1]

        if done:
            break

    # Update running reward to check condition for solving
    episode_reward_history.append(episode_reward)
    if len(episode_reward_history) > 100:
        del episode_reward_history[:1]
    running_reward = np.mean(episode_reward_history)

    episode_count += 1

    if running_reward > 40:  # Condition to consider the task solved
        print("Solved at episode {}!".format(episode_count))
        break

    if (
        max_episodes > 0 and episode_count >= max_episodes
    ):  # Maximum number of episodes reached
        print("Stopped at episode {}!".format(episode_count))
        break

"""
## Visualizations

Before any training:

In early stages of training:

In later stages of training:
"""
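
"""
To watch the agent play after training, one possible sketch is shown below. This is an
optional illustration rather than part of the example above: it assumes a display is
available for `render_mode="human"`, and it keeps a tiny amount of random exploration
so the agent still fires the ball to start each round.

```python
# Optional: roll out the trained `model` with on-screen rendering.
eval_env = gym.make("BreakoutNoFrameskip-v4", render_mode="human")
eval_env = FrameStack(AtariPreprocessing(eval_env), 4)
for _ in range(3):  # play a few episodes
    observation, _ = eval_env.reset()
    done = False
    while not done:
        if np.random.rand() < 0.01:  # small epsilon so the agent cannot get stuck
            action = np.random.choice(num_actions)
        else:
            state_tensor = keras.ops.expand_dims(
                keras.ops.convert_to_tensor(np.array(observation)), 0
            )
            action = keras.ops.argmax(model(state_tensor, training=False)[0]).numpy()
        observation, _, terminated, truncated, _ = eval_env.step(action)
        done = terminated or truncated
eval_env.close()
```
"""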