Reinforcement Learning, second edition: An Introduction / Edition 2

ISBN-10: 0262039249
ISBN-13: 9780262039246
Pub. Date: 11/13/2018
Publisher: MIT Press

Overview

The significantly expanded and updated new edition of a widely used text on reinforcement learning, one of the most active research areas in artificial intelligence.

Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives while interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the field's key ideas and algorithms. This second edition has been significantly expanded and updated, presenting new topics and updating coverage of other topics.
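
For readers new to the field, the interaction loop described above can be sketched in a few lines of Python. This is only an illustrative sketch, not code from the book; the `env` and `policy` objects are hypothetical placeholders with an assumed reset/step interface.

```python
# Illustrative sketch of the agent-environment interaction loop (not from the book).
# `env` and `policy` are hypothetical placeholders: env.reset() returns a state,
# env.step(action) returns (next_state, reward, done), and policy(state) picks an action.
def run_episode(env, policy, gamma=1.0):
    """Run one episode and return the discounted sum of rewards the agent collects."""
    state = env.reset()
    ret, discount, done = 0.0, 1.0, False
    while not done:
        action = policy(state)                  # agent chooses an action
        state, reward, done = env.step(action)  # environment returns reward and next state
        ret += discount * reward                # accumulate (discounted) reward
        discount *= gamma
    return ret
```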

Like the first edition, this second edition focuses on core online learning algorithms, with the more mathematical material set off in shaded boxes. Part I covers as much of reinforcement learning as possible without going beyond the tabular case for which exact solutions can be found. Many algorithms presented in this part are new to the second edition, including UCB, Expected Sarsa, and Double Learning. Part II extends these ideas to function approximation, with new sections on such topics as artificial neural networks and the Fourier basis, and offers expanded treatment of off-policy learning and policy-gradient methods. Part III has new chapters on reinforcement learning's relationships to psychology and neuroscience, as well as an updated case-studies chapter including AlphaGo and AlphaGo Zero, Atari game playing, and IBM Watson's wagering strategy. The final chapter discusses the future societal impacts of reinforcement learning.
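
To give a flavor of the tabular methods covered in Part I, here is a minimal Q-learning agent with epsilon-greedy exploration in Python. It is an illustrative sketch under assumed environment methods (env.reset(), env.step(), env.actions()), not code reproduced from the book.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning with epsilon-greedy exploration (illustrative sketch only)."""
    Q = defaultdict(float)  # Q[(state, action)] -> estimated action value

    def greedy(state):
        # Break ties arbitrarily by taking the first maximizing action.
        return max(env.actions(state), key=lambda a: Q[(state, a)])

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy behavior policy.
            if random.random() < epsilon:
                action = random.choice(env.actions(state))
            else:
                action = greedy(state)
            next_state, reward, done = env.step(action)
            # Off-policy target: value of the greedy action in the next state.
            target = reward + (0.0 if done else gamma * Q[(next_state, greedy(next_state))])
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = next_state
    return Q
```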


Product Details

ISBN-13: 9780262039246
Publisher: MIT Press
Publication date: 11/13/2018
Series: Adaptive Computation and Machine Learning series
Edition description: second edition
Pages: 552
Product dimensions: 7.20(w) x 9.10(h) x 1.50(d)
Age Range: 18 Years

About the Author

Richard S. Sutton is Professor of Computing Science and AITF Chair in Reinforcement Learning and Artificial Intelligence at the University of Alberta, and also Distinguished Research Scientist at DeepMind.

Andrew G. Barto is Professor Emeritus in the College of Computer and Information Sciences at the University of Massachusetts Amherst.

Table of Contents

Series Foreword
Preface
I The Problem
1 Introduction
1.1 Reinforcement Learning
1.2 Examples
1.3 Elements of Reinforcement Learning
1.4 An Extended Example: Tic-Tac-Toe
1.5 Summary
1.6 History of Reinforcement Learning
1.7 Bibliographical Remarks
2 Evaluative Feedback
2.1 An n-Armed Bandit Problem
2.2 Action-Value Methods
2.3 Softmax Action Selection
2.4 Evaluation Versus Instruction
2.5 Incremental Implementation
2.6 Tracking a Nonstationary Problem
2.7 Optimistic Initial Values
2.8 Reinforcement Comparison
2.9 Pursuit Methods
2.10 Associative Search
2.11 Conclusions
2.12 Bibliographical and Historical Remarks
3 The Reinforcement Learning Problem
3.1 The Agent-Environment Interface
3.2 Goals and Rewards
3.3 Returns
3.4 Unified Notation for Episodic and Continuing Tasks
3.5 The Markov Property
3.6 Markov Decision Processes
3.7 Value Functions
3.8 Optimal Value Functions
3.9 Optimality and Approximation
3.10 Summary
3.11 Bibliographical and Historical Remarks
II Elementary Solution Methods
4 Dynamic Programming
4.1 Policy Evaluation
4.2 Policy Improvement
4.3 Policy Iteration
4.4 Value Iteration
4.5 Asynchronous Dynamic Programming
4.6 Generalized Policy Iteration
4.7 Efficiency of Dynamic Programming
4.8 Summary
4.9 Bibliographical and Historical Remarks
5 Monte Carlo Methods
5.1 Monte Carlo Policy Evaluation
5.2 Monte Carlo Estimation of Action Values
5.3 Monte Carlo Control
5.4 On-Policy Monte Carlo Control
5.5 Evaluating One Policy While Following Another
5.6 Off-Policy Monte Carlo Control
5.7 Incremental Implementation
5.8 Summary
5.9 Bibliographical and Historical Remarks
6 Temporal-Difference Learning
6.1 TD Prediction
6.2 Advantages of TD Prediction Methods
6.3 Optimality of TD(0)
6.4 Sarsa: On-Policy TD Control
6.5 Q-Learning: Off-Policy TD Control
6.6 Actor-Critic Methods
6.7 R-Learning for Undiscounted Continuing Tasks
6.8 Games, Afterstates, and Other Special Cases
6.9 Summary
6.10 Bibliographical and Historical Remarks
III A Unified View
7 Eligibility Traces
7.1 n-Step TD Prediction
7.2 The Forward View of TD(λ)
7.3 The Backward View of TD(λ)
7.4 Equivalence of Forward and Backward Views
7.5 Sarsa(λ)
7.6 Q(λ)
7.7 Eligibility Traces for Actor-Critic Methods
7.8 Replacing Traces
7.9 Implementation Issues
7.10 Variable λ
7.11 Conclusions
7.12 Bibliographical and Historical Remarks
8 Generalization and Function Approximation
8.1 Value Prediction with Function Approximation
8.2 Gradient-Descent Methods
8.3 Linear Methods
8.4 Control with Function Approximation
8.5 Off-Policy Bootstrapping
8.6 Should We Bootstrap?
8.7 Summary
8.8 Bibliographical and Historical Remarks
9 Planning and Learning
9.1 Models and Planning
9.2 Integrating Planning, Acting, and Learning
9.3 When the Model Is Wrong
9.4 Prioritized Sweeping
9.5 Full vs. Sample Backups
9.6 Trajectory Sampling
9.7 Heuristic Search
9.8 Summary
9.9 Bibliographical and Historical Remarks
10 Dimensions of Reinforcement Learning
10.1 The Unified View
10.2 Other Frontier Dimensions
11 Case Studies
11.1 TD-Gammon
11.2 Samuel's Checkers Player
11.3 The Acrobot
11.4 Elevator Dispatching
11.5 Dynamic Channel Allocation
11.6 Job-Shop Scheduling
References
Summary of Notation
Index

What People are Saying About This

Dana Ballard

Reinforcement learning has always been important in the understanding of the driving force behind biological systems, but in the last two decades it has become increasingly important, owing to the development of mathematical algorithms. Barto and Sutton were the prime movers in leading the development of these algorithms and have described them with wonderful clarity in this new text. I predict it will be the standard text.

Wolfram Schultz, University of Fribourg, Switzerland

The widely acclaimed work of Sutton and Barto on reinforcement learning applies some essentials of animal learning, in clever ways, to artificial learning systems. This is a very readable and comprehensive account of the background, algorithms, applications, and future directions of this pioneering and far-reaching work.

From the Publisher

“This book is the bible of reinforcement learning, and the new edition is particularly timely given the burgeoning activity in the field. No one with an interest in the problem of learning to act - student, researcher, practitioner, or curious nonspecialist - should be without it.”

Pedro Domingos, Professor of Computer Science, University of Washington, and author of The Master Algorithm

“Generations of reinforcement learning researchers grew up and were inspired by the first edition of Sutton and Barto's book. The second edition is guaranteed to please previous and new readers: while the new edition significantly expands the range of topics covered (new topics covered include artificial neural networks, Monte-Carlo tree search, average reward maximization, and a chapter on classic and new applications), thus increasing breadth, the authors also managed to increase the depth of the presentation by using cleaner notation and disentangling various aspects of this immense topic. At the same time, the new edition retains the simplicity and directness of explanations, thus retaining the great accessibility of the book to readers of all kinds of backgrounds. A fantastic book that I wholeheartedly recommend to those interested in using, developing, or understanding reinforcement learning.”

Csaba Szepesvari, Research Scientist at DeepMind and Professor of Computer Science, University of Alberta

"I recommend Sutton and Barto's new edition of Reinforcement Learning to anybody who wants to learn about this increasingly important family of machine learning methods. This second edition expands on the popular first edition, covering today's key algorithms and theory, illustrating these concepts using real-world applications that range from learning to control robots, to learning to defeat the human world-champion Go player, and discussing fundamental connections between these computer algorithms and research on human learning from psychology and neuroscience."

Tom Mitchell, Professor of Computer Science, Carnegie Mellon University

“Still the seminal text on reinforcement learning - the increasingly important technique that underlies many of the most advanced AI systems today. Required reading for anyone seriously interested in the science of AI!”

Demis Hassabis, Cofounder and CEO, DeepMind

“The second edition of Reinforcement Learning by Sutton and Barto comes at just the right time. The appetite for reinforcement learning among machine learning researchers has never been stronger, as the field has been moving tremendously in the last twenty years. If you want to fully understand the fundamentals of learning agents, this is the textbook to go to and get started with. It has been extended with modern developments in deep reinforcement learning while extending the scholarly history of the field to modern days. I will certainly recommend it to all my students and the many other graduate students and researchers who want to get the appropriate context behind the current excitement for RL.”

Yoshua Bengio, Professor of Computer Science and Operations Research, University of Montreal

Dimitri P. Bertsekas and John N. Tsitsiklis

This is a highly intuitive and accessible introduction to the recent major developments in reinforcement learning, written by two of the field's pioneering contributors.

Toshio Fukuda

This book not only provides an introduction to learning theory but also serves as a tremendous source of ideas for further development and applications in the real world.
