Handbook of Learning and Approximate Dynamic Programming / Edition 1

ISBN-10:
047166054X
ISBN-13:
9780471660545
Pub. Date:
08/02/2004
Publisher:
Wiley
Hardcover

$203.95


Overview

  • A complete resource on Approximate Dynamic Programming (ADP), including online simulation code
  • Provides a tutorial that readers can use to start implementing the learning algorithms covered in the book
  • Presents ideas, directions, and recent results on current research issues, and addresses applications where ADP has been successfully implemented
  • The contributors are leading researchers in the field

Product Details

ISBN-13: 9780471660545
Publisher: Wiley
Publication date: 08/02/2004
Series: IEEE Press Series on Computational Intelligence, #2
Pages: 672
Product dimensions: 6.20(w) x 9.30(h) x 1.40(d)

About the Author

JENNIE SI is Professor of Electrical Engineering at Arizona State University, Tempe, AZ. She directs the Intelligent Systems Laboratory, which focuses on the analysis and design of learning and adaptive systems. She is an Associate Editor for IEEE Transactions on Neural Networks and a past Associate Editor for IEEE Transactions on Automatic Control and IEEE Transactions on Semiconductor Manufacturing. She was co-chair of the 2002 NSF Workshop on Learning and Approximate Dynamic Programming.

ANDREW G. BARTO is Professor of Computer Science at the University of Massachusetts, Amherst. He is co-director of the Autonomous Learning Laboratory, which carries out interdisciplinary research on machine learning and the modeling of biological learning. He is a core faculty member of the Neuroscience and Behavior Program of the University of Massachusetts and was co-chair of the 2002 NSF Workshop on Learning and Approximate Dynamic Programming. He currently serves as an associate editor of Neural Computation.

WARREN B. POWELL is Professor of Operations Research and Financial Engineering at Princeton University. He is director of CASTLE Laboratory, which focuses on real-time optimization of complex dynamic systems arising in transportation and logistics.

DONALD C. WUNSCH is the Mary K. Finley Missouri Distinguished Professor in the Electrical and Computer Engineering Department at the University of Missouri, Rolla. He heads the Applied Computational Intelligence Laboratory, holds a joint appointment in Computer Science, and is President-Elect of the International Neural Networks Society.

Table of Contents

Foreword.

1. ADP: goals, opportunities and principles.

Part I: Overview.

2. Reinforcement learning and its relationship to supervised learning.

3. Model-based adaptive critic designs.

4. Guidance in the use of adaptive critics for control.

5. Direct neural dynamic programming.

6. The linear programming approach to approximate dynamic programming.

7. Reinforcement learning in large, high-dimensional state spaces.

8. Hierarchical decision making.

Part II: Technical advances.

9. Improved temporal difference methods with linear function approximation.

10. Approximate dynamic programming for high-dimensional resource allocation problems.

11. Hierarchical approaches to concurrency, multiagency, and partial observability.

12. Learning and optimization - from a system theoretic perspective.

13. Robust reinforcement learning using integral-quadratic constraints.

14. Supervised actor-critic reinforcement learning.

15. BPTT and DAC - a common framework for comparison.

Part III: Applications.

16. Near-optimal control via reinforcement learning.

17. Multiobjective control problems by reinforcement learning.

18. Adaptive critic based neural network for control-constrained agile missile.

19. Applications of approximate dynamic programming in power systems control.

20. Robust reinforcement learning for heating, ventilation, and air conditioning control of buildings.

21. Helicopter flight control using direct neural dynamic programming.

22. Toward dynamic stochastic optimal power flow.

23. Control, optimization, security, and self-healing of benchmark power systems.

What People are Saying About This

From the Publisher

"…highly recommended to researchers, graduate students, engineers, and scientists…" (E-STREAMS, February 2006)

"Clearly, this book is useful for researchers who do or want to do research on ADP." (IIE Transactions-Quality & Reliability Engineering, February 2006)

"…I would like to congratulate the editors, for putting together this wonderful collection of research contributions." (Computing Reviews.com, March 18, 2005)
