Handbook of Markov Decision Processes: Methods and Applications / Edition 1

by Eugene A. Feinberg, Adam Shwartz
ISBN-10: 0792374592
ISBN-13: 9780792374596
Pub. Date: 09/30/2001
Publisher: Springer US

Hardcover

$379.99


Overview

This volume deals with the theory of Markov Decision Processes (MDPs) and their applications. Each chapter was written by a leading expert in the respective area. The chapters cover major research areas and methodologies, and discuss open questions and future research directions. The chapters can be read independently, given the basic notation and concepts of Section 1.2. Most chapters should be accessible to graduate or advanced undergraduate students in operations research, electrical engineering, and computer science.

1.1 AN OVERVIEW OF MARKOV DECISION PROCESSES

The theory of Markov Decision Processes, also known under several other names including sequential stochastic optimization, discrete-time stochastic control, and stochastic dynamic programming, studies sequential optimization of discrete-time stochastic systems. The basic object is a discrete-time stochastic system whose transition mechanism can be controlled over time. Each control policy defines a stochastic process and the values of the objective functions associated with this process; the goal is to select a "good" control policy. In real life, decisions that humans and computers make on all levels usually have two types of impact: (i) they cost or save time, money, or other resources, or they bring revenues, and (ii) they influence the future by affecting the dynamics of the system. In many situations, decisions with the largest immediate profit may not be good in view of future events. MDPs model this paradigm and provide results on the structure and existence of good policies and on methods for their calculation.
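The trade-off described above, where the action with the largest immediate profit is not optimal once future effects are discounted in, can be illustrated with a minimal sketch: value iteration on a hypothetical two-state MDP (the states, actions, rewards, and discount factor below are invented for illustration and do not come from the book).

```python
import numpy as np

# Hypothetical MDP. States: 0 and 1. Actions: 0 ("greedy") and 1 ("patient").
# R[s, a] = immediate reward; P[s, a, s'] = transition probability.
R = np.array([[2.0, 1.0],     # in state 0: greedy pays 2 now, patient pays 1
              [0.0, 0.0]])    # state 1 is an absorbing trap with zero reward
P = np.array([[[0.0, 1.0],    # greedy in state 0 moves into the trap (state 1)
               [1.0, 0.0]],   # patient in state 0 stays in state 0
              [[0.0, 1.0],    # state 1: both actions stay in state 1
               [0.0, 1.0]]])
gamma = 0.9                   # discount factor

# Value iteration: repeatedly apply the Bellman optimality operator.
V = np.zeros(2)
for _ in range(500):
    Q = R + gamma * (P @ V)   # Q[s, a] = r(s, a) + gamma * E[V(s') | s, a]
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)
# The greedy action earns 2 once and then nothing; the patient action earns
# 1 per step forever, worth 1 / (1 - gamma) = 10. The optimal policy is
# "patient" in state 0, even though "greedy" has the larger immediate reward.
```

Here the policy that maximizes immediate profit is strictly suboptimal, which is exactly the paradigm the Overview describes.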

Product Details

ISBN-13: 9780792374596
Publisher: Springer US
Publication date: 09/30/2001
Series: International Series in Operations Research & Management Science , #40
Edition description: 2002
Pages: 565
Product dimensions: 6.10(w) x 9.25(h) x 0.05(d)

Table of Contents

1 Introduction
I Finite State and Action Models
2 Finite State and Action MDPs
3 Bias Optimality
4 Singular Perturbations of Markov Chains and Decision Processes
II Infinite State Models
5 Average Reward Optimization Theory for Denumerable State Spaces
6 Total Reward Criteria
7 Mixed Criteria
8 Blackwell Optimality
9 The Poisson Equation for Countable Markov Chains: Probabilistic Methods and Interpretations
10 Stability, Performance Evaluation, and Optimization
11 Convex Analytic Methods in Markov Decision Processes
12 The Linear Programming Approach
13 Invariant Gambling Problems and Markov Decision Processes
III Applications
14 Neuro-Dynamic Programming: Overview and Recent Trends
15 Markov Decision Processes in Finance and Dynamic Options
16 Applications of Markov Decision Processes in Communication Networks
17 Water Reservoir Applications of Markov Decision Processes