Probabilistic Machine Learning for Civil Engineers

by James-A. Goulet

eBook

$35.99 

Overview

An introduction to key concepts and techniques in probabilistic machine learning for civil engineering students and professionals, with many step-by-step examples, illustrations, and exercises.

This book introduces probabilistic machine learning concepts to civil engineering students and professionals, presenting key approaches and techniques in a way that is accessible to readers without a specialized background in statistics or computer science. It presents different methods clearly and directly, through step-by-step examples, illustrations, and exercises. Having mastered the material, readers will be able to understand the more advanced machine learning literature from which this book draws.

The book presents key approaches in the three subfields of probabilistic machine learning: supervised learning, unsupervised learning, and reinforcement learning. It first covers the background knowledge required to understand machine learning, including linear algebra and probability theory. It goes on to present Bayesian estimation, which is behind the formulation of both supervised and unsupervised learning methods, and Markov chain Monte Carlo methods, which enable Bayesian estimation in certain complex cases. The book then covers approaches associated with supervised learning, including regression methods and classification methods, and notions associated with unsupervised learning, including clustering, dimensionality reduction, Bayesian networks, state-space models, and model calibration. Finally, the book introduces fundamental concepts of rational decisions in uncertain contexts and rational decision-making in uncertain and sequential contexts. Building on this, the book describes the basics of reinforcement learning, whereby a virtual agent learns how to make optimal decisions through trial and error while interacting with its environment.
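To give a flavor of the Bayesian estimation material described above, the short sketch below applies Bayes' rule to a discrete state variable, in the spirit of the disease-screening example listed in chapter 6 (section 6.2.1). The function name and all numerical values here are hypothetical illustrations, not taken from the book.

```python
def bayes_update(prior, true_pos_rate, false_pos_rate):
    """Return P(state | positive observation) via Bayes' rule
    for a binary state variable."""
    # Evidence: total probability of observing a positive result
    evidence = true_pos_rate * prior + false_pos_rate * (1 - prior)
    # Posterior: likelihood times prior, normalized by the evidence
    return true_pos_rate * prior / evidence

# A rare condition and a fairly accurate test: the posterior remains
# modest because the prior is small -- the classic base-rate effect.
posterior = bayes_update(prior=0.01, true_pos_rate=0.95, false_pos_rate=0.05)
print(round(posterior, 3))  # prints 0.161
```

Even this minimal example shows why a formal treatment of priors, likelihoods, and evidence pays off in engineering diagnosis problems such as post-earthquake damage assessment.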


Product Details

ISBN-13: 9780262358019
Publisher: MIT Press
Publication date: 03/16/2020
Sold by: Penguin Random House Publisher Services
Format: eBook
Pages: 304
File size: 14 MB

About the Author

James-A. Goulet is Associate Professor of Civil Engineering at Polytechnique Montreal.

Table of Contents

List of Figures xi

List of Algorithms xix

Acknowledgments xxi

Nomenclature & Abbreviations xxiii

1 Introduction 1

I Background 7

2 Linear Algebra 9

2.1 Notation 9

2.2 Operations 10

2.3 Norms 12

2.4 Transformations 12

2.4.1 Linear Transformations 12

2.4.2 Eigen Decomposition 13

3 Probability Theory 17

3.1 Set Theory 18

3.2 Probability of Events 19

3.3 Random Variables 22

3.3.1 Discrete Random Variables 22

3.3.2 Continuous Random Variables 23

3.3.3 Conditional Probabilities 24

3.3.4 Multivariate Random Variables 25

3.3.5 Moments and Expectation 27

3.4 Functions of Random Variables 29

3.4.1 Linear Functions 30

3.4.2 Linearization of Nonlinear Functions 31

4 Probability Distributions 35

4.1 Normal Distribution 35

4.1.1 Univariate Normal 35

4.1.2 Multivariate Normal 36

4.1.3 Properties 37

4.1.4 Example: Conditional Distributions 40

4.1.5 Example: Sum of Normal Random Variables 40

4.2 Log-Normal Distribution 41

4.2.1 Univariate Log-Normal 41

4.2.2 Multivariate Log-Normal 42

4.2.3 Properties 43

4.3 Beta Distribution 44

5 Convex Optimization 47

5.1 Gradient Ascent 48

5.2 Newton-Raphson 50

5.3 Coordinate Ascent 52

5.4 Numerical Derivatives 53

5.5 Parameter-Space Transformation 54

II Bayesian Estimation 57

6 Learning from Data 59

6.1 Bayes 59

6.2 Discrete State Variables 61

6.2.1 Example: Disease Screening 61

6.2.2 Example: Fire Alarm 62

6.2.3 Example: Post-Earthquake Damage Assessment 65

6.3 Continuous State Variables 66

6.3.1 Likelihood: f(D|x) 66

6.3.2 Evidence: f(D) 69

6.3.3 Posterior: f(x|D) 70

6.3.4 Number of Observations and Identifiability 70

6.4 Parameter Estimation 71

6.4.1 Prior: f(θ) 72

6.4.2 Likelihood: f(D|θ) 73

6.4.3 Posterior PDF: f(θ|D) 73

6.5 Monte Carlo 74

6.5.1 Monte Carlo Integration 74

6.5.2 Monte Carlo Sampling: Continuous State Variables 75

6.5.3 Monte Carlo Sampling: Parameter Estimation 78

6.6 Conjugate Priors 79

6.7 Approximating the Posterior 82

6.7.1 Maximum Likelihood and Posterior Estimates 82

6.7.2 Laplace Approximation 83

6.8 Model Selection 85

7 Markov Chain Monte Carlo 89

7.1 Metropolis 90

7.2 Metropolis-Hastings 92

7.3 Convergence Checks 92

7.3.1 Burn-In Phase 92

7.3.2 Monitoring Convergence 93

7.3.3 Estimated Potential Scale Reduction 94

7.3.4 Acceptance Rate 96

7.3.5 Proposal Tuning 97

7.4 Space Transformation 97

7.5 Computing with MCMC Samples 99

III Supervised Learning 105

8 Regression 107

8.1 Linear Regression 107

8.1.1 Mathematical Formulation 108

8.1.2 Overfitting and Cross-Validation 110

8.1.3 Mathematical Formulation >1-D 113

8.1.4 Limitations 114

8.2 Gaussian Process Regression 115

8.2.1 Updating a GP Using Exact Observations 116

8.2.2 Updating a GP Using Imperfect Observations 118

8.2.3 Multiple Covariates 119

8.2.4 Parameter Estimation 120

8.2.5 Example: Soil Contamination Characterization 121

8.2.6 Example: Metamodel 122

8.2.7 Advanced Considerations 124

8.3 Neural Networks 126

8.3.1 Feedforward Neural Networks 127

8.3.2 Parameter Estimation and Backpropagation 132

8.3.3 Regularization 134

8.3.4 Example: Metamodel 136

9 Classification 139

9.1 Generative Classifiers 140

9.1.1 Formulation 140

9.1.2 Example: Post-Earthquake Structural Safety Assessment 143

9.2 Logistic Regression 144

9.3 Gaussian Process Classification 146

9.4 Neural Networks 150

9.5 Regression versus Classification 152

IV Unsupervised Learning 155

10 Clustering and Dimension Reduction 157

10.1 Clustering 157

10.1.1 Gaussian Mixture Models 157

10.1.2 K-Means 162

10.2 Principal Component Analysis 163

11 Bayesian Networks 167

11.1 Graphical Models Nomenclature 169

11.2 Conditional Independence 170

11.3 Inference 171

11.4 Conditional Probability Estimation 173

11.4.1 Fully Observed Bayesian Network 173

11.4.2 Partially Observed Bayesian Network 176

11.5 Dynamic Bayesian Network 177

12 State-Space Models 181

12.1 Linear Gaussian State-Space Models 182

12.1.1 Basic Problem Setup 183

12.1.2 General Formulation 186

12.1.3 Forecasting and Smoothing 190

12.1.4 Parameter Estimation 192

12.1.5 Limitations and Practical Considerations 193

12.2 State-Space Models with Regime Switching 194

12.2.1 Switching Kalman Filter 194

12.2.2 Example: Temperature Data with Regime Switch 197

12.3 Linear Model Structures 198

12.3.1 Generic Components 199

12.3.2 Component Assembly 203

12.3.3 Modeling Dependencies Between Observations 205

12.4 Anomaly Detection 208

13 Model Calibration 213

13.1 Least-Squares Model Calibration 215

13.1.1 Illustrative Examples 215

13.1.2 Limitations of Deterministic Model Calibration 218

13.2 Hierarchical Bayesian Estimation 218

13.2.1 Joint Posterior Formulation 218

13.2.2 Predicting at Unobserved Locations 223

V Reinforcement Learning 227

14 Decisions in Uncertain Contexts 229

14.1 Introductory Example 229

14.2 Utility Theory 230

14.2.1 Nomenclature 230

14.2.2 Rational Decisions 231

14.2.3 Axioms of Utility Theory 231

14.3 Utility Functions 232

14.4 Value of Information 236

14.4.1 Value of Perfect Information 236

14.4.2 Value of Imperfect Information 237

15 Sequential Decisions 241

15.1 Markov Decision Process 244

15.1.1 Utility for an Infinite Planning Horizon 245

15.1.2 Value Iteration 246

15.1.3 Policy Iteration 248

15.1.4 Partially Observable Markov Decision Process 251

15.2 Model-Free Reinforcement Learning 252

15.2.1 Temporal Difference Learning 252

15.2.2 Temporal Difference Q-Learning 254

Bibliography 259

Index 267

What People are Saying About This

From the Publisher

This comprehensive textbook presents basic machine learning methods for civil engineers who do not have a specialized background in statistics or in computer science. In addition to the fundamentals, the book includes several case studies that students and professionals will appreciate.

Yossi Sheffi, Elisha Gray II Professor of Engineering Systems, MIT, and Director of the MIT Center for Transportation & Logistics; author of five books including The Power of Resilience and Balancing Green
