Machine Learning with TensorFlow, Second Edition

by Chris A. Mattmann

Paperback (2nd ed.)

$49.99 

Overview

Summary
Updated with new code, new projects, and new chapters, Machine Learning with TensorFlow, Second Edition gives readers a solid foundation in machine-learning concepts and the TensorFlow library. Written by NASA JPL Deputy CTO and Principal Data Scientist Chris Mattmann, the book pairs every example with a downloadable Jupyter Notebook for a hands-on experience coding TensorFlow with Python. New and revised content expands coverage of core machine-learning algorithms and of advances in neural networks, such as VGG-Face facial identification classifiers and deep-speech classifiers.

Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.

About the technology
Supercharge your data analysis with machine learning! ML algorithms automatically improve as they process data, so results get better over time. You don’t have to be a mathematician to use ML: Tools like Google’s TensorFlow library help with complex calculations so you can focus on getting the answers you need.
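To make that concrete, here is a minimal sketch (ours, not an example from the book) of the kind of calculation TensorFlow handles for you: TensorFlow 2.x's tf.GradientTape computes a derivative automatically, with no calculus required on your part.

    import tensorflow as tf

    # A tiny function whose gradient we want: f(x) = x^2 + 3x
    x = tf.Variable(2.0)

    with tf.GradientTape() as tape:
        y = x ** 2 + 3 * x  # TensorFlow records these operations as it runs them

    # TensorFlow derives df/dx = 2x + 3 automatically
    print(tape.gradient(y, x).numpy())  # 7.0 at x = 2.0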

About the book
Machine Learning with TensorFlow, Second Edition is a fully revised guide to building machine learning models using Python and TensorFlow. You’ll apply core ML concepts to real-world challenges, such as sentiment analysis, text classification, and image recognition. Hands-on examples illustrate neural network techniques for deep speech processing, facial identification, and autoencoding with CIFAR-10.

What's inside

Choosing the best ML approaches
Visualizing algorithms with TensorBoard
Sharing results with collaborators
Running models in Docker

About the reader
Requires intermediate Python skills and knowledge of general algebraic concepts like vectors and matrices. Examples use the super-stable 1.15.x branch of TensorFlow and TensorFlow 2.x.
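If you want to confirm which branch you have installed before working through the examples, a quick check (our suggestion, not a step from the book):

    import tensorflow as tf

    # The book's examples target the 1.15.x branch and TensorFlow 2.x
    print(tf.__version__)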

About the author
Chris Mattmann is the Division Manager of the Artificial Intelligence, Analytics, and Innovation Organization at NASA Jet Propulsion Lab. The first edition of this book was written by Nishant Shukla with Kenneth Fricklas.

Table of Contents

PART 1 - YOUR MACHINE-LEARNING RIG

1 A machine-learning odyssey

2 TensorFlow essentials

PART 2 - CORE LEARNING ALGORITHMS

3 Linear regression and beyond

4 Using regression for call-center volume prediction

5 A gentle introduction to classification

6 Sentiment classification: Large movie-review dataset

7 Automatically clustering data

8 Inferring user activity from Android accelerometer data

9 Hidden Markov models

10 Part-of-speech tagging and word-sense disambiguation

PART 3 - THE NEURAL NETWORK PARADIGM

11 A peek into autoencoders

12 Applying autoencoders: The CIFAR-10 image dataset

13 Reinforcement learning

14 Convolutional neural networks

15 Building a real-world CNN: VGG-Face and VGG-Face Lite

16 Recurrent neural networks

17 LSTMs and automatic speech recognition

18 Sequence-to-sequence models for chatbots

19 Utility landscape

Product Details

ISBN-13: 9781617297717
Publisher: Manning
Publication date: 02/02/2021
Edition description: 2nd ed.
Pages: 456
Product dimensions: 7.38(w) x 9.25(h) x 0.90(d)

About the Author

Chris Mattmann is the Deputy Chief Technology and Innovation Officer at NASA Jet Propulsion Lab, where he has been recognized as JPL's first Principal Scientist in the area of Data Science. Chris has applied TensorFlow to challenges he’s faced at NASA, including building an implementation of Google’s Show & Tell algorithm for image captioning using TensorFlow. He contributes to open source as a former Director at the Apache Software Foundation, and teaches graduate courses at USC in Content Detection and Analysis, and in Search Engines and Information Retrieval.

Table of Contents

Foreword xvii

Preface xix

Acknowledgments xxi

About this book xxiii

About the author xxviii

About the cover illustration xxix

Part 1 Your Machine-Learning Rig 1

1 A machine-learning odyssey 3

1.1 Machine-learning fundamentals 5

Parameters 7

Learning and inference 8

1.2 Data representation and features 9

1.3 Distance metrics 15

1.4 Types of learning 17

Supervised learning 17

Unsupervised learning 19

Reinforcement learning 19

Meta-learning 20

1.5 TensorFlow 22

1.6 Overview of future chapters 24

2 TensorFlow essentials 27

2.1 Ensuring that TensorFlow works 29

2.2 Representing tensors 30

2.3 Creating operators 33

2.4 Executing operators within sessions 35

2.5 Understanding code as a graph 36

Setting session configurations 38

2.6 Writing code in Jupyter 39

2.7 Using variables 42

2.8 Saving and loading variables 43

2.9 Visualizing data using TensorBoard 45

Implementing a moving average 45

Visualizing the moving average 47

2.10 Putting it all together: The TensorFlow system architecture and API 49

Part 2 Core Learning Algorithms 53

3 Linear regression and beyond 55

3.1 Formal notation 56

How do you know the regression algorithm is working? 58

3.2 Linear regression 60

3.3 Polynomial model 63

3.4 Regularization 65

3.5 Application of linear regression 70

4 Using regression for call-center volume prediction 72

4.1 What is 311? 75

4.2 Cleaning the data for regression 76

4.3 What's in a bell curve? Predicting Gaussian distributions 81

4.4 Training your call prediction regressor 82

4.5 Visualizing the results and plotting the error 83

4.6 Regularization and training test splits 86

5 A gentle introduction to classification 89

5.1 Formal notation 90

5.2 Measuring performance 92

Accuracy 93

Precision and recall 93

Receiver operating characteristic curve 95

5.3 Using linear regression for classification 96

5.4 Using logistic regression 100

Solving 1D logistic regression 101

Solving 2D regression 104

5.5 Multiclass classifier 107

One-versus-all 108

One-versus-one 108

Softmax regression 108

5.6 Application of classification 112

6 Sentiment classification: Large movie-review dataset 114

6.1 Using the Bag of Words model 116

Applying the Bag of Words model to movie reviews 117

Cleaning all the movie reviews 119

Exploratory data analysis on your Bag of Words 121

6.2 Building a sentiment classifier using logistic regression 122

Setting up the training for your model 123

Performing the training for your model 124

6.3 Making predictions using your sentiment classifier 125

6.4 Measuring the effectiveness of your classifier 129

6.5 Creating the softmax-regression sentiment classifier 132

6.6 Submitting your results to Kaggle 140

7 Automatically clustering data 143

7.1 Traversing files in TensorFlow 144

7.2 Extracting features from audio 146

7.3 Using k-means clustering 151

7.4 Segmenting audio 154

7.5 Clustering with a self-organizing map 156

7.6 Applying clustering 161

8 Inferring user activity from Android accelerometer data 163

8.1 The User Activity from Walking dataset 165

Creating the dataset 167

Computing jerk and extracting the feature vector 168

8.2 Clustering similar participants based on jerk magnitudes 171

8.3 Different classes of user activity for a single participant 174

9 Hidden Markov models 178

9.1 Example of a not-so-interpretable model 179

9.2 Markov model 180

9.3 Hidden Markov model 182

9.4 Forward algorithm 183

9.5 Viterbi decoding 186

9.6 Uses of HMMs 187

Modeling a video 187

Modeling DNA 188

Modeling an image 188

9.7 Application of HMMs 188

10 Part-of-speech tagging and word-sense disambiguation 190

10.1 Review of HMM example: Rainy or Sunny 192

10.2 PoS tagging 195

The big picture: Training and predicting PoS with HMMs 199

Generating the ambiguity PoS tagged dataset 202

10.3 Algorithms for building the HMM for PoS disambiguation 204

Generating the emission probabilities 208

10.4 Running the HMM and evaluating its output 212

10.5 Getting more training data from the Brown Corpus 215

10.6 Defining error bars and metrics for PoS tagging 221

Part 3 The Neural Network Paradigm 225

11 A peek into autoencoders 227

11.1 Neural networks 228

11.2 Autoencoders 231

11.3 Batch training 235

11.4 Working with images 236

11.5 Application of autoencoders 240

12 Applying autoencoders: The CIFAR-10 image dataset 241

12.1 What is CIFAR-10? 242

Evaluating your CIFAR-10 autoencoder 244

12.2 Autoencoders as classifiers 247

Using the autoencoder as a classifier via loss 250

12.3 Denoising autoencoders 252

12.4 Stacked deep autoencoders 256

13 Reinforcement learning 261

13.1 Formal notions 262

Policy 263

Utility 264

13.2 Applying reinforcement learning 265

13.3 Implementing reinforcement learning 267

13.4 Exploring other applications of reinforcement learning 274

14 Convolutional neural networks 276

14.1 Drawback of neural networks 277

14.2 Convolutional neural networks 278

14.3 Preparing the image 279

Generating filters 282

Convolving using filters 283

Max pooling 286

14.4 Implementing a CNN in TensorFlow 288

Measuring performance 290

Training the classifier 291

14.5 Tips and tricks to improve performance 292

14.6 Application of CNNs 293

15 Building a real-world CNN: VGG-Face and VGG-Face Lite 294

15.1 Making a real-world CNN architecture for CIFAR-10 297

Loading and preparing the CIFAR-10 image data 298

Performing data augmentation 300

15.2 Building a deeper CNN architecture for CIFAR-10 302

CNN optimizations for increasing learned parameter resilience 306

15.3 Training and applying a better CIFAR-10 CNN 307

15.4 Testing and evaluating your CNN for CIFAR-10 309

CIFAR-10 accuracy results and ROC curves 312

Evaluating the softmax predictions per class 314

15.5 Building VGG-Face for facial recognition 317

Picking a subset of VGG-Face for training VGG-Face Lite 319

TensorFlow's Dataset API and data augmentation 320

Creating a TensorFlow dataset 322

Training using TensorFlow datasets 324

VGG-Face Lite model and training 325

Training and evaluating VGG-Face Lite 328

Evaluating and predicting with VGG-Face Lite 330

16 Recurrent neural networks 334

16.1 Introduction to RNNs 335

16.2 Implementing a recurrent neural network 336

16.3 Using a predictive model for time-series data 339

16.4 Applying RNNs 342

17 LSTMs and automatic speech recognition 343

17.1 Preparing the LibriSpeech corpus 344

Downloading, cleaning, and preparing LibriSpeech OpenSLR data 345

Converting the audio 346

Generating per-audio transcripts 347

Aggregating audio and transcripts 348

17.2 Using the deep-speech model 349

Preparing the input audio data for deep speech 351

Preparing the text transcripts as character-level numerical data 354

The deep-speech model in TensorFlow 356

Connectionist temporal classification in TensorFlow 360

17.3 Training and evaluating deep speech 361

18 Sequence-to-sequence models for chatbots 365

18.1 Building on classification and RNNs 366

18.2 Understanding seq2seq architecture 368

18.3 Vector representation of symbols 373

18.4 Putting it all together 374

18.5 Gathering dialogue data 382

19 Utility landscape 384

19.1 Preference model 386

19.2 Image embedding 390

19.3 Ranking images 394

What's next 399

Appendix Installation instructions 401

Index 411
