Large Sample Techniques for Statistics
by Jiming Jiang

eBook, 2nd ed. 2022

$22.49 (original price $29.99; save 25%)


Overview

In a way, the world is made up of approximations, and surely there is no exception in the world of statistics. In fact, approximations, especially large-sample approximations, are very important parts of both theoretical and applied statistics. The Gaussian distribution, also known as the normal distribution, is merely one such example, due to the well-known central limit theorem. Large-sample techniques provide solutions to many practical problems; they simplify our solutions to difficult, sometimes intractable problems; they justify our solutions; and they guide us to directions of improvements. On the other hand, just because large-sample approximations are used everywhere, and every day, it does not guarantee that they are used properly, and, when the techniques are misused, there may be serious consequences.

Example 1 (Asymptotic distribution). The likelihood ratio test (LRT) is one of the fundamental techniques in statistics. It is well known that, in the "standard" situation, the asymptotic null distribution of the LRT is χ², with the degrees of freedom equal to the difference between the dimensions, defined as the numbers of free parameters, of the two nested models being compared (e.g., Rice 1995, p. 310). This might lead to a wrong impression that the asymptotic (null) distribution of the LRT is always χ². A similar mistake might take place when dealing with Pearson's χ²-test: the asymptotic distribution of Pearson's χ²-test is not always χ² (e.g., Moore 1978).
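The "standard" situation mentioned above can be illustrated with a minimal simulation (this sketch is not from the book; it assumes NumPy and SciPy are available). For i.i.d. X_i ~ N(mu, 1) and the nested comparison H0: mu = 0 versus H1: mu unrestricted, the LRT statistic is -2 log(Lambda) = n * xbar², and under H0 its distribution is χ² with 1 degree of freedom, the difference in the numbers of free parameters:

```python
# Simulation sketch: the LRT statistic for H0: mu = 0 vs H1: mu free,
# with X_i ~ N(mu, 1), is -2 log(Lambda) = n * xbar**2, which under H0
# follows a chi-square distribution with 1 degree of freedom.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 200, 20000

# Sample means of `reps` datasets of size n, generated under H0 (mu = 0).
xbar = rng.standard_normal((reps, n)).mean(axis=1)
lrt = n * xbar**2  # -2 log(likelihood ratio) for each dataset

# Compare the empirical 95th percentile with the chi-square(1) quantile.
emp = np.quantile(lrt, 0.95)
theo = stats.chi2.ppf(0.95, df=1)
print(f"empirical 95% quantile: {emp:.3f}, chi2(1) quantile: {theo:.3f}")
```

The two quantiles agree closely, which is what licenses the usual practice of reading LRT p-values off a χ² table; in non-standard situations (e.g., a null parameter on the boundary), this agreement fails, which is the point of Example 1.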

Product Details

ISBN-13: 9783030916954
Publisher: Springer-Verlag New York, LLC
Publication date: 04/04/2022
Series: Springer Texts in Statistics
Sold by: Barnes & Noble
Format: eBook
File size: 37 MB

About the Author

Jiming Jiang is a Professor of Statistics at the University of California, Davis. He is a Fellow of the American Statistical Association and a Fellow of the Institute of Mathematical Statistics. He is the author of another Springer book, Linear and Generalized Linear Mixed Models and Their Applications (2007). Jiming Jiang is a prominent researcher in the fields of mixed effects models, small area estimation and model selection. Most of his research papers have involved large sample techniques. He is currently an Associate Editor of the Annals of Statistics.

Table of Contents

Preface VII

1 The ε-δ Arguments 1

1.1 Introduction 1

1.2 Getting used to the ε-δ arguments 2

1.3 More examples 5

1.4 Case study: Consistency of MLE in the i.i.d. case 8

1.5 Some useful results 11

1.5.1 Infinite sequence 11

1.5.2 Infinite series 12

1.5.3 Topology 13

1.5.4 Continuity, differentiation, and integration 14

1.6 Exercises 16

2 Modes of Convergence 19

2.1 Introduction 19

2.2 Convergence in probability 20

2.3 Almost sure convergence 23

2.4 Convergence in distribution 26

2.5 Lp convergence and related topics 31

2.6 Case study: χ²-test 37

2.7 Summary and additional results 43

2.8 Exercises 45

3 Big O, Small o, and the Unspecified c 51

3.1 Introduction 51

3.2 Big O and small o for sequences and functions 52

3.3 Big O and small o for vectors and matrices 55

3.4 Big O and small o for random quantities 58

3.5 The unspecified c and other similar methods 62

3.6 Case study: The baseball problem 67

3.7 Case study: Likelihood ratio for a clustering problem 70

3.8 Exercises 76

4 Asymptotic Expansions 81

4.1 Introduction 81

4.2 Taylor expansion 83

4.3 Edgeworth expansion; method of formal derivation 89

4.4 Other related expansions 94

4.4.1 Fourier series expansion 94

4.4.2 Cornish-Fisher expansion 98

4.4.3 Two time series expansions 101

4.5 Some elementary expansions 102

4.6 Laplace approximation 106

4.7 Case study: Asymptotic distribution of the MLE 111

4.8 Case study: The Prasad-Rao method 115

4.9 Exercises 121

5 Inequalities 127

5.1 Introduction 127

5.2 Numerical inequalities 128

5.2.1 The convex function inequality 128

5.2.2 Hölder's and related inequalities 131

5.2.3 Monotone functions and related inequalities 134

5.3 Matrix inequalities 138

5.3.1 Nonnegative definite matrices 138

5.3.2 Characteristics of matrices 141

5.4 Integral/moment inequalities 145

5.5 Probability inequalities 152

5.6 Case study: Some problems on existence of moments 158

5.7 Exercises 163

6 Sums of Independent Random Variables 173

6.1 Introduction 173

6.2 The weak law of large numbers 174

6.3 The strong law of large numbers 178

6.4 The central limit theorem 182

6.5 The law of the iterated logarithm 188

6.6 Further results 192

6.6.1 Invariance principles in CLT and LIL 192

6.6.2 Large deviations 197

6.7 Case study: The least squares estimators 202

6.8 Exercises 206

7 Empirical Processes 215

7.1 Introduction 215

7.2 Glivenko-Cantelli theorem and statistical functionals 217

7.3 Weak convergence of empirical processes 220

7.4 LIL and strong approximation 223

7.5 Bounds and large deviations 225

7.6 Non-i.i.d. observations 228

7.7 Empirical processes indexed by functions 231

7.8 Case study: Estimation of ROC curve and ODC 233

7.9 Exercises 235

8 Martingales 239

8.1 Introduction 239

8.2 Examples and simple properties 241

8.3 Two important theorems of martingales 247

8.3.1 The optional stopping theorem 247

8.3.2 The martingale convergence theorem 250

8.4 Martingale laws of large numbers 252

8.4.1 A weak law of large numbers 252

8.4.2 Some strong laws of large numbers 254

8.5 A martingale central limit theorem and related topic 257

8.6 Convergence rate in SLLN and LIL 262

8.7 Invariance principles for martingales 265

8.8 Case study: CLTs for quadratic forms 267

8.9 Case study: Martingale approximation 273

8.10 Exercises 276

9 Time and Spatial Series 283

9.1 Introduction 283

9.2 Autocovariances and autocorrelations 287

9.3 The information criteria 290

9.4 ARMA model identification 294

9.5 Strong limit theorems for i.i.d. spatial series 299

9.6 Two-parameter martingale differences 301

9.7 Sample ACV and ACR for spatial series 304

9.8 Case study: Spatial AR models 307

9.9 Exercises 311

10 Stochastic Processes 317

10.1 Introduction 317

10.2 Markov chains 319

10.3 Poisson processes 326

10.4 Renewal theory 330

10.5 Brownian motion 334

10.6 Stochastic integrals and diffusions 340

10.7 Case study: GARCH models and financial SDE 347

10.8 Exercises 351

11 Nonparametric Statistics 357

11.1 Introduction 357

11.2 Some classical nonparametric tests 360

11.3 Asymptotic relative efficiency 364

11.4 Goodness-of-fit tests 369

11.5 U-statistics 373

11.6 Density estimation 382

11.7 Exercises 387

12 Mixed Effects Models 393

12.1 Introduction 393

12.2 REML: Restricted maximum likelihood 397

12.3 Linear mixed model diagnostics 405

12.4 Inference about GLMM 412

12.5 Mixed model selection 420

12.6 Exercises 428

13 Small-Area Estimation 433

13.1 Introduction 433

13.2 Empirical best prediction with binary data 435

13.3 The Fay-Herriot model 443

13.4 Nonparametric small-area estimation 452

13.5 Model selection for small-area estimation 459

13.6 Exercises 468

14 Jackknife and Bootstrap 471

14.1 Introduction 471

14.2 The jackknife 474

14.3 Jackknifing the MSPE of EBP 480

14.4 The bootstrap 490

14.5 Bootstrapping time series 498

14.6 Bootstrapping mixed models 508

14.7 Exercises 517

15 Markov-Chain Monte Carlo 523

15.1 Introduction 523

15.2 The Gibbs sampler 526

15.3 The Metropolis-Hastings algorithm 532

15.4 Monte Carlo EM algorithm 536

15.5 Convergence rates of Gibbs samplers 541

15.6 Exercises 547

A Appendix 553

A.1 Matrix algebra 553

A.1.1 Numbers associated with a matrix 553

A.1.2 Inverse of a matrix 554

A.1.3 Kronecker products 555

A.1.4 Matrix differentiation 555

A.1.5 Projection 556

A.1.6 Decompositions of matrices and eigenvalues 557

A.2 Measure and probability 558

A.2.1 Measures 558

A.2.2 Measurable functions 560

A.2.3 Integration 562

A.2.4 Distributions and random variables 564

A.2.5 Conditional expectations 567

A.2.6 Conditional distributions 568

A.3 Some results in statistics 569

A.3.1 The multivariate normal distribution 569

A.3.2 Maximum likelihood 571

A.3.3 Exponential family and generalized linear models 573

A.3.4 Bayesian inference 574

A.3.5 Stationary processes 576

A.4 List of notation and abbreviations 578

References 585

Index 603
