Statistical Reasoning in the Behavioral Sciences / Edition 6

by Bruce M. King
Original price: $192.75



Cited by more than 300 scholars, Statistical Reasoning in the Behavioral Sciences continues to provide streamlined resources and easy-to-understand information on statistics in the behavioral sciences and related fields, including psychology, education, human resources management, and sociology.

The sixth edition includes new information about the use of computers in statistics and offers screenshots of IBM SPSS (formerly SPSS) menus, dialog boxes, and output in selected chapters without sacrificing any of the conceptual logic and the statistical formulas needed to facilitate understanding. The example problems have been updated to reflect more current topics (e.g., text messaging while driving, violence in the media). The latest research and new photos have been integrated throughout the text to make the material more accessible. With these changes, students and professionals in the behavioral sciences will develop an understanding of statistical logic and procedures, the properties of statistical devices, and the importance of the assumptions underlying statistical tools.

Product Details

ISBN-13: 9781118532638
Publisher: Wiley
Publication date: 10/28/2012
Edition description: 6th Edition (Reprint)
Pages: 496
Product dimensions: 7.90(w) x 9.90(h) x 0.70(d)

About the Author

Bruce M. King is a psychologist and professor at Clemson University. He received his Ph.D. in biopsychology from the University of Chicago in 1978. Since 1981 he has taught human sexuality to more than 40,000 students, first at the University of New Orleans and later at Clemson.

Table of Contents

Introduction     1
Descriptive Statistics     3
Inferential Statistics     3
Our Concern: Applied Statistics     4
Variables and Constants     6
Scales of Measurement     7
Scales of Measurement and Problems of Statistical Treatment     10
Do Statistics Lie?     11
Point of Controversy: Are Statistical Procedures Necessary?     13
Some Tips on Studying Statistics     14
Summary     15
Frequency Distributions, Percentiles, and Percentile Ranks     19
Organizing Qualitative Data     21
Grouped Scores     21
How to Construct a Grouped Frequency Distribution     23
Apparent versus Real Limits     24
The Relative Frequency Distribution     26
The Cumulative Frequency Distribution     27
Percentiles and Percentile Ranks     28
Computing Percentiles from Grouped Data     29
Computation of Percentile Rank     32
Summary     32
Graphic Representation of Frequency Distributions     37
Basic Procedures     38
The Histogram     39
The Frequency Polygon     40
Choosing between a Histogram and a Polygon     41
The Bar Diagram and the Pie Chart     43
The Cumulative Percentage Curve     45
Factors Affecting the Shape of Graphs     47
Shape of Frequency Distributions     50
Summary     50
Central Tendency     55
The Mode     56
The Median     56
The Mean     58
Properties of the Mode     59
Properties of the Mean     60
Point of Controversy: Is It Permissible to Calculate the Mean for Tests in the Behavioral Sciences?     61
Properties of the Median     62
Measures of Central Tendency in Symmetrical and Asymmetrical Distributions     64
The Effects of Score Transformations     65
Summary     65
Variability and Standard (z) Scores     69
The Range and Semi-Interquartile Range     71
Deviation Scores     72
Deviational Measures: The Variance     73
Deviational Measures: The Standard Deviation     74
Calculation of the Variance and Standard Deviation: Raw-Score Method     75
Properties of the Range and Semi-Interquartile Range     76
Point of Controversy: Calculating the Sample Variance: Should We Divide by n or (n - 1)?     77
Properties of the Standard Deviation     78
How Big Is a Standard Deviation?     78
Score Transformations and Measures of Variability     79
Standard Scores (z Scores)     80
A Comparison of z Scores and Percentile Ranks     83
Summary     83
Standard Scores and the Normal Curve     89
Historical Aspects of the Normal Curve     90
The Nature of the Normal Curve     92
Standard Scores and the Normal Curve     94
The Standard Normal Curve: Finding Areas When the Score Is Known     94
The Standard Normal Curve: Finding Scores When the Area Is Known     97
The Normal Curve as a Model for Real Variables     99
The Normal Curve as a Model for Sampling Distributions     100
Summary     100
Point of Controversy: How Normal Is the Normal Curve?     101
Correlation     105
Some History     107
Graphing Bivariate Distributions: The Scatter Diagram     107
Correlation: A Matter of Direction     111
Correlation: A Matter of Degree     112
Understanding the Meaning of Degree of Correlation     113
Formulas for Pearson's Coefficient of Correlation      115
Calculating r from Raw Scores     118
Spearman's Rank-Order Correlation Coefficient     120
Correlation Does Not Prove Causation     121
The Effects of Score Transformations     124
Cautions Concerning Correlation Coefficients     124
Summary     128
Prediction     133
The Problem of Prediction     134
The Criterion of Best Fit     136
Point of Controversy: Least-Squares Regression versus the Resistant Line     137
The Regression Equation: Standard-Score Form     138
The Regression Equation: Raw-Score Form     139
Error of Prediction: The Standard Error of Estimate     141
An Alternative (and Preferred) Formula for S_YX     143
Error in Estimating Y from X     143
Cautions Concerning Estimation of Predictive Error     145
Summary     146
Interpretive Aspects of Correlation and Regression     151
Factors Influencing r: Degree of Variability in Each Variable     152
Interpretation of r: The Regression Equation I     152
Interpretation of r: The Regression Equation II     154
Interpretation of r: Proportion of Variation in Y Not Associated with Variation in X      156
Interpretation of r: Proportion of Variation in Y Associated with Variation in X     158
Interpretation of r: Proportion of Correct Placements     160
Summary     161
Probability     165
Defining Probability     166
A Mathematical Model of Probability     168
Two Theorems in Probability     168
An Example of a Probability Distribution: The Binomial     170
Applying the Binomial     172
Are Amazing Coincidences Really That Amazing?     174
Summary     175
Random Sampling and Sampling Distributions     179
Random Sampling     181
Using a Table of Random Numbers     182
The Random Sampling Distribution of the Mean: An Introduction     183
Characteristics of the Random Sampling Distribution of the Mean     186
Using the Sampling Distribution of X̄ to Determine the Probability for Different Ranges of Values of X̄     189
Random Sampling Without Replacement     192
Summary     192
Introduction to Statistical Inference: Testing Hypotheses about Single Means (z and t)     195
Testing a Hypothesis about a Single Mean     197
The Null and Alternative Hypotheses     197
When Do We Retain and When Do We Reject the Null Hypothesis?     198
Review of the Procedure for Hypothesis Testing     199
Dr. Brown's Problem: Conclusion     199
The Statistical Decision     201
Choice of H_A: One-Tailed and Two-Tailed Tests     203
Review of Assumptions in Testing Hypotheses about a Single Mean     205
Estimating the Standard Error of the Mean When σ Is Unknown     205
Point of Controversy: The Single-Subject Research Design     206
The t Distribution     209
Characteristics of Student's Distribution of t     210
Degrees of Freedom and Student's Distribution of t     212
An Example: Professor Dyett's Question     213
Computing t from Raw Scores     215
Levels of Significance versus p-Values     218
Summary     219
Interpreting the Results of Hypothesis Testing: Effect Size, Type I and Type II Errors, and Power     225
A Statistically Significant Difference versus a Practically Important Difference     226
Point of Controversy: The Failure to Publish "Nonsignificant" Results     227
Effect Size     228
Errors in Hypothesis Testing     230
The Power of a Test     233
Factors Affecting Power: Discrepancy between the True Population Mean and the Hypothesized Mean (Size of Effect)     233
Factors Affecting Power: Sample Size     234
Factors Affecting Power: Variability of the Measure     235
Factors Affecting Power: Level of Significance (α)     235
Factors Affecting Power: One-Tailed versus Two-Tailed Tests     236
Calculating the Power of a Test     237
Point of Controversy: Meta-Analysis     239
Estimating Power and Sample Size for Tests of Hypotheses about Means     240
Problems in Selecting a Random Sample and in Drawing Conclusions     242
Summary     243
Testing Hypotheses about the Difference between Two Independent Groups     247
The Null and Alternative Hypotheses     248
The Random Sampling Distribution of the Difference between Two Sample Means     249
Properties of the Sampling Distribution of the Difference between Means     251
Determining a Formula for t     252
Testing the Hypothesis of No Difference between Two Independent Means: The Dyslexic Children Experiment     255
Use of a One-Tailed Test     257
Sample Size in Inference about Two Means     258
Effect Size     258
Point of Controversy: Testing for Equivalence between Two Experimental Groups      259
Estimating Power and Sample Size for Tests of Hypotheses about the Difference between Two Independent Means     263
Assumptions Associated with Inference about the Difference between Two Independent Means     265
The Random-Sampling Model versus the Random-Assignment Model     266
Random Sampling and Random Assignment as Experimental Controls     267
Summary     268
Testing for a Difference between Two Dependent (Correlated) Groups     273
Determining a Formula for t     274
Degrees of Freedom for Tests of No Difference between Dependent Means     275
An Alternative Approach to the Problem of Two Dependent Means     276
Testing a Hypothesis about Two Dependent Means     277
Effect Size     280
Power     281
Assumptions When Testing a Hypothesis about the Difference between Two Dependent Means     282
Problems with Using the Dependent-Samples Design     282
Summary     283
Inference about Correlation Coefficients     287
The Random Sampling Distribution of r     288
Testing the Hypothesis that ρ = 0     289
Fisher's z' Transformation     290
Strength of Relationship     292
A Note about Assumptions      292
Inference When Using Spearman's r_s     292
Summary     293
An Alternative to Hypothesis Testing: Confidence Intervals     295
Examples of Estimation     297
Confidence Intervals for μ_X     297
The Relation between Confidence Intervals and Hypothesis Testing     300
The Advantages of Confidence Intervals     301
Random Sampling and Generalizing Results     302
Evaluating a Confidence Interval     303
Point of Controversy: Objectivity and Subjectivity in Inferential Statistics: Bayesian Statistics     304
Confidence Intervals for μ_X - μ_Y     306
Sample Size Required for Confidence Intervals of μ_X and μ_X - μ_Y     309
Confidence Intervals for ρ     311
Summary     312
Chi-Square and Inference about Frequencies     315
The Chi-Square Test for Goodness of Fit     316
Chi-Square (χ²) as a Measure of the Difference between Expected and Observed Frequencies     318
The Logic of the Chi-Square Test     319
Interpretation of the Outcome of a Chi-Square Test     320
Different Hypothesized Proportions in the Test for Goodness of Fit      321
Effect Size for Goodness-of-Fit Problems     322
Assumptions in the Use of the Theoretical Distribution of Chi-Square     322
Chi-Square as a Test for Independence between Two Variables     323
Finding Expected Frequencies in a Contingency Table     325
Calculation of χ² and Determination of Significance in a Contingency Table     326
Measures of Effect Size (Strength of Association) for Tests of Independence     327
Point of Controversy: Yates Correction for Continuity     328
Power and the Chi-Square Test of Independence     329
Summary     330
Testing for Differences among Three or More Groups: One-Way Analysis of Variance (and Some Alternatives)     335
The Null Hypothesis     336
The Basis of One-Way Analysis of Variance: Variation within and between Groups     337
Partition of the Sums of Squares     338
Degrees of Freedom     341
Variance Estimates and the F Ratio     342
The Summary Table     344
Example     344
Comparison of t and F     347
Raw-Score Formulas for Analysis of Variance     347
Assumptions Associated with ANOVA     350
Effect Size     350
ANOVA and Power      352
Post Hoc Comparisons     352
Some Concerns about Post Hoc Comparisons     354
An Alternative to the F Test: Planned Comparisons     354
How to Construct Planned Comparisons     355
Analysis of Variance for Repeated Measures     358
Point of Controversy: Analysis of Variance versus A Priori Comparisons     359
Summary     365
Factorial Analysis of Variance: The Two-Factor Design for Independent Groups     369
Main Effects     371
Interaction     373
The Importance of Interaction     375
Partition of the Sums of Squares for Two-Way ANOVA     376
Degrees of Freedom     381
Variance Estimates and F Tests     381
Studying the Outcome of Two-Factor Analysis of Variance     383
Effect Size     384
Planned Comparisons     385
Assumptions of the Two-Factor Design and the Problem of Unequal Numbers of Scores     386
Summary     386
Some (Almost) Assumption-Free Tests     389
The Null Hypothesis in Assumption-Freer Tests     390
Randomization Tests     390
Rank-Order Tests     393
An Assumption-Freer Alternative to the t Test of a Difference between Two Independent Groups: The Mann-Whitney U Test     394
Point of Controversy: A Comparison of the t Test and Mann-Whitney U Test with Real-World Distributions     398
An Assumption-Freer Alternative to the t Test of a Difference between Two Dependent Groups: The Sign Test     399
Another Assumption-Freer Alternative to the t Test of a Difference between Two Dependent Groups: The Wilcoxon Signed-Ranks Test     401
An Assumption-Freer Alternative to One-Way ANOVA for Independent Groups: The Kruskal-Wallis Test     403
An Assumption-Freer Alternative to ANOVA for Repeated Measures: Friedman's Rank Test for Correlated Samples     406
Summary     408
Epilogue: The Realm of Statistics     411
Review of Basic Mathematics     415
List of Symbols     425
Answers to Problems     429
Statistical Tables     445
Areas under the Normal Curve Corresponding to Given Values of z     446
The Binomial Distribution     451
Random Numbers     455
Student's t Distribution     458
The F Distribution     460
The Studentized Range Statistic     464
Values of the Correlation Coefficient Required for Different Levels of Significance When H₀: ρ = 0     465
Values of Fisher's z' for Values of r      467
The χ² Distribution     468
Critical One-Tail Values of ΣR_X for the Mann-Whitney U Test     469
Critical Values for the Smaller of R+ or R- for the Wilcoxon Signed-Ranks Test     471
References     473
Index     481