Statistics: A Gentle Introduction / Edition 4
by Frederick L. Coolidge
Available in Paperback and eBook
- ISBN-10: 1506368433
- ISBN-13: 9781506368436
- Pub. Date: 02/10/2020
- Publisher: SAGE Publications
Buy New: $151.00
Buy Used: $95.88 (Save 37%)
Overview
The Fourth Edition of Statistics: A Gentle Introduction shows students that an introductory statistics class doesn’t need to be difficult or dull. Author Fred Coolidge minimizes students’ anxieties about math by explaining the concepts of statistics in plain language first, before addressing the math. Each formula in the text is accompanied by a step-by-step example of the calculation so students can follow along. Only the formulas that are important for final calculations are included, so students can focus on the concepts rather than the numbers. A wealth of real-world examples and applications shows how statistics helps us solve problems and make informed choices.
New to the Fourth Edition are sections on working with big data; new coverage of alternative nonparametric tests, beta coefficients, and the "nocebo effect"; discussions of p values in the context of research; an expanded discussion of confidence intervals; and more exercises and homework options under the new "Test Yourself" feature.
Product Details
- ISBN-13: 9781506368436
- Publisher: SAGE Publications
- Publication date: 02/10/2020
- Edition description: Fourth Edition
- Pages: 536
- Product dimensions: 7.38(w) x 9.12(h) x (d)
About the Author
Frederick L. Coolidge received his B.A., M.A., and Ph.D. in Psychology from the University of Florida. He completed a two-year postdoctoral fellowship in clinical neuropsychology at Shands Teaching Hospital in Gainesville, Florida. He has been awarded three Fulbright Fellowships to India (1987, 1992, and 2005) and has won three teaching awards at the University of Colorado (1984, 1987, and 1992), including the lifetime title of University of Colorado Presidential Teaching Scholar. In 2005, he received the University of Colorado at Colorado Springs College of Letters, Arts, and Sciences’ Outstanding Research and Creative Works award. Dr. Coolidge conducts research in behavioral genetics and has established the strong heritability of gender identity and gender identity disorder. He also conducts research in lifespan personality assessment, where he has established the reliability of posthumous personality evaluations, and he applies cognitive models of thinking and language to explain evolutionary changes in the archaeological record.
Table of Contents
Preface
Acknowledgments
About the Author

Chapter 1: A Gentle Introduction
How Much Math Do I Need to Do Statistics? • The General Purpose of Statistics: Understanding the World • What Is a Statistician? • Liberal and Conservative Statisticians • Descriptive and Inferential Statistics • Experiments Are Designed to Test Theories and Hypotheses • Oddball Theories • Bad Science and Myths • Eight Essential Questions of Any Survey or Study • On Making Samples Representative of the Population • Experimental Design and Statistical Analysis as Controls • The Language of Statistics • On Conducting Scientific Experiments • The Dependent Variable and Measurement • Operational Definitions • Measurement Error • Measurement Scales: The Difference Between Continuous and Discrete Variables • Types of Measurement Scales • Rounding Numbers and Rounding Error • Statistical Symbols • Summary • History Trivia: Achenwall to Nightingale • Key Terms • Chapter 1 Practice Problems • Chapter 1 Test Yourself Questions • SPSS Lesson 1

Chapter 2: Descriptive Statistics: Understanding Distributions of Numbers
The Purpose of Graphs and Tables: Making Arguments and Decisions • A Summary of the Purpose of Graphs and Tables • Graphical Cautions • Frequency Distributions • Shapes of Frequency Distributions • Grouping Data Into Intervals • Advice on Grouping Data Into Intervals • The Cumulative Frequency Distribution • Cumulative Percentages, Percentiles, and Quartiles • Stem-and-Leaf Plot • Non-normal Frequency Distributions • On the Importance of the Shapes of Distributions • Additional Thoughts About Good Graphs Versus Bad Graphs • History Trivia: De Moivre to Tukey • Key Terms • Chapter 2 Practice Problems • Chapter 2 Test Yourself Questions • SPSS Lesson 2

Chapter 3: Statistical Parameters: Measures of Central Tendency and Variation
Measures of Central Tendency • Choosing Among Measures of Central Tendency • Klinkers and Outliers • Uncertain or Equivocal Results • Measures of Variation • Correcting for Bias in the Sample Standard Deviation • How the Square Root of x² Is Almost Equivalent to Taking the Absolute Value of x • The Computational Formula for Standard Deviation • The Variance • The Sampling Distribution of Means, the Central Limit Theorem, and the Standard Error of the Mean • The Use of the Standard Deviation for Prediction • Practical Uses of the Empirical Rule: As a Definition of an Outlier • Practical Uses of the Empirical Rule: Prediction and IQ Tests • Some Further Comments • History Trivia: Fisher to Eels • Key Terms • Chapter 3 Practice Problems • Chapter 3 Test Yourself Questions • SPSS Lesson 3

Chapter 4: Standard Scores, the z Distribution, and Hypothesis Testing
Standard Scores • The Classic Standard Score: The z Score and the z Distribution • Calculating z Scores • More Practice on Converting Raw Data Into z Scores • Converting z Scores to Other Types of Standard Scores • The z Distribution • Interpreting Negative z Scores • Testing the Predictions of the Empirical Rule With the z Distribution • Why Is the z Distribution So Important? • How We Use the z Distribution to Test Experimental Hypotheses • More Practice With the z Distribution and T Scores • Summarizing Scores Through Percentiles • History Trivia: Karl Pearson to Egon Pearson • Key Terms • Chapter 4 Practice Problems • Chapter 4 Test Yourself Questions • SPSS Lesson 4

Chapter 5: Inferential Statistics: The Controlled Experiment, Hypothesis Testing, and the z Distribution
Hypothesis Testing in the Controlled Experiment • Hypothesis Testing: The Big Decision • How the Big Decision Is Made: Back to the z Distribution • The Parameter of Major Interest in Hypothesis Testing: The Mean • Nondirectional and Directional Alternative Hypotheses • A Debate: Retain the Null Hypothesis or Fail to Reject the Null Hypothesis • The Null Hypothesis as a Nonconservative Beginning • The Four Possible Outcomes in Hypothesis Testing • Significance Levels • Significant and Nonsignificant Findings • Trends, and Does God Really Love the .05 Level of Significance More Than the .06 Level? • Directional or Nondirectional Alternative Hypotheses: Advantages and Disadvantages • Did Nuclear Fusion Occur? • Baloney Detection • Conclusions About Science and Pseudoscience • The Most Critical Elements in the Detection of Baloney in Suspicious Studies and Fraudulent Claims • Can Statistics Solve Every Problem? • Probability • History Trivia: Egon Pearson to Karl Pearson • Key Terms • Chapter 5 Practice Problems • Chapter 5 Test Yourself Questions • SPSS Lesson 5

Chapter 6: An Introduction to Correlation and Regression
Correlation: Use and Abuse • A Warning: Correlation Does Not Imply Causation • Another Warning: Chance Is Lumpy • Correlation and Prediction • The Four Common Types of Correlation • The Pearson Product–Moment Correlation Coefficient • Testing for the Significance of a Correlation Coefficient • Obtaining the Critical Values of the t Distribution • If the Null Hypothesis Is Rejected • Representing the Pearson Correlation Graphically: The Scatterplot • Fitting the Points With a Straight Line: The Assumption of a Linear Relationship • Interpretation of the Slope of the Best-Fitting Line • The Assumption of Homoscedasticity • The Coefficient of Determination: How Much One Variable Accounts for Variation in Another Variable—The Interpretation of r² • Quirks in the Interpretation of Significant and Nonsignificant Correlation Coefficients • Linear Regression • Reading the Regression Line • Final Thoughts About Multiple Regression Analyses: A Warning About the Interpretation of the Significant Beta Coefficients • Spearman’s Correlation • Significance Test for Spearman’s r • Ties in Ranks • Point-Biserial Correlation • Testing for the Significance of the Point-Biserial Correlation Coefficient • Phi (Φ) Correlation • Testing for the Significance of Phi • History Trivia: Galton to Fisher • Key Terms • Chapter 6 Practice Problems • Chapter 6 Test Yourself Questions • SPSS Lesson 6

Chapter 7: The t Test for Independent Groups
The Statistical Analysis of the Controlled Experiment • One t Test but Two Designs • Assumptions of the Independent t Test • The Formula for the Independent t Test • You Must Remember This! An Overview of Hypothesis Testing With the t Test • What Does the t Test Do? Components of the t Test Formula • What If the Two Variances Are Radically Different From One Another? • A Computational Example • Marginal Significance • The Power of a Statistical Test • Effect Size • The Correlation Coefficient of Effect Size • Another Measure of Effect Size: Cohen’s d • Confidence Intervals • Estimating the Standard Error • History Trivia: Gosset and Guinness Brewery • Key Terms • Chapter 7 Practice Problems • Chapter 7 Test Yourself Questions • SPSS Lesson 7

Chapter 8: The t Test for Dependent Groups
Variations on the Controlled Experiment • Assumptions of the Dependent t Test • Why the Dependent t Test May Be More Powerful Than the Independent t Test • How to Increase the Power of a t Test • Drawbacks of the Dependent t Test Designs • One-Tailed or Two-Tailed Tests of Significance • Hypothesis Testing and the Dependent t Test: Design 1 • Design 1 (Same Participants or Repeated Measures): A Computational Example • Design 2 (Matched Pairs): A Computational Example • Design 3 (Same Participants and Balanced Presentation): A Computational Example • History Trivia: Fisher to Pearson • Key Terms • Chapter 8 Practice Problems • Chapter 8 Test Yourself Questions • SPSS Lesson 8

Chapter 9: Analysis of Variance (ANOVA): One-Factor Completely Randomized Design
A Limitation of Multiple t Tests and a Solution • The Equally Unacceptable Bonferroni Solution • The Acceptable Solution: An Analysis of Variance • The Null and Alternative Hypotheses in ANOVA • The Beauty and Elegance of the F Test Statistic • The F Ratio • How Can There Be Two Different Estimates of Within-Groups Variance? • ANOVA Designs • ANOVA Assumptions • Pragmatic Overview • What a Significant ANOVA Indicates • A Computational Example • Degrees of Freedom for the Numerator • Degrees of Freedom for the Denominator • Determining Effect Size in ANOVA: Omega Squared (ω²) • Another Measure of Effect Size: Eta (η) • History Trivia: Gosset to Fisher • Key Terms • Chapter 9 Practice Problems • Chapter 9 Test Yourself Questions • SPSS Lesson 9

Chapter 10: After a Significant ANOVA: Multiple Comparison Tests
Conceptual Overview of Tukey’s Test • Computation of Tukey’s HSD Test • What to Do If the Number of Error Degrees of Freedom Is Not Listed in the Table of Tukey’s q Values • Determining What It All Means • Warning! • On the Importance of Nonsignificant Mean Differences • Final Results of ANOVA • Quirks in Interpretation • Tukey’s With Unequal Ns • Key Terms • Chapter 10 Practice Problems • Chapter 10 Test Yourself Questions • SPSS Lesson 10

Chapter 11: Analysis of Variance (ANOVA): One-Factor Repeated-Measures Design
The Repeated-Measures ANOVA • Assumptions of the One-Factor Repeated-Measures ANOVA • Computational Example • Determining Effect Size in ANOVA • Key Terms • Chapter 11 Practice Problems • Chapter 11 Test Yourself Questions • SPSS Lesson 11

Chapter 12: Factorial ANOVA: Two-Factor Completely Randomized Design
Factorial Designs • The Most Important Feature of a Factorial Design: The Interaction • Fixed and Random Effects and In Situ Designs • The Null Hypotheses in a Two-Factor ANOVA • Assumptions and Unequal Numbers of Participants • Computational Example • Key Terms • Chapter 12 Practice Problems • Chapter 12 Test Yourself Problems • SPSS Lesson 12

Chapter 13: Post Hoc Analysis of Factorial ANOVA
Main Effect Interpretation: Gender • Why a Multiple Comparison Test Is Unnecessary for a Two-Level Main Effect, and When Is a Multiple Comparison Test Necessary? • Main Effect: Age Levels • Multiple Comparison Test for the Main Effect for Age • Warning: Limit Your Main Effect Conclusions When the Interaction Is Significant • Multiple Comparison Tests • Interpretation of the Interaction Effect • Final Summary • Writing Up the Results Journal Style • Language to Avoid • Exploring the Possible Outcomes in a Two-Factor ANOVA • Determining Effect Size in a Two-Factor ANOVA • History Trivia: Fisher and Smoking • Key Terms • Chapter 13 Practice Problems • Chapter 13 Test Yourself Questions • SPSS Lesson 13

Chapter 14: Factorial ANOVA: Additional Designs
The Split-Plot Design • Overview of the Split-Plot ANOVA • Computational Example • Two-Factor ANOVA: Repeated Measures on Both Factors Design • Overview of the Repeated-Measures ANOVA • Computational Example • Key Terms and Definitions • Chapter 14 Practice Problems • Chapter 14 Test Yourself Questions • SPSS Lesson 14

Chapter 15: Nonparametric Statistics: The Chi-Square Test and Other Nonparametric Tests
Overview of the Purpose of Chi-Square • Overview of Chi-Square Designs • Chi-Square Test: Two-Cell Design (Equal Probabilities Type) • The Chi-Square Distribution • Assumptions of the Chi-Square Test • Chi-Square Test: Two-Cell Design (Different Probabilities Type) • Interpreting a Significant Chi-Square Test for a Newspaper • Chi-Square Test: Three-Cell Experiment (Equal Probabilities Type) • Chi-Square Test: Two-by-Two Design • What to Do After a Chi-Square Test Is Significant • When Cell Frequencies Are Less Than 5 Revisited • Other Nonparametric Tests • History Trivia: Pearson and Biometrika • Key Terms • Chapter 15 Practice Problems • Chapter 15 Test Yourself Questions • SPSS Lesson 15

Chapter 16: Other Statistical Topics, Parameters, and Tests
Big Data • Health Science Statistics • Additional Statistical Analyses and Multivariate Statistics • A Summary of Multivariate Statistics • Coda • Key Terms • Chapter 16 Practice Problems • Chapter 16 Test Yourself Questions

Appendix A: z Distribution
Appendix B: t Distribution
Appendix C: Spearman’s Correlation
Appendix D: Chi-Square (χ²) Distribution
Appendix E: F Distribution
Appendix F: Tukey’s Table
Appendix G: Mann–Whitney U Critical Values
Appendix H: Wilcoxon Signed-Rank Test Critical Values
Appendix I: Answers to Odd-Numbered Test Yourself Questions
Glossary
References
Index