Applied Statistics I: Basic Bivariate Techniques, Third Edition, by Rebecca M. Warner

Product details:
ISBN 10: 1506352804
ISBN 13: 978-1506352800
Author: Rebecca Warner
Rebecca M. Warner's bestselling Applied Statistics: From Bivariate Through Multivariate Techniques has been split into two volumes for ease of use over a two-course sequence. Applied Statistics I: Basic Bivariate Techniques, Third Edition is an introductory statistics text based on chapters from the first half of the original book.
The author's contemporary approach reflects current thinking in the field, with coverage of the "new statistics" and reproducibility in research. Her in-depth presentation of introductory statistics follows a consistent chapter format, includes some simple hand calculations along with detailed instructions for SPSS, and helps students understand statistics in the context of real-world research through interesting examples. Datasets are provided on an accompanying website.
Applied Statistics I, Third Edition: Table of Contents
Chapter 1 • Evaluating Numerical Information
1.1 Introduction
1.2 Guidelines for Numeracy
1.3 Source Credibility
1.3.1 Self-Interest or Bias
1.3.2 Bias and “Cherry-Picking”
1.3.3 Primary, Secondary, and Third-Party Sources
1.3.4 Communicator Credentials and Skills
1.3.5 Track Record for Truth-Telling
1.4 Message Content
1.4.1 Anecdotal Versus Numerical Information
1.4.2 Citation of Supporting Evidence
1.5 Evaluating Generalizability
1.6 Making Causal Claims
1.6.1 The “Post Hoc, Ergo Propter Hoc” Fallacy
1.6.2 Correlation (by Itself) Does Not Imply Causation
1.6.3 Perfect Correlation Versus Imperfect Correlation
1.6.4 “Individual Results Vary”
1.6.5 Requirements for Evidence of Causal Inference
1.7 Quality Control Mechanisms in Science
1.7.1 Peer Review
1.7.2 Replication and Accumulation of Evidence
1.7.3 Open Science and Study Preregistration
1.8 Biases of Information Consumers
1.8.1 Confirmation Bias (Again)
1.8.2 Social Influence and Consensus
1.9 Ethical Issues in Data Collection and Analysis
1.9.1 Ethical Guidelines for Researchers: Data Collection
1.9.2 Ethical Guidelines for Statisticians: Data Analysis and Reporting
1.10 Lying With Graphs and Statistics
1.11 Degrees of Belief
1.12 Summary
Chapter 2 • Basic Research Concepts
2.1 Introduction
2.2 Types of Variables
2.2.1 Overview
2.2.2 Categorical Variables
2.2.3 Quantitative Variables
2.2.4 Ordinal Variables
2.2.5 Variable Type and Choice of Analysis
2.2.6 Rating Scale Variables
2.2.7 Scores That Represent Counts
2.3 Independent and Dependent Variables
2.4 Typical Research Questions
2.4.1 Are X and Y Correlated?
2.4.2 Does X Predict Y?
2.4.3 Does X Cause Y?
2.5 Conditions for Causal Inference
2.6 Experimental Research Design
2.7 Nonexperimental Research Design
2.8 Quasi-Experimental Research Designs
2.9 Other Issues in Design and Analysis
2.10 Choice of Statistical Analysis (Preview)
2.11 Populations and Samples: Ideal Versus Actual Situations
2.11.1 Ideal Definition of Population and Sample
2.11.2 Two Real-World Research Situations Similar to the Ideal Population and Sample Situation
2.11.3 Actual Research Situations That Are Not Similar to Ideal Situations
2.12 Common Problems in Interpretation of Results
Appendix 2A: More About Levels of Measurement
Appendix 2B: Justification for the Use of Likert and Other Rating Scales as Quantitative Variables (in Some Situations)
Chapter 3 • Frequency Distribution Tables
3.1 Introduction
3.2 Use of Frequency Tables for Data Screening
3.3 Frequency Tables for Categorical Variables
3.4 Elements of Frequency Tables
3.4.1 Frequency Counts (n or f)
3.4.2 Total Number of Scores in a Sample (N)
3.4.3 Missing Values (if Any)
3.4.4 Proportions
3.4.5 Percentages
3.4.6 Cumulative Frequencies or Cumulative Percentages
3.5 Using SPSS to Obtain a Frequency Table
3.6 Mode, Impossible Score Values, and Missing Values
3.7 Reporting Data Screening for Categorical Variables
3.8 Frequency Tables for Quantitative Variables
3.8.1 Ungrouped Frequency Distribution
3.8.2 Evaluation of Score Location Using Cumulative Percentage
3.8.3 Grouped or Binned Frequency Distributions
3.9 Frequency Tables for Categorical Versus Quantitative Variables
3.10 Reporting Data Screening for Quantitative Variables
3.11 What We Hope to See in Frequency Tables for Categorical Variables
3.11.1 Categorical Variables That Represent Naturally Occurring Groups
3.11.2 Categorical Variables That Represent Treatment Groups
3.12 What We Hope to See in Frequency Tables for Quantitative Variables
3.13 Summary
Appendix 3A: Getting Started in IBM SPSS® Version 25
3.A.1 The Bare Minimum: Using an Existing SPSS Data File to Obtain, Print, and Save Results
3.A.2 Moving Between Windows in SPSS
3.A.3 Creating a File and Entering Data
3.A.4 Defining Variable Names and Properties of Variables
Appendix 3B: Missing Values in Frequency Tables
Appendix 3C: Dividing Scores Into Groups or Bins
Chapter 4 • Descriptive Statistics
4.1 Introduction
4.2 Questions About Quantitative Variables
4.3 Notation
4.4 Sample Median
4.5 Sample Mean (M)
4.6 An Important Characteristic of M: The Sum of Deviations From M = 0
4.7 Disadvantage of M: It Is Not Robust Against Influence of Extreme Scores
4.8 Behavior of Mean, Median, and Mode in Common Real-World Situations
4.8.1 Example 1: Bell-Shaped Distribution
4.8.2 Example 2: Bimodal or Polarized Distribution
4.8.3 Example 3: Skewed Distribution
4.8.4 Example 4: No Clear Mode
4.9 Choosing Among Mean, Median, and Mode
4.10 Using SPSS to Obtain Descriptive Statistics for a Quantitative Variable
4.11 Minimum, Maximum, and Range: Variation Among Scores
4.12 The Sample Variance s²
4.12.1 Step 1: Deviation of Each Score From the Mean
4.12.2 Step 2: Sum of Squared Deviations
4.12.3 Step 3: Degrees of Freedom
4.12.4 Putting the Pieces Together: Computing a Sample Variance
4.13 Sample Standard Deviation (S or SD)
4.14 How a Standard Deviation Describes Variation Among Scores in a Frequency Table
4.15 Why Is There Variance?
4.16 Reports of Descriptive Statistics in Journal Articles
4.17 Additional Issues in Reporting Descriptive Statistics
4.18 Summary
Appendix 4A: Order of Arithmetic Operations
Appendix 4B: Rounding
Chapter 5 • Graphs: Bar Charts, Histograms, and Boxplots
5.1 Introduction
5.2 Pie Charts for Categorical Variables
5.3 Bar Charts for Frequencies of Categorical Variables
5.4 Good Practice for Construction of Bar Charts
5.5 Deceptive Bar Graphs
5.6 Histograms for Quantitative Variables
5.7 Obtaining a Histogram Using SPSS
5.8 Describing and Sketching Bell-Shaped Distributions
5.9 Good Practices in Setting Up Histograms
5.10 Boxplot (Box and Whiskers Plot)
5.10.1 How to Set Up a Boxplot by Hand
5.10.2 How to Obtain a Boxplot Using SPSS
5.11 Telling Stories About Distributions
5.12 Uses of Graphs in Actual Research
5.13 Data Screening: Separate Bar Charts or Histograms for Groups
5.14 Use of Bar Charts to Represent Group Means
5.15 Other Examples
5.15.1 Scatterplots
5.15.2 Maps
5.15.3 Historical Example
5.16 Summary
Chapter 6 • The Normal Distribution and z Scores
6.1 Introduction
6.2 Locations of Individual Scores in Normal Distributions
6.3 Standardized or z Scores
6.3.1 First Step in Finding a z Score for X: The Distance of X From M
6.3.2 Second Step: Divide the (X – M) Distance by SD to Obtain a Unit-Free or Standardized Distance of Score From the Mean
6.4 Converting z Scores Back Into X Units
6.5 Understanding Values of z
6.6 Qualitative Description of Normal Distribution Shape
6.7 More Precise Description of Normal Distribution Shape
6.8 Areas Under the Normal Distribution Curve Can Be Interpreted as Probabilities
6.9 Reading Tables of Areas for the Standard Normal Distribution
6.10 Dividing the Normal Distribution Into Three Regions: Lower Tail, Middle, and Upper Tail
6.11 Outliers Relative to a Normal Distribution
6.12 Summary of First Part of Chapter
6.13 Why We Assess Distribution Shape
6.14 Departure From Normality: Skewness
6.15 Another Departure From Normality: Kurtosis
6.16 Overall Normality
6.17 Practical Recommendations for Preliminary Data Screening and Descriptions of Scores for Quantitative Variables
6.18 Reporting Information About Distribution Shape, Missing Values, Outliers, and Descriptive Statistics for Quantitative Variables
6.19 Summary
Appendix 6A: The Mathematics of the Normal Distribution
Appendix 6B: How to Select and Remove Outliers in SPSS
Appendix 6C: Quantitative Assessments of Departure From Normality
6.C.1 Index for Skewness
6.C.2 Index for Kurtosis
6.C.3 Test for Overall Departure From Normal Distribution Shape
Appendix 6D: Why Are Some Real-World Variables Approximately Normally Distributed?
Appendix 6E: Saving z Scores for All Cases
Chapter 7 • Sampling Error and Confidence Intervals
7.1 Descriptive Versus Inferential Uses of Statistics
7.2 Notation for Samples Versus Populations
7.3 Sampling Error and the Sampling Distribution for Values of M
7.3.1 What Is Sampling Error?
7.3.2 Sampling Error in a Classroom Demonstration
7.3.3 Sampling Error in Monte Carlo Simulations
7.4 Prediction Error
7.5 Sample Versus Population (Revisited)
7.5.1 Representative Samples
7.5.2 Convenience Samples
7.6 The Central Limit Theorem: Characteristics of the Sampling Distribution of M
7.7 Factors That Influence Population Standard Error (σM)
7.8 Effect of N on Value of the Population Standard Error
7.9 Describing the Location of a Single Outcome for M Relative to Population Sampling Distribution (Setting Up a z Ratio)
7.10 What We Do When σ Is Unknown
7.11 The Family of t Distributions
7.12 Tables for t Distributions
7.13 Using Sampling Error to Set Up a Confidence Interval
7.14 How to Interpret a Confidence Interval
7.15 Empirical Example: Confidence Interval for Body Temperature
7.16 Other Applications for Confidence Intervals
7.16.1 CIs Can Be Obtained for Other Sample Statistics (Such as Proportions)
7.16.2 Margin of Error in Political Polls
7.17 Error Bars in Graphs of Group Means
7.18 Summary
Chapter 8 • The One-Sample t Test: Introduction to Statistical Significance Tests
8.1 Introduction
8.2 Significance Tests as Yes/No Questions About Proposed Values of Population Means
8.3 Stating a Null Hypothesis
8.4 Selecting an Alternative Hypothesis
8.5 The One-Sample t Test
8.6 Choosing an Alpha (α) Level
8.7 Specifying Reject Regions on the Basis of α, H_alt, and df
8.8 Questions for the One-Sample t Test
8.9 Assumptions for the Use of the One-Sample t Test
8.10 Rules for the Use of NHST
8.11 First Analysis of Mean Driving Speed Data (Using a Nondirectional Test)
8.12 SPSS Analysis: One-Sample t Test for Mean Driving Speed (Using a Nondirectional or Two-Tailed Test)
8.13 “Exact” p Values
8.14 Reporting Results for a Two-Tailed One-Sample t Test
8.15 Second Analysis of Driving Speed Data Using a One-Tailed or Directional Test
8.16 Reporting Results for a One-Tailed One-Sample t Test
8.17 Advantages and Disadvantages of One-Tailed Tests
8.18 Traditional NHST Versus New Statistics Recommendations
8.19 Things You Should Not Say About p Values
8.20 Summary
Chapter 9 • Issues in Significance Tests: Effect Size, Statistical Power, and Decision Errors
9.1 Beyond p Values
9.2 Cohen’s d: An Effect Size Index
9.3 Factors That Affect the Size of t Ratios
9.4 Statistical Significance Versus Practical Importance
9.5 Statistical Power
9.6 Type I and Type II Decision Errors
9.7 Meanings of “Error”
9.8 Use of NHST in Exploratory Versus Confirmatory Research
9.9 Inflated Risk for Type I Decision Error for Multiple Tests
9.10 Interpretation of Null Outcomes
9.11 Interpretation of Statistically Significant Outcomes
9.11.1 Sampling Error
9.11.2 Human Error
9.11.3 Misleading p Values
9.12 Understanding Past Research
9.13 Planning Future Research
9.14 Guidelines for Reporting Results
9.15 What You Cannot Say
9.16 Summary
Appendix 9A: Further Explanation of Statistical Power
Chapter 10 • Bivariate Pearson Correlation
10.1 Research Situations Where Pearson’s r Is Used
10.2 Correlation and Causal Inference
10.3 How Sign and Magnitude of r Describe an X, Y Relationship
10.4 Setting Up Scatterplots
10.5 Most Associations Are Not Perfect
10.6 Different Situations in Which r = .00
10.7 Assumptions for Use of Pearson’s r
10.7.1 Sample Must Be Similar to Population of Interest
10.7.2 X, Y Association Must Be Reasonably Linear
10.7.3 No Extreme Bivariate Outliers
10.7.4 Independent Observations for X and Independent Observations for Y
10.7.5 X and Y Must Be Appropriate Variable Types
10.7.6 Assumptions About Distribution Shapes
10.8 Preliminary Data Screening for Pearson’s r
10.9 Effect of Extreme Bivariate Outliers
10.10 Research Example
10.11 Data Screening for Research Example
10.12 Computation of Pearson’s r
10.13 How Computation of Correlation Is Related to Pattern of Data Points in the Scatterplot
10.14 Testing the Hypothesis That ρ₀ = 0
10.15 Reporting Many Correlations and Inflated Risk for Type I Error
10.15.1 Call Results Exploratory and De-emphasize or Avoid Statistical Significance Tests
10.15.2 Limit the Number of Correlations
10.15.3 Replicate or Cross-Validate Correlations
10.15.4 Bonferroni Procedure: Use More Conservative Alpha Level for Tests of Individual Correlations
10.15.5 Common Bad Practice in Reports of Numerous Significance Tests
10.15.6 Summary: Reporting Numerous Correlations
10.16 Obtaining Confidence Intervals for Correlations
10.17 Pearson's r and r² as Effect Sizes and Partition of Variance
10.18 Statistical Power and Sample Size for Correlation Studies
10.19 Interpretation of Outcomes for Pearson’s r
10.19.1 When r Is Not Statistically Significant
10.19.2 When r Is Statistically Significant
10.19.3 Sources of Doubt
10.19.4 The Problem of Spuriousness
10.20 SPSS Example: Relationship Survey
10.21 Results Sections for One and Several Pearson’s r Values
10.22 Reasons to Be Skeptical of Correlations
10.23 Summary
Appendix 10A: Nonparametric Alternatives to Pearson’s r
10.A.1 Spearman’s r
Appendix 10B: Setting Up a 95% CI for Pearson’s r by Hand
Appendix 10C: Testing Significance of Differences Between Correlations
Appendix 10D: Some Factors That Artifactually Influence Magnitude of r
Appendix 10E: Analysis of Nonlinear Relationships
Appendix 10F: Alternative Formula to Compute Pearson’s r
Chapter 11 • Bivariate Regression
11.1 Research Situations Where Bivariate Regression Is Used
11.2 New Information Provided by Regression
11.3 Regression Equations and Lines
11.4 Two Versions of Regression Equations
11.4.1 Raw-Score Regression Equation
11.4.2 Standardized Regression Equation
11.4.3 Comparing the Two Forms of Regression
11.5 Steps in Regression Analysis
11.6 Preliminary Data Screening
11.7 Formulas for Bivariate Regression Coefficients
11.8 Statistical Significance Tests for Bivariate Regression
11.9 Confidence Intervals for Regression Coefficients
11.10 Effect Size and Statistical Power
11.11 Empirical Example Using SPSS: Salary Data
11.12 SPSS Output: Salary Data
11.13 Results Section: Hypothetical Salary Data
11.14 Plotting the Regression Line: Salary Data
11.15 Using a Regression Equation to Predict Score for Individual (Joe’s Heart Rate Data)
11.16 Partition of Sums of Squares in Bivariate Regression
11.17 Why Is There Variance (Revisited)?
11.18 Issues in Planning a Bivariate Regression Study
11.19 Plotting Residuals
11.20 Standard Error of the Estimate
11.21 Summary
Appendix 11A: Review: How to Graph a Line From Two Points Obtained From an Equation
Appendix 11B: OLS Derivation of Equation for Regression Coefficients
Appendix 11C: Alternative Formula for Computation of Slope
Appendix 11D: Fully Worked Example: Deviations and SS
Chapter 12 • The Independent-Samples t Test
12.1 Research Situations Where the Independent-Samples t Test Is Used
12.2 A Hypothetical Research Example
12.3 Assumptions for Use of Independent-Samples t Test
12.3.1 Y Scores Are Quantitative
12.3.2 Y Scores Are Independent of Each Other Both Between and Within Groups
12.3.3 Y Scores Are Sampled From Normally Distributed Populations With Equal Variances
12.3.4 No Outliers Within Groups
12.3.5 Relative Importance of Violations of These Assumptions
12.4 Preliminary Data Screening: Evaluating Violations of Assumptions and Getting to Know Your Data
12.5 Computation of Independent-Samples t Test
12.6 Statistical Significance of Independent-Samples t Test
12.7 Confidence Interval Around M1 – M2
12.8 SPSS Commands for Independent-Samples t Test
12.9 SPSS Output for Independent-Samples t Test
12.10 Effect Size Indexes for t
12.10.1 M1 – M2
12.10.2 Eta Squared (η²)
12.10.3 Point Biserial r (rpb)
12.10.4 Cohen’s d
12.10.5 Computation of Effect Sizes for Heart Rate and Caffeine Data
12.10.6 Summary of Effect Sizes
12.11 Factors That Influence the Size of t
12.11.1 Effect Size and N
12.11.2 Dosage Levels for Treatment, or Magnitudes of Differences for Participant Characteristics, Between Groups
12.11.3 Control of Within-Group Error Variance
12.11.4 Summary for Design Decisions
12.12 Results Section
12.13 Graphing Results: Means and CIs
12.14 Decisions About Sample Size for the Independent-Samples t Test
12.15 Issues in Designing a Study
12.15.1 Avoiding Potential Confounds
12.15.2 Decisions About Type or Dosage of Treatment
12.15.3 Decisions About Participant Recruitment and Standardization of Procedures
12.15.4 Decisions About Sample Size
12.16 Summary
Appendix 12A: A Nonparametric Alternative to the Independent-Samples t Test
Chapter 13 • One-Way Between-Subjects Analysis of Variance
13.1 Research Situations Where One-Way ANOVA Is Used
13.2 Questions in One-Way Between-S ANOVA
13.3 Hypothetical Research Example
13.4 Assumptions and Data Screening for One-Way ANOVA
13.5 Computations for One-Way Between-S ANOVA
13.5.1 Overview
13.5.2 SSbetween: Information About Distances Among Group Means
13.5.3 SSwithin: Information About Variability of Scores Within Groups
13.5.4 SStotal: Information About Total Variance in Y Scores
13.5.5 Converting Each SS to a Mean Square and Setting Up an F Ratio
13.6 Patterns of Scores and Magnitudes of SSbetween and SSwithin
13.7 Confidence Intervals for Group Means
13.8 Effect Sizes for One-Way Between-S ANOVA
13.9 Statistical Power Analysis for One-Way Between-S ANOVA
13.10 Planned Contrasts
13.11 Post Hoc or “Protected” Tests
13.12 One-Way Between-S ANOVA in SPSS
13.13 Output From SPSS for One-Way Between-S ANOVA
13.14 Reporting Results From One-Way Between-S ANOVA
13.15 Issues in Planning a Study
13.16 Summary
Appendix 13A: ANOVA Model and Division of Scores Into Components
Appendix 13B: Expected Value of F When H0 Is True
Appendix 13C: Comparison of ANOVA and t Test
Appendix 13D: Nonparametric Alternative to One-Way Between-S ANOVA: Independent-Samples Kruskal-Wallis Test
Chapter 14 • Paired-Samples t Test
14.1 Independent- Versus Paired-Samples Designs
14.2 Between-S and Within-S or Paired-Groups Designs
14.3 Types of Paired Samples
14.3.1 Naturally Occurring Pairs (Different but Related Persons in the Two Samples)
14.3.2 Creation of Matched Pairs
14.4 Hypothetical Study: Effects of Stress on Heart Rate
14.5 Review: Data Organization for Independent Samples
14.6 New: Data Organization for Paired Samples
14.7 A First Look at Repeated-Measures Data
14.8 Calculation of Difference (d) Scores
14.9 Null Hypothesis for Paired-Samples t Test
14.10 Assumptions for Paired-Samples t Test
14.11 Formulas for Paired-Samples t Test
14.12 SPSS Paired-Samples t Test Procedure
14.13 Comparison Between Results for Independent-Samples and Paired-Samples t Tests
14.14 Effect Size and Power
14.15 Some Design Problems in Repeated-Measures Analyses
14.15.1 Order Effects
14.15.2 Counterbalancing to Control for Order Effects
14.15.3 Carryover Effects
14.15.4 Problems Due to Outside Events and Changes in Participants Across Time
14.16 Results for Paired-Samples t Test: Stress and Heart Rate
14.17 Further Evaluation of Assumptions
14.18 Summary
Appendix 14A: Nonparametric Alternative to Paired-Samples t: Wilcoxon Signed Rank Test
Chapter 15 • One-Way Repeated-Measures Analysis of Variance
15.1 Introduction
15.2 Null Hypothesis for Repeated-Measures ANOVA
15.3 Preliminary Assessment of Repeated-Measures Data
15.4 Computations for One-Way Repeated-Measures ANOVA
15.5 Use of SPSS Reliability Procedure for One-Way Repeated-Measures ANOVA
15.6 Partition of SS in Between-S Versus Within-S ANOVA
15.7 Assumptions for Repeated-Measures ANOVA
15.7.1 Scores on Outcome Variables Are Quantitative and Approximately Normally Distributed Without Extreme Outliers
15.7.2 Relationships Among the Repeated-Measures Variables Should Be Linear Without Bivariate Outliers
15.7.3 Population Variances of Contrasts Should Be Equal (Sphericity Assumption)
15.7.4 Assumption of No Person-by-Treatment Interaction
15.8 Choices of Contrasts in GLM Repeated Measures
15.8.1 Simple Contrasts
15.8.2 Repeated Contrasts
15.8.3 Polynomial Contrasts
15.8.4 Other Contrasts Available in the SPSS GLM Procedure
15.9 SPSS GLM Procedure for Repeated-Measures ANOVA
15.10 Output of GLM Repeated-Measures ANOVA
15.11 Paired-Samples t Tests as Follow-Up
15.12 Results
15.13 Effect Size
15.14 Statistical Power
15.15 Counterbalancing in Repeated-Measures Studies
15.16 More Complex Designs
15.17 Summary
Appendix 15A: Test for Person-by-Treatment Interaction
Appendix 15B: Nonparametric Analysis for Repeated Measures (Friedman Test)
Chapter 16 • Factorial Analysis of Variance
16.1 Research Situations Where Factorial Design Is Used
16.2 Questions in Factorial ANOVA
16.3 Null Hypotheses in Factorial ANOVA
16.3.1 First Null Hypothesis: Test of Main Effect for Factor A
16.3.2 Second Null Hypothesis: Test of Main Effect for Factor B
16.3.3 Third Null Hypothesis: Test of the A × B Interaction
16.4 Screening for Violations of Assumptions
16.5 Hypothetical Research Situation
16.6 Computations for Between-S Factorial ANOVA
16.7 Computation of SS and df in Two-Way Factorial ANOVA
16.8 Effect Size Estimates for Factorial ANOVA
16.9 Statistical Power
16.10 Follow-Up Tests
16.10.1 Nature of a Two-Way Interaction
16.10.2 Nature of Main Effect Differences
16.11 Factorial ANOVA Using the SPSS GLM Procedure
16.12 SPSS Output
16.13 Results
16.14 Design Decisions and Magnitudes of SS Terms
16.14.1 Distances Between Group Means (Magnitudes of SSA and SSB)
16.14.2 Number of Scores Within Each Group or Cell
16.14.3 Variability of Scores Within Groups or Cells (Magnitude of MSwithin)
16.15 Summary
Appendix 16A: Fixed Versus Random Factors
Appendix 16B: Weighted Versus Unweighted Means
Appendix 16C: Unequal Cell n’s in Factorial ANOVA: Computing Adjusted Sums of Squares
16.C.1 Partition of Variance in Orthogonal Factorial ANOVA
16.C.2 Partition of Variance in Nonorthogonal Factorial ANOVA
Appendix 16D: Model for Factorial ANOVA
Appendix 16E: Computation of Sums of Squares by Hand
Chapter 17 • Chi-Square Analysis of Contingency Tables
17.1 Evaluating Association Between Two Categorical Variables
17.2 First Example: Contingency Tables for Titanic Data
17.3 What Is Contingency?
17.4 Conditional and Unconditional Probabilities
17.5 Null Hypothesis for Contingency Table Analysis
17.6 Second Empirical Example: Dog Ownership Data
17.7 Preliminary Examination of Dog Ownership Data
17.8 Expected Cell Frequencies If H0 Is True
17.9 Computation of Chi Squared Significance Test
17.10 Evaluation of Statistical Significance of χ²
17.11 Effect Sizes for Chi Squared
17.12 Chi Squared Example Using SPSS
17.13 Output From Crosstabs Procedure
17.14 Reporting Results
17.15 Assumptions and Data Screening for Contingency Tables
17.15.1 Independence of Observations
17.15.2 Minimum Requirements for Expected Values in Cells
17.15.3 Hypothetical Example: Data With One or More Values of E < 5
17.15.4 Four Ways to Handle Tables With Small Expected Values
17.15.5 How to Remove Groups
17.15.6 How to Combine Groups
17.16 Other Measures of Association for Contingency Tables
17.17 Summary
Appendix 17A: Margin of Error for Percentages in Surveys
Appendix 17B: Contingency Tables With Repeated Measures: McNemar Test
Appendix 17C: Fisher Exact Test
Appendix 17D: How Marginal Distributions for X and Y Constrain Maximum Value of ϕ
Appendix 17E: Other Uses of χ²
Chapter 18 • Selection of Bivariate Analyses and Review of Key Concepts
18.1 Selecting Appropriate Bivariate Analyses
18.2 Types of Independent and Dependent Variables (Categorical Versus Quantitative)
18.3 Parametric Versus Nonparametric Analyses
18.4 Comparisons of Means or Medians Across Groups (Categorical IV and Quantitative DV)
18.5 Problems With Selective Reporting of Evidence and Analyses
18.6 Limitations of Statistical Significance Tests and p Values
18.7 Statistical Versus Practical Significance
18.8 Generalizability Issues
18.9 Causal Inference
18.10 Results Sections
18.11 Beyond Bivariate Analyses: Adding Variables
18.11.1 Factorial ANOVA and Repeated-Measures ANOVA
18.11.2 Control Variables
18.11.3 Moderator Variables
18.11.4 Too Many Variables?
18.12 Some Multivariable or Multivariate Analyses
18.13 Degree of Belief