Parametric Statistics in SPSS: ANOVA and ANCOVA, Thesis of Statistics

A step-by-step guide to performing various parametric comparative statistical tests using SPSS, including the independent samples t-test, paired samples t-test, one-way ANOVA, two-way ANOVA, and ANCOVA. It covers the procedures for requesting each test, interpreting the results, and presenting them in APA style. The document also includes examples and data sets to illustrate the concepts and procedures.

Dagdag, Januard D. (2023) Page 78
Isabela State University - Roxas Training cum Workshop on Quantitative Data Analysis using SPSS
Parametric Comparative Statistics
Introduction
In this chapter you will perform parametric comparative statistical tests using SPSS. A step-by-step SPSS procedure, with pictures, is provided for each test.
Intended Learning Outcomes
At the end of this chapter, you will be able to:
1. demonstrate understanding of the comparative statistical tests;
2. perform comparative statistical analysis using SPSS; and
3. present and interpret the results of the analysis in APA style.
1. Independent samples t-test
The independent samples t-test tells you whether there is a statistically significant difference between the mean scores of two groups. In statistical terms, you are testing the probability that the two sets of scores came from the same population. For this test, you need one independent variable with two categories (e.g., class type: Science class or Regular class) and one continuous dependent variable (e.g., Math test scores). The nonparametric alternative to the independent samples t-test is the Mann-Whitney U test.
Example 7.1
Suppose you want to explore whether there is a significant difference
between the academic performance of Science class and Regular class
students. You administered the same test to 14 randomly selected
students of each group. The test scores of the students are shown below:
Science Class: 32 38 37 36 36 34 39 36 37 42 38 38 36 35
Regular Class: 30 36 32 34 31 30 34 33 34 35 35 36 32 30
How to create an SPSS data file for the independent samples t-test?
1. Open SPSS. Click Variable View. Encode the independent variable (e.g., Type of Class) and the dependent variable (e.g., Test Scores) in the Label column. Code the categories of your independent variable in the Values column. In this example, we let 1.00 = Science Class and 2.00 = Regular Class.


2. Go to Data View. Remember that there are 14 Science class students (coded 1) and 14 Regular class students (coded 2), so we input fourteen 1's and fourteen 2's in the 'ClassType' column, with their corresponding test scores in the 'TestScores' column.

Procedure for Requesting an Independent Samples t-test

1. Click on Analyze, Compare Means, and then Independent-Samples T Test.

Interpretation of the Outputs from the Independent Samples t-test

1. The Group Statistics table shows the results of the descriptive statistics. The two means we are comparing, 36.71 and 33.00, differ by 3.71. But do we have sufficient evidence to say that a difference of 3.71 is significant once the standard deviations are considered? That question is answered in the next table.

Group Statistics

Test Scores    Type of Class   N    Mean     Std. Deviation   Std. Error Mean
               Science Class   14   36.7143  2.36736          .63270
               Regular Class   14   33.0000  2.18386          .58365

2. The Independent Samples Test table contains the results of Levene's test and the t-test. Levene's test checks whether the assumption of homogeneity of variances is met. Since the sig. value of .777 is greater than .05, the assumption is not violated. Therefore, we use the t-test results in the 'Equal variances assumed' row. As shown, the sig. (probability) value of the t value (4.315) is less than .01, which suggests that the mean difference of 3.71 is highly significant. Statistically speaking, we have a 99% confidence level (sufficient evidence) for claiming that the two groups differ in their test performance.

Independent Samples Test

                                            Levene's Test            t-test for Equality of Means
                                            F      Sig.    t      df     Sig. (2-tailed)  Mean Diff.  Std. Error Diff.  95% CI Lower  95% CI Upper
Test Scores  Equal variances assumed        .082   .777    4.315  26     .000             3.7142      .860              1.944         5.484
             Equal variances not assumed                   4.315  25.83  .000             3.7142      .860              1.944         5.484

3. The effect size is not given in either table, but we can calculate it with the partial eta squared formula as follows:

η² = t² / (t² + (N1 + N2 − 2)) = (4.315)² / ((4.315)² + 26) = 18.62 / 44.62 = 0.42

Based on Cohen's guidelines, an eta squared of .01 is small, .06 is moderate, and .14 is large. Hence, the above-computed 0.42 is a very large effect size.
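The SPSS steps above can be cross-checked outside SPSS. The sketch below runs Levene's test, the independent samples t-test, and the partial eta squared calculation on the Example 7.1 data in Python with SciPy; this is an illustration added here, not part of the original workshop, and the variable names are my own.

```python
from scipy import stats

# Example 7.1 test scores
science = [32, 38, 37, 36, 36, 34, 39, 36, 37, 42, 38, 38, 36, 35]
regular = [30, 36, 32, 34, 31, 30, 34, 33, 34, 35, 35, 36, 32, 30]

# Levene's test for homogeneity of variances (SPSS centers on the mean,
# so center="mean" is used instead of SciPy's default "median")
lev_stat, lev_p = stats.levene(science, regular, center="mean")
print(f"Levene: W = {lev_stat:.3f}, p = {lev_p:.3f}")  # p > .05: assumption met

# Independent samples t-test, equal variances assumed
t, p = stats.ttest_ind(science, regular, equal_var=True)
print(f"t(26) = {t:.3f}, two-tailed p = {p:.4f}")

# Partial eta squared: t^2 / (t^2 + (N1 + N2 - 2))
eta_sq = t**2 / (t**2 + (len(science) + len(regular) - 2))
print(f"eta squared = {eta_sq:.2f}")  # ~0.42, a very large effect
```

Running this reproduces the values reported in the tables above (t = 4.315, η² ≈ 0.42).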

Presenting the Results in APA style

An independent samples t-test was conducted to compare the test scores of the Science class and the Regular class. Results shown in Table 1 indicate a significant difference between the performance of Science class students (M = 36.71, SD = 2.36) and Regular class students (M = 33.00, SD = 2.18), t(26) = 4.32, p < .01, two-tailed. The magnitude of the mean difference (3.71, 95% CI: 1.94 to 5.48) was very large, η² = 0.42.

Table 1
Test performance of Science and Regular class students

Type of Class   n    M      SD    t(26)  p     η²    95% CI
Science         14   36.71  2.36  4.315  .000  0.42  [1.94, 5.48]
Regular         14   33.00  2.18

Activity 17

In a Statistics training program, 12 trainees were taught data analysis using Statistical Package 1 (SP1), and another group of 13 trainees learned data analysis using a different statistical package, SP2. The same data analysis questions were administered to measure their achievement, and their scores are shown in the following table. Is there a significant difference between the achievements of the two groups of trainees? Report the results in APA style.

SP1: 81 71 79 83 76 75 84 90 83 78 84 83
SP2: 69 75 72 69 67 74 70 66 76 72 88 84 86

2. Paired samples t-test

Procedure for Requesting a Paired Samples t-test

1. Click on Analyze, Compare Means, and then Paired Samples T Test.

2. Click on the two variables (e.g., Pretest and Posttest) and move them into the box labeled Paired Variables. Then click OK.

Interpreting the Outputs Generated

1. The Paired Samples Statistics table shows the results of the descriptive statistics. The difference between the mean pretest score (M = 51.30) and the mean posttest score (M = 57.60) is −6.30. The paired samples t-test will detect whether this difference is significant or not.

Paired Samples Statistics

                  Mean   N    Std. Deviation   Std. Error Mean
Pair 1  Pretest   51.30  10   15.853           5.013
        Posttest  57.60  10   17.589           5.562

2. The Paired Samples Test table shows a probability (sig.) value of .016, which means that the mean difference is significant at the .05 level. Statistically speaking, there is a 95% confidence level that the trainees' pretest and posttest scores are significantly different.

Paired Samples Test

                              Paired Differences
                              Mean    SD     Std. Error Mean   95% CI Lower   95% CI Upper   t       df   Sig. (2-tailed)
Pair 1  Pretest − Posttest    −6.30   6.733  2.129             −11.117        −1.482         −2.959  9    .016

3. The effect size can be calculated using partial eta squared as follows:

η² = t² / (t² + (N − 1)) = (−2.959)² / ((−2.959)² + 9) = 8.76 / 17.76 = 0.49

Based on the guidelines proposed by Cohen (1988), the effect size of 0.49 is very large.
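The paired test can be reproduced the same way. Since the raw pretest/posttest scores for this example are not shown in this excerpt, the sketch below uses made-up illustrative data; only the procedure (SciPy's `ttest_rel` plus the partial eta squared formula above) mirrors the SPSS steps.

```python
from scipy import stats

# Hypothetical pretest/posttest scores for 10 trainees (illustrative only;
# the raw data for this example are not reproduced in the text)
pretest  = [45, 52, 38, 60, 41, 55, 48, 63, 50, 44]
posttest = [50, 58, 45, 62, 49, 60, 52, 70, 57, 49]

# Paired samples t-test (t is negative when the posttest mean is higher)
t, p = stats.ttest_rel(pretest, posttest)
print(f"t({len(pretest) - 1}) = {t:.3f}, two-tailed p = {p:.4f}")

# Partial eta squared: t^2 / (t^2 + (N - 1))
eta_sq = t**2 / (t**2 + len(pretest) - 1)
print(f"eta squared = {eta_sq:.2f}")
```

With real data, the printed t, p, and η² would match the SPSS Paired Samples Test output and the hand calculation above.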

Presenting the Results in APA style

A paired samples t-test was run to compare the writing proficiency scores of the trainees before and after they were exposed to academic writing training. Results showed a significant increase in writing proficiency from pretest (M = 51.30, SD = 15.85) to posttest (M = 57.60, SD = 17.58), t(9) = −2.95, p < .05, two-tailed. The mean increase of 6.30 was large, η² = 0.49, 95% CI from −11.11 to −1.48.

Table 2
Pretest and posttest performance of the trainees

Test       M      SD     t(9)    p     η²    95% CI
Pretest    51.30  15.85  −2.95   .016  0.49  [−11.11, −1.48]
Posttest   57.60  17.58

Activity 18

The following data represent the pre-test and post-test scores of 10 randomly selected individuals in an entrepreneurship training. Do the differences in the scores of the participants suggest that the training was effective? Present the results in APA style.

Pre-test: 40 30 48 45 70 65 30 50 60 75

3. One-way Analysis of Variance

1. Go to Data View. For organization, encode first the data of the 6 students exposed to Method A, followed by the 6 students under Method B, and then those under Method C.

Procedure for Requesting a One-way ANOVA

1. Click on Analyze, Compare Means, and then One-Way ANOVA.
2. Click on your dependent (continuous) variable (e.g., English Proficiency) and move it into the box marked Dependent List. Click on your independent (categorical) variable (e.g., Teaching Method, with subgroups Method A, Method B, and Method C) and move it into the box labeled Factor. Click on the Options button.
3. Tick Descriptive and Homogeneity of variance test. Click Continue and then OK.

Interpreting the Results of the ANOVA

1. The Descriptives table shows that there are three different mean scores. However, it does not show whether the differences between these means are significant.

- LSD is not conservative; it is statistically very powerful, and its results tend to commit Type I errors (rejecting the null hypothesis when it should be accepted).
- Bonferroni is a simple procedure but in most cases not powerful (Day & Quinn, 1989). Results from both Scheffé and Bonferroni are likely to lead to Type II errors (accepting the null hypothesis when it should be rejected).
- Tukey's procedure is the best for all possible pairwise comparisons for any combination of sample sizes (equal or unequal, with or without confidence intervals). Both Stevens (1999) and Day and Quinn (1989) agree that Tukey is the procedure of choice when all means are being compared.

Procedure for Requesting Tukey

1. Click again on Analyze, Compare Means, and One-Way ANOVA. Then click the Post Hoc button.
2. Tick Tukey and then Continue. Finally, click OK.

Interpreting the Result of the Post Hoc Analysis Using Tukey

1. The Multiple Comparisons table shows the detailed comparison of all pairs of groups. It gives the probability (sig.) value and confidence interval for each comparison. The sig. value of .324 shows that Method A and Method B are not significantly different; p = .001 shows that Method A and Method C are significantly different; and p = .010 shows that Method B and Method C are significantly different. The next table, however, gives a quicker way of determining which groups are (and are not) significantly different.

Multiple Comparisons
English Proficiency Score
Tukey HSD

(I) Method   (J) Method   Mean Difference (I−J)   Std. Error   Sig.   95% CI Lower   95% CI Upper
Method A     Method B       4.83333               3.24665      .324    −3.5997       13.2664
             Method C      16.00000*              3.24665      .001     7.5669       24.4331
Method B     Method A      −4.83333               3.24665      .324   −13.2664        3.5997
             Method C      11.16667*              3.24665      .010     2.7336       19.5997
Method C     Method A     −16.00000*              3.24665      .001   −24.4331       −7.5669
             Method B     −11.16667*              3.24665      .010   −19.5997       −2.7336

*. The mean difference is significant at the 0.05 level.

2. The following English Proficiency Score table summarizes the results of the Multiple Comparisons table. It directly shows which methods are significantly different and which are not. Methods in the same column (subset) are not significantly different, while methods in different columns are significantly different. As shown in the table, Method A and Method B are not significantly different from each other, but both are significantly different from Method C.

English Proficiency Score
Tukey HSD

Method     N    Subset for alpha = 0.05
                1        2
Method C   6    77.
Method B   6             88.
Method A   6             93.
Sig.            1.000    .324

Means for groups in homogeneous subsets are displayed.
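The one-way ANOVA and Tukey post hoc steps can be sketched in Python as well. The raw scores for this example are not reproduced in this excerpt, so the data below are hypothetical, chosen only so that the pattern matches the output discussed above (Methods A and B not significantly different, both different from Method C); `scipy.stats.tukey_hsd` requires SciPy 1.8 or newer.

```python
from scipy import stats

# Hypothetical English proficiency scores for three teaching methods
# (illustrative only; not the workshop's actual data)
method_a = [90, 98, 88, 99, 94, 92]
method_b = [85, 94, 83, 92, 88, 90]
method_c = [72, 83, 74, 80, 76, 80]

# One-way ANOVA across the three groups
f, p = stats.f_oneway(method_a, method_b, method_c)
print(f"F(2, 15) = {f:.2f}, p = {p:.4f}")

# Tukey HSD post hoc test for all pairwise comparisons
res = stats.tukey_hsd(method_a, method_b, method_c)
print(res)  # pairwise mean differences, p values, confidence intervals

# res.pvalue[i, j] holds the Tukey p value for groups i and j
p_ab = res.pvalue[0, 1]  # A vs B: not significant
p_ac = res.pvalue[0, 2]  # A vs C: significant
p_bc = res.pvalue[1, 2]  # B vs C: significant
```

The pattern of significant and non-significant pairs mirrors the Multiple Comparisons and homogeneous-subsets tables produced by SPSS.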

4. Two-way Analysis of Variance

Two-way ANOVA allows you to simultaneously test the effect of each of your independent variables on the dependent variable and also identifies any interaction effect between the independent variables. To do a two-way ANOVA, you need two categorical independent variables and one continuous dependent variable.

Example 7.

Suppose you wanted to investigate the impact of exposure to a reach-out activity (exposed or not exposed) and socio-economic status (high, average, or low) on the social responsibility scores of 24 randomly selected teachers. The data you collected are given in the table below:

Exposure to reach-out activity   Socio-Economic Status
                                 High   Average   Low
Exposed                          15     8         8
Not Exposed                      15     8         2
                                 6      10        4

How to create the SPSS data file?

1. Open SPSS. Go to Variable View. Include ID (identification) to track the data of each subject. Encode the three variables in the Name and Label columns.
2. Code the two categorical variables (SES and Exposure) in the Values section.
3. Go to Data View and input the data. There must be 24 rows, as there are 24 subjects. Encode all the data per subject, using the values assigned when coding the categories of the categorical variables.

Procedure for Requesting a Two-way ANOVA

1. Click on Analyze, General Linear Model, and then Univariate.

Interpreting the Results of the Two-way ANOVA

1. The Between-Subjects Factors table shows the number of cases in every subgroup of the variables. It shows that there are 8 cases for each of the three SES subgroups and 12 cases for each of the two subgroups of Exposure to Reach-out Activity.

2. The Descriptive Statistics table gives information about the mean (social responsibility) scores being compared. It appears from the table that those who were both exposed to reach-out activities and belong to high SES had higher mean social responsibility scores. You must check the ANOVA results to confirm whether the differences between the mean scores are really significant.

3. Levene's Test of Equality of Error Variances checks whether the assumption of homogeneity of variances is violated or met. Since the test statistic F = 1.33 was not significant, p > .29, the data met this parametric assumption.

4. The Tests of Between-Subjects Effects table shows the result of the ANOVA. Proceed directly to the probability (sig.) values of the F values for SES, Exposure, and SES*Exposure. For SES, the F statistic of 9.16 is significant at the .01 level, which means there is a 99% confidence level in claiming that at least one of the three SES groups had a significantly different mean social responsibility score. For Exposure, note that F = 8.40 is significant at .05 (but not at .01, as there are smaller decimals not shown in the table), which suggests a significant difference in mean social responsibility scores between those exposed and those not exposed to reach-out activities. Moreover, for SES*Exposure, F = .229 is not significant (p > .79), which indicates that there is no interaction between SES and exposure to reach-out activities in terms of social responsibility scores.

5. The Multiple Comparisons table shows which pair(s) of SES groups had significantly different social responsibility scores. The results showed a significant difference in social responsibility scores between Low SES and Average SES (p < .05) and also between Low SES and High SES (p < .01). Note that there is no Multiple Comparisons table for the Exposure variable because it has only two subgroups (exposed and not exposed).

6. The results of the Multiple Comparisons table, however, can be visualized more easily in the Homogeneous Subsets table. As shown, only the Low SES group belongs to Subset 1, which means that it has a significantly lower mean social responsibility score compared to the mean scores of the Average SES and High SES groups, which are both in Subset 2.
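The GLM Univariate computation behind the Tests of Between-Subjects Effects table can be sketched by hand for a balanced design. The data below are hypothetical (the full 24-subject table is not reproduced in this excerpt); the code computes the sums of squares, F statistics, and p values for the two main effects and the interaction directly with NumPy and SciPy.

```python
import numpy as np
from scipy import stats

# Hypothetical balanced 2 x 3 design: 4 teachers per cell, 24 in total
# (illustrative only; not the workshop's actual data)
cells = np.array([
    # High              Average          Low
    [[15, 14, 12, 13], [8, 10, 9, 11], [8, 6, 7, 9]],  # Exposed
    [[15, 12, 11, 13], [8, 6, 10, 7],  [2, 4, 6, 4]],  # Not exposed
], dtype=float)                                         # shape (2, 3, 4)

a, b, n = cells.shape               # levels of A (Exposure), B (SES), cell size
grand = cells.mean()
cell_means = cells.mean(axis=2)
a_means = cells.mean(axis=(1, 2))   # Exposure marginal means
b_means = cells.mean(axis=(0, 2))   # SES marginal means

# Sums of squares for a balanced two-way ANOVA
ss_a = n * b * ((a_means - grand) ** 2).sum()
ss_b = n * a * ((b_means - grand) ** 2).sum()
ss_ab = n * ((cell_means - a_means[:, None] - b_means[None, :] + grand) ** 2).sum()
ss_err = ((cells - cell_means[:, :, None]) ** 2).sum()

df_a, df_b = a - 1, b - 1
df_ab, df_err = df_a * df_b, a * b * (n - 1)
ms_err = ss_err / df_err

# F statistics and p values (upper tail of the F distribution)
f_a = (ss_a / df_a) / ms_err
f_b = (ss_b / df_b) / ms_err
f_ab = (ss_ab / df_ab) / ms_err
p_a = stats.f.sf(f_a, df_a, df_err)
p_b = stats.f.sf(f_b, df_b, df_err)
p_ab = stats.f.sf(f_ab, df_ab, df_err)
print(f"Exposure:    F = {f_a:.2f}, p = {p_a:.4f}")
print(f"SES:         F = {f_b:.2f}, p = {p_b:.4f}")
print(f"Interaction: F = {f_ab:.2f}, p = {p_ab:.4f}")
```

A useful sanity check is that, in a balanced design, the four sums of squares add up exactly to the total sum of squares, which is how SPSS partitions the variance in the Tests of Between-Subjects Effects table.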