

The Dummy's Guide to Data Analysis Using SPSS

Mathematics 57
Scripps College

Amy Gamble
April 2001

© Amy Gamble 4/30/01. All Rights Reserved.

TABLE OF CONTENTS

Helpful Hints for All Tests

Tests for Numeric Data
1. Z-Scores
2. Helpful Hints for All T-Tests
3. One-Group T-Tests
4. Independent Groups T-Test
5. Repeated Measures (Correlated Groups or Paired Samples) T-Test
6. Independent Groups ANOVA
7. Repeated Measures (Correlated Groups or Paired Samples) ANOVA
8. Correlation Coefficient
9. Linear Regression

Tests for Ordinal Data
1. Helpful Hints for All Ordinal Tests
2. Kruskal-Wallis H
3. Friedman's
4. Spearman's

Tests for Nominal Data
1. Helpful Hints for All Nominal Tests
2. Chi-Square Goodness-of-Fit
3. Chi-Square Independence
4. Cochran's Q
5. Phi or Cramer's V (Correlations for Nominal Data)

SPSS Guide to Data Analysis

For All Tests

· Remember that the Significance (or Asymp. Sig. in some cases) needs to be less than 0.05 to be significant.
· The Independent Variable is always the variable that you are predicting something about (i.e. what your Ha predicts differences between, as long as your Ha is correct). The Dependent Variable is what you are measuring in order to tell if the groups (or conditions, for repeated measures tests) are different. For correlations and for Chi-Square, it does not matter which one is the Independent or Dependent variable.
· Ha always predicts a difference (for correlations, it predicts that r is different from zero, which is another way of saying there is a significant correlation), and Ho always predicts no difference. If your Ha was directional and it turns out to have been predicted in the wrong direction (i.e. you predicted A was greater than B and it turns out that B is significantly greater than A), you should still accept Ho, even though Ho predicts no difference and you found a difference in the opposite direction.
· If there is a WARNING box on your Output File, it is usually because you used the wrong test or the wrong variables. Go back and double check.

Tests For Numeric Data

Z-Scores (Compared to Data)

Analyze → Descriptive Statistics → Descriptives

· Click over the variable you would like z-scores for.
· Click on the box that says Save Standardized Values as Variables. This is located right below the box that displays all of the variables.
· If means and standard deviations are needed, click on Options and click on the boxes that will give you the means and standard deviations.
· The z-scores will not be on the Output File!!! They are saved as variables on the Data File. They should be saved in the variable that is to the far right of the data screen. Normally it is called z plus the name of the variable (e.g. ZSLEEP).
· Compare the z-scores to the critical value to determine which z-scores are significant. Remember, if your hypothesis is directional (i.e. one-tailed), the critical value is + or – 1.645. If your hypothesis is non-directional (i.e. two-tailed), the critical value is + or – 1.96.
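The standardization SPSS performs here is simple enough to sketch in plain Python. The sleep values below are made up for illustration, and, like SPSS's Descriptives, the sketch uses the sample standard deviation (n − 1 in the denominator):

```python
import statistics

# Hypothetical hours-of-sleep data, for illustration only
sleep = [6.5, 7.0, 8.0, 5.5, 9.0, 7.5, 6.0, 8.5]

mean = statistics.mean(sleep)
sd = statistics.stdev(sleep)  # sample SD (n - 1), as SPSS Descriptives uses

# The new ZSLEEP "variable" SPSS would save onto the Data File
zsleep = [(x - mean) / sd for x in sleep]

# Flag any scores beyond the two-tailed critical value of +/- 1.96
extreme = [z for z in zsleep if abs(z) > 1.96]
print([round(z, 2) for z in zsleep])
```

Standardized scores always have mean 0 and SD 1, so any z past ±1.96 sits in the extreme 5% for a two-tailed test.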

Z-Scores Compared to a Population Mean and Standard Deviation

· The methodology is the same, except you need to tell SPSS what the population mean and standard deviation are. (In the previous test, SPSS calculated them for you from the data it was given. Since SPSS cannot calculate the population mean and standard deviation from the class data, you need to plug these numbers into a formula.)
· Remember, the formula for a z-score is: z = (X – μ) / σ
· You are going to transform your data into z-scores relative to the population by telling SPSS to subtract the population mean from each piece of data and then divide that number by the population standard deviation. To do so, go to the DATA screen, then:

Transform → Compute

· Name the new variable you are creating in the Target Variable box (ZUSPOP is a good one if you can't think of anything).
· Click the variable you want z-scores for into the Numeric Expression box. Now type in the z-score formula so that SPSS will transform the data to a US population z-score. For example, if I am working with a variable called Sleep, and I am told the US population mean is 8.25 and the US population standard deviation is .50, then my Numeric Expression box should look like this: (SLEEP – 8.25)/.50
· Compare for significance in the same way as above.

For All T-Tests

· The significance that is given in the Output File is a two-tailed significance. Remember to divide the significance by 2 if you only have a one-tailed test!

For One-Group T-Tests

Analyze → Compare Means → One-Sample T Test

· The Dependent variable goes into the Test Variables box.
· The hypothetical mean or population mean goes into the Test Value box. Be careful!!! The test value should be written in the same way the data was entered for the dependent variable. For example, my dependent variable is "Percent Correct on a Test" and my population mean is 78%. If the data for the "Percent Correct on a Test" variable were entered as 0.80, 0.75, etc., then the test value should be entered as 0.78. If the data were entered as 90, 75, etc., then the test value should be entered as 78. In order to know how the data were entered, click on the Data File screen and look at the data for the dependent variable.

For Independent Groups T-Tests

Analyze → Compare Means → Independent-Samples T Test

· The Dependent Variable goes in the Test Variable box and the Independent variable goes in the Grouping Variable box.
· Click on Define Groups and define them. In order to know how to define them, click on Utilities → Variables. Click on the independent variable you are defining and see what numbers are under the value labels (i.e. usually it's either 0 and 1 or 1 and 2). If there are more than two numbers in the value labels, then you cannot do a t-test unless you are using a specified cut-point (i.e. if there are four groups: 1 = old women, 2 = young women, 3 = old men, 4 = young men, and you simply wanted to look at the differences between men and women, you could set a cut-point at 2). If there are no numbers, you should be using a specified cut-point. If you have more than two numbers in the value labels, or no numbers in the value labels, and no cut-point has been specified on the final exam, you are doing the wrong kind of test!!!!
· On the Output File: Remember, this is a t-test, so ignore the F value and the first significance value (Levene's Test). Also, ignore the "equal variances not assumed" row.
· Before accepting Ha, be sure to look at the means!!! If I predict that boys had a higher average correct on a test than girls and my t-test value is significant, I may say, yes, boys got more correct than girls. However, this t-test could be significant because girls got significantly more correct than boys!! (Therefore, Ha was predicted in the wrong direction!) In order to know which group got significantly more correct than the other, I need to look at the means and see which one is bigger!!

For Repeated Measures (a.k.a. Correlated Groups, Paired Samples) T-Tests

Analyze → Compare Means → Paired-Samples T Test

· Click on one variable, then click on a second variable, and then click on the arrow that moves the pair of variables into the Paired Variables box. To make things easier on yourself, click your variables in according to the order of your hypotheses. For example, if you are predicting that A is greater than B, click on A first. This way, you should expect that your t value will be positive. If the test is significant but your t value is negative, it means that B was significantly greater than A!! (So Ha was predicted in the wrong direction, and you should accept Ho.) If you are predicting that A is less than B, then click on A first. This way, you should expect that the t value will be negative. If the test is significant but the t value is positive, you know that it means that B was significantly less than A (so Ha was predicted in the wrong direction, and you should accept Ho).

For Independent Groups ANOVA

Analyze → General Linear Model → One-Way ANOVA

· Put the Dependent variable into the Dependent List box. Put the Independent variable into the Factor box.
· Click on the Post Hoc box. Click on the Tukey box. Click Continue.
· If means and standard deviations are needed, click on the Options box. Then click on Descriptive.

For Repeated Measures (a.k.a. Correlated Groups, Paired Samples) ANOVA

Analyze → General Linear Model → Repeated Measures

· Type the within-subject factor name into the Within-Subject Factor Name box. This cannot be the name of a pre-existing variable; you will have to make one up.
· Type the number of levels into the Number of Levels box. How do you know how many levels there are? If my within-subject factor was "Tests" and I have a variable called "Test 1," a variable called "Test 2," and a variable called "Test 3," then I would have 3 levels. In other words, the number of levels equals the number of variables (that you are examining) that correspond to the within-subject factor.
· Click on Define.
· Put the variables you want to test into the Variables box. Preferably, put them in the right order (if there is an order to them). This will keep you from getting confused. For example, I should put my "Test 1" variable in first, my "Test 2" variable in second, etc.
· For post hoc tests, click on Options, highlight the variable, move it into the Display Means For box, click on Compare Main Effects, and change Confidence Interval Adjustment to Bonferroni (the closest we can get to Tukey's Test). You may also want to click on Estimates of Effect Size for eta².
· Remember to look at the Tests of Within-Subjects Effects box for your ANOVA results.
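For readers who want to check SPSS's numbers outside the program, the t-tests and the independent-groups ANOVA above can be sketched with scipy (a hypothetical example: the score lists are made up, and scipy is assumed to be installed). Like SPSS, scipy reports two-tailed p-values for the t-tests, so halve them for a directional hypothesis:

```python
from scipy import stats

# Hypothetical test scores for three groups/conditions
scores_a = [78, 85, 90, 72, 88, 81, 79, 84]
scores_b = [70, 75, 83, 68, 80, 77, 74, 79]
scores_c = [65, 72, 70, 69, 74, 71, 68, 73]

# One-group t-test against a hypothetical population mean of 75
t1, p1 = stats.ttest_1samp(scores_a, popmean=75)

# Independent-groups t-test (the "equal variances assumed" row in SPSS)
t2, p2 = stats.ttest_ind(scores_a, scores_b)

# Repeated-measures (paired-samples) t-test; t is positive when the
# first list's mean is the larger one, as described above
t3, p3 = stats.ttest_rel(scores_a, scores_b)

# Independent-groups one-way ANOVA across the three groups
f, p_anova = stats.f_oneway(scores_a, scores_b, scores_c)

one_tailed_p = p2 / 2  # for a directional Ha -- but check the means' direction too!
```

As the guide stresses, a significant p-value alone is not enough for a directional Ha; the sign of t (or a glance at the means) tells you whether the difference ran in the predicted direction.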
Correlation Coefficient (r)

Analyze → Correlate → Bivariate

· Make sure that the Pearson box (and only the Pearson box) has a check in it.
· Put the variables in the Variables box. Their order is not important.

SPSS Guide to Data Analysis Page 5 of 9 Select your tailed significance (one or two-tailed)depending on your hypotheses Remember directional hypotheses are one-tailed and non-directional hypotheses are two-tailed. If means and standard deviations are needed,click on Options and then click on Means and Standard Deviations Linear Regression Graphs→Scatter→Simple This will let you know if there is a linear relationship or not. Click on Define. The Dependent variable (criterion)should be put in the Y-axis box and the Independent variable(predictor)should be put in the X-axis box and hit OK .In your Output:Double click on the scatterplot.Go to Chart>Options.Click on Total in the Fit Line box,then OK. Make sure it is more-or-less linear.The next step is to check for normality Graphs→Q-Q Put both variables into the Variable box.Hit OK. .Look at the Normal Q-Q Plot of ....not the Detrended Normal Q-Q of... box.If the points are a "smiley face"they are negatively-skewed.This means you will have to raise them to a power greater than one.If the points make a "frown face"then they are positively-skewed.This means you will have to raise them to a power less than one (but greater than zero).To do this,go to the DATA screen then: Transform→Compute Give the new variable a name in the Target Variable box.Since you will be doing many of these (because it is a guess and check)it may be easiest to name it the old variable and then the power you raised it to.For example,SLEEP.2 if I raised it to the.2 power,or SLEEP3 if I raised it to the third power. Click the old variable(that you want to change)into the Numeric Expressions box.Type in the exponent function (**)and then the power you want to raise it to.Hit OK. Redo the Q-Q plot with the NEW variable (i.e.SLEEP.2,and not SLEEP) Repeat until you have the best fit data.The variable that you created with the best-fit data (i.e.SLEEP.2)will be the variable that you will use for the REST OF THE REGRESSION(no more SLEEP). 
The next step is to remove Outliers.To do so,run a regression: Analyze→Regression→Linear
· Select your tailed significance (one- or two-tailed) depending on your hypotheses. Remember, directional hypotheses are one-tailed and non-directional hypotheses are two-tailed.
· If means and standard deviations are needed, click on Options and then click on Means and Standard Deviations.

Linear Regression

Graphs → Scatter → Simple

· This will let you know if there is a linear relationship or not.
· Click on Define.
· The Dependent variable (criterion) should be put in the Y-axis box and the Independent variable (predictor) should be put in the X-axis box. Hit OK.
· In your Output: Double click on the scatterplot. Go to Chart → Options. Click on Total in the Fit Line box, then OK.
· Make sure it is more-or-less linear. The next step is to check for normality.

Graphs → Q-Q

· Put both variables into the Variable box. Hit OK.
· Look at the Normal Q-Q Plot of … box, not the Detrended Normal Q-Q of … box. If the points make a "smiley face," they are negatively skewed. This means you will have to raise them to a power greater than one. If the points make a "frown face," they are positively skewed. This means you will have to raise them to a power less than one (but greater than zero). To do this, go to the DATA screen, then:

Transform → Compute

· Give the new variable a name in the Target Variable box. Since you will be doing many of these (because it is guess-and-check), it may be easiest to name it the old variable plus the power you raised it to. For example, SLEEP.2 if I raised it to the .2 power, or SLEEP3 if I raised it to the third power.
· Click the old variable (that you want to change) into the Numeric Expressions box. Type in the exponent function (**) and then the power you want to raise it to. Hit OK.
· Redo the Q-Q plot with the NEW variable (i.e. SLEEP.2, and not SLEEP).
· Repeat until you have the best-fit data. The variable that you created with the best-fit data (i.e. SLEEP.2) will be the variable that you will use for the REST OF THE REGRESSION (no more SLEEP).
· The next step is to remove outliers. To do so, run a regression:

Analyze → Regression → Linear

· The Independent variable is the predictor (the variable from which you want to predict). The Dependent variable is the criterion (the variable you want to predict).
· Click on Save and then click on Cook's Distance. Like the z-scores, these values will NOT be in your Output file. They will be on your Data file, saved as a variable (COO_1).
· Do a Boxplot:

Graphs → Boxplot

· Click on Simple and Summaries of Separate Variables.
· Put Cook's Distance (COO_1) into the Boxes Represent box. Put a variable that will label the cases (usually ID or name) into the Label Cases By box.
· Double click on the Boxplot in the Output file in order to enlarge it and see which cases need to be removed (those lying past the black lines). Keep track of those cases. (Some people prefer to take out only extreme outliers, those marked by a *.)
· Go back to the Data file. Find a case you need to erase. If you highlight the number to the far left (in the gray), it will highlight all of the data from that case. Then go to Edit → Clear. Repeat this for all of the outliers. ERASE FROM THE BOTTOM OF THE FILE, WORKING UP; IF YOU DON'T, THE ID NUMBERS WILL CHANGE AND YOU'LL ERASE THE WRONG ONES! DO NOT RE-RUN THE BOXPLOT LOOKING FOR MORE OUTLIERS! Once you have cleared the outliers from the first (and hopefully only) boxplot, you can continue.
· Now re-run the regression.
· Remember: R tells you the correlation. R² tells you the proportion of variance in the criterion variable (Dependent variable) that is explained by the predictor variable (Independent variable). The F tells you the significance of the correlation. The prediction equation is: (B constant) + (Variable constant)(Variable), where the B constant is the number in the column labeled B, in the row labeled (Constant), and the Variable constant is the number in the column labeled B, in the row that has the same name as the variable [under the (Constant) row].
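The quantities named in the last bullet — R, R², the significance, and the prediction equation — can be sketched with scipy's linregress (hypothetical sleep/score data, assuming scipy is installed):

```python
from scipy import stats

# Hypothetical predictor (hours of sleep) and criterion (test score)
sleep = [5.0, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0]
score = [60, 65, 70, 68, 75, 80, 78, 85]

result = stats.linregress(sleep, score)

r = result.rvalue        # the correlation, R
r_squared = r ** 2       # proportion of criterion variance explained
p_value = result.pvalue  # significance of the correlation

# Prediction equation: (B constant) + (Variable constant)(Variable)
predicted = result.intercept + result.slope * 7.0
```

Here result.intercept plays the role of the (Constant) row's B and result.slope plays the role of the variable's B in the SPSS Coefficients table.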

Tests For Ordinal Data

· Remember, since this is ordinal data, you should not be predicting anything about means in your Ha and Ho. Also, you should not be reporting any means or standard deviations in your results paragraphs.
· Therefore, if you need to report medians and/or ranges, go to Analyze → Descriptive Statistics → Frequencies. Click on Statistics and then click on the boxes for median and for range.

Kruskal-Wallis

Analyze → Nonparametric Tests → K Independent Samples

· Make sure the Kruskal-Wallis H box (and this box only) has a check mark in it.
· Put the Dependent variable in the Test Variable List box and put the Independent variable in the Grouping Variable box.
· Click on Define Range. Type in the min and max values (if you do not know what they are, you will have to go back to Utilities → Variables to find out and then come back to this screen to add them in).

Friedman's Test

Analyze → Nonparametric Tests → K Related Samples

· Make sure that the Friedman box (and only that box) has a check mark in it.
· Put the variables into the Test Variables box.
· In the Output File: Even though it says Chi-Square, don't worry, you did a Friedman's test (as long as you had it clicked on).

Spearman's Correlation (rs)

Analyze → Correlate → Bivariate

· Click on the Pearson box in order to turn it OFF! Click on the Spearman box in order to turn it ON!
· Choose a one- or two-tailed significance depending on your hypotheses.
· Put variables into the Variables box.
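The ordinal tests on this page also have scipy counterparts (the rating data below are made up, and scipy is assumed to be installed):

```python
from scipy import stats

# Hypothetical ordinal ratings (e.g. a 1-5 scale) for three samples
group1 = [1, 2, 2, 3, 1, 2]
group2 = [3, 4, 3, 5, 4, 4]
group3 = [2, 3, 2, 4, 3, 3]

# Kruskal-Wallis H for independent samples
h, p_kw = stats.kruskal(group1, group2, group3)

# Friedman's test, treating the three lists as related samples
# (the same six cases measured under three conditions)
chi2, p_fr = stats.friedmanchisquare(group1, group2, group3)

# Spearman's correlation between two ordinal variables
rho, p_sp = stats.spearmanr(group1, group2)
```

Note that, just as SPSS labels the Friedman statistic "Chi-Square," scipy's friedmanchisquare returns a chi-square-distributed statistic.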