Remember the confusing mass of stats? Let's look at it again (briefly)
Can't Find This Yet: CORRESPONDING QUESTION PRINTOUT
- total = total number of students
- high, mid, low --> number in each group
- changes for each question, though (because of omits)
- a bit more complicated in how it's divided up
- omit = how many didn't answer this one
- NF = not finished; omitted everything from here on
- NR = no response = omits broken down by high/mid/low group
- total across the top = % that chose that alternative
- test score means = average on the 60-item test for the students who chose that alternative
- discriminating power --> that's what our "d" means:
  - proportion correct in the high group minus proportion correct in the low group
- RPB = point biserial correlation (R= correlation)
- CRPB = corrected point biserial = the point biserial with this question left out of the total score
(want these around .200 --> but have to look at each item
--> an item may be valid but test a different sort of topic)
- IRI = Item Reliability Index (combines CRPB and difficulty into one number,
but since we look at CRPB and difficulty directly, we don't need it)
- Standard Error of Discriminating Power = standard deviation of the sampling error
in the discrimination estimate --> don't need to know much about it --> essentially zero
here because of the huge numbers of students involved
- Con = confidence interval: if you did this an infinite number of times you would get
a frequency distribution of point biserials; 95% of the corrected point biserials
would fall within this range
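The statistics above can be sketched in code. This is a minimal illustration, not the scoring service's actual algorithm: the function names are mine, I split high/low groups into thirds (real printouts often use a top/bottom 27% or similar convention), and the 95% range is computed with the common Fisher z approximation, which may differ from what the printout uses.

```python
import math

def pearson(x, y):
    """Plain Pearson correlation; with a 0/1 item variable this
    is exactly the point biserial correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def item_stats(item_correct, total_scores, group_frac=1 / 3):
    """item_correct: 1/0 per student for this question;
    total_scores: each student's score on the whole (e.g. 60-item) test."""
    n = len(item_correct)
    # difficulty = proportion of students answering correctly
    p = sum(item_correct) / n
    # rank students by total score; take top and bottom thirds (assumption)
    order = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
    k = max(1, int(n * group_frac))
    high, low = order[:k], order[-k:]
    # discriminating power d = high-group proportion minus low-group proportion
    d = (sum(item_correct[i] for i in high) / k
         - sum(item_correct[i] for i in low) / k)
    # RPB: point biserial of item vs total score
    rpb = pearson(item_correct, total_scores)
    # CRPB: remove this question from each total before correlating
    rest = [t - c for c, t in zip(item_correct, total_scores)]
    crpb = pearson(item_correct, rest)
    # approximate 95% range for CRPB via the Fisher z transform
    z = math.atanh(crpb)
    se = 1 / math.sqrt(n - 3)
    ci = (math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se))
    return p, d, rpb, crpb, ci
```

With real class sizes the interval shrinks toward the point estimate, which is the "huge numbers to play with" point above: the standard error term `1/sqrt(n - 3)` goes toward zero as n grows.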
Can do item analysis on students' responses to PROVINCIAL EXAMS
- it doesn't have to be just test questions you wrote
- after every exam, the dept. publishes an item analysis for that exam