Consolidating results, with poster:
| Sophomore (2014) | Junior (2015) | Change | Poster |
| ---: | ---: | ---: | :--- |
| 231 | 225 | -6 | @foosondaughter |
| 200 | 221 | +21 | @merething |
| 178 | 214 | +36 | @icantsleep |
| 191 | 218 | +27 | @mtrosemom |
| 119 | 119 | +0 | @mtrosemom |
| 194 | 209 | +15 | @Pannaga |
| 200 | 208 | +8 | @Pannaga |
| 199 | 216 | +17 | @kikidee9 |
| 187 | 212 | +25 | @phoenixmomof2 |
| 206 | 218 | +12 | @CA1543 |
**Methodology:**
Just as an experiment, I'd like to try a very different, independent mechanism for estimating the cutoffs. Many students who took the PSAT as juniors also took it as sophomores. The relationship between sophomore and junior scores has been studied (see https://research.collegeboard.org/sites/default/files/publications/2012/7/researchnote-2010-41-score-change-2007-psat.pdf): the two are highly correlated, and scores generally rise only 3-4 points per section on average from sophomore to junior year (see Table 2 on page 11 of that document). For sophomores who already scored high (such as those in NMSF range), however, the increase is smaller or even negative, both because of limited headroom and because of regression to the mean. Figure 4 on page 10 shows that the sophomore-to-junior increase falls to zero for PSAT section scores around 70, which happens to be exactly the region we care about when estimating the cutoff.
My hypothesis, then, is simple: if people are willing to share both their sophomore and junior scores, we can estimate the shift between the old test and the new test. The average difference could then be applied to the old cutoffs to estimate the new ones. To keep things simple, I think it is reasonable to use just the selection index rather than the individual section breakdowns. I figure we'll need at least 30 or so scores before we have meaningful data.
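The estimate described above can be sketched in a few lines of Python. The change values come from the table at the top of this post; the old-test cutoff used at the end is a placeholder, not a real state cutoff, since those vary by state.

```python
# Sketch of the proposed estimate: average the sophomore-to-junior
# selection-index changes collected above, then shift an old-test cutoff
# by that average. More data points would tighten the estimate.

changes = [-6, 21, 36, 27, 0, 15, 8, 17, 25, 12]  # from the table above

avg_change = sum(changes) / len(changes)
print(f"average change: {avg_change:+.1f}")

old_cutoff = 218  # hypothetical old-test cutoff; substitute your state's value
estimated_new_cutoff = old_cutoff + avg_change
print(f"estimated new cutoff: {estimated_new_cutoff:.1f}")
```

Note that because Figure 4 in the College Board document suggests the "natural" sophomore-to-junior gain is roughly zero for high scorers, most of the average change measured here should reflect the shift between the old and new tests rather than ordinary year-over-year improvement.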