National Merit Cutoff Predictions Class of 2017

@IABooks – I do think a call is likely better than an email. Educators can pull various reports, but if your GC is not familiar with them, there are archived and upcoming webinars: https://lp.collegeboard.org/help-resources-accessing-scores. You can look here to see the kinds of reports:
https://collegereadiness.collegeboard.org/educators/k-12/reporting-portal-help

For example:
"Scores by Institution
This report provides aggregate performance data by institution, as well as student-level performance at each school, for every score. Note that this particular report may experience slow load time, especially for large schools. Please be patient, and don’t refresh, as some reports could take up to five minutes to load.

Tip: The Roster Detail report contains most of the same score data, and loads faster. It can also be exported to Excel for additional analysis. The Roster Detail report is accessed through the main Roster report, accessible from the Summary Dashboard or the report selector. Be sure to click the Roster Detail link for the specific assessment you are interested in."

I wonder how many juniors took the PSAT, how many students scored above your state’s cutoff from last year, what those scores were, and how they compare to the state overall?

Regarding your student’s report - go to the online report – https://studentscores.collegeboard.org/viewscore/details
View details & look for User percentiles. Scroll down to the reading, writing, and math subsections and click the circled “i” on the right. You can also click Show Details. This video might help a little: https://collegereadiness.collegeboard.org/psat-nmsqt-psat-10/scores/understanding-scores
One change this year: the “User” percentiles show your student at a percentile at or above other students from a research study, not actual test takers. Also, it does not tell you that your student is above others at this level, only at least “tied” with them.

You can use this to understand your child’s scores better (and much of what we have been discussing on this thread): https://collegereadiness.collegeboard.org/pdf/2015-psat-nmsqt-understanding-scores.pdf

Hope this helps & is not overwhelming.

Thanks so much!!

@CA1543 Suddenly all the references to page 11 make sense! :slight_smile: I’ll call the GC on Monday and post my findings or lack thereof then. I’ll see what I can find out about statewide data for Iowa as well

@IABooks – We all love those “Ah ha” moments! :slight_smile: thanks for your inquiries.

Thank you @thshadow for considering my idea. I can tell from your previous posts that you understand statistics far better than I do. Does anybody else feel that maybe CB was not even trying to represent the top one percent on page 11, but just extrapolating from the mean and SD? It just fits too precisely. A 214 is 2.54 standard deviations from the given mean and SD, as @thshadow calculated. We might not really want to use page 11 at all when considering the top 1% and cutoffs for NMSF.

Believe it or not, I could not stay away, and I put together this description of my efforts yesterday. My goal is to make sense of the 2015 SI % chart, which seems to conflict with the 2015 Concordance charts and with the anecdotal evidence from high schools in GA, OK and IL. I know that this is a goal most of you have.

My plan is to try to convert this chart so that it defines ‘percentile’ in the same way that the 2014 SI % chart defines it.

Here goes my second attempt at an explanation of what I am doing:

Partial copy of the 2015 SI % chart
(with the added definition as given by the Compass Report – “definition B”)

Score | 2015 SI chart | What it means (definition B)
214   | 99+ | 99+% of students scored at or below this score
213   | 99  | 99% of students scored at or below this score
212   | 99  | 99% of students scored at or below this score
211   | 99  | 99% of students scored at or below this score
210   | 99  | 99% of students scored at or below this score
209   | 99  | 99% of students scored at or below this score
208   | 99  | 99% of students scored at or below this score
207   | 99  | 99% of students scored at or below this score
206   | 99  | 99% of students scored at or below this score
205   | 99  | 99% of students scored at or below this score
204   | 98  | 98% of students scored at or below this score
203   | 98  | 98% of students scored at or below this score
202   | 98  | 98% of students scored at or below this score
201   | 97  | 97% of students scored at or below this score
200   | 97  | 97% of students scored at or below this score
199   | 96  | 96% of students scored at or below this score

What this chart tells us:

OK, this SI % chart does not tell us the total number of test takers. However, one of the important things it does tell us is that the scores 202-204 were earned by an entire 1% of test takers, and that the scores 200-201 were earned by another entire 1%. We don’t know how many test takers are in each band, only that each band equals 1%.

We also don’t know how the 1% of test takers in the 202-204 band is distributed across the 3 individual scores; CB has not given us that information. It may be that each of the three scores garnered 0.333% of test takers, or the 1% may be distributed unevenly. All we can assume is that an entire 1% of test takers had scores in that 3-score range. The same goes for the 200-201 band: we do not know whether test takers split those two scores evenly, each garnering 0.5%, or unevenly. One thing seems certain: the individual SI scores 204, 203, 202, and 201 will not each represent the same fraction of a percentile, since in one band 3 scores add up to 1% and in the other band 2 scores add up to 1%.
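To make the ambiguity concrete, here is a small Python sketch with made-up numbers (the exact “at or below” percentages are hypothetical; CB only publishes the rounded values). Two very different within-band distributions produce the identical rounded chart:

```python
# Hypothetical within-band distributions for the 202-204 band, starting
# from an (assumed) exact 97.4% of test takers at or below 201.
# Split A: the band's 1% is spread evenly across 202, 203, 204.
split_a = {202: 0.33, 203: 0.33, 204: 0.34}
# Split B: the same 1% is spread very unevenly.
split_b = {202: 0.60, 203: 0.30, 204: 0.10}

def displayed_chart(band, base=97.4):
    """Rounded cumulative percentiles, as the SI % chart would show them."""
    chart, cum = {}, base
    for score in sorted(band):
        cum += band[score]
        chart[score] = round(cum)
    return chart

print(displayed_chart(split_a))  # {202: 98, 203: 98, 204: 98}
print(displayed_chart(split_b))  # {202: 98, 203: 98, 204: 98}
# Both splits display as "98" for all three scores, so the published
# chart alone cannot tell us how the band's 1% is divided.
```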

Crucial assumption:
If we do know the exact fraction of a percentile of test takers that scored at each individual SI score, then do not move on with my explanation – but please report how we know that information.

Assuming I am right so far, I move on to converting the definition within the parameters of the information given:

So with this in mind, if I want to understand the information in this 2015 chart in terms of the older 2014 definition of percentiles, I need to relate the scores ONLY to the percentage of students who scored below them, not to the band of students who scored in the same percentile with them. I cannot break up a band, though, because I do not know how the scores are distributed within it. If I move down only one SI score, I might be moving 0.3% or 0.1% or 0.5% of test takers; we can’t know, because CB does not give us that number, and guessing could produce factually incorrect statements. Thus, working only with the numbers we are given, I think it is reasonable to translate like this:

Score | 2015 % (Def B) | Converted to Def A
204 | 98 | 98% of students scored at or below this score → 97% of students scored below this score
203 | 98 | 98% of students scored at or below this score → 97% of students scored below this score
202 | 98 | 98% of students scored at or below this score → 97% of students scored below this score
201 | 97 | 97% of students scored at or below this score → 96% of students scored below this score
200 | 97 | 97% of students scored at or below this score → 96% of students scored below this score

If this conversion seems reasonable, then the 2015 SI % chart, once fully converted, looks VERY MUCH like the 2014 SI % chart. It also corresponds more closely to the concordance charts, and it accommodates predictions that make sense of the anecdotes of high scores at single high schools in GA, OK and IL. The biggest place for confusion with this method is where to begin the 99+ range of scores: since there is no higher range in the 2015 chart and the high-end scores have changed since 2014, the transition spot from 99 to 99+ will require other theorizing.

@thshadow says:

<<And if the percentile table is just a bell curve (and maybe they didn’t really care about the upper end), possibly the concordance table was computed completely differently - possibly in a way that’s actually accurate at the top end.

That would agree with the anecdotes - that the percentiles at the top end are just wrong, and that the concordance numbers are correct.>>

Do you think they used the actual distribution of 2015 SI scores to concord with previous years? I was recently thinking that’s what they did - and the reason it’s “preliminary” is because they don’t want to depend on just one curve so will need to verify the current actual against the SAT curves in March and May. They will then have three curves they can either overlay with each other or average or something to set the final concordance tables for this year.

Does that make sense? It would certainly explain why concordance results in a lower percentile for some of the scores (which, of course, are in the same general area where Jed Applerouth identified the so called “inflation”).

BTW, thanks to @AJ2017 for pointing out in Post #2130 that CB was kind enough to fix the cut-and-paste error pertaining to the mean and Std. deviation of the Page 11 percentile table!!! For the benefit of those who (like me) were not aware prior to this afternoon, 148 is the mean and the standard deviation is 26. I have now printed out that new page for my hardcopy. What else did CB happen to update w/o letting everyone know? Or did they announce via social media?

@AJ2017 - it’s really a great observation. It’s a logical explanation as to why the percentile table could be wrong (at least the upper part of the table). I think @DoyleB in particular will be very interested to read it. Though it’s likely bad news for me and my daughter… :frowning:

Can we rebuild the page 11 percentile table using the concordance table instead?

Edit/update: We can concord each SI score to a previous-year score, then look up those percentiles (many have already done this, I think – we just haven’t assembled it into a complete table). THOSE would be the correct percentiles for page 11.
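The rebuild procedure is just a composition of two lookups. A minimal Python sketch, with placeholder numbers (the concordance values and 2014 percentiles below are made up purely for illustration – the real tables are the ones CB published):

```python
# HYPOTHETICAL values for illustration only -- substitute CB's real tables.
concord_2015_to_2014 = {201: 208, 202: 209, 203: 210, 204: 212}  # 2015 SI -> 2014 SI
percentile_2014 = {208: 97, 209: 98, 210: 98, 212: 99}           # 2014 SI -> percentile

def rebuilt_percentile(si_2015):
    """Concord a 2015 SI to its 2014 equivalent, then look up the 2014 %ile."""
    return percentile_2014[concord_2015_to_2014[si_2015]]

for si in sorted(concord_2015_to_2014):
    print(si, "->", rebuilt_percentile(si))
```

With CB’s actual tables plugged in, the output column would be the corrected page 11.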

@likestowrite if @AJ2017 is correct about the page 11 table just being a hypothesized normal distribution AND the concordance tables are more accurate (as they are proving to be) then I’m going to posit that a rebuilt SI table using concordance to find the new percentiles will end up looking a lot like your tables.

I’ve really enjoyed following these posts for the past couple weeks! Please forgive me if I missed a discussion on this point, but I was wondering whether you folks believe that the recent CB concordance tables could also be overstating students’ performances this year. Or do people believe that the recent CB concordance tables likely represent conservative NM cutoffs? (It seems that concorded scores for this year’s test would be higher than those predicted by the various websites we’ve discussed over the past couple weeks.)

@AJ2017, "Does anybody else feel that maybe CB was not even trying to represent the top one percent on page 11 but just extrapolating from the mean and SD? It just fits too precisely."

May I suggest another way to look at the “it fits too precisely”? I would think CB would use the scoring curves to create a nice bell curve. Isn’t that what they SHOULD do?

Yeah, @Speedy2019 someone living in this house who actually knows a thing or two about distributions pointed that out to me tonight! His comment: Of COURSE it’s a bell curve. Duh. They fit it any way they want to.

Back to the drawing board for me. Look forward to the superior intelligence of others on this. I’m doing Movie Night.

99+ = 222-226
99 = 214-221
98 = 209-213
97 = 206-208

@F1RSTrodeo thank you for the prediction – this is another data point for us to consider. To understand it fully: what “theory”/“assumption”/“hypothesis” is it based on?

My take on the concordance tables from post #1927, mapped to the 2014 percentiles. No hard data other than my comparison of 2015 to 2014.

@Mamelot so what were the mean and standard deviation before CB changed them?

Once you can decide what counts as 99%… I don’t think you’ll be able to count spots from the bottom of the 99% band as in years past. It’s not the same number of slots.

@likestowrite:

“Score | 2015 % (Def B) | Converted to Def A
204 | 98 | 98% of students scored at or below this score → 97% of students scored below this score
203 | 98 | 98% of students scored at or below this score → 97% of students scored below this score
202 | 98 | 98% of students scored at or below this score → 97% of students scored below this score
201 | 97 | 97% of students scored at or below this score → 96% of students scored below this score
200 | 97 | 97% of students scored at or below this score → 96% of students scored below this score
If this conversion seems reasonable…”

I don’t think it’s reasonable.

All possible scores are integers. So if 97% of students scored at or below 200, then it must be true that 97% of students scored below 201.

Obviously, these percentiles are rounded off, but that doesn’t matter. If it’s really 97.xyz% of students scored at or below 200, then it must be true that 97.xyz% of students scored below 201.

If 97.326% of people have been married 2 or fewer times, then 97.326% of people have been married fewer than 3 times.

But your converted chart says that 96% of students scored below 201.

This is why you move only one line when you do the conversion from Def B to Def A.
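The one-line-shift rule can be sketched in Python (the at-or-below values here are the rounded ones from the 2015 chart excerpt discussed above):

```python
# Rounded "at or below" percentiles from the 2015 SI % chart excerpt.
at_or_below = {199: 96, 200: 97, 201: 97, 202: 98, 203: 98, 204: 98}

def pct_below(score):
    """Def A: % of students scoring strictly below `score`.
    Since all scores are integers, "below s" equals "at or below s - 1"."""
    return at_or_below[score - 1]

print(pct_below(201))  # 97, not 96: everyone at or below 200 is below 201
print(pct_below(202))  # 97
print(pct_below(205))  # 98
```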

I still don’t know why the perfect scores in each table don’t say 100%, though.

Interesting. I didn’t know that they hadn’t listed the mean and SD on page 11 before, because I had not looked for it. But they were probably working with those values from the beginning, even though they had not properly recorded them. It works too perfectly at the top 1%, where they do not have as much real data from the studies. And to @Mamelot: yes, they fit the data to the bell-shaped curve as best they can, but their priority was not to accurately represent the top 1%. Ironically, we who analyze the data most closely are the most interested in the top 1%…