National Merit Cutoff Predictions Class of 2017

Any 2015 SI other than 228 can give a number of different concorded scores; the lower the SI, the more variation in possible concorded scores. I made a chart that concords a current 214 to various pre-2015 scores, depending on how the 214 breaks down. So, for example, unless I've made a mistake, a 38 reading / 38 writing / 31 math concords to 220; the range of possible concorded scores for a current 214 is 209 to 220. If you assume that each combination of reading / writing / math scores is equally likely, then the median and mean concorded scores for a current 214 are about 213.
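Not the actual chart, just a minimal Python sketch of that enumeration, assuming the 2015 SI is 2 x (Reading + Writing & Language + Math) with each test score running 8-38. The concordance dictionaries are placeholders to be filled in from CB's section-level tables; the three entries shown are invented only so that the 38/38/31 example comes out to 220 as described above.

```python
from itertools import product
from statistics import mean, median

# 2015 SI = 2 * (Reading + Writing & Language + Math), each test score 8-38,
# so an SI of 214 means the three test scores sum to 107.
TARGET_SI = 214
SECTION_SUM = TARGET_SI // 2

# Placeholders: fill these in from the College Board section-level concordance
# tables (new 8-38 test score -> old 20-80 section score). The three entries
# below are NOT real concordance data; they are chosen only so that the
# 38 R / 38 W / 31 M example above totals 220.
concord_reading = {38: 80}
concord_writing = {38: 80}
concord_math = {31: 60}

def concorded_si(r, w, m):
    # Old-style SI = old CR + old W + old M, each on the 20-80 scale.
    return concord_reading[r] + concord_writing[w] + concord_math[m]

combos = [(r, w, m)
          for r, w, m in product(range(8, 39), repeat=3)
          if r + w + m == SECTION_SUM]

# Only combos covered by the (partial) placeholder tables can be concorded here.
old_sis = [concorded_si(r, w, m)
           for r, w, m in combos
           if r in concord_reading and w in concord_writing and m in concord_math]

if old_sis:
    print("range of concorded SIs:", min(old_sis), "to", max(old_sis))
    print("mean:", mean(old_sis), "median:", median(old_sis))
```

With the full section tables plugged in, the range should come back as the 209 to 220 spread described above, with the mean and median landing around 213.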

HOLY developments, Batman! Big news today with this article. I'm a little sad that our little group wasn't the one to solve the 'mystery of the inflated percentiles':

Definition A: The percentage of students scoring below you.
Definition B: The percentage of students scoring at or below your score.

Definition B produces higher values in almost all cases and never gives lower values. College Board shifted from Definition A to Definition B this year, introducing an additional source of percentile inflation.
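A minimal sketch of how much daylight the two definitions put between percentile values at the top of the scale; the score distribution below is made up purely for illustration and is not College Board data.

```python
def percentile_below(scores, s):
    # Definition A: percentage of students scoring below s.
    return 100 * sum(x < s for x in scores) / len(scores)

def percentile_at_or_below(scores, s):
    # Definition B: percentage of students scoring at or below s.
    return 100 * sum(x <= s for x in scores) / len(scores)

# Invented toy distribution of 10,000 total scores, heavy at the bottom.
scores = [1520] * 5 + [1460] * 45 + [1400] * 200 + [1300] * 750 + [1000] * 9000

for s in (1460, 1520):
    print(s,
          round(percentile_below(scores, s), 2),        # Definition A
          round(percentile_at_or_below(scores, s), 2))  # Definition B
# Prints 1460 -> 99.5 vs 99.95, and 1520 -> 99.95 vs 100.0. Under Definition B
# the top score is always exactly 100, which is presumably why reports cap at "99+".
```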

Shame on the college board for not defining that in the report to GCs and to test takers. I wish Scooby Doo and Shaggy and the mystery machine could take them down.

I vote for Shaggy & Scooby too!!! But maybe Batman would use a batarang. For CB not to come clean on the change in how percentiles are calculated is another basis for loss of trust and confidence. As the Compass report points out, and as others on this thread have as well, there are several mistakes in the CB published materials about the Oct. PSAT that have yet to be corrected - that is pretty unprofessional. What grade would our students get if they made such mistakes??

@Lea111 - I think I did something similar in a new post I just made:
http://talk.collegeconfidential.com/sat-act-tests-test-preparation/1855868-reconciling-2015-psat-concordance-tables-with-percentile-tables.html

One thought is that the max equivalent concordance score seems to somewhat match the percentile tables…

@DoyleB: I’m still wondering if it’s based on “national” - despite the fact that publishing a “national” SI table makes no sense either.

I've been assuming that that's the case. The concordance and the various percentile tables could all be consistent with one another - all coming from the same set of research data - once you realize that the user percentiles in the math and EBRW tables are off by one line compared with the old reports, and if you assume that the SI percentiles are all-high-school-juniors data, not user data, and also off by a line compared with the old reports.

I agree that it doesn’t seem to make much sense to publish SI percentiles that compare against all high school juniors, but CB is obviously making a lot of errors right now. The median SI isn’t 468 or whatever it says in the report either; if that very obvious error could go out on the report, why not inclusion of a less relevant (but still accurate) set of percentiles?

As far as heads rolling: again, it's probably silly data to include, but how big is that mistake compared with all of the other mistakes and choices CB has made lately - the June SAT screw-up, kids not getting their scores in time to decide whether to take the SAT or whether and how to study for it, kids getting a mistakenly rosy picture (via the print PSAT report or the download) of how they're doing compared with other college-bound kids, etc.?

@Lea111 yes - I understand that there is a range, as has been shown on this thread - I'm just frustrated that the CB reports about scores lead to varying results - no way to get any comfort about how to really view a score.

I just noticed this report by Compass too: http://www.■■■■■■■■■■■■■■■/has-the-sat-lost-its-way/ - it just comments on their detailed report, but in a way that is rather critical and, if widely read, will cause more anger at CB. I can't imagine they did not anticipate that people would figure out what is going on and make many reasonable, if harsh, judgments about CB's motives. Maybe colleges will stop putting so much weight & value on the SAT - at least for a while. But I do feel bad for students & families who might be rather optimistic about NMSF based on the score reports, only to find out the reality is quite different in September. Why don't we write & thank Compass & Bruce Reed, who wrote the opinion piece - there is a form at the bottom of this page: http://www.■■■■■■■■■■■■■■■/has-the-sat-lost-its-way/
We can also ask CompassPrep to develop thoughts about projected cutoff scores - no idea if they will, but it might be worth asking.

" I wish Scooby Doo and Shaggy and the mystery machine could take them down." Did anyone listen to Serial podcast? It feels like that to me … except I know that there are very few people who would find our mystery interesting.

@suzyQ7 at #1663. Not true; pg 6 of Understanding Your Scores 2015 does state the following as the first sentence under the Percentiles discussion:

"Percentile ranks represent the percentage of
students that score equal to or below the score
the student obtained. "

If they are using definition B of “percentile” (percentage of students that score equal to or below the score), why is a perfect score not 100%ile?

@dallaspiano I meant something like this.

http://www.gamedayconsultant.com/news-articles/new-sat-act-test-score-conversion-chart

To score a 33-36 on the ACT, you are in the top 1%, but to score a 36, you are in the top .076%.

http://blog.prepscholar.com/how-many-people-get-a-34-35-36-on-the-act-score-breakdown

Yes, I know the problem is converting from 2400 to basically 2280.

Do you see how a 1440 is close to a 214, if you knock off a zero from the SAT number? Doesn’t it look like the concordance?

@thshadow @DoyleB It must be the late afternoon, but I can't wrap my head around the Compass report and our CC comments.

So I have a basic question: My dd's friend, I will call her Sally, made an SI of 214. I'd like to look at Sally's score and understand what it means in relation to last year's scores, through the SI % charts and the new Definition B of percentile.

Sally’s 2015 PSAT
214 – Out of 24 99th%ile places, it is the 10th highest (and first at 99+).

Sally's 2015 PSAT, concorded:
210 – Out of 7 different 98%ile spots, this is the 5th highest.

Question: Is this not about the same score once you account for the percentile definition change? Sally's 214 in 2015 is mid-99th %ile of students scoring worse than or as well as Sally. Shouldn't that be about the same as a score last year of 210, which is mid-to-upper 98th %ile of students who scored worse than (but not as well as) Sally?

Sorry to break it down simply but I need to get a grasp on this concept. Any help is appreciated.
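Not an answer keyed to the real tables, but here is a toy Python sketch (the SI counts are invented) of the mechanic Sally's comparison hinges on: the new at-or-below definition gives a score the same percentile that the old below-only definition would give the next band up, so switching definitions shifts everything up by roughly one slot.

```python
from collections import Counter

def pct_below(scores, s):        # old-style definition: percent scoring below s
    return 100 * sum(x < s for x in scores) / len(scores)

def pct_at_or_below(scores, s):  # new-style definition: percent at or below s
    return 100 * sum(x <= s for x in scores) / len(scores)

# Invented SI counts, only to show the mechanics; not real CB data.
counts = Counter({228: 5, 222: 40, 214: 300, 210: 700, 200: 4000, 150: 95000})
scores = [s for s, n in counts.items() for _ in range(n)]

print("214, new definition:", round(pct_at_or_below(scores, 214), 3))
print("214, old definition:", round(pct_below(scores, 214), 3))
print("222, old definition:", round(pct_below(scores, 222), 3))
# The first and third lines match: counting the students tied at 214 pushes its
# percentile up to where the old definition would have put the next band. That
# one-band shift is why a 2015 "99+" can reasonably sit next to a concorded
# score that last year's tables would have labeled high-98th or low-99th.
```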

However, I’m glad that the “meddling kids” didn’t unearth any fraud. I’m actually happier with the 2015 SI table now than this morning! Yes, it still looks inflated but there’s a big difference between “99.5% concords to 98.5%” and “99.5% concords to something GREATER than 98.5%”. No, it’s not exact. But we are getting closer.

Wondering if the reason the concordance tables seem off is that we, in fact, DON’T have last year’s percentile table. It was never released. There could be small differences between the actual 2014 table and the actual 2013 table. Or . . . perhaps the concordance is to an average of previous tables rather than a specific one. In our endeavors we are trying to concord to the most recent historical table available online - which happens to be 2 years old - and that might not be quite accurate.

<<question: is="" this="" not="" about="" the="" same="" score="" with="" percentile="" definition="" change?="" sally’s="" 214="" in="" 2015="" mid="" 99th="" %ile="" of="" students="" scoring="" worse="" and="" as="" well="" sally.="" should="" that="" be="" a="" last="" year="" 210="" which="" up="" 98th="" who="" scored="" (but="" well)="" sally?="">></question:>

@likestowrite that’s kinda how I’ve been looking at it . . .

@Mamelot: from Concordance tables: “In December 2015 [ha], at the same time that student scores are delivered [ha] from the first administration of the redesigned PSAT/NMSQT (2015 and future), preliminary concordance tables will be released to link the PSAT/NMSQT from 2014 and earlier to the redesigned PSAT/NMSQT (2015 and future).”

@micgeaux, Thanks, I read those two before. But your comments in #1669 give me some ideas.

With the old SAT, roughly 550+ students scored 2400. There are 12 slots (+/-1) in the 99+ range, while with the new PSAT we have 15 SI slots in the 99+ range. So the top slot of the new PSAT (the one associated with 228) should be even rarer (maybe 499 students).

Well, I know it's not a great assumption that the new PSAT data behave like the old SAT data, but I have no other choice.

The whole 99+ range (old SAT) works out to an average of 7,700+ students. So there is no way a score halfway down the 99+ range of the new PSAT (specifically SI 222) can correspond to 15,000 students.

Headache … headache for a junior like myself
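A back-of-the-envelope version of that arithmetic; the size of the junior test-taking pool is an assumption here, and the other numbers are the ones quoted in the post above, not official CB figures.

```python
# Every number here is either from the post above or a labeled guess, not CB data.
juniors = 1_500_000          # assumed number of junior test takers
top_99_plus_share = 0.005    # "99+" = roughly the top half of one percent

students_in_99_plus = juniors * top_99_plus_share         # ~7,500, near the 7,700 figure
si_slots_in_99_plus = 15                                  # SI values in the 99+ band (per the post)
avg_per_slot = students_in_99_plus / si_slots_in_99_plus  # ~500 students per SI value

print(round(students_in_99_plus), round(avg_per_slot))
# With ~500 students per SI value in the 99+ band, a score like 222, sitting about
# halfway down that band, should have only a few thousand students at or above it,
# nowhere near 15,000, which is the point of the post above.
```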

@lea111 yeah, I saw that wording as well - pretty sure I was doing some ha-has myself.

So what does that mean, "linking from 2014 and earlier"? If everyone is just using that 2013 percentile table instead of the 2014 one (not released, but CB has it . . . ), then we might be off a bit; it all depends on that unreleased 2014 table. Does that make sense? This is just speculation, btw.

Edit: IOW, if we actually HAD access to the 2014 table we might see the concordance table concord better.

There is a simple explanation for all of this: the concordance table is either wrong or totally misinterpreted. It simply does not make sense. On the other hand, you can't fake or mistake percentile numbers, because the percentiles will be the percentiles no matter what the scale. You can't say that the top five percent are all 99th percentile; that is not mathematically possible. There can be no such thing as "99th percentile inflation." If we generally know the number of people who took the test, and we generally know where the 99th percentile cutoff is, then we will generally know how to compare that to previous years and make educated speculations about the actual cutoffs. CB knows how to count, and its computers know how to count. They are not going to suddenly tell a bunch of people, "Oops, our bad, you are actually in the 97th percentile." They took an extra month to release the scores to make sure they got them right. I'm sure they don't want to totally embarrass themselves come September.

@mamelot Thanks! Yes, if I am looking at it the right way, the reason the SI % chart has seemed inflated is the new definition of percentile being used. And, understanding that, the SI and concordance charts seem to be more aligned. The only things stopping them from totally aligning are a) that we are concording to a research sample and not 2014 data, and b) that we don't know the number of test takers that the SI % chart is assuming. Does this still sound right?

Regarding my post #1676: if CB based its concordance table on LAST YEAR's percentile table (not released) - and we know that last year saw several high-scoring states actually increase a point or two - couldn't that explain some of the mystery? If SIs are concording to scores that should be a higher percentile than the table being used suggests, may it not be the case that the table being used is not the correct table?