National Merit Cutoff Predictions Class of 2017

@mamelot That meshes with what Art stated in his response on his website: that the scores were being related to the SAT, not to NMSC’s scoring. That’s the only explanation that makes sense, given that a 209 can concord to a 196, which is too low for Commended.

@Mamelot - I like what you’re saying, though I’m not 100% sure I’m following…

So what order do you think things happened in?

  1. They did their research study, came up with predicted percentiles, and then used those to make the tables that convert raw scores into final scores. (Presumably, they were aiming for the final scores to mean about the same thing they used to mean.) Maybe this was even done before they gave the PSAT?!?!?
  2. They gave more real tests - or maybe just one real test - and discovered that kids scored much higher than they thought they would. Ruh roh.
  3. They didn't change their percentile tables, because those are defined by a research study, and by their narrow definition they're not "wrong". But they released a preliminary concordance table which more honestly showed the score inflation. The reason the percentile tables were never marked as preliminary is that they never intended to give any further clarification about the research study. The research study happened, they have the percentiles from it, and that's final. :-)
  4. They gave more tests, and conceivably got more data from kids who took both the new SAT and the old SAT. If their concordance data is mainly from kids who took both, this means those kids scored much higher on the new SAT than on the old.

Does this sound plausible?

If so, it means:
A. They weren’t trying for the new scores to be inflated - it just happened.
B. Neither the concordance tables nor the “percentiles” give what people actually want - which is real percentiles among the people who took the test…

@thshadow

  1. They have the real percentiles; they did not have to use the inflated ones. So that was purposeful. They had close to three months to compile them, from the October administration to the release date in January.

@Mamelot

Sorry to be repetitive, but neither concordance nor percentiles have ANY effect on determining National Merit scholars. It’s just the top scorers that win…

@suzyQ7 - do they ever release the real percentiles, i.e. compared against the people who took the test that year? I think the extra time was spent working on the concordance tables, and they never intended to release a “real” percentile table (of this year’s test takers).

No, but this year was different because they had no old real tests to compare to. How could their “research study” have failed so badly? How could they publish bad numbers in good faith after seeing the actuals? Remember, the purpose of the PSAT is to help prep for the SAT.

It’s convenient (for the CB) that the inflated percentiles made students feel good and drove so many of them to take the March SAT.

They did a horrible, horrible job with this new test rollout - especially the failure to produce a good/accurate research study. The class of 2017 really got screwed by the CB. They refuse to explain it, too. They just want it to blow over. They did not publish this doc on their site even though it’s been out since mid-May. Deceptive company.

@suzyQ it doesn’t matter that NMSC uses different methods to determine SFs. Guidance counselors are supposed to be able to give proper advice based on the results of the student’s PSAT, and that includes understanding where the student truly ranks percentile-wise. That affects not only planning for a potential SF designation but proper college planning overall.

@thshadow Overall I agree with your description in #4681. Just one nuance: my guess is that they actually wanted and expected the percentile tables they reported (User Group and SI percentiles) to be accurate - they weren’t, but there wasn’t much that could be done about that. After all, that research group was the “norm” group for the initial test(s). So yes, they definitely did their research and came up with what they thought was the correct distribution, correct percentiles, etc. Then they administered the actual test and… oopsie. Missed. So they needed to make some subsequent adjustments! Did they change the degree of difficulty of the questions, or change the algorithm that converts raw to scaled scores? Or both? Not sure.

The timing is indicated in CB’s response to the ACT criticism: 1) they conducted two concordance studies - one in Dec. 2014 and the other in Dec. 2015, so those definitely bookended the administration of the Oct. PSAT; and 2) the SAT concordance was not based on actual results but on those studies. I’m assuming those concordance studies were done in conjunction with whatever research was used to curve the test or place percentiles on it. One thing that’s not at all clear is whether they did separate studies for the PSAT and the SAT, or whether they are applying the same results to both. Certainly they are using the same concordance table for both, which means the percentiles have to be the same between the two tests. I’m betting the curves of the two are supposed to be identical (the PSAT curve is shifted to the left by 80 points to allow some “room for improvement” when you take the SAT, but other than that the PSAT is supposed to be just a slightly shorter version of the SAT).

To answer your two questions at the end:

A) Agree. It’s the simplest explanation.

B) None of CB’s results have ever been in terms of real percentiles from the actual test. They’ve always been in reference to a prior year’s test or, in this “initial year” of the revision, a “research study”. However, if my theory about those “preliminary” tables is correct (i.e. they actually DID have to use real percentiles from the Oct. PSAT in order to concord properly), then that’s probably the closest to “reality” that we’ll see from the College Board. Had things gone according to plan, 2015’s actual percentiles might have been reported in the Understanding Scores 2016 report. However, given that actual results were so bizarro, I’m fully expecting that Understanding Scores 2016 will contain research study percentiles - the same ones used to create the just-finalized PSAT (and SAT) Concordance Tables.

@Mamelot but that’s my point: in January the CB published those grossly inaccurate percentiles, which were printed on every PSAT report. And the preliminary concordance tables are also quite different from those, right?

@suzyQ7 - Yes. Not sure whether CB was directing GCs to the percentile tables or to the preliminary concordance tables. I thought the latter (that’s one of the reasons they were created). I believe that those preliminary concordance tables had to be based on actual results, because what other percentiles could they have used besides “norm group” or actual? And we know that “norm group” was way off base.

And here is one more thought:

Another way to explain the difference between the preliminary and final concordances is that the preliminary seemed to be more about what your percentile actually was on the PSAT itself, while the final concordances seem to be more about what your percentile will be on the SAT (barring further preparation). The PSAT percentiles (total, section, and test scores) on the score report may now well be reported in terms of their SAT potential (or “SAT equivalent”). Just another way of saying that CB isn’t focusing on National Merit at this point.

Let’s hope Art & the Testmasters folks can shed further light on what the CB put out in the concordance tables. It would be helpful to get a real look at “final” SI percentiles - are they saying final = preliminary, but the info is still not based on real, live test-takers, so it’s still not meaningful?

“It’s convenient (for the CB) that the inflated percentiles made students feel good and drove so many of them to take the March SAT.”

Has anyone listened to Julie Lythcott-Haims’ podcast Getting In? They had a grandparent of a junior call in and say that the family had been planning for the kid to take the ACT, but her percentile score on the PSAT score report was so high that they’d decided to have her take the new SAT instead. J L-H and the professional college counselors on the show had a few different responses, but none of them even highlighted the difference between the User Percentiles and the percentiles given on the score reports - much less the discordance between both tables of percentiles, on the one hand, and, on the other, the preliminary PSAT concordance tables, the data from various schools that had come out, the 209 Commended cutoff, the NHRP cutoffs that had come out, and the analyses from Compass, Testmasters, Prepscholar, etc. In other words, J L-H and the professional counselors were, in April or early May I think, at the same level of understanding that we all were for the first 3 minutes of looking at our kids’ score reports. They were apparently willing to swallow the score reports’ percentiles hook, line, and sinker - in April or early May!

So yeah, those basically dishonest percentiles are very much having effects in the real world, probably big effects. To the detriment of families who aren’t paying a huge amount of attention, and probably to the short-term benefit of the College Board.

@Lea111 - yes, I recall reports like the ones you’re referencing. This is an interesting recent article, too - it includes commentary about the inflated SAT scores:
https://www.insidehighered.com/news/2016/05/16/act-and-college-board-offer-conflicting-views-how-compare-sat-and-act-scores

@Mamelot I appreciate your analysis, but am having trouble absorbing it:

“Yes. Not sure whether CB was directing GCs to the percentile tables or to the preliminary concordance tables. I thought the latter (that’s one of the reasons they were created). **I believe that those preliminary concordance tables had to be based on actual results, because what other percentiles could they have used besides “norm group” or actual?** And we know that “norm group” was way off base.”

So you think the original concordance was based on the actual PSAT from October? When I convert my son’s total score (1480) from the Oct PSAT using the original concordance, it equates to 226. What does that say about the new “final” concordance, now that it says his 1480 equates to 213? What would that tell his GC?

Are you saying that the 213 is maybe in terms of 213 out of 228, vs. the original 226 out of 240? Then why does the header still say “Redesigned PSAT/NMSQT (2015 and future) to PSAT/NMSQT (2014 and earlier)”?

Also, I can’t wrap my mind around the new equating of a 1520 (perfect score) to a 221/228. How can this be?

This concordance makes no sense and is a HUGE change from the original. Is CB explaining it to GCs? How can they not explain the huge change?

@suzyq7 I don’t think the concordance is meant to make sense in terms of the SI scores, only for equating to SAT scores.

FWIW, I have zero idea whether or not this is logical thinking - but hey, none of it is logical to me. If I take my dd’s new PSAT score, concord it to an old PSAT score, and then multiply that concorded score by 1600/1520, the result is almost (close, but not exact) the new PSAT score. Not sure whether that is meaningful or not.

@suzyq7:

“What does that say about the new ‘final’ concordance, now that it says his 1480 equates to 213? What would that tell his GC?”

It would tell his counselor that he would have scored an old PSAT of 226 and an old SAT of 2130 had he taken those tests last October. Did he take the new SAT yet? What was that score? That is actually the relevant number that should be in front of the GC at this point. If he hasn’t taken the SAT yet, then he knows that he needs to work from a base of 2130 (old), and with time and additional prep he’d score higher than that. If he’s not planning to take the new SAT (either because his old one is fine or because he focused on the ACT), then the 213 isn’t even an issue, because it doesn’t lend any insight into how National Merit will shake out.

“Are you saying that the 213 is maybe in terms of 213 out of 228, vs. the original 226 out of 240? Then why does the header still say ‘Redesigned PSAT/NMSQT (2015 and future) to PSAT/NMSQT (2014 and earlier)’?”

No, the 213 is an “SAT equivalent” PSAT score; i.e. had the student taken the SAT that day, he wouldn’t have gotten 2260 - he would have gotten 2130. The old PSAT didn’t have the strict predictive mapping to the SAT that the new PSAT has, because the two curves were actually distinct from one another. So a 226 PSAT could easily have resulted in an old SAT score of 2130 had that student taken the exam the next day (or a short time later). Not anymore. The PSAT actually IS the SAT now, just a shorter version of it. Your PSAT score IS supposed to be your SAT score had you taken the SAT that day (or the next day, or a week later…). The confusion about these new tables can be easily resolved once you realize that the new PSAT and SAT are perfectly aligned in terms of curves - but when you concord to the old PSAT or old SAT, you have to account for two distinct distributions. I believe we saw the old PSAT distribution in the preliminary concordance tables and are seeing the old SAT distribution in the finalized ones.

“Also, I can’t wrap my mind around the new equating of a 1520 (perfect score) to a 221/228. How can this be?”

Think of it this way: if a student got a 240 on the old PSAT, what is his “best guess” score on the old SAT had he taken that instead on the same day? CB is saying it would NOT have been 2400. It would have been less. So that’s why the tables show a 1520 concordant to a 240 (old PSAT, in the preliminary version) but an effective 2210 (old SAT - the 221 in the final version).

I hope this makes sense!

@Mom2aphysicsgeek at #4692 - HEY! I LIKE that method! Makes SO much more sense than the back-handsprings I’ve been typing.

For my D3:

1470 => 211 (old PSAT, “SAT Equivalent”) => 222 (old PSAT; 211*1600/1520) = “Preliminary” concorded old PSAT score.

I have no idea if this is legit, but I suspect all the stuff I’ve been typing over the last several posts can easily be represented by the above.
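For anyone who wants to try this on other scores, here’s that chain as a minimal Python sketch. The single 1470 → 211 lookup is just the value quoted above (the real final table covers the whole score range), and whether the ×1600/1520 step recovers the preliminary table everywhere is only a guess - e.g. the 1480 → 213 example earlier rescales to 224, vs. 226 in the preliminary table, so “close but not exact,” as @Mom2aphysicsgeek said:

```python
# One entry from the FINAL concordance (new PSAT -> "SAT equivalent" old score),
# taken from the 1470 => 211 example above; the real table is much larger.
FINAL_NEW_TO_OLD = {1470: 211}

def preliminary_equivalent(new_score):
    """Approximate the PRELIMINARY old-PSAT concordance from the final one
    by rescaling the SAT-equivalent score by 1600/1520."""
    sat_equiv = FINAL_NEW_TO_OLD[new_score]   # 1470 -> 211
    return round(sat_equiv * 1600 / 1520)     # 211 * 1600/1520 = 222.1 -> 222

print(preliminary_equivalent(1470))  # 222 -- matches the preliminary table
```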

Is it possible that the way they compute concordance is actually quite simple? Obviously some of the kids who took the new PSAT had also taken the old SAT (and/or the old PSAT). Maybe they just used that data? Then as time goes on, more kids will have taken multiple versions - new PSAT + new SAT, or old SAT + new PSAT, or old SAT + new SAT - so they get more and more data. And that’s why the tables change.
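If that’s roughly what they do, it would be an equipercentile-style concordance: take the kids who sat for both tests, and match up the scores that fall at the same percentile rank in each distribution. Here’s a toy sketch of the idea in Python - the latent “ability” model and every score in it are made up for illustration, and CB’s actual procedure is surely more elaborate:

```python
import numpy as np

# Toy equipercentile concordance built from students who took BOTH tests.
# Every number below is invented for illustration -- this is not CB data.
rng = np.random.default_rng(0)
ability = rng.normal(size=5000)                  # one latent "ability" per student
new_psat = np.clip(np.round(1010 + 170 * ability, -1), 320, 1520)  # 320-1520 scale
old_psat = np.clip(np.round(151 + 26 * ability + rng.normal(0, 8, 5000)), 60, 240)

def concord(score, from_scores, to_scores):
    """Map `score` to the score at the same percentile rank on the other test."""
    pct = 100 * (from_scores < score).mean()     # percentile rank on test A
    return np.percentile(to_scores, pct)         # same percentile on test B

print(round(concord(1470, new_psat, old_psat)))  # new-PSAT 1470 -> old-PSAT equivalent
```

Garbage in, garbage out, of course: a mapping built this way is only as good as the sample of kids who took both versions - which is exactly why the makeup of the research-study group matters.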

The PSAT concordance tables are equivalent to the SAT concordance tables (divided by 10, of course). And CB has maintained that the SAT concordance is based off of two specific studies: Dec. 2014 and Dec. 2015. And NOT the actual March SAT.

For sure they ran analyses using actual testers, but those couldn’t have involved the new SAT. At all. Otherwise they’d be bald-faced lying in that press release.

@thshadow

Mamelot, I think you have it exactly right.

I wonder why CB seems to be allergic to using the data that’s actually relevant, i.e. the other kids who took the same test!! :-)

My son was part of the Dec. research study. He got a 2130 on the old SAT and a 1480 on the new SAT. With the subscores taken into account, the concordances actually say they are equivalent. One reason why the concordances may be off is that a lot of seniors were in the research study (at least at my school). They knew that they wouldn’t get scores till May, after applications were due, so there was no incentive for the seniors to try to do well on the test. The kids just wanted the $50 for taking the test.