@mtrosemom - I agree, it would definitely be logical if there were more NMSF, and then the cutdown to NMF was more selective. But I’m not sure if “logic” has anything to do with it, and/or if they can even change the rules at this point…
According to the new concordance chart, my son's PSAT score went down 10 points from his sophomore year to his junior year, when it counts. I find that a little hard to believe.
He says you can’t get enough perfect scorers in California to reach that number.
Remember guys, all this concordance DOES NOT MATTER for NMS. The semifinalists are selected by taking the top scorers up to the number of semifinalists allocated to each state.
NMS does not really have a problem. We have a problem in predicting NMS.
NMS will just sort all the scores from highest to lowest and take the number of semifinalists they need from each state. The only issue they may have is the breakpoint at which they draw the line: a one-point move can add or subtract many students, so perhaps they go over 16K and make qualification more difficult in order to be fair there.
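A tiny sketch of that sorting-and-breakpoint idea. The function, names, and numbers here are all invented for illustration, not NMSC's actual process; the point is just how a one-point move in the cutoff can swing the count by a lot.

```python
# Hypothetical sketch: pick a per-state Selection Index cutoff when the
# number of semifinalist slots (the allocation) is fixed in advance.
def state_cutoff(selection_indexes, allocation):
    """Return (cutoff, qualifiers): the highest SI at which the number of
    students at or above it first meets or exceeds the allocation."""
    counts = {}
    for si in selection_indexes:
        counts[si] = counts.get(si, 0) + 1
    total = 0
    for si in sorted(counts, reverse=True):
        total += counts[si]
        if total >= allocation:
            return si, total
    return min(counts), total  # allocation exceeds the whole pool

# Toy data: 50 students at 222, 120 at 221, 300 at 220, 700 at 219.
scores = [222] * 50 + [221] * 120 + [220] * 300 + [219] * 700
print(state_cutoff(scores, 160))  # (221, 170): 222 alone gives only 50,
                                  # but dropping one point overshoots to 170
```

Notice that there is no cutoff that yields exactly 160 qualifiers here, which is exactly the breakpoint problem described above.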
I assume there is nothing in the documents about methodology? I’m wondering if they used some of the March SAT scores to create this concordance under the theory that the PSAT score is designed to line up with the SAT score. That would also explain waiting until May to release this.
@suzyQ7, while all this is true, the new concordance would imply way too many perfect scorers for any state with a 2014 cutoff >= 221 SI, while anecdotal evidence suggests otherwise. That includes CA, DC, MD, MA, NJ, and VA. There just aren’t enough perfect scorers in 2015, according to Art from Compass, per his comments at
http://www.■■■■■■■■■■■■■■■/national-merit-semifinalist-cutoffs/
Completely agree with Art.
I think Art’s comments here are in line with what I was suggesting: “I suspect that CB has prioritized making the PSAT concordance agree with the SAT concordance and not in making sure that the concordance makes sense from a National Merit perspective (it’s not applicable for NM, since it is not how NMSC will determine qualifying scores).” His comment is at the bottom of this page: http://www.■■■■■■■■■■■■■■■/national-merit-semifinalist-cutoffs/
I agree with you @candjsdad (and Art!) The concordance is no good for NMS. We can just keep waiting until NMS has finished sorting their spreadsheet from top to bottom by state to determine the cutoff.
@candjsdad - I briefly looked through the doc to see if there was anything interesting in the descriptions, but I didn’t notice anything. I did not look very closely, however. I also have a day job…
Can someone give me Art’s email? IM me? I don’t really know how things work on this site.
I’ll email him the PDF…
OK, I just emailed it to Art.
Re Art - you can post a question here:
http://www.■■■■■■■■■■■■■■■/national-merit-semifinalist-cutoffs/
I don’t have his email address
@thshadow 1420=209 and @itsgettingreal17 1470=209 My D has a 1370 (730V/640M) with a 210 SI. There is quite a range equating to a 209. Am I correct in my understanding that ONLY the SI from the PSAT matters to NM? I know my D won’t go beyond commended but I’m thankful she got that.
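For what it's worth, the SI arithmetic in that post checks out. A quick sketch, assuming the usual Selection Index formula (SI = 2 × the sum of the three test scores, each 8–38, which works out to EBRW/5 + Math/10 when computed from the two section scores):

```python
# Sketch of the PSAT Selection Index arithmetic (my understanding, not an
# official formula). EBRW section = (Reading + Writing test scores) * 10,
# Math section = Math test score * 20, and SI = 2*(R + W + M), so:
#   SI = EBRW/5 + Math/10
def selection_index(ebrw, math):
    return ebrw // 5 + math // 10

print(selection_index(730, 640))  # 210, matching the 1370 (730V/640M) -> 210 SI above
```

This also shows why a wide range of total scores can share one SI: the verbal side is weighted double, so a 1420 and a 1470 can both land on 209 depending on the V/M split.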
@thshadow OK, after messing around with the tables: say a 34R (concords to a 64) + 33W (concords to a 60) + 37.5M (concords to a 72). That’s a 209 SI on the current scale, enough for commended, but only 196 concorded (200-202 is about average for commended).
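Checking the arithmetic above: the current-scale SI doubles the sum of the three test scores, while the concorded old-scale SI sums the concorded section scores directly. The concordance values are just the ones quoted in the post, not an official table.

```python
# Current scale: SI = 2 * (Reading + Writing + Math test scores)
reading, writing, math = 34, 33, 37.5
current_si = 2 * (reading + writing + math)
print(current_si)  # 209.0

# Concorded old scale: sum the concorded section scores (values as
# quoted in the post above)
concorded = {"R": 64, "W": 60, "M": 72}
old_si = sum(concorded.values())
print(old_si)  # 196
```

So the same student sits comfortably above a typical commended line on the current scale but several points below it after concording, which is the discrepancy being complained about.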
It looks like this final PSAT concordance is the same as the SAT’s. Other posters noticed this earlier. They used a December 2015 research study to come up with the SAT concordance, and that of course was AFTER the October PSAT. Methinks the “preliminary” concordance might have been based on actual percentiles in order to make sense for the 2017 NMSQT competition, while this “final” concordance is designed for subsequent PSATs this year. It won’t matter in a year, of course. This is as transitional a year for the PSAT as it is for the SAT.
I wouldn’t get too worked up over the revised concordance tables for NMSF purposes. It looks like they go too far the other way. Remember, we have ample anecdotal evidence suggesting that the cutoffs will be on the higher side. But good grief, there’s no way 16,000 students got a perfect score – not even close to that.
Take a look again at the Testmasters data for Texas (link below), which has 10,000 (or at least 8,000) real scores and look at the graph/chart. It’s clear that based on that large sample, TX will likely be SI 219.
Ugh, this is why I haven’t said a peep to S about NMSF predictions. As we say at our house “This can only end in tears.”
Here’s a possible explanation of what is going on with these new tables:
If you take your SAT score and concord it using the PSAT tables you get the same answer as if you concorded using the SAT tables (off by a magnitude of 10, to be sure, but otherwise they concord the same). My D3’s 1500 from her SAT concords to a 217 old PSAT. It also concords to a 2170 old SAT. And her 1470 PSAT concords to a 211 old PSAT and a 2110 old SAT.
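That factor-of-10 pattern can be checked directly. The table entries below are just the four data points quoted in the post (not a full concordance table):

```python
# Data points quoted above: a new-scale score concorded through the PSAT
# tables equals the same score concorded through the SAT tables, / 10.
psat_concordance = {1500: 217, 1470: 211}    # new score -> old PSAT scale
sat_concordance = {1500: 2170, 1470: 2110}   # new score -> old SAT scale

for score in psat_concordance:
    assert sat_concordance[score] == psat_concordance[score] * 10
print("tables agree up to a factor of 10")
```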
So now we know what they mean when they say that the PSAT score "is what you would have received on the SAT had you taken it that day . . . ". They meant that - literally! At least for this initial year, they have based the percentile tables applicable to both the SAT and the PSAT on the same research study - most likely the one from December 2015, AFTER they were able to score the October PSAT and see how messed up the curve was. It’s not clear that they will continue to do so in the future (hopefully each test will have its own historically derived set of percentiles), but in order to get them to match at the outset they decided to use the same curve for both tests.
The reason why the “Preliminary” concordance was so different from the percentiles reported in the Understanding Scores Report is probably because they used actual October PSAT percentiles to form the concordance tables! This most likely wasn’t their Plan A. When the results came down, they were so very different from the prior research study percentiles that they had no choice. Nothing else can explain the accuracy of those “preliminary” concordance tables (for instance, predicting the commended #) other than that they used actual percentiles this one time. They had to. There was an academic competition with big decisions and mega dollars involved - and bad press would have been the minimum amount of headache for CB had they released “preliminary” concordance tables that didn’t match reality. Imagine guidance counselors and others giving advice to juniors based on that Page 11 Percentile Chart!! Instead, they were instructed to rely on the preliminary concordance.