@knowledgeless, I did notice something interesting when we accessed the online PSAT score report. When I clicked on the little “i” for D3’s User Group percentile (which hasn’t changed, btw), I got the following message: “User group percentiles are derived from the previous year’s fall administration of the PSAT/NMSQT, representing 11th grade students who typically take the PSAT/NMSQT. For example, a student’s score in the 75th percentile means that 75% of the user group of 11th grade U.S. students would have had scores at or below that student’s score.”
Pretty sure this is a different message from the previous one, which said that User Group percentiles were based on a research study. Not sure if that’s why your son’s percentiles changed, but it’s weird that they changed the definition like that. Seems like the report is glitching and showing the fall 2016 percentile definition early.
@knowledgeless, I checked the PSAT scores about a month ago, and the report showed only one set of percentiles instead of both the user group and nationally representative percentiles. It’s now back to what it was originally.
@knowledgeless – I see this too on the score report now:
" Fall 2015 percentiles were set based on a research study. Fall 2015 benchmarks were preliminary benchmarks. Benchmarks have been finalized for Spring 2016 assessments and forward, and user percentiles will now be based on administration data. " Abouve for the User Group Percentiles it has this explanation: “User group percentiles are derived from the previous year’s fall administration of the PSAT/NMSQT, representing 11th grade students who typically take the PSAT/NMSQT. For example, a student’s score in the 75th percentile means that 75% of the user group of 11th grade U.S. students would have had scores at or below that student’s score” I have a copy of the "Understanding Your Score Report posted in January & it says this about the User Percentiles: "User group percentiles are derived via a research study sample of U.S. students in the student’s grade, weighted to represent students in that grade (10th or 11th) who typically take the PSAT/NMSQT. "
So it seems CB is now saying the User Percentiles for the Oct. 2015 test are not from a research study but are based on the prior year’s (2014) PSAT administration - perhaps then “concorded” to the new scale? If our percentiles did not go down, that’s good. I am still unclear, though, whether the SI percentiles are finalized or what their status is.
@suzyQ7 our college counselors are not focused on the PSAT at all. They are just focused on the SAT/ACT scores and how they stack up against other kids who are applying and being accepted to the schools on the student’s list. Maybe our counselors are just different, but all our students had to submit a list of potential colleges to them in the spring of junior year.
As others have said, the parents and students on this forum are very knowledgeable about the PSAT, scores, percentiles, etc.
Can’t find D3’s original printout, so no way to check this, but are those benchmarks the same as they were last fall? 390 for EBRW seems very low when you consider how compressed the upper part of the distribution was. Benchmarks aren’t necessarily the midpoint - except that they kinda were in CB’s explanatory materials released in 2015. With a 480 benchmark for the SAT, that means they are expecting a 90-point increase from the PSAT just on the reading/writing portion!
Does anyone have a printed version of the original online report? If so, can you tell us what the PSAT benchmarks were at the time?
The info I wrote above in #4782 about the User Percentiles appears online just below my DS’s scores. The title is: Score Report for PSAT/NMSQT Fall 2015. The highlighted tab is “Report Details”.
@Mamelot If you are asking for the benchmarks from the original online report received in January 2016 for the Oct. 2015 PSAT: the PSAT reading/writing benchmark was 390 out of 760, and the projected SAT benchmark was 410 out of 800. For math, the PSAT benchmark was 500 out of 760 and the projected SAT benchmark was 520 out of 800. Hope that’s what you were looking for. I don’t have the benchmarks for the previous year.
@paveyourpath thanks, that’s exactly what I was looking for. So the benchmarks for the PSAT didn’t change. But it looks like the benchmarks for the SAT DID change, because now they have 480 for EBRW and 530 for Math. So the SAT Math benchmark increased 10 points from the projection, but the EBRW benchmark increased 70! And the expected progression for EBRW (PSAT to SAT) was supposed to be +20 and is now +90!!
Not sure how they are determining what score a “benchmark” should be. In their promotional materials the benchmarks were supposed to be the midpoint of each scale, so 460 for the PSAT and 500 for the SAT. And you could measure progress by the fact that the distributions were supposed to be vertically aligned, so that just by progressing in your college-prep courses you were expected to increase your score by +40 (and hold the same percentile) - that seemed to be the thinking, at any rate. They have a bit of work to do to smooth out the EBRW side - big difference between 90 and 40. The math side looks a bit more reasonable (although higher than they seemed to be expecting initially).
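For anyone who wants to double-check the arithmetic above, here’s a quick sketch. All the figures are the ones quoted in this thread (the Jan. 2016 report and the updated benchmarks); nothing here comes from an official CB table.

```python
# Benchmark figures as quoted in this thread -- not official CB data.
psat_benchmark = {"EBRW": 390, "Math": 500}   # Oct 2015 PSAT report (out of 760)
sat_projected  = {"EBRW": 410, "Math": 520}   # projected SAT benchmark, Jan 2016 report
sat_final      = {"EBRW": 480, "Math": 530}   # finalized benchmarks now shown (out of 800)

for section in psat_benchmark:
    benchmark_shift = sat_final[section] - sat_projected[section]
    old_progression = sat_projected[section] - psat_benchmark[section]
    new_progression = sat_final[section] - psat_benchmark[section]
    print(f"{section}: SAT benchmark moved {benchmark_shift:+d}; "
          f"expected PSAT-to-SAT progression went from {old_progression:+d} to {new_progression:+d}")

# Output:
# EBRW: SAT benchmark moved +70; expected PSAT-to-SAT progression went from +20 to +90
# Math: SAT benchmark moved +10; expected PSAT-to-SAT progression went from +20 to +30
```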
@Mamelot I see what you are saying. I had not noticed the updated benchmarks when I looked to see if the percentiles had changed. You would think CB would send an email advising that they have made changes to the score report and highlight what the changes are. I know…not going to happen.
@paveyourpath I’m wondering if the counselors were told throughout the year how preliminary some of this information was - probably so. But what must they be thinking? Raising your EBRW by 90 points should NOT be the expectation on any test where the scale spans a 600-point range. It’s bizarre. Obviously that PSAT benchmark of 390 is way too low and doesn’t even jibe with their Research Group Mean of 489. Perhaps - hopefully! - no one is using the PSAT right now to set benchmarks and track progress toward “college readiness”. None of this is really pertinent to anyone seeking a National Merit designation, of course, but it really highlights the poor design and/or research testing of the “suite of assessments”. After all, one of the big selling points was that the tests are designed so that you (and your GC) could track your progress toward college readiness. Not so if the benchmark increases 70 points in six months! Can you imagine the conversations in the GC’s office: “Well kid, College Board has just decided that college will be a LOT harder for you to get into…”
Imagine that conversation when the student and his parents were thinking they were in a great place because their percentile and benchmark said they were in the 96th, 97th, etc. percentile. Perhaps that student decided against a lot of prep before the March SAT because they were so “on track”.
Well, fingers crossed for S. He said the SAT this morning seemed easy. Now to wait for July (SAT/AP scores) and September (?) for NMSF. Stinks that we’ll more or less know his confirmation score before he even knows if he has something to confirm.
I am trying to digest Art’s post too, but it seems the May concordance is not very helpful - particularly at higher score levels. Art says, among other things: “I do not see evidence that the May concordance provides students with better information than did the January concordance. For above average scorers, the new concordance paints an overly pessimistic picture of how new scores compare with old. National Merit hopefuls are advised to ignore the May concordance…”
From your first graph, I think I finally understand what you mean when you say that they are forcing the SAT concordance table onto the PSAT. But if you think about it, it makes sense. An underlying claim College Board makes is that your score on the PSAT predicts your score on the SAT, 1 to 1 (maybe with a slight rise over time). Given that claim, if you have a 1:1 conversion from old PSAT to old SAT, and a 1:1 conversion from new PSAT to new SAT, then any new-PSAT-to-old-PSAT concordance is forced: compose new PSAT → new SAT → old SAT → old PSAT, and since both PSAT-to-SAT steps are identities, all that’s left is the SAT concordance. There’s really only room for a single table (a toy sketch of this follows the next post).
I guess we think (hope!) the SAT concordance table is accurate - it seems like it will be actively used in college admissions, so it had better be. If that concordance table doesn’t match the actual PSAT results (which, as you point out, it clearly doesn’t), then the likely conclusion is that the targeted 1:1 correspondence between new PSAT and new SAT didn’t hold. Oops.
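Here’s the toy sketch of that composition argument. The table below is invented purely for illustration - it is not the real concordance - but it shows why the two 1:1 claims leave no room for a separate PSAT table.

```python
# Invented, illustrative new-SAT -> old-SAT table; NOT real concordance data.
TOY_SAT_CONCORDANCE = {700: 650, 720: 670, 740: 690, 760: 710}

def new_psat_to_new_sat(score: int) -> int:
    """CB's claimed 1:1 prediction: your PSAT score predicts your SAT score."""
    return score

def old_sat_to_old_psat(score: int) -> int:
    """The same 1:1 claim on the old scale, read in reverse."""
    return score

def new_psat_to_old_psat(score: int) -> int:
    # Forced composition: new PSAT -> new SAT -> old SAT -> old PSAT.
    # Both PSAT<->SAT steps are identities, so the result is just the
    # SAT concordance -- no freedom exists for a separate PSAT table.
    return old_sat_to_old_psat(TOY_SAT_CONCORDANCE[new_psat_to_new_sat(score)])

print(new_psat_to_old_psat(740))  # 690, identical to the SAT table entry
```

If the actual new-PSAT results don’t line up with what that forced table predicts, the weak link has to be the 1:1 PSAT-to-SAT claim itself, which is exactly the conclusion above.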
To summarize: the new PSAT scores were inflated compared to the old PSAT, while the new SAT scores were only a little inflated compared to the old SAT. This means the new PSAT scores overestimated what you will score on the SAT. But CB didn’t admit that. Presumably in future years it will come back into line…
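To make that overestimate concrete, here’s a toy example. The inflation figures are assumptions picked purely for illustration; nobody in this thread (or at CB) has published exact numbers.

```python
# Assumed inflation at some score level -- illustrative only, not measured.
PSAT_INFLATION = 30   # suppose the new PSAT runs ~30 points hot vs. the old PSAT
SAT_INFLATION  = 10   # suppose the new SAT runs ~10 points hot vs. the old SAT

old_scale_level = 650                                 # a student's "true" old-scale level
new_psat        = old_scale_level + PSAT_INFLATION    # 680 on the new PSAT
predicted_sat   = new_psat                            # CB's 1:1 PSAT-predicts-SAT claim
actual_new_sat  = old_scale_level + SAT_INFLATION     # 660 on the new SAT

print(predicted_sat - actual_new_sat)  # 20-point overestimate of the SAT score
```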