<p>Plot the CA numbers with year on the x-axis and PSAT cutoff on the y-axis. A linear regression through those points would predict 223 for the Class of 2015. But given the scatter in the data (i.e., the data points don’t fall neatly on the regression line), a reasonable high-side guess is 224, and a reasonable low-side guess is 222.</p>
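<p>For anyone who wants to reproduce that kind of fit, here is a minimal sketch in Python. The cutoff values in it are made-up placeholders, not the actual CA history (only the most recent cutoff, 223, appears in this thread), so swap in the real numbers before trusting the output.</p>
<pre><code>
# Minimal sketch of a year-vs-cutoff linear regression.
# The cutoff values are illustrative placeholders, NOT the real CA history.
import numpy as np

years = np.array([2008, 2009, 2010, 2011, 2012, 2013, 2014])
cutoffs = np.array([218, 218, 219, 220, 221, 222, 223])  # placeholder values

slope, intercept = np.polyfit(years, cutoffs, 1)          # simple linear fit
prediction = slope * 2015 + intercept
residual_sd = np.std(cutoffs - (slope * years + intercept))  # rough scatter estimate

print(f"Class of 2015 prediction: {prediction:.1f} (+/- ~{residual_sd:.1f} from scatter)")
</code></pre>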
<p>It’s a GUESS. If I could predict the future, I wouldn’t have to work for a living…</p>
<p>How do you know that reading/writing scores are lower this year?</p>
<p>@GMT – thanks for the analysis! I sure hope you are correct, since I know several people hoping it doesn’t go up much from last year (including me, of course!).</p>
<p>Plotting cut versus year makes a false assumption that scores steadily rise. We all sure hope that’s not the case, and past history has shown dips as well as rises. Those of us who were doing linear regressions previously were plotting cuts against *top scorers* (found on the state summaries). Some states with lower cuts needed to dip down as low as the 65-70 range to get reasonable results, while others could limit their totals to the 70+ top scorers.</p>
<p>P.S.: I’ve only run those regressions for PA, but perhaps someone has done it already for CA. </p>
<p>@PAMom,
From year to year there are ups & downs; that’s just statistical noise. But the overall TREND has been up. </p>
<p>I agree that for long-term projections a linear trend is inappropriate, because there is a 240 ceiling for PSAT scores, but for a projection one year forward a linear trend is probably OK for populous states like CA. In my guesstimate for CA, I acknowledged the noise in the trend; that’s why I gave low, mid, and high projections of 222, 223, and 224.</p>
<p>I have not seen a state summary. Where can those be found? </p>
<p>It’s worthwhile to study trends for your own state, going back a good 5-10 years. Many states do go up and down within a set range. And yes, the scores have been steadily creeping up over time, but a straight linear regression for date versus cut is going to predict that all scores stay the same or rise next year, which is unlikely, given the state summary data for last year. I predict that many states will see a slight drop this year.</p>
<p>The test takers who score in the 75-80 range on the subtests (col7) comprise less than 2% of all the California test takers. The percentage of kids who make it into this group is trending upward with time. This may be the result of the test questions becoming increasingly easier; or the Flynn Effect; or the dumber kids in CA being selectively eaten by wild coyotes; or increasingly more kids prepping for the PSAT. </p>
<ul>
<li><p>If you plot year vs mean score (col4) and year vs cutoff score (col5), and do a simple linear regression for each set of data, you will see that both the mean and the cutoff scores are trending upward with time, but the **cutoff scores are increasing at more than twice the rate of mean scores**. </p></li>
<li><p>The **ups & downs of the cutoff scores** (col9) do not always track the **ups & downs of the mean scores** (col8). There was a streak of 3 consecutive increases (three '+'s in a row) in cutoff scores when the mean scores were bobbing up & down.</p></li>
<li><p>If I apply a constant scale factor to normalize the mean scores (col6) to the last cutoff data point, 223, then the scaled-up cutoff for the Class of 2015 would be 221 :-? , but I think this projection is too low, as per my earlier arguments. Based on the faster-growing trend of the cutoff scores, my prediction for the California cutoff is 223 (+/- 1) (see the sketch after this list).</p></li>
</ul>
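<p>Here is a rough sketch of the two projections described in the list: separate linear trends for the mean and cutoff series, and a constant scale factor anchored to the most recent cutoff of 223. The mean and cutoff values below are placeholders, not the real col4/col5/col6 data.</p>
<pre><code>
# Sketch of the two projections described above. All data values are placeholders.
import numpy as np

years   = np.array([2008, 2009, 2010, 2011, 2012, 2013, 2014])
means   = np.array([149.0, 149.1, 149.2, 149.4, 149.5, 149.7, 149.8])  # placeholders
cutoffs = np.array([218, 218, 219, 220, 221, 222, 223])                # placeholders

# Separate linear fits: with these numbers the cutoff slope is larger than the mean slope.
mean_slope, mean_b = np.polyfit(years, means, 1)
cut_slope, cut_b = np.polyfit(years, cutoffs, 1)
trend_projection = cut_slope * 2015 + cut_b

# Constant scale factor: normalize the mean series to the last cutoff (223)
# and read off the value implied by the projected 2015 mean.
scale = cutoffs[-1] / means[-1]
scaled_projection = scale * (mean_slope * 2015 + mean_b)

print(f"cutoff-trend projection: {trend_projection:.1f}")
print(f"mean-scaled projection:  {scaled_projection:.1f}")
</code></pre>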
<p>DISCLAIMER: my crystal ball for picking stock trends has been miserable, so take this analysis w a HUGE grain of salt :-" </p>
<p>Someone already posted this comment last month:</p>
<p>"I’ve taken all the data from California in 2013 and compared it to the 2012 data. Check it out here for averages, percentage increases/decreases, and some charts. Enjoy!</p>
<p>Looks like the qualifying score will be lower this year!"</p>
<p>Here is my crystal ball prediction of cutoffs for each state, based on an individual analysis of each state’s data. We’ll see in September whether my predictive skills merit derision or worship…</p>
<p>Columns are:
Class of 2008
Class of 2009
Class of 2010
Class of 2011
Class of 2012
Class of 2013
Class of 2014
Class of 2015 prediction with +/- uncertainty
State/region</p>
<p>In between the columns, I have flagged whether the cutoff went up, down, or was flat in the subsequent year. For the Class of 2015 I expect the cutoffs will be flat-to-lower than last year’s, overall. </p>
<p>Note that for states with an asterisk (e.g., Rhode Island), there was a lot of scatter in the data; therefore the uncertainty is +/- 2 instead of +/- 1.</p>
<p>Sure would be nice for us if Texas did in fact not increase. The problem for Texas, though, is that the percentages are very misleading because there was a substantial increase in test-takers. So in certain categories, while there was a decrease in the percentage of test-takers scoring in each range, there was an increase in the actual number of students scoring in that range. </p>
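<p>A toy example of that percentage-vs-count effect (the numbers below are hypothetical, not the actual Texas figures):</p>
<pre><code>
# Toy numbers showing how the percentage in a top score band can fall
# while the raw count still rises when the test-taking pool grows.
takers_2013, pct_2013 = 160_000, 0.020   # hypothetical
takers_2014, pct_2014 = 180_000, 0.019   # hypothetical: lower percentage...

count_2013 = takers_2013 * pct_2013      # 3200 students
count_2014 = takers_2014 * pct_2014      # 3420 students: ...but more students
print(count_2013, count_2014)
</code></pre>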
<p>My regressions don’t use the mean data at all, but rather the total number of kids scoring in those top ranges. IIRC, most of my variations (using various combinations of data) put PA in the 216 range as well. I can’t imagine PA going up, given the data from the past several years. I wouldn’t bet on a drop, but I do think 216 is likely. 217s and above really should be safe (fingers crossed!)</p>
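<p>In case it helps anyone replicate that approach, here is a sketch of a count-based regression: fit past cutoffs against the number of top-range scorers, then apply the fit to this year's count. The counts and cutoffs below are placeholders, not the real PA data.</p>
<pre><code>
# Sketch of a cutoff-vs-top-scorer-count regression. All numbers are made-up placeholders.
import numpy as np

top_scorer_counts = np.array([1450, 1500, 1480, 1550, 1600, 1620])  # placeholders
cutoffs           = np.array([214, 215, 214, 215, 216, 217])        # placeholders

slope, intercept = np.polyfit(top_scorer_counts, cutoffs, 1)
this_years_count = 1560                                              # placeholder
print(f"predicted cutoff: {slope * this_years_count + intercept:.1f}")
</code></pre>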
<p>So I ran this by @PAMom21 in a PM - because I thought I had a stroke of genius but also knew it was possible that I was just lost in the weeds…</p>
<p>Anyway, here is a thought. You know how we are frustrated because we don’t have any way to know if the kids who scored well in one PSAT subject also scored well in another? In other words, did a large percentage of those kids who scored well in M this time also score well in CR and W? No way to know. But it just occurred to me, looking at the state data, that we do have SOME indication because the state data breaks out male and female test takers! I got pretty excited about this today (clearly my paying work was a tad dry today), and decided to look at some numbers. In Texas this time, there was a larger-than-usual discrepancy between the number of males scoring high in M and the number scoring high in CR and W. More males than usual scored high in M (bad news), but fewer scored high in CR and W (good news). So the number of males who could have possibly scored high in ALL subjects is lower. Same pattern with females. </p>
<p>What do you all think? Is there a way to use the male/female breakdown in the state data to add accuracy to our cutoff score predictions? </p>
<p>My analytical skills take me only to the “I wonder” stage, then I’m stuck. :-S </p>
<p>@Barfly Considering males and females separately would give you a little more accuracy than lumping them together as long as the minimum is in two different categories for the genders. However, you still have the unknown that some of the high scorers in the underrepresented category (say CR for males) will not be those that scored high in math and/or writing. I suppose you could assume some percentage for that, but assuming 100% in the minimum category also scored in the highest range for the other two categories would give you the maximum number of scorers that could have scored high in all three categories. And that should be a smaller value that would be closer to the actual number than if you had considered all students together.</p>
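<p>To make that bounding argument concrete, here is a small sketch with hypothetical counts: within each gender, take the minimum of the three per-section counts as a ceiling on students who scored high in all three, then sum across genders. That sum is never larger (and is usually smaller) than the ceiling you would get from pooled totals.</p>
<pre><code>
# Sketch of the bounding argument above. Counts are hypothetical, not real state-summary data.
high_counts = {
    "male":   {"M": 900, "CR": 600, "W": 650},   # hypothetical per-section high-scorer counts
    "female": {"M": 700, "CR": 750, "W": 800},
}

# Per-gender ceiling on "high in all three sections", then summed across genders
split_bound = sum(min(sections.values()) for sections in high_counts.values())

# Pooled ceiling (what you'd get lumping males and females together)
pooled = {sec: sum(g[sec] for g in high_counts.values()) for sec in ("M", "CR", "W")}
pooled_bound = min(pooled.values())

print(split_bound, pooled_bound)   # 1300 vs 1350: the gender-split bound is tighter
</code></pre>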
<p>@STEMfamily, would that even matter unless they do a 50:50 apportionment of male:female semifinalists? Does anyone know whether they do a 50:50 apportionment? </p>