How ETS computes the scoring curve and the deal with short passages...


<p>To clear up all the confusion that has abounded on this board lately about these issues, I have decided to write up this post. I apologize if this issue has already been discussed and explained thoroughly before. I hope you find it informative!</p>

<p>First of all, College Board HAS warned us about short passages on the verbal section before, so it was perfectly legitimate for them to include them on a scored section. Unfortunately, they warned students only in the 2003-2004 Taking the SAT booklet, which not all students read THOROUGHLY (if they read it at all). If you look closely, you'll find short passages discussed briefly there. I made sure to warn all of my students NOT to assume automatically that short passages are experimental, because I knew that College Board is evil, and that short passages appeared on scored sections last November and this June. As a matter of fact, I told my students that they would probably appear on the November exam, and they indeed did. So watch for them in December and/or January, too!</p>

<p>Now, here's the deal with the scoring curve. First of all, to debunk a popular myth: the curve is NOT dependent on the population that takes a particular administration. It is PREDETERMINED, and designed so that scores are directly comparable across different administrations, from month to month and year to year. A 500 from 1996 should, in principle, mean exactly the same performance as a 500 today. No one outside ETS really knows for sure how they construct the curve, but here's my best guess, based on my experience and the scattered information I've obtained from various sources:</p>

<p>ETS first tries out future questions on an experimental section. Questions that fail to distinguish lower-scoring students from higher-scoring students, that appear to be biased against certain groups, that are too difficult (e.g., only 1% answer correctly), that are ambiguous, etc., are thrown out or sent back for re-editing. To keep outliers from ruining the data, ETS has guidelines for discarding a particular student's experimental-section results. (Outliers include students who KNOW the section is experimental, blow it off, and score badly on it while doing fine on the scored sections, as well as students who -- gasp! -- cheat on the exam, do quite well on the other sections, but bomb the experimental section.)</p>

<p>The questions that pass all these initial filters are then analyzed for their characteristics, particularly difficulty. Each question's actual difficulty on that administration is standardized against a standard testing population (say, all the students who score a 400, a 500, a 600, and so on), and the question is assigned a standard difficulty rating. Once a question's standard difficulty is determined, it enters a pool of questions that may appear on a future exam.</p>

<p>Exam writers then draw from this pool for each exam, following strict distribution requirements (a certain number of questions of each question type and topic) and at least rough difficulty distribution requirements (roughly so many easy questions, so many hard questions, etc.). After an entire test is assembled, the scoring scale is computed using a formula, possibly a complicated one, that produces the raw-score-to-scaled-score conversion from the standard difficulty ratings of all the questions on the exam.</p>
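<p>To make the idea concrete, here is a toy sketch of how a scoring curve could fall out of per-item difficulty data. This is purely my own illustration, NOT ETS's actual procedure: the items, score levels, and probabilities are all made up, and real equating methods are far more sophisticated.</p>

```python
# Toy illustration (hypothetical data, not ETS's real method) of deriving a
# raw-to-scaled conversion from per-item difficulty on a reference population.

# For each question: the fraction of students at each scaled-score level
# (400, 500, ..., 800) who answered it correctly during pretesting.
items = [
    {400: 0.85, 500: 0.92, 600: 0.97, 700: 0.99, 800: 1.00},  # easy item
    {400: 0.40, 500: 0.55, 600: 0.75, 700: 0.90, 800: 0.97},  # medium item
    {400: 0.10, 500: 0.20, 600: 0.40, 700: 0.65, 800: 0.90},  # hard item
]

def expected_raw_score(items, level):
    """Expected raw score at a scaled-score level: the sum of each item's
    probability of being answered correctly at that level."""
    return sum(item[level] for item in items)

def build_conversion(items, levels=(400, 500, 600, 700, 800)):
    """Map each scaled-score level to the expected raw score of a student at
    that level. Inverting this table (raw -> scaled) yields the 'curve': a
    test assembled from harder items produces lower expected raw scores, so
    the same raw score converts to a HIGHER scaled score on a harder test."""
    return {level: expected_raw_score(items, level) for level in levels}

curve = build_conversion(items)
for level, raw in curve.items():
    print(f"scaled {level}: expected raw {raw:.2f}")
```

<p>Note that the table can be computed before the test is ever administered, since it depends only on the pretested item data, which matches the point above about the conversion chart being known in advance.</p>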

<p>Therefore, they know to a high degree of certainty what the correct scaled score conversion chart should be for an exam BEFORE they even administer it. As a final check, however, ETS has a few thousand students around the country take previously administered questions (whose characteristics are already known) as part of the "experimental" section, so that they can detect any minor flaws and fine-tune the scale if necessary.</p>

<p>The conclusion is that there is ordinarily no advantage whatsoever to taking the exam on one date versus another. If you are not the most careful test-taker but have no problem handling tough questions, however, it may be best for you to take a hard test (on which a few mistakes can be forgiven to some extent) rather than an easy test (on which you have to be nearly perfect to score very high). An easy test might be best for you if you are VERY careful but can be stumped by a few extra-hard questions.</p>

<p>Hope this helps!</p>

<p>Master SAT tutor (I'm quite modest)</p>