<p>I noticed that there are various discussion threads here on College Confidential that mention colleges' reported test score ranges. The National Association for College Admission Counseling (NACAC), </p>
<p><a href="http://www.nacacnet.org/MemberPortal/">http://www.nacacnet.org/MemberPortal/</a> </p>
<p>the federal Department of Education Integrated Postsecondary Education Data System (IPEDS), </p>
<p><a href="http://nces.ed.gov/ipeds/">http://nces.ed.gov/ipeds/</a> </p>
<p>and the Common Data Set Initiative </p>
<p><a href="http://www.commondataset.org/">http://www.commondataset.org/</a> </p>
<p>have collaborated to set common standards for colleges gathering data about admission characteristics of their applicants and reporting data about their enrolled classes each year. </p>
<p>I happened to write about this in a July 8th, 2007 email to two private email lists that include readers of these CC forums. As I wrote then, "By National Association of College Admission Counselors (NACAC) Statement of Principles of Good Practice,</p>
<p><a href="http://www.nacacnet.org/NR/rdonlyres/9A4F9961-8991-455D-89B4-AE3B9AF2EFE8/0/SPGP.pdf">http://www.nacacnet.org/NR/rdonlyres/9A4F9961-8991-455D-89B4-AE3B9AF2EFE8/0/SPGP.pdf</a> </p>
<p>and by the actual practice of the Common Data Set, colleges report only interquartile ranges for each section of the SAT." The NACAC principle reads like this: </p>
<p>The Common Data Set Initiative instructions read: </p>
<p>However, as I noted in my July email, "Both the Education Trust college profiles </p>
<p><a href="http://www.■■■■■■■■■■■■■■■■■■/default.htm">http://www.■■■■■■■■■■■■■■■■■■/default.htm</a> </p>
<p>and the U.S. News profiles </p>
<p><a href="http://colleges.usnews.rankingsandreviews.com/usnews/edu/college/rankings/rankindex_brief.php">http://colleges.usnews.rankingsandreviews.com/usnews/edu/college/rankings/rankindex_brief.php</a></p>
<p>suffer from a common methodological error: college median SAT scores are reported by summing the ASSUMED median verbal score (actually the standard score halfway between the 25th percentile and 75th percentile score) and the ASSUMED median math score (same wrong definition of ‘median’)." </p>
<p>Can you all see at once why summing the 75th percentile critical reading score for a whole entering class with the 75th percentile math score of that same group may (and probably does) overstate the 75th percentile of the (unreported) composite scores of that group? In case this doesn’t go without saying, I’ll post here what I wrote back in July: “But most test-takers have a strong area, either critical reading or math, and thus summing scores from the two sections considered individually probably OVERstates the combined scores of most students at most colleges. At least this error is systematic across all colleges, so that their rank order based on these figures is largely unaffected.” The only way to be 100 percent sure, of course, of the composite score distribution in a particular college’s entering class would be for the college to report it, based on the actual figures received by its admission office, but NACAC discourages that. Some colleges report median composite scores anyway, in disregard of NACAC principles. Those reported medians should NOT be assumed to be comparable to the calculated medians of composite scores derived by the U.S. News or Education Trust methodology. </p>
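<p>To make that concrete, here is a minimal simulation sketch in Python. The class size, score means, and the 0.6 correlation between sections are invented purely for illustration; none of this is any college's actual data. It just shows why summing section percentiles, or taking the midpoint of the 25th and 75th percentiles, is not the same as computing percentiles of the real composite scores.</p>
<pre><code>
# Hypothetical illustration (invented numbers, not real college data):
# simulate correlated critical reading and math scores for an entering class,
# then compare the "sum the section percentiles" shortcut with percentiles
# of the actual composite scores.
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # imaginary entering class

# Assume section scores are positively but imperfectly correlated (rho = 0.6),
# centered near 650 with an SD of 70, rounded and clipped to the 200-800 scale.
sd = 70
cov = [[sd**2, 0.6 * sd * sd],
       [0.6 * sd * sd, sd**2]]
scores = rng.multivariate_normal([650, 650], cov, size=n)
scores = np.clip(np.round(scores / 10) * 10, 200, 800)
reading, math = scores[:, 0], scores[:, 1]
composite = reading + math

# What the Common Data Set would show: section-by-section quartiles.
r25, r75 = np.percentile(reading, [25, 75])
m25, m75 = np.percentile(math, [25, 75])

# The shortcut used by the rankings: sum the section figures.
shortcut_75 = r75 + m75
shortcut_median = (r25 + r75) / 2 + (m25 + m75) / 2  # "assumed median"

# What the shortcut is standing in for: percentiles of the real composites.
true_75 = np.percentile(composite, 75)
true_median = np.median(composite)

print(f"summed section 75th percentiles:    {shortcut_75:.0f}")
print(f"actual composite 75th percentile:   {true_75:.0f}")
print(f"'assumed' median (midpoint method): {shortcut_median:.0f}")
print(f"actual composite median:            {true_median:.0f}")
# Unless the two sections are perfectly rank-correlated, the summed section
# 75th percentiles come out at or above the 75th percentile of the composites.
</code></pre>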
<p>It’s sufficient, of course, to look at the interquartile ranges to see if a college has room in its enrolled class for a few more peak-scoring applicants. And once a student wraps his or her mind around how to read interquartile ranges reported for each test section, it is really much more helpful for the student’s planning to know those ranges than only to know a (possibly incorrect) median composite score for a college to which the student may apply.</p>
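<p>Reading the ranges this way is simple enough to script. Here is a toy sketch (all figures invented, not any real college's Common Data Set numbers) of comparing a student's section scores against a college's reported 25th/75th percentile range for each section:</p>
<pre><code>
# Hypothetical helper: place a section score relative to a college's
# reported 25th/75th percentile range. Example ranges below are invented.
def placement(score, p25, p75):
    """Say where a section score falls relative to the reported range."""
    if score > p75:
        return "above the 75th percentile"
    if score < p25:
        return "below the 25th percentile"
    return "inside the middle 50 percent"

# Invented Common Data Set-style figures for an imaginary college.
reported = {"critical reading": (600, 700), "math": (620, 720)}
student = {"critical reading": 720, "math": 680}

for section, (p25, p75) in reported.items():
    print(f"{section}: {placement(student[section], p25, p75)}")
</code></pre>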