<p>I mean that a 750 in Biology is the 82nd percentile, a 750 in Chemistry is the 79th percentile, a 720 in English Literature is the 88th percentile, and a 790 in Chinese is the 43rd percentile. </p>
<p>I am confused about the relationship between score and percentile. Does 750 Biology = 750 Chemistry? Is 720 English Literature < 750 Chemistry? Or is 790 Chinese < 720 English Literature? Could you please explain it? Thanks!</p>
<p>This is just a function of the number of people taking the various tests, and the “self-selection” of those who do. For example, relatively few students take the Physics Subject Test, and those who do tend to be very good at it, so a higher percentage of them score well. In contrast, every student (about a million and a half) takes the SAT I, and they have widely varying levels of ability. So while a 700 on the SAT I might be well into the top ten percent of all takers, a 700 on the Physics SAT II might be only in the 70th percentile of the self-selected few who take that test. I haven’t checked the actual numbers, but that’s the gist of it.</p>
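<p>To put rough numbers on the self-selection point, here is a minimal sketch in Python with invented score distributions (not real College Board data): the same scaled score ranks much higher against a broad pool of takers than against a small, strong, self-selected pool.</p>
<pre><code>
import random

# Invented score distributions, purely to illustrate the self-selection effect.
random.seed(0)

# Broad pool: a wide spread of ability (think: everyone takes the test).
broad_pool = [random.gauss(500, 110) for _ in range(100_000)]

# Self-selected pool: fewer takers, concentrated at the high end
# (think: a subject test taken mostly by strong students).
selective_pool = [random.gauss(680, 70) for _ in range(10_000)]

def percentile_rank(score, pool):
    """Percentage of test takers scoring at or below the given score."""
    return 100 * sum(s <= score for s in pool) / len(pool)

print(f"700 vs. broad pool:         {percentile_rank(700, broad_pool):.0f}th percentile")
print(f"700 vs. self-selected pool: {percentile_rank(700, selective_pool):.0f}th percentile")
</code></pre>
<p>With these made-up distributions, a 700 lands in roughly the mid-90s against the broad pool but only around the 60th percentile against the self-selected one, which is exactly the pattern the percentile tables show.</p>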
<p>The Chinese test is probably a special circumstance. Not only do few students take that test, but presumably a large percentage of those who do are native speakers. Thus a very high percentage of them “ace” the test.</p>
<p>I agree with silverturtle and the percentile argument. I will respond to this using the same post that I used on a different forum:</p>
<p>“I believe that the percentiles associated with the SAT II are much more indicative of performance than the scores themselves. But I do think that the current difficulty and scoring of the tests are fine as they are. All Subject Tests are standardized to assess a body of knowledge appropriate for a particular level. It is just that many appeal to a particular and better-prepared demographic. For instance, the exams in Far Eastern languages (i.e. Chinese, Korean, Japanese) typically have a higher mean score since these are most often taken by native speakers and are languages that are not widely offered as educational opportunities for non-native-speaking students. (That accounts for the relatively lower mean scores for the more commonly taught languages such as Spanish, German, and so on.)”</p>
<p>Assessing students on the basis of their performance relative to others is very useful information for a university. Overall, I feel that standardized testing percentiles are more constructive data than other comparative measures like class rank. </p>
<p>So although the score evaluates one’s overall knowledge, the percentile places the score into its proper context.</p>
<p>Silver, would you mind sharing the names of the colleges? It’s just that I would be shocked if ANY adcom had any inkling of the percentiles on the Subject Tests. Moreover, they just don’t care. A 750+ is a 750+. They do look for breadth, of course. And I believe that they mentally discount the language tests for native speakers – other than URMs.</p>
<p>Yup, and the score – not percentile – tells them that.</p>
<p>It has been quite a few months, but I know that it was at least one of Harvard, Princeton, and Yale whose admissions officer, during the information session, said that the percentiles (which are shared in the Score Reports mailed to colleges) are more helpful than the scaled scores. The officer used Math Level 2 and Literature as an example: 750 is the 94th percentile in Literature but only the 75th percentile in Math Level 2.</p>
<p>The scores provide benchmarks of a student’s understanding of the material on a test. But it is completely unfair to weigh two scores on two different tests equally, given discrepancies in scoring curves, difficulty, and so on. The scaled score simply does not allow performance comparisons across Subject Tests. For instance, a 770 on the Chinese exam corresponds to the 33rd percentile, while a 540 on the Literature exam corresponds to the 32nd percentile. It would be irrational to judge those two results by scaled score alone, since, relative to those who took each respective test, the performances are virtually identical. I too have raised this exact question with college representatives from the University of Chicago, Yale, and the University of Pennsylvania, and all three affirmed that the percentiles are much more indicative of performance.</p>
<p>silverturtle, I’m really surprised that you have heard that from the admissions officers. Shouldn’t the scaled score be more important? Among the different Subject Tests, the percentile-to-scaled-score relationship varies mostly because of differences in the overall ability of the pool of test takers. One’s percentile may be lower because he or she is competing against a pool that is relatively stronger in the subject as a whole, but that does not make his or her own high score any less significant.</p>
<p>Consider the Math I and Math II exams, for instance. A perfect score on Math I would yield a much higher percentile than the exact same score on Math II, yet a high score on Math II, though translating to a lower percentile, is generally thought to be preferable. </p>
<p>The same applies to the Far Eastern language Subject Tests that mifune mentioned previously, as well as, say, the Physics Subject Test. A large number of high school students never take physics, or at least never complete the course in time for the Subject Test, so the people who do choose the physics exam are comparatively stronger in math and science, resulting in lower percentiles. But does the lower percentile necessarily render a high score on Physics less impressive or significant than the exact same score on U.S. History?</p>
<p>anyway, just my opinions. maybe mifune already said something like that in his posts above, but I’m kind of too lazy to read through it all :P</p>