AP Tests = Joke?

<p>I simply cannot believe that this is true, for a number of reasons. Since AP Calculus is the only AP course I am familiar with, I will restrict my remarks to it.</p>

<p>(1) The aims of an AP Calculus course and a college-level calculus course are inherently different. By necessity, the goal of an AP Calculus course is to teach calculus well enough for students to do well on a test, and often to prepare them for the AP exam specifically. This happens for a number of reasons, but primarily so that students can earn placement credit at universities, and also so that schools can land their names in periodicals such as Newsweek to make themselves look impressive.</p>

<p>Conversely, the goal of calculus at the collegiate level is to teach calculus to the extent necessary to meet the goals of the particular mathematics department. If a college’s course spent substantial time on delta-epsilon proofs of limits, its students would find themselves over-prepared for what appears on the AP exam. If the Trapezoidal Rule for approximating the area under a curve were not deemed worth covering (especially since Simpson’s Rule does a better job of approximating that area), those students would find themselves at a disadvantage on an AP test.</p>
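<p>To make the parenthetical about accuracy concrete, here is a minimal Python sketch comparing the two rules on a single integral. The function e^x and the interval [0, 1] are my own choices for illustration, not anything from the AP syllabus; the point is only that Simpson’s Rule converges much faster for smooth integrands.</p>

<pre><code>import math

def trapezoid(f, a, b, n):
    """Composite Trapezoidal Rule on n subintervals."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)

def simpson(f, a, b, n):
    """Composite Simpson's Rule; n must be even."""
    h = (b - a) / n
    odd = sum(f(a + i * h) for i in range(1, n, 2))
    even = sum(f(a + i * h) for i in range(2, n, 2))
    return h * (f(a) + f(b) + 4 * odd + 2 * even) / 3

exact = math.e - 1  # integral of e**x over [0, 1]
for n in (4, 8, 16):
    print(n,
          abs(trapezoid(math.exp, 0, 1, n) - exact),
          abs(simpson(math.exp, 0, 1, n) - exact))
</code></pre>

<p>Doubling n cuts the Trapezoidal error by roughly a factor of 4 but cuts the Simpson error by roughly a factor of 16, which is the sense in which Simpson’s Rule “does a better job.”</p>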

<p>The goals of the two groups are (necessarily) different. Although both courses have calculus in common, their emphases may differ, and they differ from college to college. That is very much unlike any high-caliber AP course, where the teacher knows that the Trapezoidal Rule is a must while Simpson’s Rule and delta-epsilon proofs of limits are not, so the latter are covered only at the teacher’s discretion and as time permits.</p>

<p>(2) If the exams were designed to be equally difficult, why would AP exams see such drastically different curves within a relatively short time interval? Do the mock tests really get this so far wrong? If they do, how can they be considered reliable?</p>

<p>For instance, it was announced at the AP Calculus reading in Kansas City this year that the 2007 AP Calculus AB test was the hardest test ever given to AP Calculus students. Ever. And it wasn’t even close.</p>

<p>But 21.0% received 5’s and 25.7% received 1’s, with a mean grade of 2.94. The 2005 exam had a nearly identical distribution, with 20.7% receiving 5’s and 25.2% receiving 1’s, and the same mean grade of 2.94. If the exams were designed to be equally difficult, why are the distributions so similar when the tests themselves clearly were not? Why are the means identical? And why would the raw cut scores need to be adjusted at all?</p>
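<p>The mean grade here is just a weighted average of the five grades. In the sketch below, the shares of 5’s and 1’s are the published 2007 AB figures; the shares of 4’s, 3’s, and 2’s are hypothetical placeholders that I chose only so the shares sum to 100% and reproduce the published mean of 2.94.</p>

<pre><code># Published 2007 AB figures: 21.0% fives, 25.7% ones, mean 2.94.
# The shares of 4's, 3's, and 2's below are hypothetical, chosen only
# to be consistent with those published numbers.
distribution = {5: 0.210, 4: 0.182, 3: 0.203, 2: 0.148, 1: 0.257}

assert abs(sum(distribution.values()) - 1.0) < 1e-9  # shares total 100%
mean = sum(grade * share for grade, share in distribution.items())
print(f"mean grade = {mean:.2f}")  # 2.94
</code></pre>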

<p>We recognize that no two tests can be identical in difficulty, and my conjecture is that the folks at the College Board do not even try to make them so. Rather, my understanding is that the College Board tries to preserve equity in results: a student who earned a solid 5 on one AP exam would almost certainly earn the same solid 5 on a different version of the same exam.</p>

<p>And that’s where the consistency of results on the multiple-choice section comes in.</p>
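<p>The standard psychometric technique for this kind of score linking is equating. The sketch below illustrates equipercentile equating, in which a raw score on one form is mapped to the raw score on another form sitting at the same percentile rank. To be clear, the College Board’s actual procedure is not described in my sources, and the score data here is simulated, so this is only a toy illustration of the general idea.</p>

<pre><code>import numpy as np

rng = np.random.default_rng(0)

# Simulated raw scores (0-108 scale) for two hypothetical forms;
# form B is the "harder" form, so its raw scores run lower.
form_a = rng.normal(60, 15, 10_000).clip(0, 108)
form_b = rng.normal(52, 15, 10_000).clip(0, 108)

def equate(raw_b, scores_b, scores_a):
    """Map a form-B raw score to the form-A raw score
    at the same percentile rank (equipercentile equating)."""
    pct = (scores_b <= raw_b).mean() * 100
    return np.percentile(scores_a, pct)

# A raw 50 on the harder form maps to a higher raw score on form A,
# so the same underlying performance earns the same grade either way.
print(round(equate(50, form_b, form_a), 1))
</code></pre>

<p>On simulated data like this, the raw cut score for each grade lands lower on the harder form, which is exactly the kind of adjustment that would keep the grade distributions so stable from year to year.</p>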

<p>It would be difficult to adjust the score distributions on the current test form immediately, because the purpose of the comparability study is to confirm that the standards are aligned. (Source: “Supporting Students from Day One to Exam Day – AP Central | College Board,” page 4.) If the standards are not aligned, it is not a simple matter of adjusting the score distributions on the current form. There are two reasons for this:</p>

<p>(1) If the current form is not the one used in the comparability studies, then all you have truly confirmed is that the standards were not aligned in the year of the study. Those standards may already have been adjusted since then, or they may be further out of whack than the study indicated.</p>

<p>(2) It takes time to make the kinds of adjustments needed. AP teachers are given two years’ notice of every change to an AP exam so that the change can be incorporated into the curriculum.</p>

<p>For instance, sign charts used to be acceptable as justification for relative minima and maxima on the AP test. I imagine that the comparability studies determined that college professors were not sufficiently impressed by those explanations. (It could also be that the test development committee was not sufficiently impressed and simply created the expectation, nudging the raw cut scores downward slightly so that the same achievement earned the same grade.) But the change did not occur overnight. Rather, it came about as a deliberate course of action by the College Board, which pushed for a statement such as “f has a relative maximum at x = c because the sign of f′ changes from positive to negative at that point.”</p>
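<p>For readers who want the mathematics spelled out, here is a minimal SymPy sketch of the sign-change justification that is now expected. The function is my own example, not one taken from any AP exam.</p>

<pre><code>import sympy as sp

x = sp.symbols('x')
f = 3*x - x**3        # example function (my choice, not from an exam)
fp = sp.diff(f, x)    # f'(x) = 3 - 3*x**2

print(sp.solve(fp, x))                          # critical points: [-1, 1]
print(sp.sign(fp.subs(x, sp.Rational(1, 2))))   # 1: f' positive just left of x = 1
print(sp.sign(fp.subs(x, sp.Rational(3, 2))))   # -1: f' negative just right of x = 1
# Because f' changes sign from positive to negative at x = 1, f has a
# relative maximum there -- the full statement the rubric now rewards,
# rather than a bare sign chart with no accompanying sentence.
</code></pre>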

<p>It is my claim that these comparability studies (while they may identify points of emphasis and topics to add or remove in the future) often do not provide information that goes directly into the scoring for that particular year; rather, they guide the long-term direction of the program.</p>

<p>If you have source information that contradicts this, I would love to see it. I admit that almost all of my information is second-hand, gathered from workshops I have attended in each of the last five years in order to maximize the success of the AP Calculus program at my school.</p>