<p>Siserune - you are confusing the accuracy and fairness of assigning a numerical value to a grade with the mathematical calculation of the average once the value is assigned. The former has all kinds of issues, but that is not the point. The latter has rules, and is not a “design choice”.</p>
<p>Is this true? I would imagine if grades were assigned as, for example, 3.3 +/- 0.1, the average value (i.e. the GPA) would have to be reported as a confidence interval, and not given a definite three-digit value.</p>
<p>And I do not think you can substitute the latter for the former because it “feels right”.</p>
<p>It’s true for conventional grades. Just write out the equations and count significant digits.</p>
<p>3.3 (grade points) ÷ 1 (course grade) = 3.3<br>
33.0 (grade points) ÷ 10 (course grades) = 3.30<br>
333.0 (grade points) ÷ 100 (course grades) = 3.330</p>
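<p>The same counting argument can be checked in exact arithmetic. A minimal sketch (the ten course grades below are hypothetical): with conventional grade points on a 0.1 grid, the average of N courses lands on a grid of spacing 0.1/N, so the extra decimal places in a GPA are determined, not noise.</p>
<pre>
from fractions import Fraction

# Exact arithmetic: each hypothetical grade is known to one decimal place,
# but the average of ten of them is exact to two decimal places.
grades = [Fraction("3.3"), Fraction("3.7"), Fraction("4.0"), Fraction("3.0"),
          Fraction("3.3"), Fraction("3.7"), Fraction("3.3"), Fraction("4.0"),
          Fraction("3.7"), Fraction("3.3")]

gpa = sum(grades) / len(grades)
print(gpa, "=", float(gpa))  # 353/100 = 3.53, exactly
</pre>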
<p>On the other underlying question, I think that siserune and others who think that the whole concept of significant digits doesn’t really apply are correct. To make siserune’s point concrete: FSU and a number of other schools use a system under which A- = 3.75 and B+ = 3.25. There’s no more measurement inaccuracy in that system than in one that assigns 3.7 and 3.3. The equal-spacing model is not inherently more fair; in fact, I think one could argue that the .75/.25 model probably better captures our intuitions of what an A- is (just a little bit below an A).</p>
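<p>A quick sketch of why neither scale is more precise than the other, just differently spaced - the two mappings below are the models described above, and the five-course transcript is made up for illustration:</p>
<pre>
# Two grade-point mappings: equal spacing (3.7/3.3) vs. quarter points (3.75/3.25).
equal_spacing = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0}
quarter_point = {"A": 4.0, "A-": 3.75, "B+": 3.25, "B": 3.0}

transcript = ["A", "A-", "A-", "B+", "B"]  # hypothetical course grades

for name, scale in (("3.7/3.3", equal_spacing), ("3.75/3.25", quarter_point)):
    gpa = sum(scale[g] for g in transcript) / len(transcript)
    print(f"{name} scale: GPA = {gpa:.3f}")
</pre>
<p>The two scales disagree in the second decimal (3.540 vs. 3.550 here), but the disagreement is about the model chosen, not about measurement precision.</p>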
<p>But conventional grades don’t have variance.</p>
<p>And I keep coming back to the original problem - which is that there is a disconnect between the accuracy of the assigned grades (which I acknowledge completely are inexact) and that of the GPA calculated from those grades, which is calculated to a precision not justified by the original assignment (and then used as a rigid threshold). </p>
<p>Although I concede your explanation of why that calculation is correct.</p>
<p>Framing this in terms of math and significant digits turned out to be a red herring. GPAs are wildly imprecise measures of student achievement; they are infinitely precise, in a mathematical sense, measures of what a student’s GPA is. It’s a shame that decisions about scholarships and honors are sometimes based rigidly on the latter instead of the former. Is there anything more to it than that?</p>
<p>Practically speaking, not really. My concern is that it would be even more of a shame (and the height of pedagogic irresponsibility) if that “infinite precision” was determined incorrectly. So I hope you are correct.</p>
<p>Is this true? I would imagine if grades were assigned as, for example, 3.3 +/- 0.1, the average value (i.e. the GPA) would have to be reported as a confidence interval, and not given a definite three-digit value.</p>
<p>Yes, it is true. Confidence intervals could be used or not, but they don’t change the point: if each individual grade carries a confidence interval of width 0.2, then (for independent errors) the interval around the GPA of N courses shrinks roughly as 0.2/√N - for 100 courses, to about 0.02, an order of magnitude smaller. It would therefore still be meaningful to report the center of the GPA interval to 1-2 more digits of accuracy than the course grades themselves.</p>
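<p>The √N shrinkage is easy to check by simulation. A minimal Monte Carlo sketch (all numbers are illustrative assumptions, not real grade data): give each of 100 course grades an independent ±0.1 error and look at the spread of the resulting GPAs.</p>
<pre>
import random

random.seed(0)
N_COURSES, TRIALS = 100, 10_000
true_grades = [3.3] * N_COURSES  # hypothetical "true" per-course values

gpas = []
for _ in range(TRIALS):
    noisy = [g + random.uniform(-0.1, 0.1) for g in true_grades]
    gpas.append(sum(noisy) / N_COURSES)

gpas.sort()
lo, hi = gpas[int(0.025 * TRIALS)], gpas[int(0.975 * TRIALS)]
# Width comes out near 0.02 -- an order of magnitude below the 0.2
# interval on any single grade.
print(f"95% interval on the GPA: [{lo:.3f}, {hi:.3f}], width {hi - lo:.3f}")
</pre>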
<p>And I keep coming back to the original problem - which is that there is a disconnect between the accuracy of the assigned grades (which I acknowledge completely are inexact) and that of the GPA calculated from those grades, which is calculated to a precision not justified by the original assignment (and then used as a rigid threshold).</p>
<p>But the higher precision <em>is</em> correct, as has been pointed out many times by now.</p>
<p>Whether you accept numerical thresholds or the use of GPA within them is a separate issue; maybe you also dislike having a voting age and a drinking age.<br>
Given that a system exists within which GPAs are calculated, and are used in numerical thresholding, it is important to report GPAs not only to the extra digit of accuracy that they naturally possess compared to course grades, but to a second extra digit of accuracy. This is to prevent an over- or under-sensitivity of the thresholded results (scholarships, etc.) to individual course grades, and to avoid lumping too many students at the threshold. That is exactly what the schools do, so they are right again.</p>
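<p>The lumping effect is also easy to see by simulation. A sketch with simulated (not real) GPAs: rounding to one decimal piles a large block of students exactly onto a 3.5 cutoff, while a second or third decimal spreads them out.</p>
<pre>
import random

random.seed(1)
# 10,000 simulated GPAs, clamped to the 0.0-4.0 range
gpas = [min(max(random.gauss(3.2, 0.4), 0.0), 4.0) for _ in range(10_000)]

CUTOFF = 3.5
for digits in (1, 2, 3):
    at_cutoff = sum(1 for g in gpas if round(g, digits) == CUTOFF)
    print(f"{digits} decimal(s): {at_cutoff} students sitting exactly at {CUTOFF}")
</pre>
<p>With one decimal, several hundred of the simulated students land exactly on the cutoff; with three decimals, only a handful do.</p>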