So you would prefer high-school-like tests with 70-80% C problems, with only 20-30% for B and A problems (for the purpose of this discussion, assume no curve grading)? Or perhaps the tricks described in #97 and #99 to rescale the range of a test where B and A problems consume more than 20-30% of the effective possible scores on the test?
It likely differs between colleges and departments within a college (and curve or non-curve grading may exist in both STEM and non-STEM departments). Curve grading is more common in larger classes, because it makes little sense in smaller classes, where it is more likely that the group of students will be significantly stronger or weaker than expected. Also, some instructors doing curve grading for larger classes may only curve passing grades (i.e. D and F grades are assigned on a more absolute scale), or may only curve grades if the test and assignments result in lower scores than intended for a pre-set non-curve grading scale (because the test and assignments turned out to be more difficult than intended).
So it is hard to generalize the use of curve grading across entire colleges or types of subjects.
No.
People have, earlier in the topic. Limiting the number of A’s, weeding out some # of students with the lowest grades (because they are lower than the others’, not because of any objective level of work done on the exam).
I don’t get it either. So far the only explanation that makes any sense to me is that schools that do that lack the resources to take all the students that could succeed, so they drop the lower ones. That is still a situation I think I’d want to avoid.
No objection to this from me.
^You can’t even generalize them across the same COURSE. I have never taught anywhere where some sort of curving policy was specified to us as the professors by the department, the college, or whomever. Even where I teach now, not everyone teaching the same course does the same thing. As I mentioned, I only curve final grades if there seems to be a need. I don’t curve on individual exams, even if the average is 55%. Another professor who teaches 2 of the same courses as I do applies massive curves to his exams so that students can pass by putting their names on the test.
@OHMomof2 Admittedly I haven’t read all of the previous posts, but in my experience, “weeding out” isn’t done to limit the number of kids, nor is it done to eliminate kids who can’t “handle” the work.
From what I’ve seen, weed out classes are more “wake up classes.” The vast majority of kids who get admitted to engineering programs (pc note - or any other program), are academically capable of doing the work. They have the fundamental math skill, the knowledge and the aptitude.
The weed-out classes filter out the kids who aren’t willing to step up to the speed of learning that is required to get through the material, who aren’t willing to put in the practice time with the problem sets. Many of these very smart kids were able to coast through HS with near-straight A’s, never really breaking a sweat. When asked to work hard, they don’t have that desire.
In my opinion, weed-out classes start with the assumption that everyone is capable of doing the work; what they test is who is willing.
This is very unlike my experience. It was the Problem Sets™ that caused terror in the STEM classes I took, at least for most classes at the sophomore level and above. Most took 4-10 hours/week to do (after you knew the material); in one very memorable, very extreme case the problem sets often took 40 hours per week. In the hard classes we were almost never assigned textbook “exercises” to do; they were usually considered too easy and we were supposed to be motivated enough to do a few on our own if we felt the need. In contrast, the “problems” were supposed to really challenge your understanding and build your critical thinking skills by forcing you to apply the material to new situations. (Of course, not all my professors subscribed to this philosophy, but most seemed to).
In comparison, the exam questions were almost always easier than the problems on the problem sets; however, since you only had 2 or 3 hours to get them done the challenge factor was about the same.
Freshman classes are different. The students are still adjusting to college and most are not going to major in the subject. Here, professors have to assign exercises to the students to teach the basic manipulations, since it isn’t reasonable to expect them to select or do exercises on their own. For example, a possible homework for the week would consist of 9 “exercises” and 3 more challenging “problems”. My guess is that students don’t grasp this distinction … they expect to see an exam mostly full of “exercises”; instead, 1/2 or more of it will be at the “problem” level.
@sylvan8798 Sorry. I should have put the two different but related ideas in two different posts.
The “how fair is that” comment has to do with how one can package oneself to appear smarter by picking an easier major. We are in fact rewarding gaming and penalizing effort.
The other idea is really an aside. Even among A grades given in the same class, they are not all the same. It may not show up in a transcript, but it will in standardized testing. Studies coming out of Vanderbilt show that the SAT, given at age 13 to the talented top one percent, can differentiate within that group. Those in the top quarter of one percent are significantly more accomplished than those in the bottom quarter of one percent, even decades later. They earn more doctorates, more patents, more literary publications, etc. The notion that above a certain level scores don’t matter is simply false.
I send my children to college to learn. If they can attend our local cc then transfer to our state school and get an ABET accredited engineering degree, why would I choose a school whose goal seems to be to drum out as many students as they can over the colleges that are committed to teaching them? Are the courses at your college better? Not if both met the standards of the accreditation agencies. Do your engineering students earn more? It doesn’t sound like they do. It sounds like I can save myself thousands of dollars, an enormous amount of aggravation, and have the same (if not better) outcome – a happy, well educated child with an ABET accredited degree and no student loans. That works for me.
“Those in the top quarter of one percent are significantly more accomplished than those in the bottom quarter of one percent, even decades later. They earn more doctorates, more patents, more literary publications, etc.”
Do most people have those goals, though? Not everyone will just up and die if they don’t revolutionize their fields in some way. Many people just want to work hard, be appropriately compensated and enjoy what they do. It’s completely dancing on the head of a pin to separate out the one percenters yet one more time.
“The “how fair is that” comment has to do with how one can package oneself to appear smarter by picking an easier major. We are in fact rewarding gaming and penalizing effort.”
If indeed there is a significant “dumb” factor in the easier major, wouldn’t employers figure it out and thus not repeat the behavior? Or maybe, just maybe, there is not the yawning chasm of a difference that you think.
Of course, the other explanation is that employers don’t need 1000 clones - they need some people who are good at some things and some people who are good at other things. It matters not if the physics major is academically “smarter” than the psychology major if he can’t write a press release or devise a marketing campaign or come up with a better retention strategy for employees.
@austinmshauri - I think you may have missed where it was explained that “weeding out” is about determining drive and effort among the college kids who may have coasted through HS. It would not presumably weed out a qualified kid on the track you describe, any more than a direct-from-HS kid (probably less likely considering more personal maturity).
I have no problem with holding students to a particular standard. My issue is with using the work of other students who happen to be in that class to define the standard. IMO the comparison should be to the work/performance/aptitude/whatever as expressed on the exam. Not to other students.
Actually, many state schools (apparently mainly at the selectivity level of well known schools like Purdue, Minnesota, Virginia Tech, Texas A&M) are capacity-limited in engineering majors. Unfortunately, some (like those named) have chosen to admit frosh to a pre-engineering status where the students must later compete by GPA to enter their majors (i.e. merely passing with C grades and GPA >= 2.0 is not necessarily enough to get into the desired major). Starting at community college does not avoid such need to get a high GPA, since the community college student needs to earn a high GPA to be admitted to the desired four year school and desired major as a transfer student.
Note that having to compete by college GPA (or other criteria after entering college) to enter a major may not necessarily be unique to engineering majors or state schools. An example of a restricted major at a well endowed private school is visual and environmental studies at Harvard.
ABET accreditation sets a high minimum standard, but it is possible for different schools to have different breadth and depth in courses, offerings of courses, and instructional methods within a given ABET accredited major.
Exactly. A forced distribution of scores that pre-specifies a quota of A, B, C grades etc. (the definition of a “curve” that a lot of us object to) punishes people for having smart classmates. It also has a tendency to add noise to the grading process that rewards factors that have little to do with understanding of the material.
So it looks like people are arguing about various different test writing and grading methods that are not the same:
(a) Absolute scale grading with thresholds similar to high school (e.g. A = 90%, B = 80%, C = 70% or similar), with most test questions of the type that C students can answer.
(b) Absolute scale grading with thresholds that can be much lower than high school (e.g. A = 85%, B = 55%, C = 25% or similar), with matching proportions of A, B, and C student questions on the test.
(c) Relative grading (“on a curve”), where test question difficulty need not be as finely calibrated as in (a) and (b).
Which type of test writing and grading methods would you prefer, find acceptable, and dislike?
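To make the distinction concrete, here is a minimal sketch (in Python, with made-up thresholds and quotas, not any particular school’s actual policy) of how (a)/(b) absolute-scale grading and (c) quota-style curve grading could assign letter grades:

```python
# Hypothetical grading sketch -- thresholds and quotas are invented for illustration.

def absolute_grade(percent, thresholds):
    """Absolute-scale grading: the letter depends only on the student's own score."""
    for letter, cutoff in thresholds:      # thresholds listed from highest to lowest
        if percent >= cutoff:
            return letter
    return "F"

SCALE_A = [("A", 90), ("B", 80), ("C", 70), ("D", 60)]   # (a) high-school-like scale
SCALE_B = [("A", 85), ("B", 55), ("C", 25), ("D", 15)]   # (b) much lower thresholds

def curved_grades(percents, quotas):
    """Relative ("curve") grading: a pre-set fraction of the class gets each letter."""
    order = sorted(range(len(percents)), key=lambda i: percents[i], reverse=True)
    grades = ["F"] * len(percents)         # anyone not covered by a quota gets the lowest grade
    start = 0
    for letter, frac in quotas:            # e.g. top 20% get A, next 30% get B, ...
        count = round(frac * len(percents))
        for i in order[start:start + count]:
            grades[i] = letter
        start += count
    return grades

scores = [92, 74, 67, 55, 41, 38]
print([absolute_grade(s, SCALE_A) for s in scores])  # ['A', 'C', 'D', 'F', 'F', 'F']
print([absolute_grade(s, SCALE_B) for s in scores])  # ['A', 'B', 'B', 'B', 'C', 'C']
print(curved_grades(scores, [("A", 0.2), ("B", 0.3), ("C", 0.3), ("D", 0.1)]))
# ['A', 'B', 'B', 'C', 'C', 'D'] -- same scores, but grades now depend on classmates
```

The point of the contrast: under (a) and (b) a student’s grade depends only on their own score, while under (c) it also depends on how many classmates happened to score above them.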
My thoughts:
A is generally fair for most classes.
B is fair for more complex classes where small but significant differences in understanding can lead to substantial differences in absolute scores (e.g. quantum mechanics becomes substantially easier if you are a more effective mathematician).
C is not fair or reasonable in general.
Once again, I completely agree with @NeoDymium .
A - ideal for testing rote building-block skills that students need to be able to do quickly and with a high degree of accuracy - spelling, addition and multiplication, foreign language vocabulary, etc. Also good for assessing whether students have learned basic knowledge that almost everyone should know - how many branches of government there are, etc.
B - good for many classes which form the building blocks for more advanced college classes and/or for which there is a highly standardized curriculum. This includes most high school college prep classes. Includes college classes such as Calculus I, II, III or Differential Equations, where a good portion of the time is spent learning mechanical computation skills. I’d also put some classes such as 1st year Organic Chemistry into this category too. However, I might exclude some college classes primarily designed to train prospective majors in the subject, such as “math for math majors”, “physics for physics majors”, etc.
C - there are many different sub-types here, some of which I think are very bad and some of which are perfectly fine. Regardless, it’s very hard to “finely calibrate” test questions that really assess critical thinking in upper division college students in almost any field. This includes classes in fields ranging from the humanities to advanced STEM classes. For example, I’ve never had a history professor lay out in gory detail a 500 point rubric that distinguishes between an “A” paper and a “B” paper. Not sure why a lot of people are so uncomfortable with this.
Of course, just as important as what grading scale you use is the type and nature of the questions on the exam.
And there are things that don’t fit into any of these categories. I can’t even imagine how you’d assign a non-ordinal numerical score to research done by an advanced undergraduate or graduate student.
I had a roommate who handed in a paper two weeks early and got a B. Later the professor said he’d have given her an A if he’d realized that it was better than most of the other papers. Ouch! Sounds like even humanities/social science profs can be guilty of curving the bad way!
Regarding the class with 40-hour problem sets, mentioned upthread, who the heck thinks this is a good idea? You take more than one of those courses and you are sunk. I have no problem with exams where the median numerical grade is lower than what is typical in a high school course, but I do think that students should be exposed to enough of the problem-solving type of question that they aren’t completely blindsided by the exams.
D’s high school teachers laid out pretty detailed rubrics for research papers. 500 points, no…but easily 50.
I’d be annoyed by that prof @mathmom , unless he graded the later papers accordingly.