TA confession: I'm sorry, but most of your children (my students) are average

High score a 70? Major teaching failure.

Well, I think the students’ complaining stems from the fact that they didn’t understand the level of difficulty of the test or the requirements for an A. When you create a test with the goal of obtaining a normal distribution of scores, your expectation is that most of the students will receive C’s, some will receive B’s or D’s, and there will be the occasional A or F. I wonder if students were aware of this expectation going into the exam.

Unlike the OP, I don’t believe that a test with a normal distribution of scores is an “ideal,” fair test. What is fair is constructing a test that accurately measures the students’ mastery of the material. Especially when you are working with a group of exceptionally strong, motivated, well-prepared, hard-working students, it is not unreasonable to expect that most of them (with some exceptions) will achieve a solid mastery of the material, not only those who are three standard deviations above the mean. Under these circumstances, if more than half of the class gets an A, that does not mean grade inflation; it just means that more than half of the class really “got it” – and shouldn’t it be our goal and expectation as instructors that most of our students “get it”? If I am not mistaken, this is the philosophy behind AP tests: they are constructed to measure a certain level of mastery of the material, and on some tests only 5% or so of students get 5’s, while on others the number is closer to 50%.

On the other hand, I can easily create a test where the scores will be normally distributed and only a few students will get A’s. This is achieved by including some tricky questions that are difficult to solve under the time pressure and the overall stressful conditions of an exam. Typically, the students who solve these questions are those who have simply seen the “trick” before, outside of the given course (or, less commonly, students of truly extraordinary ability). A student can have a solid, A-level mastery of the material and still be unable to answer the tricky questions in an exam setting. These students will be upset to learn that they have only received a C, and I don’t blame them.

Major teaching failure? Have you considered that the course may have been more rigorous with higher expectations than the average?

I grew up in one of those Commonwealth countries that use the same 100-point scale the University of Toronto does. In high school, an 80 average conferred honours (sic). In college, an 85 average could get you into a top graduate school in the U.S. We never thought the grading system was Draconian. About the only complaint I ever heard was from a roommate who felt I had it easier as a Mathematics major, since getting the correct answer was all that mattered, whereas there were “style points” in all his English and History courses. (He was not a garden-variety whiner; he was a Rhodes Scholar nominee.)

In the U.S., we have grade inflation and grade compression. The latter is particularly frustrating given the use of a 10- or 12-point scale. Imagine a cross-country race in which the official timer recorded minutes but not seconds. Everyone who finished the race with a time between 25:00.0 and 25:59.9 would be recorded as having taken 25 minutes. That would be bizarre in the extreme, yet that is exactly what we do with grades. We have students receiving the same A whose mastery of the material is markedly different. When they apply for graduate school, they look the same on paper. What happens then is a complete crapshoot based on three letters of recommendation. Throwing away information in this fashion is hard to fathom. In no way am I saying to take the student with the 86 average over the one with the 85, but it is unfair and unproductive to give a student who gets a final grade of 94 in a class the same A as the one who got an 86, which is what I have to do on occasion.
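To make the compression concrete, here is a minimal sketch of the bucketing that does the damage. The cutoffs are hypothetical, chosen only to mirror the example above where an 86 and a 94 both end up recorded as the same A; they are not any particular school’s policy.

```python
# Minimal sketch: hypothetical letter-grade cutoffs, chosen to mirror the example
# above where an 86 and a 94 both end up recorded as the same "A".
def letter_grade(score: float, a_cutoff: float = 85, b_cutoff: float = 70) -> str:
    """Bucket a 0-100 score into a coarse letter grade (illustrative cutoffs only)."""
    if score >= a_cutoff:
        return "A"
    if score >= b_cutoff:
        return "B"
    return "C or below"

for score in (94, 86, 85, 84):
    print(score, "->", letter_grade(score))
# 94 -> A, 86 -> A, 85 -> A, 84 -> B: three clearly distinguishable scores collapse
# into one symbol, while the 85/84 pair lands on opposite sides of a one-point boundary.
```

The sketch is exactly the cross-country timer that records minutes but not seconds: the information exists at grading time and is simply discarded at the transcript.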

When my own kids went to college, I would often hear something like the following: “I will get an A in the class even if I get a 50 on the final.” Of course, I am happy to hear that the semester has gone well, but note that this reality takes away part of the incentive to achieve excellence. (One of my kids’ schools does give an internal A+, which does help, but many schools do not.)

At this point, one might be concerned about making fine distinctions. Is an 85 on an essay or an exam really different from an 86? The two may not differ in any statistically significant sense, yet we routinely give one student an A and the other an A- on the basis of exactly that one-point difference. It is not clear why {87, 86, 85, 84} is undesirable but {A, A, A, A-} is not.

Well, hopefully they are doing more, but the flyer in our mailboxes and the message on the parent neighborhood Facebook page gave no indication that they were doing anything more than collecting these items. And regardless, IMO, they shouldn’t expect the donors to do the work.

I’ll ignore the fact that the “century of studies” that were analyzed primarily involves data from generations ago, before the types of jobs we usually emphasize on this forum existed, using a cognitive test that is quite different from the SAT, with sections for things like finger dexterity.

That said, the first study you referenced found that a GMA test was not a better predictor of job performance (not job-training success) than things like work sample tests or structured interviews, which conflicts with your earlier assertion that employers “do not need to test for specific skills.” And the second study found that among new grads (the focus of this discussion), the work-performance correlation was much lower than the overall numbers above from the first study.

Perhaps most importantly, they are just looking at an isolated correlation with a single factor rather than at how much it adds to the performance prediction beyond the criteria an actual employer would already use in hiring decisions. The predictive value drops tremendously in a real-world situation like that. The first study acknowledges this limitation, noting that employers sometimes use more than two selection methods in hiring decisions. I’m not sure how hiring was done generations ago, but for the types of modern jobs we focus on in this forum, it’s common to use far more than two criteria – interview, references, background check, past experience, GPA screen, etc. Yes, testing has some utility in a variety of situations, but it’s not the be-all and end-all of hiring decisions, as you have implied.

coarse … You said what I was trying to say in post #273, only you said it so much better.

Fleur007 said “tricky” questions are required to ensure that only a few students achieve an A. I think the questions that are able to separate the good from the great are actually questions that require students to apply what they know to something novel. They need to go beyond memorizing and spitting out what they heard in a lecture to actually being able to extend the applications to something unknown. Professors typically aren’t trying to trick students; they are trying to use tests as another vehicle to teach, as well as a way to distinguish exceptional students from merely proficient ones. Proficient shouldn’t earn a top grade, and proficient isn’t mastery.

High score a 70? GREAT test. It separates the best students from the average students from the not so good students.

^ it’s a 9th grade class.

The situation is VERY different depending on what grade we’re talking about. It’s an exaggeration to say a 70 high score means a great test, because we really don’t know that (even in college, the “norm” used nowadays to separate students is a B, not a C, and however much we may wish ill on grade inflation, it’s not changing anytime soon, so that’s the reality we live in). But in the case cited, it was a 9th grade class.
Either (1) the students are still getting used to high school, still functioning at a middle school level, not doing the work properly, and the test was supposed to wake them up,
or (2) the teacher isn’t used to 9th graders.
Most 9th grade tests early in the year mostly test who’s done the work and understood the homework. They’re basically a redo of the homework, with a bit of “apply what you know to new situations,” but barely. Remember, these students are 2-3 months into 9th grade; they’re 13-14.
College is, obviously, a completely different situation. :slight_smile:

In the case of the ninth grader, he received a 70 on his bio test. There is NO indication that 70 was the high score, average score or low score. Aside from the parent dissecting the study guide, we really don’t have any more information about the grades, teacher or class.

@cfsmap, I certainly agree that exams should include questions that require students to think and to go beyond spitting out memorized information. However, what do you do when you have a class of very capable students where many if not most of them will be able to perform at this level, while you are still striving to achieve a normal distribution of scores with only a few A’s? What sometimes happens, and I am speaking from my own observations, is including a question that requires knowing some obscure formula not mentioned in class, or a novel technique that is extremely difficult to come up with on your own in an exam setting (and which some students might have seen elsewhere, giving them an advantage). This is what I mean by “tricky”.

I am just not in favor of a bell curve at all costs. Set the criteria for each grade level. For an A, this will most definitely include going beyond memorization, thinking critically, problem-solving, applying what you have learned in a new situation, etc. Then do give A’s to the students who meet these criteria, even if it happens to be half of the class.

Fleur007 … As stated, there is a huge difference between high school and college. The students in your son’s bio class will likely have enough homework grades or get enough extra credit opportunities so they end up with an A or a B at the end of the term.

But, going back full circle to the OP and the follow-up post #28: implicit in some of your comments is a belief that all the students in the class are capable of being top performers, and that the failure was not that the students didn’t learn the material but that the teacher failed to teach it. The study guide was bad and the students worked hard preparing for the test, so they should have done better. The opportunity to retake the test is viewed as a penalty rather than as an opportunity to actually learn the material – the focus is on the grade, not the education. These are all things the OP was frustrated to see in college students.

We have all bought into the Lake Wobegon notion that “all the children are above average.” I think it’s harmful to society and harmful to individual students. Being the best is a combination of ability, work ethic, and passion. In college, only the best should get an A.

Imagine we were to assemble a class of Nobel Laureates in Physics who volunteered to sit for a physics exam. Now suppose the goal is to force a normal curve on this group, and an exam is devised to achieve just that. (It is not that hard to do.) Who here would agree that imposing a normal curve on a group of elite physicists is conducive to the motto that only the best should get an A?
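For concreteness, here is a minimal sketch of what “forcing a curve” means mechanically. The z-score cutoffs below are illustrative assumptions (not any instructor’s actual policy, and not the only way to curve); the point is only that the letter fractions are fixed in advance, regardless of who is in the room.

```python
import statistics

def curve_grades(scores):
    """Assign letters from z-scores so that roughly fixed fractions get each letter,
    no matter how strong the group is in absolute terms (illustrative cutoffs only)."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    letters = []
    for s in scores:
        z = (s - mean) / sd
        if z >= 1.0:          # roughly the top ~16% under a normal curve
            letters.append("A")
        elif z >= -0.5:
            letters.append("B")
        elif z >= -1.5:
            letters.append("C")
        else:
            letters.append("D/F")
    return letters

# A hypothetical room of laureates, all scoring 88-97 in absolute terms,
# still comes out mostly B's and C's, with one "failure", under this scheme.
laureate_scores = [97, 96, 95, 95, 94, 94, 93, 92, 91, 88]
print(list(zip(laureate_scores, curve_grades(laureate_scores))))
```

Under assumed numbers like these, only one laureate gets an A and one effectively fails, even though every score reflects near-total mastery. That is the sense in which a forced curve measures rank, not mastery.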

The class as the OP described comes across as one of the toughest classes in an elite college, where only the very best are participating. To impose some B’s and C’s because some B’s and C’s must be given out is just plain stupid in such a situation, in my opinion. It puts grades above education.

I would also like to make a point about the attitude of the OP. Many decades back, I was a TA myself, in a rather difficult course in a rather elite school, where the professor insisted on testing and grading very harshly. (I believe some people in academia were simply not hugged enough when they grew up, or didn’t get enough dates in high school and college, or something like that.) The students were distraught, and I talked to each one of them and tried to cheer them up. There was not much I could do, as the exams were not set by me. But I tried to be one of the senior brethren (I had been an undergrad only a few years earlier) offering them a shoulder to cry on, so to speak.

I did it because they were my students and I cared about them. The OP, however, shows a rather poor attitude. TAs of every generation have to face the same situation the OP is facing. The OP clearly doesn’t like his job as a TA. I hope he doesn’t take this attitude to his place of work once he starts working. Employers wouldn’t be coddling him; they would be disciplining him.

Unless the test was designed with that outcome in mind.

Or if it was testing preexisting knowledge in advance of learning more.

Or if the students were all slackers.

Or…

When highly selective private colleges curve classes, it tends to be a very generous curve where a large portion of the class gets A’s, particularly in classes with a large portion of stellar students. For example, you brought up physics earlier. When I took advanced freshman physics at Stanford, there were only enough students in the class to have one section. The few who took the class were almost all stellar physics students who were at least contemplating a physics major. The vast majority of freshman engineering and pre-med students took different versions of the class, so the other physics options were much larger in size, with many sections. The grade distributions of the 3 versions of freshman physics classes, as listed on StanfordRank.com, are below:

Physics 61 (mostly physics majors) – A grades outnumber B grades by roughly 2.8 to 1; very few grades below B
Physics 41 (mostly engineering majors) – A grades outnumber B grades by roughly 1.4 to 1; very few grades below C+
Physics 21 (mostly pre-med students) – A grades outnumber B grades by roughly 1.1 to 1; the only class with any grades below C-… an extremely small portion of students failed

Note that in all 3 classes, a larger portion of students are getting A’s than is typical at less selective colleges; and in the advanced class of stellar physics students, a much larger portion of students are getting A’s than in the other physics classes, even though that class is curved (or at least was curved in a past syllabus that is online). In short, as the percentage of stellar students taking the class increases, the percentage of students getting A grades usually also increases.
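To put rough numbers on those ratios: if nearly all grades in a class are A’s or B’s, an A:B ratio of r implies that roughly r/(r+1) of the class earned an A. This is a back-of-the-envelope estimate that ignores the grades below B, so it overstates the A share somewhat, especially for Physics 41 and 21.

```python
def approx_a_share(a_to_b_ratio: float) -> float:
    """Approximate fraction of A grades, assuming almost all grades are A or B."""
    return a_to_b_ratio / (a_to_b_ratio + 1)

# Ratios taken from the list above; the percentages are rough estimates only.
for course, ratio in [("Physics 61", 2.8), ("Physics 41", 1.4), ("Physics 21", 1.1)]:
    print(f"{course}: ~{approx_a_share(ratio):.0%} A's")
# Physics 61: ~74% A's, Physics 41: ~58% A's, Physics 21: ~52% A's
```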

Extremely few of my classes used the traditional HS system of 90+% = A. Instead, professors had the flexibility to put questions as challenging as they wanted on exams and to choose what they thought represented an A on their exams. For example, I had one chem professor who made exams so challenging that the mean grade was usually ~35%. As I recall, that mean grade of ~35% was curved to roughly a B+. In higher-level engineering classes, there was generally no publicly explained curve, nor was there a 90+% = A system. Instead, professors were free to decide which exam responses corresponded to which grade.
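How a ~35% raw mean could turn into a ~B+ was never spelled out, but one common mechanism is to anchor the letter cutoffs to the class mean and standard deviation rather than to fixed percentages. The sketch below is purely an illustration of that idea; the half-SD grade steps and the grade ladder are my assumptions, not the professor’s actual formula.

```python
import statistics

def mean_anchored_letter(raw, scores, anchor="B+"):
    """Assign a letter by how far a raw score sits from the class mean, in SD units.
    A score at the mean receives the anchor letter (here B+); each half-SD above or
    below moves one grade step. Both choices are illustrative assumptions."""
    ladder = ["C", "C+", "B-", "B", "B+", "A-", "A"]
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    steps = round((raw - mean) / (0.5 * sd))
    idx = ladder.index(anchor) + steps
    return ladder[max(0, min(idx, len(ladder) - 1))]

# Hypothetical raw exam percentages with a mean around the mid-30s:
exam = [18, 22, 28, 31, 35, 36, 40, 44, 52, 60]
print([(s, mean_anchored_letter(s, exam)) for s in exam])
```

Under a scheme like this, the absolute score matters only through its distance from the class mean, which is how a 35% raw average can coexist with a B+ median grade.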

@Data10 It is not quite “do not need to test for specific skills.” Anyway, the concept comes from this:

http://www.apa.org/research/action/who.aspx

I have to be careful not to “torture the data until they confess”. Instead, I prefer to see if the study fits the “total narrative”. If it doesn’t, then it is probably advocacy research (or else there is a world-wide conspiracy going on).

Anyone using a standardized test alone to select workers is as foolish as someone who refuses to use one at all. Management consulting firms appear to use it as an initial screen, and then they check for course rigour and GPA before initiating a series of interviews. I think this is the best system for selecting the best candidates, on average.

Many on CC suggest that the liberal arts are great at inculcating “critical thinking”. (The implication is that other majors aren’t.) If this is true, then we should be able to quantify it. (I am thinking specifically of the hiring of young people fresh out of school, and of jobs where no specific technical training is needed.)

@ucbalumnus I am glad you mentioned that quote by Hambrick and Chabris. It is absolutely true and cannot be stressed enough.

@MYOS1634 The attempt to hold average grades in the C+ range or thereabouts is a province-wide unwritten agreement. I used Toronto-Scarborough College simply because that is where Mrs. Canuck is from. The best U of Toronto students are really at the St. George campus… and the best Ontario students are at Queen’s University.

What makes Toronto particularly hard is that a lot of hard-driving immigrants and their children have settled in town (and they are fighting over the roughly 20% of grades that are A’s). It is no longer the place of my childhood, where people went to U of T to “read” history.

As I have said earlier, this grading practice is unfair. It encourages gamesmanship and puts first-generation students at a huge disadvantage. Why struggle with engineering science (considered the toughest undergrad program at U of T) when you can do sociology and get better grades with less effort? I think a focus on standardized testing would dampen “the sport”.

@Canuckguy,

It’s been done.

https://www.insidehighered.com/news/2011/01/18/study_finds_large_numbers_of_college_students_don_t_learn_much

Since they wrote that

why do you advocate judging people by SAT scores?