Classes where average grade % is failing - is this common?

Do all engineering programs grade like this?

I am saying that if the material is being effectively taught via problem sets (as it should be, along with lectures, tutoring sessions, and lab exercises), then exam averages should not be sitting at 50% or below.

And the MD example doesn’t make sense. You are referring to CLINICAL TRAINING. Not giving docs exams on things they have no exposure to and expecting them to know the answers. TRAINING – usually with a supervising physician to coach and supervise. That is not the same as exam dumping on undergrads, that time-honored tradition with no actual justification.

I have an issue with arbitrary curves designed to put a certain # of people in a failing zone, for no other reason than to put a # of people there.

As for new approaches to problems in tests…I am reminded of high school geometry, my own, way back when. We memorized some proofs (maybe had access to others we didn’t memorize), and worked with problems where we used them. On exams, we were given problems that expected us to draw on the dozen or two proofs we’d memorized (or had access to), but the problems didn’t say which proofs to use or how - that we had to figure out. I’m fine with that, in fact I think it’s ideal, as long as the class has exposure to this process.

Define the term “effectively taught”.

@intparent You are missing the point. These tests are testing the ability to think through problems applying the formulas and procedures the students should know. You cannot test thinking if you only use problems they have already seen. That tests memory, not problem solving.

Following from that, it is obviously more difficult for students to think through problems for the first time; that is why the standard for passing is not 65% but perhaps 45% (or whatever is appropriate for that particular exam). Professors aren’t expecting you to work through every aspect of every problem perfectly, but they are giving you an opportunity to demonstrate what you do understand.

A typical problem will test 5 or 10 different concepts, understanding that different students will grasp different aspects of different concepts to different degrees. Nobody is expected to solve the problem perfectly, though some will. Rather, the exam tests the ability to analyze the problem, draw the relevant diagrams, translate the diagrams into the appropriate equations, solve the equations, and draw conclusions from the results.

Each step in the process has value, each is an opportunity to earn points, each is an opportunity to demonstrate understanding.

@al2simon - I think you have some good points, but on what possible basis do you claim that teachers don’t identify true talent in STEM at the high school level?

I don’t think I am missing the point. If the test scores are averaging 50% or lower, then clearly the students are not effectively being taught the building blocks and being given practice/knowledge of how to put them together. The EXAM is not the place to teach them. Yes, the very top group should be stretched (top 10%). Yes, the CLASS should teach students this skill set. But just dumping a load of bricks on most of the class during exams and saying they don’t have the skills to do it isn’t teaching. I know we have a lot of posters on this thread deeply invested in the current approach. But that doesn’t make it a good way to do it… it is just the way it always has been.

I think this is the excerpt from my previous post that you are referring to ??? (emphasis added)

Admittedly my language was flamboyant, but I didn’t say that teachers don’t identify true talent at the high school level. I hope it was clear that I was saying that the type of training needed to develop the specialized expertise and research capability to do things like invent the transistor really only occurs at the university level, not at the high school level.

I know lots of us have had important mentors who were our high school teachers (in my case, I can think of one in particular who sadly passed away 20 years ago). However, their role is almost never the expert training of students. Clearly, one important job they have is to give students the proper foundations before they go to college.

But even more importantly (in my opinion), at their best their role is to inspire a student to have a love and passion for learning, to enjoy a subject, and to be excited by discovery. Almost always this comes from some combination of the student themselves, a parent, or a high school teacher. In doing this, teachers are of course identifying and nurturing talented students.

I still think that identifying and training top students is almost always done at universities. Companies can build on the training that students receive, but nowadays they almost never are able to spend time training them for basic research … most of what they do is tied directly to a commercial product, though computer/computer science related areas have been a notable exception (in 2016, companies like Google). In the past, places like IBM research labs and Bell labs engaged in basic research, but almost all the researchers were scientists and engineers hired from academia after being trained there.

In this respect, the US university system is indisputably the envy of the world.

In truth it’s pretty hard to gauge a person’s talent by the kind of material that ANY high schooler will be able to complete. The opportunities open to them at such a young age have more to do with their parents than with the students themselves - what high school they went to, what kinds of resume-building vacations and projects their parents can pay for, what kind of activities they are pushed into doing when most would prefer to do either nothing or just a few things in their social circle. Their own abilities have generally yet to be developed in an objectively productive way (they’re still running on future potential rather than real-world value).

Any achievement that can be gained in high school very quickly becomes obsolete. An extra 1-2 years of age makes you significantly more impressive overall, and even middle-of-the-pack college students can be on par with the best high schoolers in the nation in terms of accomplishments, by virtue of the expanded opportunities open to them.

So a university setting is probably the first place you can start to evaluate what people seem likely to accomplish in the future. Though given that most people peak in productivity in their 40s-50s, it most certainly isn’t the last or the only place you can evaluate people. They lose the path of the “prodigy programs” like Wall Street analyst or top-school PhD, but frankly, the value of those is overrated.

@al2simon - OK thank you, I misspoke above, and I apologize. However, I would also take issue with the thought that good STEM HS teachers aren’t training the right modes of thought (in addition to identifying/nurturing).

In Honors classes we make sure that the problems and exams we give require new applications of knowledge, some making “full use of the 100 points,” as I mentioned earlier. We also try very hard to make sure students with top talent know that they can reach for the stars, regardless of the stereotype threats and so forth that I mentioned.

I know that not all HS teachers are doing this in STEM, just as not all college professors are doing wonderful work, but it’s certainly my professional goal, and I see it in many classes and many schools.

I went to Penn State for graduate school in Computer Science in the 1980s. There were more students wanting to major in it than the department could handle. They also had many people enter the graduate program without assistantships. The upper-level classes for majors were typically graded well below a C average, with 25% Fs and 25% Ds. The lower-level graduate classes were graded with equal numbers of As, Bs, and Cs, where Cs were essentially failing for a graduate student. The average salary for new BSs and MSs was high, as there was a good reputation for those who made it through the programs.

"No, people have explained why it needs to work that way. The ability to apply concepts and equations to solve new situations is important in the STEM fields. "

The humanities aren’t just “plug and chug.” They require applications of concepts to new situations too. Just like STEM.

No dispute here. That’s why I’m drawing a distinction between training in very important foundational skills vs specialized training required to do things like invent the transistor. However, I do think that in general problem solving skills aren’t taught that well in high school. The US has the best university system in the world; its high schools just aren’t at that level.

Having said that, the deep, dark secret that every teacher (at all levels) knows is that the best way to produce a great student is to start with a great student. Inspire them, teach them, challenge them, then get the hell out of their way :slight_smile:

(Again, I’m not saying that teachers aren’t vital to a student’s development).

^ this is the type of curved grading I would avoid as a student (assuming the Cs and below were doing well, just not as well as the top students).

They’re failing students because they don’t have the resources to teach them all properly.

@intparent - why does a test where the average score is 50% show that students aren’t learning the material?

You are locked into the paradigm used in most HS tests where only the points from 60-100 are actually used. Points 0-60 are awarded by testing for the absolute simplest material that virtually everyone gets correct.

The tests where the average scores are in the 50% range typically eliminate the obvious questions that everyone is expected to know and replace them with more opportunities for students to demonstrate knowledge of the challenging material.

For example, say there are four distinct, complex ideas covered by the test material: A, B, C, and D.

Test 1:
Q 1 and 2 cover A and B and the questions are easy so everyone should get near full credit
Q 3 and 4 cover C and D and are more complicated, and more thoroughly test full understanding of those concepts. Full understanding will get most of the available points.

Test 2:
Q 1 - 4 each cover A B C D and are complicated, like Q 3 and 4 above

The average on Test 1 will be higher, but is unfairly biased towards those that emphasized C and D in their studies.

Test 2 will have a lower average, but everyone is treated fairly. Everyone gets a chance to show their strengths, and the professor takes into account the fact that he didn’t give away points by expecting a lower average.
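To put rough numbers on that bias, here is a minimal sketch (made-up mastery numbers, not anyone’s actual exam) comparing how the two test formats treat a student who emphasized A and B against one who emphasized C and D:

```python
mastery_ab = {"A": 0.9, "B": 0.9, "C": 0.4, "D": 0.4}   # emphasized A and B in studying
mastery_cd = {"A": 0.4, "B": 0.4, "C": 0.9, "D": 0.9}   # emphasized C and D

def test1_score(m):
    """Test 1: easy questions on A and B (near-free points), hard questions on C and D."""
    easy = 25 + 25                        # Q1, Q2: almost everyone gets full credit
    hard = 25 * m["C"] + 25 * m["D"]      # Q3, Q4: score tracks actual mastery
    return easy + hard

def test2_score(m):
    """Test 2: one hard, 25-point question per concept."""
    return sum(25 * v for v in m.values())

for label, m in [("emphasized A/B", mastery_ab), ("emphasized C/D", mastery_cd)]:
    print(f"{label}: Test 1 = {test1_score(m):.0f}, Test 2 = {test2_score(m):.0f}")
# emphasized A/B: Test 1 = 70, Test 2 = 65
# emphasized C/D: Test 1 = 95, Test 2 = 65
```

Test 1 hands the same 50 free points to everyone and then only differentiates on C and D, so whoever happened to emphasize those concepts is rewarded; Test 2 weighs all four concepts equally.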

I don’t think you understood what was being said at all.

The problem isn’t that people don’t understand that the humanities etc. require creative application of (new) concepts to (new) situations. After all, when people hear “creative”, they almost reflexively think of areas like creative writing, the creative arts, etc. That isn’t the problem.

The problem is that lots of people (most importantly many prospective STEM majors themselves) think that plug and chug is most of what STEM-ish subjects should require, at least at exam time. In one form or another, that’s what several posts here have argued, and that’s what’s wrong … this misunderstanding of the role of creative problem solving is part of what is keeping many prospective STEM majors from achieving their potential in the subject(s).

What part of the gender difference in how students perceive these results are you missing? Or don’t you care? And no one says “plug and chug” is what these exams should be. But if you can’t teach students how to synthesize, and don’t care if you do, then you are honestly not focused on teaching – you are weeding and stroking your own ego in the process.

The building blocks of many of these subjects need repetition to make them stick. For example, when studying for her statics final last semester, DD looked back at test 1, where she struggled, and remarked how easy that stuff now is. Over time, and repetition, it becomes natural, like learning a language.

Professors understand that 100% comprehension isn’t possible; they also know that it is impossible to say which parts of the subject will be clear to which students.

In some ways it is similar to the forgetting curve. If I give you - pick a number, say 100 - words to memorize for tomorrow, in one day you will recall about 40 of them. With repetition, the next day you can remember 40 + 40% of 60 = 64, and by the third day 78.

If I want to test to see if you bothered to study, on day one passing should be set somewhere below 40. It is unfair to expect more than that. Day 2, same test I can expect 60+. As long as I curve the results, 40 on day 1 is exactly the same as 64 on day 2. Neither is “failing” - they are both in line with expectations, even though 40 is more than half wrong.
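Here is a minimal sketch of that recall arithmetic, assuming the (made-up) rule that each day of repetition recovers 40% of whatever is still forgotten:

```python
def expected_recall(total_words=100, first_day=40, relearn_rate=0.4, days=3):
    # Start at the day-one figure and recover a fixed fraction of the gap each day.
    recalled = first_day
    history = [recalled]
    for _ in range(days - 1):
        recalled += relearn_rate * (total_words - recalled)
        history.append(round(recalled))
    return history

print(expected_recall())   # [40, 64, 78] -- the 40 / 64 / 78 figures above
```

A curved passing line would then just track the expected value for that day: a bit below 40 on day one, 60-something on day two, and so on.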

If I wanted to make the test “easier”, I could add questions on the words you’ve been memorizing for weeks. You should get more points, but it doesn’t really tell me anything about what you did yesterday.

Now, let’s say I only have enough time to test you on 100 words. Option 1: 50 weeks-old words and 50 of yesterday’s words. Option 2: 100 of yesterday’s words.

The Option 1 average should be roughly 65; the Option 2 average should be roughly 40.

Now, how do I choose the 50 words on Option 1? What about the poor kid who happens to know the 50 words I didn’t choose and doesn’t know any of the 50 I did choose? He actually knows more words than expected (50 vs 40), but his grade is punished because I didn’t include the words he knew. This unlucky kid gets a 50 (15 points below the average), instead of the 50 (10 points above the average) he would have gotten on the better test!
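A minimal sketch of that unlucky kid’s situation, using the hypothetical averages above (65 for Option 1, 40 for Option 2):

```python
option1_avg, option2_avg = 65, 40   # expected class averages from the example above

# Option 1: he gets the 50 old words right but none of the 50 new words I picked.
option1_score = 50 + 0
# Option 2: 100 new words, of which he happens to know 50.
option2_score = 50

print("Option 1:", option1_score, "->", option1_score - option1_avg, "vs the average")
print("Option 2:", option2_score, "->", option2_score - option2_avg, "vs the average")
# Same raw score of 50 either way, but on a curve it reads as 15 points below
# average on Option 1 and 10 points above average on Option 2.
```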

That’s the wrong way to think about it and the wrong way to write tests.

The proper way to do it would be to have each question cover both a rudimentary and an advanced understanding of each distinct concept. A rudimentary understanding will get you to a C-level (however many percent that ends up being in the grade system) and the rest will be for understanding the more complex aspects of those concepts.

Under the system you suggest, you’d get equal or more points for understanding 0% of A and B and 100% of C and D than for understanding 80% of A, B, C, and D. And I think it’s clear which of those two students should really have a higher score in a fair system.
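For what it’s worth, here is a minimal sketch of that split, with a made-up 15/10 division between rudimentary and advanced points inside each 25-point question; under it, the broad-but-imperfect student does come out ahead of the deep-but-narrow one:

```python
RUDIMENTARY, ADVANCED = 15, 10   # made-up split per concept; 4 concepts = 100 points

def score(understanding):
    """understanding maps concept -> (rudimentary fraction, advanced fraction)."""
    return sum(RUDIMENTARY * r + ADVANCED * a for r, a in understanding.values())

deep_but_narrow = {"A": (0, 0), "B": (0, 0), "C": (1, 1), "D": (1, 1)}
broad_but_imperfect = {c: (1.0, 0.5) for c in "ABCD"}   # solid basics everywhere

print(score(deep_but_narrow))      # 50
print(score(broad_but_imperfect))  # 80
```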

@Pizzagirl - Didn’t mean to imply humanities are plug and chug at all. Perhaps, however, this isn’t as much of an issue in humanities grading because there is a lot more room for holistic review of essays, which are inherently curved to the expectations of the professor.

I would assume I could give the same essay prompt to a regular-track HS class, an AP HS History class, US History 101, and a grad-school History 5400 class. Same prompt. Grades on the essays would reflect the expected level of sophistication of the student’s analysis. An A answer at the AP level would likely have trouble passing at the grad-school level.

Engineering grading tends to be more “rubric driven,” for lack of a better word. For example, 10 points for the right free body diagram, 10 points for the right equations, 10 points for solving the equations, and 10 points for evaluating the results, with partial credit awarded in each depending on sub-rubrics. It is a bit more difficult (and perhaps a bit incongruous with the population of engineering students) to look at the same problem and call it an A, B, C, etc. It is a bit more manageable to add up the points and adjust at the end.
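As a concrete illustration (my own sketch, not any department’s actual rubric), the per-step scoring for a single problem might look something like this:

```python
# Hypothetical four-step rubric for one statics-style problem.
rubric = {
    "free body diagram": 10,
    "setting up the equations": 10,
    "solving the equations": 10,
    "evaluating the results": 10,
}

# Grader's judgment of how much of each step the student got right.
student_work = {
    "free body diagram": 1.0,         # diagram is correct
    "setting up the equations": 0.7,  # sign error in one equation
    "solving the equations": 0.5,     # carried the error through the algebra
    "evaluating the results": 0.0,    # never got to a sanity check
}

total = sum(points * student_work[step] for step, points in rubric.items())
print(f"{total:.0f} / {sum(rubric.values())}")   # 22 / 40
```

Adding up partial credit step by step like this, and then adjusting the scale at the end, is how a 50-something raw average can still map to a reasonable letter grade after the curve.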