Back in the dark ages this happened at my engineering school, and it happens at both DS’s school and DD’s school as well (though a bit less at DD’s school).
My feeling has always been that the purpose was to continually drive home how much work was required to truly understand the material. A point that was borne out when a small handful of kids always managed to get near-perfect scores.
An example from DD’s Statics class last semester: the first test average was in the 60s, and 3 kids had 98s. By the end of the semester, when DD was studying for finals and looked back at the first test, she remarked how easy that material now seemed.
The GPA required to stay in engineering, and whatever other hurdles exist, may or may not be something to consider as a student selects his university. As an example (and correct me if I’m wrong, because it’s been a couple of years since I looked at this level of detail), MSU requires a sophomore engineering student to finish with a 3.0 or better to be admitted into the upper-level engineering classes (junior standing, 300 and above); once admitted, they must maintain a 2.0. This, in effect, takes care of much of the weeding out of students prior to the upper level at this uni. UofM requires a 2.0 from freshman year on to remain in engineering, but it’s more difficult to get in, so UofM probably doesn’t need to do as much weeding.
@NerdMom88 wrote:
“Interesting discussion. I keep seeing posts that engineering students, for example, frequently end up with 2.5 - 3.0 averages due to the difficulty of their exams. Anecdotes in this post would seem to bear this out. What happens to those students who want to major in this or another difficult field but are on a scholarship that requires them to maintain a 3.0 or 3.25 GPA? Schools that offer those scholarships but use frosh and sophomore classes to “winnow out” prospects from the major are sending mixed messages, at best, and seem to be setting up their students for failure.”
This issue is being somewhat addressed right now in Georgia. The Hope scholarship here requires a student to maintain either a 3.0 or 3.3 GPA, depending on the level of the scholarship. Georgia Tech has been referred to as “where Hope goes to die”. Lesser students attending easier schools are having far less difficulty maintaining their scholarships.
Studies have shown that the loss (or potential loss) of this scholarship has led to a reduction in the number of STEM majors. The state house passed a bill this week, 167-0, to add 0.5 points to the grade for STEM classes when computing the GPA used to determine whether the scholarship is kept. The bill now goes to the state senate. The Georgia Board of Regents would be required to determine which classes qualify for the additional half point.
Note, though, that this doesn’t change the actual GPA - just the number that is used to evaluate maintaining the scholarship.
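To make the arithmetic concrete, here’s a quick sketch of how the adjusted GPA would work (the example courses, credit hours, and the 4.0 cap are my own assumptions; the actual list of qualifying classes would be up to the Board of Regents):

```python
# Hypothetical sketch of the proposed Hope adjustment: STEM courses get
# +0.5 grade points (capped at 4.0 here, which is an assumption) when
# computing the scholarship-eligibility GPA; the transcript GPA is unchanged.

def scholarship_gpa(courses):
    """courses: list of (grade_points, credit_hours, is_stem) tuples."""
    total_points = 0.0
    total_hours = 0.0
    for grade_points, hours, is_stem in courses:
        adjusted = min(grade_points + 0.5, 4.0) if is_stem else grade_points
        total_points += adjusted * hours
        total_hours += hours
    return total_points / total_hours

# A student with Bs (3.0) in Calculus and Physics and an A (4.0) in English:
courses = [(3.0, 4, True), (3.0, 4, True), (4.0, 3, False)]
print(round(scholarship_gpa(courses), 2))  # 3.64, vs. an actual GPA of 3.27
```

The half point can move a borderline student back over the 3.0 or 3.3 line without touching the transcript.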
Generally I find that large, unspecified curves come from professors who have more ego than teaching ability. It’s true that the 100-percent scale is not a reasonable means of assessing every kind of course, and that it’s reasonable to have adjusted grade cutoffs to compensate. It is also reasonable, in lower-division courses that are partly designed to separate those who have what it takes to finish the degree from those who don’t, to design the class so that a substantial portion of students will fail. Applied to more in-depth courses, however, this serves to create arbitrary hurdles that have less to do with ability (everyone will have better and worse classes for any number of reasons) and more to do with professors who have some reason to show off how difficult they can make a class. What professors should do, and often fail to do (especially at higher-ranked schools, which often have rotten teachers), is create a reasonable assessment of student abilities and let students pass if they meet those requirements.
One good professor I had put it this way: It’s easy to make a test that everyone in the class will fail. The real mark of good teaching is to be able to make a fair test and for everyone to do well.
@NeoDymium, “passing” and “failing” are arbitrary constructs. There is no inherent reason why getting 65% of the points is passing and 64% is failing.
The knowledge tested in many of these upper-level engineering courses is more of a step function than a straight line. If I truly understand a concept, I can answer almost any question on it perfectly; if I am missing 5% of the understanding, I might not be able to work through 25% of the problem; and so on. Multiply that by 4-5 concepts on a test and that’s where those grades come from.
Now consider that there are a number of different ways for that missing 5% to occur. You can’t just write the problem omitting that 5% and call that a perfect score, because if you account for all the ways somebody might not know that 5%, the problem quickly becomes trivial.
Better to write the question in such a way that the people who know the material get full credit, the people missing 5% get 75% credit, and so on. From there, figure out how much material somebody needs to know to be considered passing, and curve grades based on that back to the letter scale.
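Here’s a minimal sketch of that scheme (the per-concept weights and letter cutoffs are invented for illustration, not anyone’s actual rubric):

```python
# Award partial credit per concept-question, then map the raw percentage
# back to letters with cutoffs chosen for the test's intended difficulty.

def raw_score(concept_credit):
    """concept_credit: fraction of credit earned on each concept-question."""
    return 100 * sum(concept_credit) / len(concept_credit)

def curve_to_letter(score, cutoffs=((85, "A"), (70, "B"), (55, "C"), (40, "D"))):
    for cutoff, letter in cutoffs:
        if score >= cutoff:
            return letter
    return "F"

# A student who fully knows three concepts and is "missing 5%" on two
# others, earning 75% credit on those per the rubric described above:
score = raw_score([1.0, 1.0, 1.0, 0.75, 0.75])
print(score, curve_to_letter(score))  # 90.0 A
```

The cutoffs are fixed in advance against the material, so the mapping back to letters doesn’t depend on how classmates did.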
Stevens’ graduation rates do not look all that different from other engineering focused schools of similar selectivity (e.g. Colorado School of Mines, Milwaukee School of Engineering). Graduation rates generally correlate closely to admission selectivity. Engineering focused schools and other academically specialized schools may have lower graduation rates than more generalized schools, because students who change major out of the school’s specialties need to transfer to another school. The relatively rigorous nature of engineering curricula can mean that not-top-end students are more likely to need extra semesters, due to taking slightly lighter course loads than needed to finish in 8 semesters.
In some schools this is true, but this is not true in general.
There is some objectivity in saying: this course was designed so that a complete understanding of this course will be around 90% of the total points, a somewhat weaker understanding will get you 80%, etc. Most reasonable professors are also pretty generous around the boundaries - those that aren’t are generally those who follow procedure for the sake of procedure.
That simply isn’t true. A test can be designed to have any arbitrarily high or low level of difficulty. Concepts are seldom understood on a “you get it or you don’t” basis at a higher level. There is always a continuum of understanding, and a reasonable possibility of making mistakes in calculations, etc. This is a pretty short-sighted way of seeing it.
Absolutely disagree. People can miss portions of a question for any number of trivialities that have little to nothing to do with their actual understanding of the course material. Someone can understand 90% of the material, miss 5% on two of the questions, and get a 50% if the test has 4 questions graded all-or-nothing. They can also understand 50% of the material and get 50% by this system. Pretty horrible means of differentiating.
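To spell out the arithmetic behind that objection (the numbers are invented; the all-or-nothing rubric is the failure mode being criticized):

```python
# Four questions; "knowledge" is the fraction of each question the
# student actually understands.
knowledge = [1.0, 1.0, 0.95, 0.95]

# All-or-nothing grading: any slip forfeits the whole question.
all_or_nothing = 100 * sum(1 for k in knowledge if k == 1.0) / len(knowledge)

# Partial credit: the score tracks understanding.
partial = 100 * sum(knowledge) / len(knowledge)

print(all_or_nothing)  # 50.0: indistinguishable from knowing half the material
print(partial)         # 97.5
```

Under the first rubric, a near-expert and a student who knows half the material get the same grade.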
It’s popular at top-tier schools for sure, but to be frank this is strongly to their detriment.
I read an essay in a book called David and Goliath that evaluated why students drop out of STEM fields. The author provided data (mostly based on SAT scores) showing that kids in the bottom third of the admitted pool at any given college were likely to drop out of engineering, the physical sciences, etc. He did not address the practice of curving, which is the first thing that came to mind. Despite that limitation, what I found interesting is that the kid who got a 650 on the Math section and is at a top-flight school, but in the bottom third of admitted students, would tend to drop out of the sciences, while a kid with a 600 who was at the top of the heap at a lesser-ranked school would be likely to continue. So he concluded that relative ranking was more predictive than raw scores.
I wasn’t entirely persuaded by the evidence that he used (or didn’t include). He also didn’t address the possibility that the difficulty of a given course might vary with respect to the prestige of the school. And standardized tests aren’t always accurate predictors of future success. The conclusions were nonetheless suggestive.
@mamaedefamilia - Very interesting. In the case of being admitted to your dream reach school for engineering, it might be a case of “be careful what you wish for”.
Remember that “STEM” includes biology, which is the most popular major in that category. Changing out of biology may not have the same characteristics as changing out of engineering.
It is not just relative ranking that matters in some fields. A study at University of Oregon found that the chance of success in math and physics majors (but not other majors) was well correlated to SAT math scores, with those scoring under 600 not being able to earn a >3.5 GPA in those majors: http://arxiv.org/pdf/1011.0663.pdf . It is not surprising that students who have difficulty with SAT-level math have trouble in the more advanced math required to major in math or physics. University of Oregon does not have engineering majors, although the authors “expect that similar results also apply to highly mathematical fields of study such as some areas of engineering or informatics.”
“There is some objectivity in saying: this course was designed so that a complete understanding of this course will be around 90% of the total points, a somewhat weaker understanding will get you 80%, etc. Most reasonable professors are also pretty generous around the boundaries - those that aren’t are generally those who follow procedure for the sake of procedure.”
But those numbers are arbitrary. If I set a goal of complete understanding = 90% and somewhat weaker understanding = 60% of the points, and get there by eliminating the parts of the question that 99% of the students get right, there is no tangible difference between the two tests except less wasted time. Test 1 is “easier,” but they provide a professor (and student) with exactly the same information.
Or alternatively, I can change the rubric for a question and create a completely different score distribution while still keeping the same rank order of scores.
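A toy illustration of that point (the names, scores, and both rubrics are made up):

```python
# Same four students, same relative performance, two rubrics. Rubric A
# spreads points linearly; rubric B is harsher on partial work. The rank
# order never changes; only the score distribution does.
performance = {"Ann": 1.00, "Ben": 0.85, "Cal": 0.70, "Dee": 0.55}

rubric_a = {name: round(100 * p) for name, p in performance.items()}
rubric_b = {name: round(100 * p ** 2) for name, p in performance.items()}

print(rubric_a)  # {'Ann': 100, 'Ben': 85, 'Cal': 70, 'Dee': 55}
print(rubric_b)  # {'Ann': 100, 'Ben': 72, 'Cal': 49, 'Dee': 30}
```

Any curve applied afterwards is just undoing a distribution the rubric created in the first place.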
“Absolutely disagree. People can miss portions of a question for any number of trivialities that have little to nothing to do with their actual understanding of the course material. Someone can understand 90% of the material, miss 5% on two of the questions, and get a 50% if the test has 4 questions graded all-or-nothing. They can also understand 50% of the material and get 50% by this system. Pretty horrible means of differentiating.”
I will concede your point here: a test where missing 5% of a question loses you 100% of the available points is a poorly designed test, and in that case there is absolutely no value in making the test more difficult. In my experience, though, most engineering exams are the exact opposite.
They are designed around problem solving, and points are awarded for applying specific procedures.
In the Honors courses at the HS where I teach, we are encouraged to have at least some tests written to “fully utilize the hundred points,” to get kids practicing receiving, e.g., a 47 and having it represent a “B” understanding of the material. I’m sure you know that in most HS classes in the USA, only the top 30 or 35 points of the 100 are really in use. If students were “failing” a typical HS test, i.e. earning the low 60s or less, that would be a Bad Thing (for them, or the teacher, or the course, etc.).
In principle, there’s no reason that all 100 points shouldn’t be in fair play. It gives much more information about what students actually know. The goal of people who write excellent exams is that no one gets a 100, no one gets a 0, and the scores in between are spread as widely as possible, to determine who really knows what.
Students (parents?) who don’t understand this sometimes describe getting a 47 that is a B as “the teacher is rotten and the student failed and the teacher curved the grades to compensate” - but what actually happened is that the teacher wrote an excellent assessment, and a student who answered 47% of it correctly demonstrated what is accepted to be a “B” level of proficiency in the material.
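To show the contrast between the two scales (the cutoff numbers below are my invention, not an actual rubric):

```python
# Typical HS scale: only the top ~35 of the 100 points are in use.
hs_scale = ((90, "A"), (80, "B"), (70, "C"), (65, "D"))

# A "full hundred points" exam: proficiency cutoffs sit where the exam's
# difficulty puts them, so a 47 can legitimately be a B.
full_scale = ((65, "A"), (45, "B"), (30, "C"), (20, "D"))

def letter(score, scale):
    return next((l for cut, l in scale if score >= cut), "F")

print(letter(47, hs_scale))    # F: looks like failure on the usual scale
print(letter(47, full_scale))  # B: the proficiency it actually demonstrates
```

Same raw number, entirely different meaning, depending on which scale the exam was written for.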
Also, remember that students tend to be more public about experiences they perceive to be shared. So if someone is complaining at the lunch table about “failing,” then there will be confirmation bias because only those who also feel like they “failed” will chime in. And it’s much more socially acceptable in this country to say you failed than to say you aced a test.
There have been exams that I gave and graded, so I knew exactly who got what - and then heard lunchroom conversation that bore no relation to reality: just a couple of people being humble about great grades while some of their loud friends whined about how they did terribly [by their own arbitrary standard].
@fretfulmother Thanks for that bit of insight into your high school. It really puts a class my son (age 15) is in now into perspective. My perception has been that he is occasionally bombing a test, but now I think there may be something similar to what you describe at work. It has been a good experience for him because it has kept his overconfidence in check. Our online grading system lets us see the class high/low/median, so I can tell where he stands relative to the rest of his section. I do worry about some of his classmates who may not have the ego to weather a 47 (or in his case a 48). I doubt I could have at his age.
@gettingschooled - Thank you! I hope it is helpful. My first 37 was quite unpleasant, and I was 17, so older than your son… But the hope is that if kids get some practice (and by the way, at my HS it’s done for juniors and seniors, so probably older than your son unless he skipped), they won’t be derailed in college. And we explain (or we’re supposed to explain) ahead of time what to expect. Of course, some kids hear “blah blah blah you’ll get a 98 blah blah blah” (in Peanuts-cartoon-teacher voice) when we say, “prepare to get a score that looks different from your normal 98 even if you understand the material.”
“Students (parents?) who don’t understand this sometimes describe getting a 47 that is a B as “the teacher is rotten and the student failed and the teacher curved the grades to compensate” - but what actually happened is that the teacher wrote an excellent assessment, and a student who answered 47% of it correctly demonstrated what is accepted to be a “B” level of proficiency in the material.”
This thread has taught me a different perspective regarding low overall grades on a test, and for that I am thankful.
I understand your point in the above quoted paragraph, PROVIDED that the “B” level of proficiency was pre-determined to be in the 47% range. Proficiency should be measured against the material and the application of its concepts, not against the surrounding students. Is a student suddenly less proficient with that 47% if the surrounding students obtained a 60%? I have seen curves applied both ways - as an absolute standard against the material, and as a head-to-head competition between students. With the first I have no issues; with the second I do.
@calmom2016 - I believe you that there are often person-to-person comparison curves in college, but for me as a HS teacher, it’s based on the proficiency standards of the exam. So everyone could in theory get an A in my class.
Sure, the absolute numbers are arbitrary, but the way they are specified should not be. If a 90% is an A and a 60% is a B based on a relatively fixed level of difficulty of the testing material, then that’s alright. If the cutoffs are instead applied retroactively (a “curve” rather than a modified 100-point system), based on the fact that some people got 90s and some people got 60s, then that is arbitrary and ineffectively designed.
In my experience, a fixed cutoff proportional to the anticipated difficulty of the material tends to be the proper way to do it. Punishing people for having smart classmates devolves into a game of petty politics.
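A sketch contrasting the two approaches on the same scores (all numbers and quotas here are invented):

```python
# Fixed cutoffs: set in advance from the anticipated difficulty; the
# grade depends only on the student's own score.
def fixed_grade(score, cutoffs=((90, "A"), (60, "B"), (40, "C"))):
    return next((l for cut, l in cutoffs if score >= cut), "F")

# Rank-based curve: the grade depends on classmates; the same score earns
# a different letter in a stronger section.
def ranked_grade(score, all_scores, quotas=(("A", 0.25), ("B", 0.50))):
    rank = sorted(all_scores, reverse=True).index(score) / len(all_scores)
    for letter, frac in quotas:
        if rank < frac:
            return letter
    return "C"

section = [95, 88, 72, 64, 51]
print(fixed_grade(64))            # B: the same in any section
print(ranked_grade(64, section))  # C here; would be a B among weaker peers
```

The first rewards meeting the standard; the second is where the petty politics come in.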