Why send your child to one of the "most rigorous colleges" in the US but not highly ranked?

I think I believe in data much more…although I like to see what lies beneath it.

@Cannuck : The 1450/1600 or so I got is awfully high; I don’t know whether my score was depressed, nor why a 1450 would be so different from the 1500s mentioned…that is ridiculous. Also, it mentions “standardized testing” overall. Such tests are certainly good at measuring some things, but I think they often are not as successful at measuring what they claim to measure for admissions purposes. The GRE works to some extent, for example, but apparently not as well for science doctoral students. I would expect the SAT to be a better test for college than the GRE is for grad school, since grad school is more about open-ended problems/tasks and thinking than college, where many instructors focus on getting students to the right answer. Which brings me to another question…if the SAT and tests like it are indeed good at measuring aptitude, why doesn’t it seem that college instructors care that much? I think one reason it isn’t as good at predicting GPA is partly that it isn’t targeting the specific subjects studied in college, and partly that, honestly, many college instructors write exams chock full of content but with cognitive demands similar to or lower than the SAT’s.

There is a very stats-sensitive (only recently so…it is in the South) elite private school where I found a blog listing its admissions considerations, and it was the only one to blatantly dedicate a section to SAT/ACT scores, claiming that they “need to see how you’ll perform in the context of our really smart student body”. No other school makes such claims. Either way, this made little sense to me because, as far as I know, their courses in several areas (as at other schools) are not challenging enough to warrant a curve or any sort of competition. I believe that GPA or AP scores might be better metrics, as they measure the ability to rise to the challenge in specific classes. If this were a school known for higher-than-normal rigor (take U of C, HYP, or any of the Tech schools), cherry-picking ACT/SAT scores like that would perhaps be more relevant. I have yet to understand the goal of having a student body chock full of nearly perfect scorers when they aren’t being challenged anywhere near the extent that schools with comparable or lower scores would provide (one of its SAT/ACT peers is in St. Louis, for example, and from what I’ve seen…that school is much more challenging in STEM and builds upon the abilities students had coming in. Hell, even my school does, to a larger extent). The way some colleges use it baffles me to this day. If the SAT is supposed to show potential to learn at even higher cognitive levels, then that should be happening, especially in STEM. I get the “political” nature of making many humanities and social sciences courses unchallenging, but STEM at a high-scoring school? I always felt that my more difficult instructors should have been the norm at my school and at others of similar or higher rank, but it just wasn’t the case…

@jym626, they could be average MCAT section scores. There are 3 sections of the old MCAT. You add up the scores of the 3 sections to get the total score.

Understood, but they should be consistent in their reporting!

@bernie12, as you may have discovered, certain schools have tried to raise the test scores of their entering classes not so much because the rigor of their classes would overwhelm all but those who get the highest scores but more simply to climb rankings.

One thought: among large RUs, rigor (of the most challenging courses) may correspond better with the academics peer ranking than the overall USN ranking: http://www.usnews.com/education/blogs/college-rankings-blog/2013/02/28/which-universities-are-ranked-highest-by-college-officials

@PurpleTitan Is it the scores that count the most in the selectivity rating (I think 12.5%)? I’m just wondering about the incentive, since that category counts for so little overall.

@bernie12, every little bit helps. Scores do matter.

You originally brought up the quote as a reference to job screening, using major as a proxy for test scores, and later said Bock is telling a half-truth and doesn’t mean his statements relating to Google’s internal studies finding test scores near “worthless” and Google revising hiring practices accordingly. I don’t believe Bock is untruthful, nor do I believe Google is using major as a proxy for test scores. Instead, I believe Google is looking for qualified candidates who have skills that are important to being successful in their position, which relates to why Google’s job postings have a qualifications section that usually lists specific majors (not just ones correlated with high test scores) relating to the position, “or equivalent practical experience.”

Regarding whether Bock’s major advice is applicable outside of Google, I do not believe that Google is the only company that offers many positions where the training received in a CS major is more critical to performing the job well than the training received in an English or psychology major, so there is relevance outside the company. However, I do believe that Bock working at Google and being involved in hiring there influences his opinion in favor of what Google does. And if you interview someone who works at a company that primarily hires counselors, teachers, or similar positions that emphasize training received in non-tech majors, they are likely to have a very different opinion and be similarly swayed by what their company does.

@PurpleTitan Ah, I guess. Whatever floats that school’s boat. It isn’t doing them much good. Again, lower-ranked schools are doing better rank- and quality-wise (outputs, rigor of academics, etc.). As for peer ratings vs. rigor, I don’t think so… It seems that schools that are STEM-oriented, or at least have an engineering school (other than maybe Chicago), get a bump. I think peer ratings are subject to the same follies as counselors’ ratings, and there are of course halo effects…Also, Brown would not be as high (it is known more for how laid-back it is, though there is of course some rigor), and Caltech would probably easily be top 3. There are definitely some prestige and halo effects going on. Admittedly, ones I think they get right that often don’t get recognition are Michigan, Carnegie Mellon, and Georgia Tech. Also, I imagine a tendency for adults (peers) to think of research and the overall reputation of the school. The rankings seem to match how I would predict academics would rank graduate programs or something…I honestly don’t know what it means. Only some of the rankings make sense to me. I would hardly put WashU that much below Penn, and I also wouldn’t put Berkeley that high up. I think it is excellent and should be near the top…but if I were to compare many of their STEM classes, for example, they would probably fall more in the middle of the pack listed, if not near the bottom, which is still more than respectable. However, to have Berkeley up there with those schools seems to me to be a halo effect based on opinions about research and grad programs.

Those are section scores. There is a chart in the upper right hand corner of the page that mentions the average section scores and what is a “good” section score. The table at https://www.aamc.org/students/download/361080/data/combined13.pdf.pdf makes it look like 9 or 10 is 56th-83rd percentile, depending on the section, so 11 can be well above average. You can find several other similar MCAT/GPA lists with a Google search, some of which are probably more reliable and use a more consistent format.

@bernie12, think about where that school would be without those test scores, then.

@PurpleTitan : Touché, I see your point. Perhaps that’s the advantage it offers. More people are “watching out” for that school than otherwise. In theory, a school can inflate its reputation without actually doing much. Makes sense…

The higher range clearly gets them some props, up to a certain point where those ratings then ask: “okay, what else makes you special academically?” There is some obvious nuance to the peer ratings that shows some halo effect going on, but also less gullibility than one would anticipate.

I agree with the original poster. If the school is not ranked and prestigious, then it should (at a minimum) care about its students and give easy grades.

@californiaaa That could actually make it harder. If you are less prestigious and grade-inflated, I’m sure it invites suspicion…While I am against such logic on principle, I can otherwise see it from a practical point of view. But the fact that more prestigious schools water down and inflate and are still successful suggests that people find their grades somewhat credible, or that they believe the grades simply do not matter at all because the students were already smart enough to get in. Prestige and selectivity can bring undue credibility that others simply won’t get if they pursue the same practices. The thing many “prestigious” schools have on their side is that they are still relatively rigorous, regardless of whether they are pushing their students as much as they should.

Bogus data, but not your fault. For example, I looked up just one school, 'Zona med, and your link shows them with a 3.54 GPA, but the school’s own website shows a 3.7. (And don’t forget, that as a state school…)

The Univ of Pittsburgh is a top 20 med school and has a much higher GPA (than 3.55) for admission. (Not even gonna look it up.)

OTOH, osteopathic schools do have lower admissions requirements than allopathic med schools.

I noticed some of the other schools did not match previously as well, but most of them seem reasonable. You can find several other similar lists on the web that are probably more reliable, and all of them I have seen show many med schools with average GPAs of entering students below 3.7. And of course this is what one would expect, when you consider that the AAMC stats mention the average overall GPA of students matriculating to med school has been below 3.7 in all years for which they provide stats (2003-2014).

@bernie12 Why don’t college instructors care that much for the SAT? Why doesn’t the GRE work as well for science doctoral students?

This is the problem of range restriction I talked about earlier, a well-known phenomenon in the social sciences. It is not really a flaw of standardized testing.

Since folks are talking medicine, let me use it as an example. I can safely say that licensing exams for doctors are unlikely to differentiate doctor quality much, because doctors are already pre-selected on that variable (you need good grades and good test scores to get in). As a result, other factors such as conscientiousness and focus come to the fore in importance. This is exactly the position at Google: since the quality of the job applicants is very high, you need to test them on something else to differentiate among them. Does that make sense? Here is another look at it:

http://www.gmac.com/why-gmac/gmac-news/gmnews/2013/may-2013/demystifying-the-gmat.aspx
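
To make range restriction concrete, here is a minimal simulation sketch. All the numbers are my own illustrative assumptions (a "true" score-performance correlation of 0.6 and a top-10% selection cutoff), not anyone's real hiring or licensing data:

```python
import random
import statistics

random.seed(0)

# Simulate a test score and a correlated performance measure for 10,000 people.
n = 10000
scores = [random.gauss(0, 1) for _ in range(n)]
# performance = 0.6 * score + noise, so the correlation in the full pool is ~0.6
perf = [0.6 * s + 0.8 * random.gauss(0, 1) for s in scores]

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

full = corr(scores, perf)

# Restrict range: keep only people above the 90th percentile of the score,
# mimicking a highly selective employer or med school.
cutoff = sorted(scores)[int(0.9 * n)]
selected = [(s, p) for s, p in zip(scores, perf) if s >= cutoff]
restricted = corr([s for s, _ in selected], [p for _, p in selected])

print(f"correlation in the full pool:   {full:.2f}")
print(f"correlation among the selected: {restricted:.2f}")
```

Among the selected group the correlation shrinks substantially, even though the test is exactly as valid as before; that is the range-restriction effect in a nutshell.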

The rest of what you say makes perfect sense to me. I particularly enjoy your analysis of course content and rigour. I think employers should focus a lot more attention on this area. Some, as I have said earlier, already do.
That is why the chess game between job applicants and employers will continue unabated.

@Data10 My original response to you is the following:

“Capable of doing electrical and computer engineering at a reputable school is an excellent proxy of high test scores. (Maybe not as high as physics, but high nevertheless). Another important point to remember is that Google is in the enviable position of being able to hire the best.”

These are two separate statements. The first came from GRE data. The comment about physics was an aside, again, based on the same GRE data. The second statement was based on the popularity of Google among very strong applicants. These statements are independent of each other. For you the statements somehow become:

“You originally brought up the quote as a reference to job screening, using major as proxy for test scores…”

You linked my statements together in a way that I never said nor intended. You assumed that. This recalibration continued with further statements I made. That is what I meant by twisting the facts to fit your opinion.

You are a linear thinker like most STEM majors, and a very strong one at that. Your world is pretty black and white. This simplification of reality (data) works very well in the natural world, but is inadequate in explaining human complexity. Is this the reason why you found engineering to be easier than doing English essays?

You are looking at Bock as though he too is an engineer, but he studied international relations as an undergrad and was an actor as well. He is what we call a “holistic” thinker, and he sees the world in various shades of grey. Because of this hardwiring, and his academic training, he makes a great “spokesman” for Google.

Am I reaching a bit? Sure, but I am confident in my induction because I am hardwired that way too. In fact, the first thing I did after reading the interview was to check his undergrad major…and then smiled.

Lol! You are hardly “hard wired” to be holistic.