<p>I have to repost this to correct a few mistakes.</p>
<p>Main Criteria:</p>
<p>30% - Academic reputation</p>
<p>20% - Faculty resources
a. Caliber of faculty
b. Citations received by faculty
c. Degree level of faculty
d. Students per academic staff member</p>
<p>15% - Student selectivity/student quality
a. High school GPA
b. Class rank of admitted students
c. SAT scores
d. Student achievements</p>
<p>15% - Research/research output
a. Scientific contributions of the school to society
b. Inventions/innovations of the program
c. Citations in international journals</p>
<p>15% - Facilities
a. Libraries
b. Science laboratories
c. Internet bandwidth</p>
<p>5% - Financial resources
a. Annual budget of the university for teaching
b. Annual budget of the university for research
c. Annual budget of the university for school maintenance</p>
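For what it’s worth, a composite under weights like these is just a weighted sum. Here’s a minimal sketch in Python, assuming each criterion has already been normalized to a 0–100 scale; the weights come from the list above, but the school’s scores are entirely made up:

```python
# Weights from the criteria list above (they sum to 1.0).
WEIGHTS = {
    "academic_reputation": 0.30,
    "faculty_resources": 0.20,
    "student_selectivity": 0.15,
    "research_output": 0.15,
    "facilities": 0.15,
    "financial_resources": 0.05,
}

def composite_score(scores: dict) -> float:
    """Weighted sum, assuming each criterion score is on a 0-100 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Hypothetical school with illustrative (made-up) criterion scores.
example = {
    "academic_reputation": 90,
    "faculty_resources": 80,
    "student_selectivity": 85,
    "research_output": 75,
    "facilities": 70,
    "financial_resources": 60,
}
print(round(composite_score(example), 2))  # a single 0-100 composite
```

The catch, of course, is everything hidden inside that “already normalized” assumption: how each raw criterion gets turned into a comparable number is exactly where the methodology debates live.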
<p>Let’s be honest, the rankings were determined by tossing a few names in a hat and now they just shuffle them around by tossing dice for each school.</p>
<p>Yeah, right now I kind of just want to say all the schools in the top 10 are great. The top 20 are wonderful schools as well. Thousands of high school kids would do anything to attend a top 50 or top 100 school. Those institutions are all extremely wonderful; debating who deserves the #6 spot vs. #7 is almost pointless.</p>
<p>P.S.
Why is there so much bashing of WashU? Sure, they try hard to get students to come, but they actually really care, and the academics/social scene are comparable to any other top school. There’s also the idea that they have Tufts syndrome, which is basically an excuse people use when they expected to get into WashU and the Ivies but got waitlisted. I’m not sure; thoughts, anyone?</p>
<p>Use the open and purely statistical rankings found at <a href="http://www.stateuniversity.com">U.S. University Directory - State Universities, Online University Degree Search and College Rankings</a>.
Overall rankings are given, as well as rankings by individual category such as SAT score.
The best overall strategy is to see where a school stands by evaluating it across several different rankings, sort of like judging a sports team using the various polls.</p>
<p>How does one compare the “average SAT scores” as part of the assessment in student selectivity? </p>
<p>The “UCs” (San Diego, Berkeley, Davis, Irvine, LA, etc) will only accept an SAT score that is taken in ONE session. Not so for many of the private schools (Stanford, Penn, Harvard, etc) that will take scores from multiple testing sessions and add them together. Comparing the average SAT Score of an incoming UCLA student to the average SAT score of an incoming UPenn student is pointless.</p>
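The gap between the two policies is easy to see with made-up numbers. A sketch assuming the old three-section SAT and two hypothetical sittings: a single-sitting policy takes the best one-date total, while a superscore policy takes the best score per section across dates:

```python
# Hypothetical section scores from two test sittings (CR, Math, Writing).
sittings = [
    {"cr": 650, "math": 720, "writing": 660},  # sitting 1
    {"cr": 700, "math": 680, "writing": 690},  # sitting 2
]

# Single-sitting policy (e.g., the UCs, per the post): best one-date total.
single_sitting = max(sum(s.values()) for s in sittings)

# Superscore policy: best score per section, mixed across all sittings.
superscore = sum(max(s[sec] for s in sittings)
                 for sec in ("cr", "math", "writing"))

print(single_sitting, superscore)  # the superscore can only be >= single-sitting
```

Here the same two sittings report 2070 under a single-sitting rule but 2110 under superscoring, so averages published under the two policies aren’t directly comparable.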
<p>Also, how does one compare the “average HS GPA” as part of the assessment on student selectivity?</p>
<p>The public high schools in California calculate GPAs differently than the private high schools. A private HS junior transferring to a public California HS will find his GPA lowered, because “honors” classes at private schools tend to be awarded points on a 5.0 scale, whereas the exact same classes at public schools (except for AP/IB-level courses) are only rewarded on a 4.0 scale.<br>
It can make a huge difference: for example, a 4.4 GPA from a private Catholic school may translate to a 4.0 under the public schools’ rating system, with NO grade inflation involved. </p>
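That translation is simple arithmetic. A sketch with a hypothetical five-course transcript of straight A’s, two of them honors, which reproduces the 4.4-vs-4.0 gap described above:

```python
# Hypothetical transcript: (grade points on a 4.0 base, is_honors).
# An 'A' is 4.0; honors adds +1.0 at schools that weight on a 5.0 scale.
courses = [(4.0, True), (4.0, True), (4.0, False), (4.0, False), (4.0, False)]

def gpa(courses, honors_bump):
    """Average grade points, adding honors_bump for each honors class."""
    total = sum(g + (honors_bump if h else 0.0) for g, h in courses)
    return total / len(courses)

private = gpa(courses, honors_bump=1.0)  # honors weighted on a 5.0 scale
public = gpa(courses, honors_bump=0.0)   # same classes, flat 4.0 scale
print(private, public)  # 4.4 vs. 4.0 for identical grades
```

Identical grades in identical classes, yet the reported GPA differs by 0.4 purely because of the weighting convention.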
<p>If the Assessment is not taking into consideration the different methods of acquiring the information, then, as my accountant would say, “Garbage In, Garbage Out.”</p>
<p>lagunal-- that’s part of the problem with nearly all of these numbers, they really are garbage in and garbage out.</p>
<p>Make no mistake: while I think USNWR is making a serious effort to do what it can with the data available, the entire system of rankings is designed to confirm the beliefs of the majority of people. USNWR already knows what people think the list should look like, and in developing their criteria, you can be damn sure they tried to find measures that brought them closest to that perception, as a way of validating their methodology.</p>
<p>If 2x2 = ? is a multiple-choice question, then whether I multiply properly to arrive at 4 or happen to compute 2+2 = 4, all we know is that the result matches the correct answer. Since we don’t have any proper “process” to follow here, the model has been designed around outputting the expected, “correct” answer.</p>
<p>Technically, it’s not “statistically significant.” That’s a different issue; “significance” merely means there is statistical evidence that something did not happen randomly. </p>
<p>A large sample just means a result is less subject to random sampling error; it doesn’t by itself remove bias, which comes from how the sample is drawn.</p>
<p>^ Agreed, “significant” may not have been the best word to use in that situation. I also said</p>
<p>“A high response rate is the key to legitimizing a survey’s results. When a survey elicits responses from a large percentage of its target population, the findings are seen as more accurate. Low response rates, on the other hand, can damage the credibility of a survey’s results, because the sample is less likely to represent the overall target population.”</p>
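That last point, that a low response rate stops representing the target population, can be illustrated with a toy simulation (all numbers made up): if willingness to respond correlates with the opinion being measured, the survey average drifts away from the true population average, no matter how big the raw respondent count looks:

```python
import random

random.seed(0)

# Hypothetical population: 10,000 opinions on a 1-5 scale.
population = [random.choice([1, 2, 3, 4, 5]) for _ in range(10_000)]
true_mean = sum(population) / len(population)

# Nonrandom nonresponse: people with stronger positive opinions are more
# likely to answer (response probability grows with the rating itself),
# so the self-selected sample skews upward.
respondents = [x for x in population if random.random() < 0.05 + 0.08 * x]
survey_mean = sum(respondents) / len(respondents)

print(round(true_mean, 2), round(survey_mean, 2), len(respondents))
```

With this setup the survey mean comes out above the true mean even though thousands of people “responded”; the problem is who responds, not just how many.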
<p>The UC’s are heavily represented because of the sheer number of applications they get in-state, which makes them uber-selective, even though many of them are rather mundane as universities go… not deserving of the top 50, in my view. Who cares what USNWR thinks? Your opinion is as valid as theirs.</p>
<p>“The UC’s are heavily represented because of the sheer numbers of applications they get instate, which makes them uber selective, even though many of them are rather mundane as universities go”</p>
<p>How is UCSB mundane? 25 members of the National Academy of Sciences, 5 Nobel Prize winners on the faculty, the #10-ranked graduate physics program in the country, the #19-ranked engineering program in the country, very few commuter students (96% of students live at least an hour or so away from the school), a party school on the beach, etc. That’s about as far from mundane as you can get.</p>
<p>Wesleyan placed #13 again, behind Vassar and Claremont McKenna. Keep in mind the data is from 2008, so Wes will surely move up in next year’s rankings when the 2009 data is used.</p>