Of course different academic admissions criteria are correlated with one another. The study used regression analysis to separate those correlations and identify which factors were the driving force behind academic success. The whole point of the study was to understand, by regressing on those correlated criteria, why being black was correlated with a lesser degree of academic success. They found that when they controlled for the rest of the application, the regression coefficient for being black dropped very low, indicating that being black was not the driving force behind dropping out of "tough" majors. Instead, the effect could be almost fully explained by differences in the rest of the application, indicating that differences between black and white students in the rest of the application (affirmative action) were the primary cause. A very similar effect occurred with test scores as with being black.
For example, attending a private high school is correlated with a greater degree of HS academic rigor and preparation than attending a public school. So one could try to predict whether a student is going to drop out of a "tough" major by looking at whether they attended a private or public school, and see a significant correlation. However, if you compare public/private among students who had a similar degree of academic rigor and prep, then the regression coefficient for public/private may drop very low, indicating the driving force is whether students have good academic rigor/prep, not whether they attended public or private school. If that result were found, it would suggest that looking at whether students had good academic rigor/prep is a better way to predict dropping out of "tough" majors than looking at school type (see the simulation sketch after the coefficient list below). In the Duke study, they found this same effect for test scores. When they included all available admissions variables, they measured the following regression coefficients for staying in the "tough" major:
Female: -0.19 (0.05)
HS Course Rigor/Prep: -0.15 (0.05)
Harshness of Grading: -0.08 (0.025)
…
Application Essay: -0.07 (0.04)
…
Test Scores: -0.03 (0.03)
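To make the attenuation effect concrete, here is a minimal simulation sketch in Python. Everything in it is a hypothetical illustration of the mechanism, not Duke data: the variable names, effect sizes, and noise levels are all made up, and it uses the public/private school example from above rather than test scores.

```python
# Minimal sketch of coefficient attenuation when a control is added.
# All numbers are hypothetical illustrations, not data from the Duke study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical setup: course rigor/prep drives persistence in "tough" majors,
# and private-school students merely tend to have more rigor/prep.
rigor = rng.normal(size=n)
private = (rigor + rng.normal(scale=1.5, size=n)) > 0   # correlated with rigor
stays = 0.5 * rigor + rng.normal(size=n)                # depends only on rigor

# Regress on school type alone: a substantial coefficient appears.
m1 = sm.OLS(stays, sm.add_constant(private.astype(float))).fit()

# Add the rigor control: the school-type coefficient collapses toward zero,
# because rigor, not school type, is the driving force.
X2 = sm.add_constant(np.column_stack([private.astype(float), rigor]))
m2 = sm.OLS(stays, X2).fit()

print(m1.params[1])  # school-type coefficient, no control (clearly nonzero)
print(m2.params[1])  # school-type coefficient, rigor controlled (near zero)
```

The same logic applies when the variable of interest is test scores rather than school type: if the coefficient collapses once the rest of the application is controlled for, the test score is not the driving force.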
Test scores were among the least influential factors for switching out of a "tough" major. I chose this Duke study because you suggested the study had a completely different conclusion, not because it had a unique conclusion that fit my views. Every single study I am aware of that controls for both a measure of GPA and course rigor came to a similar conclusion. Some more examples are below:
CUNY Study – http://www.aera.net/Newsroom/RecentAERAResearch/CollegeSelectivityandDegreeCompletion/tabid/15605/Default.aspx
“The ATT indicates that there could be a small SAT effect on graduation (2.2 percentage points for a standard deviation increase in college SAT), but this does not reach statistical significance. The ATU is much smaller in magnitude and is not significantly different from zero.”
The Bates test-optional study at http://www.bates.edu/news/2005/10/01/sat-study/ and the NACAC study at http://www.nacacnet.org/research/research-data/nacac-research/Documents/DefiningPromise.pdf found no notable difference in GPA or graduation rate between test-score submitters and non-submitters at test-optional colleges, even though the non-submitters had significantly lower test scores than the submitters.
Even studies controlling for just HS GPA and SES (not course rigor) found that the SAT I added relatively little additional information. For example, the Geiser UC studies found that a prediction model using HS GPA, SES, and SAT I could explain only ~4% more of the variation in cumulative college GPA than a model using just HS GPA and SES. Had they also included a control for curriculum, the SAT would almost certainly have added far less than the measured 4%.
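Here is a minimal sketch of that incremental-R² comparison, again with simulated data; the coefficients and noise levels are hypothetical, not Geiser's, and are chosen only so that the SAT largely reflects the same information as HS GPA and SES.

```python
# Minimal sketch of an incremental-R^2 comparison: how much extra variance
# in college GPA does SAT I explain beyond HS GPA and SES? All effect sizes
# below are hypothetical, not from the Geiser UC studies.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 10_000

hs_gpa = rng.normal(size=n)
ses = rng.normal(size=n)
# Hypothetical: SAT I is largely a recombination of HS GPA and SES.
sat = 0.7 * hs_gpa + 0.5 * ses + rng.normal(scale=0.6, size=n)
college_gpa = 0.6 * hs_gpa + 0.3 * ses + 0.1 * sat + rng.normal(size=n)

base = sm.OLS(college_gpa,
              sm.add_constant(np.column_stack([hs_gpa, ses]))).fit()
full = sm.OLS(college_gpa,
              sm.add_constant(np.column_stack([hs_gpa, ses, sat]))).fit()

# Incremental variance explained by adding SAT I to the baseline model:
# small relative to what HS GPA and SES already explain.
print(base.rsquared, full.rsquared, full.rsquared - base.rsquared)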
In the United States, the use of IQ tests in admissions is not a significant political issue that I am aware of, but one does not need a particular political position to come to this conclusion. The authors of the studies I referenced were quite explicit about how to interpret their results, and that interpretation should not vary with political views.
Considering the number of references I've provided in this thread and others, by now you should have a clear view of where the information is coming from.
The study you referenced compares the correlation between a cognitive ability test and an SAT-like achievement test. I don't doubt that such a correlation exists. My claim was that every study I have ever seen that includes measures of both HS GPA and HS course rigor/preparation found that SAT I scores add little to the prediction of academic success during college, including the earlier Duke study you referenced that triggered this tangent.