If high-stakes admission is primarily test-based, then the test will be cracked. Randomized tests are one possible part of the solution, but not a complete one. One reason I suspect they reuse tests is that each question is vetted and scaled; this is how the test becomes "standardized." Internal validity comes primarily from vetting individual questions with very specific wording in a very specific order, I suspect, and that is what makes creating these tests costly. Randomization is like flipping quarters. One kid could get 50 heads in a row and then 50 tails, or one child could get 100 heads in a row and the next child 100 tails. On average the tests come out the same, but the individual children's experiences are vastly different. Weighting each question by difficulty might solve some of this, but not all of it.
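The coin-flip analogy can be put in numbers. A rough simulation (the 70% per-question ability and the 100-question test are made-up figures, not any real exam's design) shows how much one student's raw score drifts between randomly drawn sittings even though ability never changes:

```python
import random

random.seed(0)

def sitting(p_correct: float, n_questions: int = 100) -> int:
    """Raw score on one randomly drawn test, treating every
    question as an independent coin flip with bias p_correct."""
    return sum(random.random() < p_correct for _ in range(n_questions))

# One student with fixed true ability (70% per question), many sittings
scores = [sitting(0.70) for _ in range(10_000)]
mean = sum(scores) / len(scores)
spread = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5

print(f"mean raw score: {mean:.1f}")   # close to 70
print(f"std deviation:  {spread:.1f}") # close to sqrt(100*0.7*0.3), about 4.6
```

The average over many sittings is stable, but any single sitting can easily land several questions above or below it, which is exactly the "same on average, vastly different for the individual" problem.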
Then you have the problem of the industry of people taking the tests for others. Security at testing sites globally is poor. If you travel to Hong Kong to take the test and there are several thousand people in a room, which is a common experience, cheating is nearly impossible to monitor.
Now there are programs that let you photograph a math question and return not just the answer but the steps to reach it, in case you need to "show your work." In a room of several thousand people, monitoring such activity is very hard.
The fundamental issue, it seems, is that these tests are given so much weight. Too much weight, in my opinion. The value of cheating is therefore high. If the weight of these tests were truly reduced, the value of cheating would drop with it.
Cheating is just the latest indicator of how invalid these tests are. They have been deemed invalid for a long time, in that they are strongly associated with income and race.
Confounding this is that universities now seeking tuition dollars from abroad will not properly vet students for cheating, or for genuine ability to perform at that school beyond test scores. They rely on these tests to support "readiness" assessments and to admit the student, and that cash flow. Students from abroad can pay as much as three times what an in-state student pays. Public spending on college is lower than ever, and caps on in-state tuition are locked in place. Where else are public schools going to look for funding?

Then there are the privates: on the small end, many are barely surviving, and one lifeline is foreign students, who pay top dollar. The tests help foreign students compete with other foreign students for entry to the US, and they give US institutions a way to approve one student over another, perhaps domestic, student who costs the college more. At the top end, I know it looks like colleges are lofty places where dollars, branding, and competition don't matter, but this is not true. Princeton competes with Harvard and Stanford for top students. You betcha it's holding onto that brand and its top rating partly through SAT scores, the higher the better in terms of perceived competitiveness. That perceived competitiveness based on scores is what brings in the next generation of uber-competitive students and maintains the school's "brand."
I realize you probably meant this facetiously, but for those without a statistical background: while all three scenarios are possible, they are far less likely than getting a perfect NCAA bracket last year and winning the $1B prize.
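For the curious, the arithmetic behind that comparison is short (assuming a fair coin, and, generously, a 50/50 guess on each of the 63 games in an NCAA bracket):

```python
# Each fair coin flip halves the probability; same for a 50/50 bracket pick.
p_100_heads = 0.5 ** 100  # one specific 100-flip sequence
p_bracket = 0.5 ** 63     # 63 games, pure coin-flip guessing

print(f"100 heads in a row:      {p_100_heads:.2e}")  # 7.89e-31
print(f"perfect 63-game bracket: {p_bracket:.2e}")    # 1.08e-19
print(f"the bracket is 2**37 = {p_bracket / p_100_heads:.3g}x more likely")
```

So even the billion-dollar bracket, famously never won, is about a hundred billion times more likely than one child flipping 100 straight heads.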
A standardized test is a useful but imperfect measure of ability, and instead of a discrete score, we should be thinking about ranges. Someone getting a 2200 will likely get somewhere between 2150 and 2250 next time (assuming no additional prep). Given that it is a range, randomized tests can be every bit as accurate as a fixed test in estimating a student’s ability.
At the upper end of the score range, though, large fluctuations occur, and the score distribution matters. For instance, if the 2200 is attained with an 800 in math, a single wrong answer at the next sitting can drop a student to 2150. If the 2200 is attained with a 750, eliminating the one wrong answer brings that student to 2250. On the other hand, an extra right or wrong answer in CR or Writing will matter less, as will right or wrong answers in math with a score in the 500-600 range, where a change of one answer makes only a 20-30 point difference.
This of course only adds to your point, which is that ranges are a better indicator than hard scores, but it's always struck me as another odd aspect of the SAT. If there are too many students getting 800s (which is why the math curve is so punishing at the upper end), why not make the math section harder?
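One way to see the asymmetry is with a toy raw-to-scaled conversion table. The numbers below are invented for illustration (real College Board curves vary by administration), but they match the shape described above: a steep per-question penalty near 800 and a gentler one mid-range.

```python
# Hypothetical raw-score -> scaled-score table for a 54-question math section.
# Values are illustrative only, NOT an actual College Board curve.
curve = {
    54: 800,  # perfect raw score
    53: 750,  # one miss costs 50 points at the top
    52: 730,
    31: 560,
    30: 550,
    29: 530,  # one miss costs about 20 points mid-range
}

def one_miss_cost(raw: int) -> int:
    """Scaled points lost by missing one more question from raw score `raw`."""
    return curve[raw] - curve[raw - 1]

print(one_miss_cost(54))  # 50
print(one_miss_cost(30))  # 20
```

The same one-question slip is worth more than twice as many scaled points for the student sitting at a perfect raw score, which is why top-end scores swing so much between sittings.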
Didn't read the whole article, but didn't see anything about how some public schools are taking the test on the Wednesday before everyone else takes it on Saturday… the same exact test. Then they're posting information about it, such as the essay question, where anyone can look it up and find it.
The SAT is a COMPLETE JOKE, at least in terms of the College Board’s inexcusable inability to address the mounting security concerns that continue to plague its organization year after year, administration after administration…
The ACT also recycles tests in Asia and faces similar cheating concerns, but it has never faced allegations or scandals on the same massive scale.
Why?
People at ACT actually take their jobs seriously.
Don't listen to this guy marvin100, who may be well informed about test contents as a test-prep coach, but hasn't an inkling of a clue about actual form usage or test administration procedures at either the College Board or the ACT.
In June 2014, when rumors of ACT leakage first surfaced in South Korea, the ACT took drastic preventive measures for its subsequent September administration in South Korea, using two different forms in South Korea alone (60E in South Jeolla-do Province and 69F in the Seoul district), not to mention mixing at least 5 different forms across Asia that month. The message from ACT was clear: whether the rumors were true or not, it was not going to take any chances.
That same year, the College Board used the December 2012 exam internationally in January 2014. That exam had already been recycled on at least two occasions, both in the U.S. and elsewhere, despite overwhelming evidence from the South Korean Prosecutors' Office, Yeok-sam-dong district (Prosecutor **** oversaw the confiscation and seizure of illegal materials from at least 44 teachers/academies in South Korea in 2013, and this exam was among those blacklisted), that it was seriously compromised and should never be used again.
I can certainly give many more concrete examples, backed by factual evidence rather than mere hearsay, but the reason one never hears of any ACT cheating scandals is that the guys at ACT are actually doing their jobs, whereas the College Board... well, do I really need to say anything?
Indeed. Seattle’s public schools, for instance, took the new SAT on Wed. 2nd, before everyone else took it on Sat. 5th. Maybe the entirety of Seattle’s 11th-graders pinky promised not to breathe a word about the test, but it’s hard to understand the CB’s thought process. They’ve brought the same methodology that failed utterly for international SAT sittings (some students take the test well before others, and prep companies can find out which questions are being used) back to the US.
The cynical way to look at this entire issue is to say Asia is a massive market for the SAT, and if it weren’t possible to cheat on the test, more students might take the ACT (currently they lean strongly towards the SAT).
Moreover, what a hoax this whole “lock-boxed” security procedure for the SAT is!
In 2015, when it was announced that the College Board or ETS would be sending "padlocked boxes" to Asia in an effort to thwart the theft of tests from the boxes in which they were delivered, they did so for fewer than 5 schools (Shanghai American School in China and Seoul International School in South Korea being the main two suspected of leakage), while the remaining thousand schools dispersed throughout Asia simply received "taped" boxes from which any teacher or proctor could easily have stolen a test. What a publicity stunt! And one wonders why the May 2015 administration of the SAT was leaked in Egypt, the U.S., and all over the world; I will not even go into what happened in Oct 2015, Nov 2015, Dec 2015, and Jan 2016.
Conversely, the ACT actually dispatched a team of “security agents” throughout Asia to oversee the administration, storage and proper distribution of its tests and provided detailed training to supervisors, not to mention everything else they did to prevent any type of leakage from anywhere in Asia!
The SAT has been dumbed down twice, first in 1994 and now again in 2016. Perfect scores in either section were rare prior to 1995, but particularly in Critical Reading. Back then it was the Scholastic Aptitude Test and was a sufficiently good test of IQ that Mensa accepted it for admission. Now it is much more of a knowledge test that can be prepped for quite easily.
As to why it has become easy, you can think of it optimistically or cynically. Optimistically, you could say that colleges found that scores beyond 750 showed no difference in how successful a student would be at college. Cynically, you could say that certain demographic groups were dominating the scores at the high and low end and compressing the score ranges allowed selective colleges more freedom in holistic admissions.
What about 2005? That’s when the analogies were eliminated and the writing section was introduced – a section very different from the other two and much more coachable.
I took the SAT in 1988, and my kids are taking it now so I lost track of when the writing section was introduced.
So it sounds like they are dumbing it down about every 11 years. In another generation, it will be “See Spot Run” and you will get 800 points for correctly identifying that Spot is the dog.
Parent involvement doesn’t help. My aunts and uncles who have extremely bright kids that aren’t scoring in the top 10% of the nation are HUGE standardized test haters. Is it possible that the tests are being dumbed down to keep people happy because their kids are getting better scores? No one is gonna support/take a test that most people don’t do well on.
When it was the “Scholastic Aptitude Test” back in the 1980s, it was quite coachable. The math section was basically a test of algebra and geometry. The verbal section was mostly an English vocabulary test, since most questions were easy if you knew the words, but difficult if you did not, and there were SAT preparation books filled with words that allegedly appeared on SATs. There were some other test taking skills involved (time management, elementary probability and statistics as applied to the “guessing penalty”), but these were easily coachable.
@a20171 No matter how much they dumb the test down, only 10% of test-takers can be in the top 10%, and half will be below the median. This is one problem the College Board couldn't solve even if the SAT required reading at a 9th-grade level.
As an educator and clinical professional who has to use standardized tests, and as a parent of twin seniors, I can't stress enough how much I despise them in all shapes and forms! The only merit I see is that they provide a general picture of strengths and weaknesses, but in no way, shape, or form should they be used to compare one student against another, which is exactly what they serve to do. I have worked in special education for 20 years and have raised two very different students: one exceptionally bright, in the top 5% with 10 AP courses, and another who is a stellar "B+" student with a standard curriculum. Neither did exceptionally well on the tests, even with hours of test prep invested. I have tested young children who score in the lowest percentiles, only to demonstrate age-appropriate skills several months later. I understand that we need an equal measure of comparison, but test scores should carry very little weight, if any, imho!
So what alternative do you suggest for allowing colleges to compare one school to another? A nearby public high school has average SAT scores approaching 1900, whereas about 10 miles away there is another public high school where the average is 1250. While test scores for individuals may vary, in aggregate they tell a great deal.
Naturally each school has a top 10%. If standardized tests didn’t exist, should selective colleges assume that the top 10% of both schools are equal? Does that do any favors to the top students admitted from the low performing schools that are suddenly overwhelmed in a competitive college?
I suggest that they simply provide a general picture of a student's strengths and weaknesses, are not a true measure of a student's aptitude and potential, and should be no more influential in comparing one applicant against another than any other element of the application. Despite your assumption, hebegebe, that a student from a low-performing school is presumptively going to be overwhelmed at a competitive college, many of these students have been just as competitive, if not more so, than their "nearby public high school" counterparts from "10 miles away." Test scores do not predict success.
My own scores were less than stellar and when I applied to grad school, my scores on the math portion of the GRE were abysmal. Regardless, I was accepted to several top programs in my field including George Washington and Northwestern, because guess what? Math is not a critical skill for my field. I maintained a 4.0 in both my undergraduate and grad programs and have been very successful. The only time I felt “overwhelmed” and needed tutoring was in fact when I had to prepare for the math section of the GRE!
My D was accepted to 2 of her reach schools with scores well below their averages, and I am grateful that these schools placed weight on much more than her scores. I knew she had many positive aspects to her app but that she might be at an extreme disadvantage with her scores and her lack of AP and honors courses. Despite that, she has a strong GPA, strong ECs, and lots of volunteer experience. Thankfully the admissions officers could see past her app's weaknesses and view her more holistically. I am fully confident in her ability to excel at either of these schools, both ranked highly competitive.
Easy, big fella. As a teacher in Asia, I’ve heard from many, many students that ACT cheating is commonplace. Your scenario is interesting, but the fact remains that the ACT continues to recycle tests.
You provided anecdotes that some people do not perform as well as they should on tests. I don’t doubt that.
However, my question still remains: How should colleges compare students from a high performing school to a low performing one? Certainly some high performers can come out of low performing schools, but statistically they are much more likely to come out of the high performing school. Why should the students at the high performing school be penalized for this given the more rigorous competition they face?
Now that my D is a junior, I am starting to look at Naviance scattergrams. What I notice is that highly selective colleges seem to have fairly hard cutoffs for grades but fairly soft cutoffs for SAT scores. For example, almost nobody has been admitted to Harvard from our school with a GPA lower than 4.5, but there seems to be no difference between a 2200 and a 2400 in terms of admission or rejection. This is a limited view, but it suggests that selective colleges use SAT scores to back up the grades.
I would also note that when it comes to standardized testing, US students have it fairly easy. Students can take the SAT or ACT (or both) multiple times, on dates they prefer, anytime from freshman year onwards. This is very different from what most of the rest of the world does, which is that a single test determines your future.