The SAT is going digital, adaptive and shorter. For most test takers, this is good news, although plenty of people would prefer that the SAT disappear entirely. Still, there are some clear advantages to the new approach:

  • Shorter tests put less of a premium on endurance.
  • Test scores will be available much sooner.
  • There will probably be fewer errors in scoring when everything is digital and you don’t have to worry about filling in bubbles and erasing when you change your mind.
  • Giving everyone access to a graphing calculator is a real improvement in terms of fairness and equity.

It’s still early, but not too early, to think about how this will play out. Here are some of the key issues.

How Will the Adaptive Technology Work?

Adaptive tests save time by zeroing in on your skill level. Imagine having to guess a number between one and 100 when you’re told after each guess whether it’s too high or too low. First you’d guess 50, then either 75 or 25, and so on, homing in on the exact number in at most seven guesses. Similarly, adaptive tests ask you harder questions when you answer correctly and easier questions when you answer incorrectly, adjusting their estimate of your ability along the way.
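
To make the analogy concrete, here’s a toy version of that guessing game in Python. It illustrates only the analogy, not anything about how the SAT actually picks questions.

```python
# Toy version of the guessing game described above: binary search over 1-100.
# This illustrates the analogy only; it is not how the SAT selects questions.
def guesses_needed(secret, low=1, high=100):
    count = 0
    while True:
        guess = (low + high) // 2
        count += 1
        if guess == secret:
            return count
        if guess < secret:   # "too low" -> search the upper half
            low = guess + 1
        else:                # "too high" -> search the lower half
            high = guess - 1

# Every number from 1 to 100 is found in at most seven guesses.
print(max(guesses_needed(n) for n in range(1, 101)))  # 7
```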

Individual test questions are imperfect indicators of a student’s ability, because the test measures a lot of different skills and people tend to be stronger in some areas than in others. People also sometimes make lucky guesses or miss questions they could normally handle. Still, when the questions are written well, you can get a pretty strong read on a student’s overall skill in 20 questions or fewer. That makes adaptive tests much shorter than their linear counterparts. I once heard a researcher for the GMAT claim that they could get accurate scores in about 10 questions, but that they didn’t shorten the test that far because no one would believe a score arrived at so quickly.
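
For a rough sense of how a score estimate can settle down despite noisy answers, here is a minimal simulation assuming a simple one-parameter (Rasch) response model and a crude, shrinking-step update. The model, the step rule and the numbers are my own illustrative choices, not the College Board’s scoring; the point is only that the estimate drifts toward the true ability even though any single answer is an unreliable signal.

```python
import math
import random

# Minimal sketch of an adaptive test under a one-parameter (Rasch) model.
# Illustrative assumptions only; this is not any real test's algorithm.

def p_correct(ability, difficulty):
    """Chance of a correct answer when ability and difficulty share one scale."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def simulate_adaptive_test(true_ability, n_items=20, step=1.0, seed=None):
    rng = random.Random(seed)
    estimate = 0.0                    # start at an "average" ability
    for i in range(1, n_items + 1):
        difficulty = estimate         # serve an item near the current estimate
        correct = rng.random() < p_correct(true_ability, difficulty)
        # Nudge the estimate up or down, with smaller nudges as items accumulate.
        estimate += (step / i) * (1 if correct else -1)
    return estimate

# A strong test taker (true ability +1.5) after 20 items:
print(round(simulate_adaptive_test(true_ability=1.5, n_items=20, seed=0), 2))
```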

The SAT is promising to use some version of this, but the details matter. Some adaptive tests are so front-loaded that performance early is much more important than performance later. The application makes up its mind relatively quickly and doesn’t change it easily, so if you dig yourself a hole early, you may never get out. Conversely, if you do better than you should at the start, even late mistakes won’t sink you.
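
To see why front-loading matters, here is a tiny illustration using a made-up scoring rule whose adjustments shrink as the test goes on, a property many adaptive estimators share. It is not the SAT’s algorithm, but it shows how the same set of answers, in a different order, can land in very different places.

```python
# Hypothetical front-loaded scoring rule: each answer moves the estimate,
# but later answers move it less. Not the SAT's actual scoring.

def final_estimate(answers, step=1.0):
    estimate = 0.0
    for i, correct in enumerate(answers, start=1):
        estimate += (step / i) * (1 if correct else -1)
    return round(estimate, 2)

rough_start  = [False] * 5 + [True] * 15   # five early misses, strong finish
rough_finish = [True] * 15 + [False] * 5   # strong start, five late misses

print(final_estimate(rough_start))    # about -0.97: the early hole never closes
print(final_estimate(rough_finish))   # about  3.04: late mistakes barely register
```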

Students who know how the system works can make better decisions about how to use their time on earlier questions, before the application has formed an opinion of their skill level. A related issue is what the system does with unanswered questions. Early versions of the adaptive GRE let you quit whenever you wanted with no penalty. Savvy test takers used this to spend more time on early questions and quit as soon as possible, artificially inflating their scores. Then the test maker changed the scoring to make unanswered questions even more damaging than wrong answers. It did this without telling anyone, and lots of people crashed and burned. Similarly, there will be legitimate questions about how the new SAT functions and what that means for test takers. As usual, people with better information will have an edge.

If the GRE and GMAT are any indication, adaptive SAT scores will be volatile, meaning that we may see wild swings in either direction. If the application gives you hard questions you can handle, you may zoom to the top and stay there. But if you botch things early, or get hit with a passage that you just don’t get, you may sink and not be able to get back up.

Test Security

Adaptive tests require a larger pool of questions than linear tests because each student sees only a fraction of the pool. But how often will the questions turn over? Will every administration day have its own pool of questions? That would be secure, but if questions are exposed and reused, there will be security concerns, especially if the system tends to pick the questions that provide the most information about test takers’ abilities. When I worked for Kaplan in the 1990s, we discovered that the test was reusing questions and favoring the “best” ones. We sent in a bunch of people to see how deep the pool was and found that we could reconstruct basically the entire pool with just a few dozen test takers. We didn’t tell any examinees about this, of course. Instead, we told ETS, the test maker, about the issue. They then sued us for copyright infringement, among other things. Maybe it would have been naïve to expect a thank-you, but that seemed excessive. Anyway, it’s been 25-plus years since then, and the test maker knows that this is an issue, but you can bet that ambitious and unscrupulous people will probe for any and all weaknesses.
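
As a back-of-the-envelope illustration of why reuse is risky, here’s a sketch of how quickly a shared pool gets exposed when every sitting draws items from it. The pool size and items per sitting are invented for the example, not figures from any real exam; and if the selector also favors its most informative items, the questions examinees actually see repeat even more often, shrinking the effective pool further.

```python
import random

# Rough sketch: how many sittings does it take to see most of a reused pool?
# Pool size and items per sitting are made-up numbers for illustration.

def sittings_to_see_most(pool_size=500, items_per_sitting=40,
                         target_fraction=0.95, seed=1):
    rng = random.Random(seed)
    seen = set()
    sittings = 0
    while len(seen) < target_fraction * pool_size:
        seen.update(rng.sample(range(pool_size), items_per_sitting))
        sittings += 1
    return sittings

# With these made-up numbers, a few dozen sittings expose most of the pool.
print(sittings_to_see_most())
```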

Test Content

The College Board has promised that “The digital SAT Suite will continue to measure the knowledge and skills that students are learning in school and that matter most for college and career readiness.” That would seem to indicate that the test content won’t be changing, which is good news if you like the current test and bad news if you think (as I do) that it measures the wrong things and serves as an enforcement mechanism for the Common Core.

As it stands, the test is a ruthless test of compliance: of highly detail-oriented reading and of performing the same procedures over and over instead of thinking critically. It’s fair in that it’s the same bad test for everyone, and it’s relevant in that the skills it measures matter at least a little, but it would have been nice to look at some opportunities for improvement. Maybe next time.

Practice

The adaptive format makes it much harder for anyone other than the College Board to create accurate practice exams. How do you mimic a test if the test maker doesn’t disclose how it works? The digital format also gives the College Board much more control over the number of released test questions that students can find. This could be OK if the College Board offers a good number of practice tests, but what if it doesn’t? There will be nowhere else to go. One of the big problems in current GRE and GMAT prep is that there’s a shortage of official practice tests. You can get a few for free, but the next few are really expensive ($39.95 per test for the GRE and $49.99 for two GMATs).

There are still plenty of GRE and GMAT practice questions, but it’s really hard to predict what your score will be just by working through a stack of them. And while it’s entirely possible that the College Board will behave responsibly and give people what they need to prepare, it’s also possible that it will use its power to steer people toward its preferred resources.

Test Preparation

Full disclosure: I’m a test-prep tutor with lots of SAT students, so I have a bias here. That said, here goes: test changes are usually packaged as if they’ll level the playing field and make paying for test prep less valuable. That’s not what usually happens. People who want a stable, predictable experience put more energy into taking the final administrations of the old version of the test. Then, when everything changes, people taking the new test look for any help they can get. As a result, any change is usually good for people like me. And when a test’s format and scoring are new and confusing, it’s more important than ever to have trustworthy sources of information. Let’s face it: in a world of grade inflation and competitive students applying to 15-plus schools, people are going to search for any edge they can get. So test prep will continue, at least for now. And while I think that helping people do better on tests is a morally legitimate profession, I also think it comes with a responsibility to speak openly about testing and fairness. I hope this essay is a step toward that goal.
