Last week “Pineapple-gate” erupted when eighth graders in New York state emerged from their standardized reading assessments scratching their heads. The section of the test about the pineapple and the hare made no sense, they said. The passage itself was odd and the multiple-choice questions based on it “weird.”

It was actually “nonsense on top of nonsense,” said children’s author Daniel Pinkwater, whose story “The Rabbit and the Eggplant” had been licensed to Pearson, the company that created the eighth-grade reading test as part of its $32 million contract with the state. Pearson altered the characters – changing the rabbit to a hare and the eggplant to a pineapple – and tweaked the ending. The story, which Pinkwater describes as a “fractured tale,” involves a tortoise-and-the-hare-like race, but between a vegetable and a mammal rather than a reptile and a mammal. At the end of Pinkwater’s story, the spectators of the race eat the eggplant. The moral: don’t bet on an eggplant. In the Pearson version, the animals eat the pineapple, believing it has a trick up its sleeve. The moral: pineapples don’t have sleeves.

Now, fans of Pinkwater know his work veers towards the absurd, but it’s not the sort of thing one expects to find on a standardized test. And many of these young fans emailed Pinkwater. “Some kids took me to task; the phrase sellout appeared on my screen,” he told The New York Times.

But educator and high-stakes testing critic Deborah Meier placed the blame elsewhere, telling the newspaper that the pineapple incident was “an outrageous example of what’s true of most of the items on any test, it’s just blown up larger.” She argued that the passage highlights how “right” and “wrong” are often up for debate on these tests: “The ‘right’ answer is the one that field testing has shown to be the consensus answer of the ‘smart’ kids. It’s a psychometric concept,” Meier said.

In the past week alone, other errors have joined the sleeveless pineapple: wrong answers on the New York math test, wrong answers on the Florida science test.

Concerns about accuracy are just one of the factors fueling protests against high-stakes testing in the American K–12 school system. But just the day before the Pineapple-gate story broke, NYT columnist David Brooks made his case for instituting more standardized testing in higher education.

“It’s not enough to just measure inputs, the way the U.S. News-style rankings mostly do,” he writes. “Colleges and universities have to be able to provide prospective parents with data that will give them some sense of how much their students learn. There has to be some way to reward schools that actually do provide learning and punish schools that don’t.” As such, Brooks argues that it’s time to bring “value-added assessment” to colleges. This is the controversial method being used to rate K–12 teachers’ performance, using test data to purportedly demonstrate how much “value,” if you will, a teacher adds to students’ academic gains. By looking at students’ previous test scores, researchers have developed models that predict how much improvement is expected over the course of a school year. Whether students perform better or worse than expected is then attributed to the impact of a particular teacher.
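To make concrete what that kind of model is doing, here is a minimal sketch – in Python, with entirely made-up scores and a hypothetical two-teacher roster; the models states actually use are far more elaborate and proprietary. The idea: predict each student’s current score from last year’s score, and treat a teacher’s average residual – how far their students beat or missed the prediction – as that teacher’s estimated “value added.”

```python
# A toy illustration of the value-added idea, NOT any state's or
# Pearson's actual model. All scores and teacher labels are invented.
import numpy as np

# Hypothetical data: last year's score, this year's score, teacher.
prior   = np.array([310, 295, 330, 280, 315, 300, 290, 325])
current = np.array([325, 300, 345, 285, 320, 315, 295, 340])
teacher = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Fit a simple linear prediction of this year's score from last year's.
slope, intercept = np.polyfit(prior, current, 1)
predicted = slope * prior + intercept

# A student's residual: how far they beat (or missed) the prediction.
residuals = current - predicted

# A teacher's "value added" is their students' average residual.
for t in np.unique(teacher):
    print(f"Teacher {t}: {residuals[teacher == t].mean():+.2f} points vs. expected")
```

Everything controversial about value-added assessment lives in the gap between this toy and reality: noisy tests, students who aren’t randomly assigned to teachers, and the leap from a statistical residual to a judgment about one person’s performance.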

Brooks points to the findings of Academically Adrift and its oft-cited statistics about college students’ failure to learn – including that “nearly half the students showed no significant gain in critical thinking, complex reasoning and writing skills during their first two years in college.”

But Academically Adrift faces its own set of critics, many of whom challenge the data, the methodology, and the conclusions the book draws. Part of the problem: its reliance on a standardized test – the Collegiate Learning Assessment (CLA) – one that is administered by Pearson and graded by robots. “Automated scoring solutions” is the branding Pearson uses. “Robo-graders” is my preferred terminology.

“Robo-graders” have been in the news a lot lately, thanks to the Shermis and Hamner research that argues automated essay grading software performs just as well as humans. As University at Buffalo English professor Alex Reid contends, “If computers can read like people it’s because we have trained people to read like computers.”  

Clearly those eighth graders who balked at the pineapple question on their reading assessment hadn’t learned to think that way yet. But with the possibility of years more standardized testing ahead of them, maybe they will squelch their love of the absurd and creative Pinkwater and learn to fill in the blanks and formulate their five-paragraph essays.

The moral: pineapples don’t have sleeves.
