A team of researchers at the University of Colorado at Boulder and the University of Pennsylvania has created AI tools to help admissions officers analyze students’ application essays.
The tools help admissions officers identify seven key traits in essays, including teamwork, perseverance, intrinsic motivation and willingness to help others. The researchers published their study in October and included cautionary notes about the new technology.
“Humans have limitations,” Benjamin Lira, a doctoral student in psychology at Penn and co-author of the study, said in a statement. “You’re not going to read the first essay of the day in the same [way] you read the last one before lunch.”
The researchers analyzed more than 300,000 applications submitted in 2008 and 2009, each of which included a 150-word essay. The team fed the essays into artificial intelligence models trained on admissions officers’ ratings to detect the seven traits, going beyond simply spotting keywords like “leadership.”
“If I say, ‘I donated clothing to a homeless shelter,’ the AI will tell me that it has a 99.8 percent probability of showing prosocial purpose,” Lira said. “But if I say something like ‘I like cheese,’ that drops to less than 1 percent.”
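To make those probability outputs concrete, here is a minimal sketch of how such a trait classifier could work, assuming a simple bag-of-words model built with scikit-learn; the researchers’ actual models are not described in detail here, and the training snippets and labels below are invented for illustration.

# A minimal sketch of trait classification as the article describes it.
# The example essays and labels are hypothetical, not the study's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: essay snippets labeled by human raters
# (1 = shows "prosocial purpose", 0 = does not).
essays = [
    "I donated clothing to a homeless shelter",
    "I organized a food drive for my neighborhood",
    "I volunteered weekly at a senior center",
    "I like cheese",
    "My favorite class is calculus",
    "I spent the summer playing video games",
]
labels = [1, 1, 1, 0, 0, 0]

# Train a classifier that outputs a probability rather than a yes/no,
# mirroring the percentage-style output Lira describes.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(essays, labels)

for text in ["I donated clothing to a homeless shelter", "I like cheese"]:
    p = model.predict_proba([text])[0][1]  # probability of "prosocial purpose"
    print(f"{text!r}: {p:.1%} prosocial purpose")

A real system would be trained on hundreds of thousands of rated essays and a far more capable model, but the principle is the same: the classifier scores each essay for each trait rather than matching keywords.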
The project was done, in part, to help admissions offices address implicit bias. Other AI-focused technologies, like facial recognition software, have demonstrated issues with bias.
“Our paper shows that AI doesn’t need to be a biased black box, as it has been in a lot of other situations,” Lira said. “You can actually have AI that advances the aims of holistic admissions, which looks at applicants as a whole and not just their grades or test scores.”
The researchers found that the brief essays can be predictive of students’ success at the university. For example, students whose essays included examples of leadership were more likely to graduate within six years than those whose essays did not.
The researchers did caution that as students—particularly higher-income students—grow savvier with technologies such as ChatGPT, they could tailor their essays to what they believe will yield the best results.
The AI algorithm can also make mistakes. For example, in the study, an essay stating that the writer donated heroin to a children’s shelter still received a very high score in the “prosocial purpose” category.