In 2013, the computer science department at the University of Texas at Austin started using a homemade machine learning algorithm to help faculty make graduate admissions decisions. Seven years later, the system was abandoned amid criticism that it should never have been used.
The algorithm, trained on previous admissions decisions, saved faculty members’ time. It treated signals such as attendance at an “elite” university or a letter of recommendation containing the word “best” as predictive of admission.
The university said the system never made admissions decisions on its own, as at least one faculty member would look over the recommendations. But detractors said that it encoded and legitimized any bias present in admissions decisions.
Today, artificial intelligence is in the limelight. ChatGPT, an AI chatbot that generates human-like dialogue, has created significant buzz and renewed a conversation about what parts of human life and labor might be easily automated.
Despite the criticism leveled at systems like the one previously used by UT Austin, some universities and admissions officers are still clamoring to use AI to streamline the acceptance process. And companies are eager to help them.
“It’s picked up drastically,” said Abhinand Chincholi, CEO of OneOrigin, an artificial intelligence company. “The announcement of GPT — ChatGPT’s kind of technology — now has made everyone wanting AI.”
But the colleges interested in AI don’t always have a clear idea of what they want to use it for, he said.
Chincholi’s company offers a product called Sia, which provides speedy college transcript processing by extracting information like courses and credits. Once trained, it can determine what courses an incoming or transfer student may be eligible for, pushing the data to an institution’s information system. That can save time for admissions officers, and potentially cut university personnel costs, the company said.
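For readers curious about the mechanics, that kind of transcript extraction might look, in very rough outline, like the Python sketch below. The line format, regular expression and prerequisite map are all invented for illustration; OneOrigin has not published how Sia actually works.

```python
import re
from dataclasses import dataclass

@dataclass
class CourseRecord:
    code: str       # e.g. "MATH 101"
    credits: float

# Hypothetical pattern for transcript lines like "MATH 101  Calculus I  3.0  A-"
COURSE_LINE = re.compile(r"(?P<code>[A-Z]{2,4}\s?\d{3})\s+.+?\s+(?P<credits>\d+(?:\.\d)?)\b")

def extract_courses(transcript_text: str) -> list[CourseRecord]:
    """Pull course codes and credit hours out of raw transcript text."""
    records = []
    for line in transcript_text.splitlines():
        match = COURSE_LINE.search(line)
        if match:
            records.append(CourseRecord(match.group("code"), float(match.group("credits"))))
    return records

def eligible_courses(completed: list[CourseRecord],
                     prerequisites: dict[str, set[str]]) -> set[str]:
    """Return courses whose prerequisites are all covered by completed coursework."""
    done = {record.code for record in completed}
    return {course for course, prereqs in prerequisites.items() if prereqs <= done}
```

The output of something like `eligible_courses` is the sort of data that could then be pushed into an institution’s student information system, as the company describes.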
Chincholi said the company is working with 35 university clients this year and is in the implementation process with eight others. It’s fielding about 60 information requests monthly from other colleges. Despite the ongoing questions some have about new uses of AI, Chincholi believes Sia’s work is firmly on the right side of ethical concerns.
“Sia gives clues on whether to proceed with the applicant or not,” he said. “We would never allow an AI to make such decisions because it is very dangerous. You are now playing with the careers of students, the lives of students.”
Other AI companies go a little further in what they’re willing to offer.
Student Select is a company that offers universities algorithms to predict their admissions decisions.
Will Rose, chief technology officer at Student Select, said the company typically begins by looking at a university’s admissions rubric and its historical admissions data. Its technology then sorts applicants into three tiers based on their likelihood of admission.
Applicants in the top tier can be approved by admissions officers more quickly, he said, and they get acceptance decisions sooner. Students in other tiers are still reviewed by college staff.
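In broad strokes, that kind of tiering can be expressed as a standard classification exercise, as in the hedged Python sketch below. The model choice, features and 0.8/0.4 cutoffs are placeholders, not a description of Student Select’s actual system.

```python
# Illustrative only: a generic classifier trained on historical decisions,
# with applicants binned into three tiers by predicted admit probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_admissions_model(X_hist: np.ndarray, admitted: np.ndarray) -> LogisticRegression:
    """Fit on historical applications (rows of features) and outcomes (1 = admitted)."""
    return LogisticRegression(max_iter=1000).fit(X_hist, admitted)

def assign_tier(model: LogisticRegression, applicant: np.ndarray) -> str:
    """Bin a single applicant by the model's predicted probability of admission."""
    p = model.predict_proba(applicant.reshape(1, -1))[0, 1]
    if p >= 0.8:   # invented cutoff
        return "tier 1: likely admit, fast-tracked to an officer for sign-off"
    if p >= 0.4:   # invented cutoff
        return "tier 2: standard staff review"
    return "tier 3: full staff review"
```

Note that even in this toy version, every tier still ends with a human reviewer, matching Rose’s description.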
Student Select also offers colleges what Rose described as insights about applicants. The technology analyzes essays and even recorded interviews to find evidence of critical thinking skills or specific personality traits.
For example, an applicant who uses the word “flexibility” in response to a specific interview question may be expressing an “openness to experience,” one of the personality traits that Student Select measures.
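A crude caricature of that keyword-to-trait mapping is sketched below. The lexicon and trait names here are invented for the example; a production system would presumably rely on far richer language models than simple word counts.

```python
# Toy illustration of keyword-based trait scoring; the cue words are made up.
TRAIT_LEXICON = {
    "openness to experience": {"flexibility", "curious", "novel", "explore"},
    "conscientiousness": {"plan", "deadline", "organized", "thorough"},
}

def trait_scores(answer: str) -> dict[str, int]:
    """Count how many of each trait's cue words appear in an interview answer."""
    tokens = {word.strip(".,!?").lower() for word in answer.split()}
    return {trait: len(cues & tokens) for trait, cues in TRAIT_LEXICON.items()}

print(trait_scores("I value flexibility and love to explore novel problems."))
# {'openness to experience': 3, 'conscientiousness': 0}
```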
“Our company started back over a decade ago as a digital job interviewing platform so we really understand how to analyze job interviews and understand traits from these job interviews,” Rose said. “And over the years we’ve learned we can make the same kind of analysis in the higher ed realm.”
Student Select has contracts with about a dozen universities to use its tools, Rose said. Though he declined to name them, citing contract terms, Government Technology reported in April that Rutgers University and Rocky Mountain University are among the company’s clients. Neither university responded to comment requests.
A black box?
Not everyone thinks the use of this technology by admissions offices is a good idea.
Julia Stoyanovich, a computer science and engineering professor at New York University, advised colleges to steer clear of AI tools that claim to make predictions about social outcomes.
“I don’t think the use of AI is worth it, truly,” said Stoyanovich, who is the co-founder and director of the Center for Responsible AI. “There’s no reason for us to believe that their pattern of speech or whether or not they look at the camera has anything to do with how good a student they are.”
Part of the issue with AI is its inscrutability, Stoyanovich said. In medicine, doctors can double-check AI’s work when it flags things like potential cancers in medical images. But there’s little to no accountability when AI is used in college admissions.
Officers may think the software is selecting for a specific trait when it is actually keying on something spurious or irrelevant.
“Even if somehow we believe that there was a way to do this, we can’t check whether these machines work. We don’t know how somebody would have done who you didn’t admit,” she said.
When the algorithms are trained on past admissions data, they repeat any biases that were already present. But they also go a step further by sanctioning those unequal decisions, Stoyanovich said.
Moreover, errors in algorithms can disproportionately affect people from marginalized groups. For example, Stoyanovich pointed to Facebook’s method for determining whether names were legitimate, which got the company into hot water in 2015 for kicking American Indian users off the platform.
Finally, admissions staff may not have the training to understand how the algorithms work and what sort of determinations it’s safe to make from them.
“You need to have some background at least to say, ‘I am the decision-maker here, and I am going to decide whether to take this recommendation or to contest it,’” Stoyanovich said.
With the rapid growth of generative AI systems like ChatGPT, some researchers worry about a future where applicants use machines to write essays that will be read and graded by algorithms.
Having essays read by machines is going to provide “even more impetus to have students generate them by machine,” said Les Perelman, a former associate dean at the Massachusetts Institute of Technology who has studied automated writing assessment. “It won’t be able to identify if it was original or just generated by ChatGPT. The whole issue of writing evaluation has really been turned on its head.”
Being careful
Benjamin Lira Luttges, a doctoral student in the University of Pennsylvania’s psychology department who is researching AI in college admissions, said some of the issues with the technology stem from human shortcomings.
“Part of the reason admissions is complicated is because it is not clear that as a society we know exactly what we want to maximize for when we’re making admissions decisions,” Lira said via email. “If we are not careful, we might build AI systems that maximize something that doesn’t match what we as a society want to maximize.”
The use of the technology has its risks, he said, but it also has its benefits. Machines, unlike humans, can make decisions without “noise,” meaning they aren’t influenced by the things that can sway admissions staff, like mood or the weather.
“We don’t have really good data on what is the status quo,” Lira said. “There might be potential for bias in algorithms and there might be things we don’t like about them, but if they perform better than the human system, then it could be a good idea to start progressively deploying algorithms in admissions.”
Rose, at Student Select, acknowledges that there are risks to using AI in admissions and hiring. Amazon, he noted, scrapped its own hiring algorithm after discovering the tool discriminated against women.
But Student Select avoids those negative outcomes, he said. The company starts the process with a bias audit of a client’s previous admissions outcomes and regularly examines its own technology. Its algorithms are fairly transparent, Rose said, and can explain what they are basing decisions on.
The analysis produces equal average scores across demographic subgroups, has been validated by external academics and builds on long-established selection methods rather than novel ones, Rose said.
“We use both internal and external researchers to develop this tool, and all these experts are experts in selection,” he said. “Our machine learning models have been trained on a data set that includes millions of records.”
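The subgroup check Rose describes, equal average scores across groups, is simple to express. Below is a minimal sketch, assuming a table with hypothetical `model_score` and `subgroup` columns; real audits involve far more than comparing means.

```python
# A minimal sketch of a score-parity audit; column names are hypothetical.
import pandas as pd

def subgroup_mean_scores(applications: pd.DataFrame,
                         score_col: str = "model_score",
                         group_col: str = "subgroup") -> pd.Series:
    """Average model score per subgroup; large gaps would flag potential bias."""
    return applications.groupby(group_col)[score_col].mean()

# Example: the gaps between these group means are what an audit scrutinizes.
audit = pd.DataFrame({
    "model_score": [0.71, 0.69, 0.70, 0.72],
    "subgroup": ["A", "A", "B", "B"],
})
print(subgroup_mean_scores(audit))
```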
Beyond the ethical questions that come with using AI in admissions, Stoyanovich said there are also practical ones.
When mistakes are made, who will be responsible? Students may want to know why they were rejected and how candidates were selected.
“I would be very careful as an admissions officer, as the director of admissions at a university or elsewhere, when I decide to use an algorithmic tool,” she said. “I would be very careful to understand how the tool works, what it does, how it was validated. And I would keep a very, very close eye on how it performs over time.”