Here’s how the tool works: Imagine someone is deciding which car to buy. First, the user rates the importance of several criteria, such as cost, reliability, and fuel efficiency. Then the tool asks the user to choose between several pairs of cars to capture their preferences, using AI to determine which questions to ask and in what order.
If there’s a mismatch between the rankings based solely on the user’s stated values and which cars the user actually prefers, the tool will highlight those inconsistencies. The user can then adjust the importance of each criterion to correct the mismatch, or the tool can predict whether a factor is missing.
Perhaps the user unconsciously selected red cars over better options with a different paint job. In that case, the tool can show the user evidence of this bias, so they can either adjust their ranking or add color as an additional criterion. The end result is an optimal and totally explainable top choice.
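The workflow described above can be sketched in a few lines of code. This is a minimal illustration, not the actual tool: the function names, the 0–10 rating scale, and the sample cars and weights are all hypothetical, and the real system uses AI to choose which pairwise questions to ask.

```python
# Minimal sketch (hypothetical names and data) of the workflow described above:
# 1) the user weights each criterion, 2) options are scored on each criterion,
# 3) the stated-value ranking is checked against the user's pairwise choices.

def weighted_scores(cars, weights):
    """Score each car as the weighted sum of its criterion ratings."""
    return {
        name: sum(weights[c] * ratings[c] for c in weights)
        for name, ratings in cars.items()
    }

def find_inconsistencies(scores, preferences):
    """Flag pairwise choices that contradict the stated-value ranking."""
    return [
        (chosen, rejected)
        for chosen, rejected in preferences
        if scores[chosen] < scores[rejected]
    ]

# Illustrative criterion ratings on a 0-10 scale.
cars = {
    "sedan":  {"cost": 8, "reliability": 6, "fuel_efficiency": 7},
    "hybrid": {"cost": 5, "reliability": 7, "fuel_efficiency": 9},
}
weights = {"cost": 0.5, "reliability": 0.3, "fuel_efficiency": 0.2}

scores = weighted_scores(cars, weights)
# Suppose the user picked the hybrid over the sedan in a head-to-head choice,
# even though the sedan scores higher on their stated weights:
conflicts = find_inconsistencies(scores, [("hybrid", "sedan")])
# A non-empty list signals a mismatch the user can resolve by adjusting
# the weights or by adding a missing criterion (such as color).
```

Here the conflict list would be non-empty, mirroring the case in the article where revealed preferences (say, for red cars) contradict the stated criteria until the user re-weights or adds a criterion.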
Users can also turn off the AI function entirely for sensitive applications where using AI may be inappropriate.
“One of the most important parts of this project is not to use AI to make decisions for us, but to use AI to help us think through what we want,” Zhang said.
Zhang and Davis tested the tool in two case studies. First, they asked four participants to rank a series of short films. The individuals reported that the tool helped them move from making intuitive or emotional judgments about the films to applying specific criteria.
In the second experiment, they asked four TAs to rank 10 student projects from a previous computer graphics course. The rankings ultimately agreed with the students’ assigned grades, and were highly consistent among the four TAs, suggesting the tool yields accurate, repeatable assessments.
Davis now uses the decision-making tool, which is publicly available, to grade projects in his current class – with the AI function turned off.
“It’s for decisions where the stakes are high,” he said, “and the value of making a better decision is worth the extra rigor.”
Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.