Awardees: David Levari and Dan Gilbert (FAS)
Summary: Awardees will investigate the relationship between instructor performance and advice quality by comparing instructors’ performance on a series of web-based modules with the performance of “students” who completed the same modules with those instructors’ advice.
David Levari and Dan Gilbert, both of the psychology department, collected data in the lab and online to investigate the relationship between instructor performance and advice quality, as well as student perceptions of that relationship. In other words, what information do students rely on to choose instructors, and what information should they use?
They created a series of interactive web-based modules that measured performance in domains such as basic math ability, logical puzzle solving, and hand-eye coordination. They administered these online tasks to a population of “instructors” – hundreds of people who completed the tasks and provided advice on how to maximize performance on them. Next, they distributed the same tasks to thousands of new online participants – “students” – half of whom received advice on completing the tasks and half of whom did not. Ultimately, they discovered that advice from the highest-performing instructors did not improve student performance on the tasks any more than other advice did. Students did not know whether their instructor was a high or low performer. Based on a student survey, they found that although the highest-achieving instructors did not necessarily give better advice, students wanted advice from the highest performers when given the choice.
The majority of the awardees’ time and budget was spent developing appropriate, measurable tasks and managing the logistics of delivering them to participants online. Levari and Gilbert believe that, with this strong foundation of materials in place, they can reuse them for future research. They also hope to run another iteration as a field experiment, using a dart game to study the same principle.