Understanding the usefulness and usability of ComPAIR for peer review is an ongoing process: researchers and developers at the University of British Columbia actively seek input through a variety of methods to determine best practices.
During the pilot at UBC, instructors, teaching assistants, and students participated in surveys, focus groups, and/or one-on-one interviews, and these revealed common practices that resulted in a more positive learning experience, regardless of discipline. (More detail on the pilot evaluation process and its outcomes is available in our Teaching & Learning Inquiry article.)
As with any learning activity, how assignments were introduced, designed, and integrated in the course significantly impacted the student experience with ComPAIR. The following are the best practices we recommend.
A student handout template (Word document) is available for instructors to download, customize, and use in their courses. This provides a core introduction and instructions recommended for familiarizing students with ComPAIR.
How scoring works:
When ComPAIR assignments are set up to select answer pairs adaptively—that is, to pair answers that are closer in quality over time—the application tracks a score for each answer. Every answer starts with a base score, which then increases or decreases (based on the Elo rating system) to reflect the outcomes of the comparisons the answer competes in. Basically, after every comparison, the winning answer takes points from the losing one, with the difference between the answers' scores at the time of comparison factoring into the total number of points transferred.
Once answers move beyond the base score, higher-rated answers are expected to win more often. So if a higher-rated answer wins (as expected), fewer points are taken from the lower-rated answer's score. Conversely, if a lower-rated answer wins in an upset (unexpectedly), more points will be transferred to the lower-rated answer's score. At the end of this process, if reviewers have generally made the "right" choices for the pairs, stronger answers should end up with higher scores and weaker answers with lower scores.
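The transfer logic above can be sketched with a standard Elo-style update. This is a minimal illustration, not ComPAIR's actual implementation: the base ratings, the scale constant of 400, and the K-factor of 32 are conventional Elo values chosen here for demonstration only.

```python
def expected_win(r_a, r_b):
    """Standard Elo expectation: probability that the answer rated r_a
    beats the answer rated r_b in a single comparison."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(winner, loser, k=32):
    """Return the new (winner, loser) ratings after one comparison.
    The transfer is zero-sum: points gained by the winner are taken
    from the loser, and an unexpected result moves more points."""
    delta = k * (1 - expected_win(winner, loser))
    return winner + delta, loser - delta

# Expected result: the higher-rated answer wins, so few points move.
print(update(1400, 1200))
# Upset: the lower-rated answer wins, so more points move.
print(update(1200, 1400))
```

Running this shows the asymmetry described above: when the favorite wins, only a handful of points change hands, but an upset transfers several times as many, pulling the ratings toward the outcome the reviewers actually chose.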
Given the above functionality, some instructors have chosen to use the scores and the ranking they provide to form all or part of a student's grade for low-stakes assignments, most often by chunking the answer list into rough top, middle, and bottom answer groups.
While anyone is welcome to try the application for crowdsourced grades, we caution against relying on ComPAIR scores without any additional validation.
Our internal research so far indicates that ComPAIR may not reliably map to traditional grading, as it ultimately relies on the skills and training of novices (students) to provide accurate, well-informed rankings of their peers' answers. Many instructors have found it effective to grade students with some combination of their participation in the process (answering, comparing, self-evaluating) and the quality of work (answer, peer feedback) they submit.