Immediately after you submit the first draft of your project, I will send the link to your submitted project’s shared drive folder to ONE of your classmates for review. I will loosely follow a “single-blind” review format: the reviewer will know the author’s identity, but I will not formally identify the reviewer to the author. Of course, in small classes, reviewer anonymity cannot be guaranteed.
Peer reviewers are responsible for providing detailed, constructive feedback (i.e., not just “good job!” – there are always ways we can improve our work) in a helpful and professional tone. In conducting your peer review, think about the steps you have taken so far in your own project and assess what you have learned. For example, here are some things to look for when providing feedback:
Description/Justification: Does the author describe and justify the aims of the reproduction project clearly and effectively? Is the original study included in one of the project folders? Can you find the table or figure in the original study that the author is attempting to reproduce? Are the original study and that specific table/figure described clearly and accurately?
Description of procedures and results throughout the document: Do the authors narrate what they are doing throughout the markdown file? In other words, do they describe each step they take and what they are finding in the data? For example, are their descriptive statistics the same as in the original article, or do they differ? Is anything in the original article unclear enough to make it difficult to decipher exactly how a variable was coded?
Project File Structure: Is the RMarkdown file in the “root” folder of the shared drive? Are there separate and clearly marked folders following best practices (e.g., Data; Articles; Images)? Can you open the RMarkdown file?
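To illustrate the file-structure conventions above, a shared drive folder meeting them might look something like the sketch below. The specific file and folder names are hypothetical examples, not required names; what matters is that the RMarkdown file sits in the root and that supporting materials live in clearly labeled subfolders.

```
project-root/
├── reproduction-project.Rmd   (the RMarkdown file, in the "root" folder)
├── Data/
│   └── original-study-data.csv
├── Articles/
│   └── original-study.pdf
└── Images/
    └── target-figure.png
```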
R Code Reproducibility: After installing any necessary packages, can you successfully run all R code chunks, or does running the code generate errors? If errors are generated, is it immediately obvious what caused them, and can you fix them with minimal effort so you can continue reviewing the remaining chunks? Is there anything you can suggest to the author to improve their R code chunks (e.g., error fixes, efficiency improvements, reproducibility improvements, useful additions)?
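As a reviewer, one way to approach this check is to install any packages the author’s chunks require and then knit the whole file from the project root. The snippet below is a minimal sketch of that workflow; the package names and the filename are placeholders you would replace with whatever the author’s document actually uses.

```r
# Hypothetical reviewer workflow -- package names and filename are
# illustrative, not the author's actual files.

# Install only the packages you are missing from the author's setup chunk:
needed  <- c("tidyverse", "haven")
missing <- needed[!needed %in% rownames(installed.packages())]
if (length(missing) > 0) install.packages(missing)

# Knit the author's file from the project root. If a chunk fails,
# the error message reports the chunk label and the code that broke,
# which is useful detail to include in your written feedback.
rmarkdown::render("reproduction-project.Rmd")
```

Running the render from the root folder (rather than chunk by chunk in an open session) is also a quick test of reproducibility itself: it catches chunks that only work because of leftover objects in the author’s workspace.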