This set of pages contains guidelines for reviewers to help them establish whether reviewed kata or translations adhere to the community's quality standards.
The guidelines should be used by reviewers to verify whether new content about to be introduced to Codewars is of sufficient quality. Conformity to these guidelines is a prerequisite for approving a kata or translation.
All Codewars content is created by the community and for the community. The community is the only mechanism for enforcing and elevating quality standards. All Codewars users are encouraged to suggest improvements to content and to report or fix issues.
Since the guidelines were introduced after many kata had already been created, older kata may fall short of these quality standards. Beta kata and pending translations should be updated to improve their quality as much as possible before approval. Existing approved kata, on the other hand, can be approached more leniently: obvious bugs and severe issues should be fixed, but otherwise the guidelines can be applied gradually as a kata receives new translations, or as issues are raised and fixed.
Reviewers who repeatedly violate or ignore rules and introduce poor-quality content to the system can have their approval privileges revoked and offending content withdrawn.
Effective communication is one of the most important skills of a reviewer. Peer reviews and code reviews have been the subject of extensive research, and countless articles, papers, books, and blogs describe their advantages and disadvantages, how to conduct them, and how not to. The gist of the idea is described very nicely in the section "Tips for the Reviewer" (page 2) of Karl E. Wiegers' paper "Humanizing Peer Reviews", and can be summarized in the context of Codewars as follows:
> Reviewers should focus on what they observed about the product, thoughtfully selecting the words they use to raise an issue. Saying, “I didn’t see where these variables were initialized” is likely to elicit a constructive response; the more accusatory “You didn’t initialize these variables” might get the author’s hackles up. You might phrase your comments in the form of a question: “Are we sure that another component doesn’t already provide that service?” Or, identify a point of confusion: “I didn’t see where this memory block is deallocated.” Direct your comments to the work product, not to the author. For example, say “This specification is missing Section 3.5 from the template” instead of “You left out section 3.5.” Reviewers and authors must work together outside the reviews, so each needs to maintain a level of professionalism and mutual respect to avoid strained relationships.
>
> You do not want your reviews to create authors who look forward to retaliating against their tormentors. Moreover, an author who walks out of a review meeting feeling personally attacked or professionally insulted will not voluntarily submit his work for review again. Bugs are the bad guys in a review, not the author or the reviewers. The leaders of the review initiative should strive to create a culture of constructive criticism, in which team members seek to learn from their peers and do a better job the next time. Managers should encourage and reward those who initially participate in reviews and make useful contributions, regardless of the review outcomes.
A summary and further reading on code reviews can also be found in a helpful article on the Palantir Blog: "Code Review Best Practices".
Additionally, when a review detects issues with the inspected kata or translation, the posts reporting them should contain information helpful for fixing them. Feedback of the form "This and that is wrong." is neither constructive nor helpful to the author. A feedback post should provide some guidance, hints, or links to articles explaining how to fix the problems.