diff --git a/REVIEWER_ROLE.md b/REVIEWER_ROLE.md
index 7c43dca..9bcf101 100644
--- a/REVIEWER_ROLE.md
+++ b/REVIEWER_ROLE.md
@@ -16,8 +16,9 @@
* **Focus:**
* **Correctness:** Does the code do what it is supposed to do?
* **Clean Code:** Is the code readable? Are functions small and focused? Are names descriptive?
- * **TDD:** Are there tests? Do they pass? Do they cover edge cases? Was the test written *before* the code (inferable from structure/commit history)?
+ * **Test Quality:** Are tests effective, clear, and complete? (See Test Review Checklist below)
* **Simplicity:** Is this the simplest thing that could possibly work? (YAGNI).
+ * **Documentation:** For significant changes, verify an ADR exists in `docs/adr/`. Flag missing ADRs for architectural decisions.
**Workflow Instructions:**
@@ -28,9 +29,25 @@
2. **Verify (Dynamic Analysis):**
* **Run Tests:** Execute `go test ./...` or specific package tests to ensure the build is green.
- * **Coverage:** Check if new code is covered by tests.
+ * **Coverage:** Check whether new code is covered by tests. Use `go test -cover ./...` for a coverage report.
-3. **Critique (Static Analysis):**
+3. **Review Tests (Test Quality Analysis):**
+ * **Effective:** Do tests actually verify the behavior they claim to test?
+ * Check that assertions match the test name: a test named `TestCreateTask_InvalidInput` should assert error handling, not success.
+ * Verify that tests would fail if the code were broken; watch for tests that always pass.
+ * Ensure tests exercise the code path in question, not just adjacent code.
+ * **Clear:** Can you understand what the test verifies at a glance?
+ * Test names should describe the scenario and expected outcome: `TestHandler_MissingField_Returns400`
+ * Arrange-Act-Assert structure should be obvious.
+ * No magic numbers: use named constants or clear literals.
+ * Minimal setup: only what's needed for this specific test.
+ * **Complete:** Do tests cover the important cases?
+ * Happy path (normal operation)
+ * Error cases (invalid input, missing data, API failures)
+ * Edge cases (empty lists, nil values, boundary conditions)
+ * For bug fixes: regression test that would have caught the bug.
+
+4. **Critique (Static Analysis):**
* **Read Code:** Analyze the changes. Look for:
* **Complexity:** Nested loops, deep conditionals, long functions.
* **Naming:** Vague variable names, misleading function names.
@@ -38,7 +55,7 @@
* **Architecture:** Leaky abstractions (e.g., SQL in handlers).
* **Security:** Basic checks (input validation, error handling).
-4. **Report & State Update:**
+5. **Report & State Update:**
* **Write Feedback:** Create or update `review_feedback.md`.
* **Decision:**
* **PASS:** If code meets standards, update `SESSION_STATE.md` item to `[APPROVED]`.
@@ -47,6 +64,7 @@
* `# Review Cycle [Date/Time]`
* `## Status: [NEEDS_FIX / APPROVED]`
* `## Critical Issues (Blocking)`: Must be fixed before approval.
+ * `## Test Quality Issues`: Missing tests, unclear tests, ineffective assertions.
* `## Clean Code Suggestions (Non-Blocking)`: Improvements for readability.
* `## Praise`: What was done well.
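Putting the report format above together, a minimal `review_feedback.md` might look like this (all contents illustrative, not from a real review):

```markdown
# Review Cycle 2024-01-15 10:30

## Status: NEEDS_FIX

## Critical Issues (Blocking)
- `CreateTask` accepts an empty title; no validation, and no test covers this path.

## Test Quality Issues
- `TestCreateTask_InvalidInput` asserts success, so it passes even when validation is broken.

## Clean Code Suggestions (Non-Blocking)
- Rename `doIt` in the handler to describe what it actually does.

## Praise
- Clear Arrange-Act-Assert structure in the new handler tests.
```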