path: root/REVIEWER_ROLE.md
author    Peter Stone <thepeterstone@gmail.com>    2026-02-05 10:37:09 -1000
committer Peter Stone <thepeterstone@gmail.com>    2026-02-05 10:37:09 -1000
commit    223c94f52ebaa878f6951ebf7d08754e413bdca7 (patch)
tree      fc02a201fb83ec49fd5978d2ec9bf79f83e5d57c /REVIEWER_ROLE.md
parent    4ac78382343e14c00ba21ee11ee4642255509eef (diff)
Document multi-agent workflow, ADRs, and Agent API
- Add Development Workflow section to DESIGN.md documenting three-role system (Architect, Implementor, Reviewer) with handoff documents
- Update CLAUDE.md with Key Documents section pointing to DESIGN.md, role definitions, and ADRs
- Add ADR-first documentation policy across all role definitions
- Update REVIEWER_ROLE.md with comprehensive test quality checklist
- Document Agent API and completed tasks log in DESIGN.md
- Update database schema table with 5 missing tables
- Update endpoint reference with 10 missing routes
- Create ADRs 002-005 capturing key architectural decisions:
  - 002: Timeline aggregation architecture
  - 003: HTMX write operations pattern
  - 004: Concurrent data fetching with graceful degradation
  - 005: Agent API notification-based authentication
- Add migrations/README.md documenting schema history and 007 gap

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Diffstat (limited to 'REVIEWER_ROLE.md')
-rw-r--r--  REVIEWER_ROLE.md  26
1 file changed, 22 insertions(+), 4 deletions(-)
diff --git a/REVIEWER_ROLE.md b/REVIEWER_ROLE.md
index 7c43dca..9bcf101 100644
--- a/REVIEWER_ROLE.md
+++ b/REVIEWER_ROLE.md
@@ -16,8 +16,9 @@
* **Focus:**
* **Correctness:** Does the code do what it is supposed to do?
* **Clean Code:** Is the code readable? Are functions small and focused? Are names descriptive?
- * **TDD:** Are there tests? Do they pass? Do they cover edge cases? Was the test written *before* the code (inferable from structure/commit history)?
+ * **Test Quality:** Are tests effective, clear, and complete? (See Test Review Checklist below)
* **Simplicity:** Is this the simplest thing that could possibly work? (YAGNI).
+ * **Documentation:** For significant changes, verify an ADR exists in `docs/adr/`. Flag missing ADRs for architectural decisions.
**Workflow Instructions:**
@@ -28,9 +29,25 @@
2. **Verify (Dynamic Analysis):**
* **Run Tests:** Execute `go test ./...` or specific package tests to ensure the build is green.
- * **Coverage:** Check if new code is covered by tests.
+ * **Coverage:** Check if new code is covered by tests. Use `go test -cover ./...` for coverage report.
-3. **Critique (Static Analysis):**
+3. **Review Tests (Test Quality Analysis):**
+ * **Effective:** Do tests actually verify the behavior they claim to test?
+ * Check that assertions match test names - a test named `TestCreateTask_InvalidInput` should assert error handling, not success.
+ * Verify tests would fail if the code were broken - watch for tests that always pass.
+ * Ensure tests exercise the code path in question, not just adjacent code.
+ * **Clear:** Can you understand what the test verifies at a glance?
+ * Test names should describe the scenario and expected outcome: `TestHandler_MissingField_Returns400`
+ * Arrange-Act-Assert structure should be obvious.
+ * No magic numbers - use named constants or clear literals.
+ * Minimal setup - only what's needed for this specific test.
+ * **Complete:** Do tests cover the important cases?
+ * Happy path (normal operation)
+ * Error cases (invalid input, missing data, API failures)
+ * Edge cases (empty lists, nil values, boundary conditions)
+ * For bug fixes: regression test that would have caught the bug.
+
+4. **Critique (Static Analysis):**
* **Read Code:** Analyze the changes. Look for:
* **Complexity:** Nested loops, deep conditionals, long functions.
* **Naming:** Vague variable names, misleading function names.
@@ -38,7 +55,7 @@
* **Architecture:** Leaky abstractions (e.g., SQL in handlers).
* **Security:** Basic checks (input validation, error handling).
-4. **Report & State Update:**
+5. **Report & State Update:**
* **Write Feedback:** Create or update `review_feedback.md`.
* **Decision:**
* **PASS:** If code meets standards, update `SESSION_STATE.md` item to `[APPROVED]`.
@@ -47,6 +64,7 @@
* `# Review Cycle [Date/Time]`
* `## Status: [NEEDS_FIX / APPROVED]`
* `## Critical Issues (Blocking)`: Must be fixed before approval.
+ * `## Test Quality Issues`: Missing tests, unclear tests, ineffective assertions.
* `## Clean Code Suggestions (Non-Blocking)`: Improvements for readability.
* `## Praise`: What was done well.
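A minimal sketch of a `review_feedback.md` following the template above; the issues listed are invented for illustration, not findings from this repository:

```markdown
# Review Cycle 2026-02-05 10:37

## Status: NEEDS_FIX

## Critical Issues (Blocking)
- (example) `CreateTask` ignores the error returned by the database call.

## Test Quality Issues
- (example) `TestCreateTask_InvalidInput` asserts success instead of an error.

## Clean Code Suggestions (Non-Blocking)
- (example) Rename `process` to describe what the function actually does.

## Praise
- (example) Clear Arrange-Act-Assert structure in the new handler tests.
```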