authorPeter Stone <thepeterstone@gmail.com>2026-03-24 21:54:31 +0000
committerPeter Stone <thepeterstone@gmail.com>2026-03-24 21:54:31 +0000
commit407fbc8d346b986bf864452c865282aa726272e2 (patch)
tree274aa7861a6e4316c1919e93d944023d60846b44
parente3954992af63440986bd39cce889e9c62e1a6b92 (diff)
parentb2e77009c55ba0f07bb9ff904d9f2f6cc9ff0ee2 (diff)
fix: resolve merge conflict — integrate agent's story-aware ContainerRunner
Agent added: Store on ContainerRunner (direct story/project lookup), --reference clone for speed, explicit story branch push, checkStoryCompletion → SHIPPABLE. My additions: BranchName on Task as fallback when Store is nil, tests updated to match checkout-after-clone approach. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
-rw-r--r--.agent/worklog.md129
-rw-r--r--internal/api/server.go1
-rw-r--r--internal/api/stories.go65
-rw-r--r--internal/api/stories_test.go45
-rw-r--r--internal/cli/serve.go3
-rw-r--r--internal/executor/container.go50
-rw-r--r--internal/executor/container_test.go23
-rw-r--r--internal/executor/executor.go28
-rw-r--r--internal/executor/executor_test.go96
9 files changed, 359 insertions(+), 81 deletions(-)
diff --git a/.agent/worklog.md b/.agent/worklog.md
index 6fb8033..d747b1c 100644
--- a/.agent/worklog.md
+++ b/.agent/worklog.md
@@ -1,72 +1,91 @@
-# SESSION_STATE.md
+# Worklog
## Current Task Goal
-ADR-007 implementation: Epic→Story→Task→Subtask hierarchy, project registry, Doot integration
+Move Claudomator UI auth into Doot: replace Apache proxy rules with a Doot-side
+reverse proxy, gating `/claudomator/*` behind Doot's session auth.
-## Status: IN_PROGRESS
+## Status: PLAN — awaiting user confirmation
---
-## Completed Items
-
-| Step | Description | Test / Verification |
-|------|-------------|---------------------|
-| Phase 1 | Doot dead code removal: Bug struct, BugToAtom, bug store methods, bug handlers, bug routes, bugs.html template, TypeNote, AddMealToPlanner stub | `go test ./...` in /workspace/doot — all pass (2 pre-existing failures unrelated) |
-| Phase 2 | Claudomator project registry: `task.Project` type, storage CRUD + UpsertProject, seed.go, API endpoints (GET/POST /api/projects, GET/PUT /api/projects/{id}), legacy AgentConfig.ProjectDir/RepositoryURL/SkipPlanning fields removed, container.go fallback removed, fallbackGitInit removed, processResult changestats extraction removed (pool-side only) | `TestCreateProject`, `TestListProjects`, `TestUpdateProject`, `TestProjects_CRUD` — all pass |
+## Plan: Claudomator UI behind Doot auth
+
+### Architecture
+```
+Browser → Apache (SSL) → Doot :38080 → [session auth] → Claudomator :8484
+```
+Apache currently proxies `/claudomator/*` directly to :8484 with no auth.
+Goal: move the proxy into Doot so session middleware gates it.
+Two processes, two systemd units — unchanged.
+Claudomator base-path already hardcoded to `/claudomator` in web/index.html.
+
+### Step 1 — Doot: add `ClaudomatorURL` config
+- `internal/config/config.go` — add `ClaudomatorURL string` (env: `CLAUDOMATOR_URL`, default: `http://127.0.0.1:8484`)
+- Tests: default + override
+
+### Step 2 — Doot: HTTP + WebSocket reverse proxy handler
+- New file: `internal/handlers/claudomator_proxy.go`
+- `httputil.ReverseProxy` for normal requests; WS connection hijacker for upgrades
+- Director strips `/claudomator` prefix from both `URL.Path` AND `URL.RawPath` (handles encoded chars in task names/IDs)
+- Do NOT set `ReadDeadline`/`WriteDeadline` on hijacked WS connections (kills long-lived task monitoring)
+- Preserve `Service-Worker-Allowed` response header so SW scopes correctly under `/claudomator`
+- Tests: HTTP forward, prefix strip, WS tunnel
+
+### Step 3 — Doot: restructure CSRF middleware, mount proxy
+- `cmd/dashboard/main.go`: move CSRF out of global middleware into a route group
+- `/claudomator` → redirect 301 to `/claudomator/` (trailing slash; prevents asset fetch breakage)
+- `/claudomator/api/webhooks/github` → exempt from `RequireAuth` (GitHub POSTs have no session; endpoint does its own HMAC validation)
+- `/claudomator/*` route: `RequireAuth` only (no CSRF — SPA doesn't send Doot's CSRF token)
+- All other routes: wrapped in CSRF group (behavior unchanged)
+
+### Step 4 — Apache: remove Claudomator proxy rules
+- Remove 4 lines from `/etc/apache2/sites-enabled/doot.terst.org-le-ssl.conf`
+- `apache2ctl configtest && apache2ctl graceful`
+
+### Step 5 — Smoke tests
+- Unauthenticated `/claudomator/` → 302 to `/login`
+- `/claudomator` (no slash) → 301 to `/claudomator/`
+- Authenticated: UI loads, task CRUD works, WS live updates, log streaming
+- GitHub webhook POST to `/claudomator/api/webhooks/github` → not redirected to login
+
+### Risks
+- CSRF restructure: verify all existing Doot routes still pass their tests after moving CSRF to a group
+- SecurityHeaders CSP already allows `wss: ws:` — no change needed
+- Claudomator :8484 remains accessible on localhost without auth (acceptable for now)
+- Future: `/claudomator/api/*` technically CSRF-vulnerable from other origins; mitigate later by injecting `XSRF-TOKEN` cookie
---
-## Next Steps (Claudomator tasks created)
-
-Phases 3–6 are queued as Claudomator tasks. See `ct task list` or the web UI.
+## Previous Task: ADR-007 — Epic→Story→Task hierarchy (IN_PROGRESS)
-| Task ID | Phase | Status | Depends On |
-|---------|-------|--------|------------|
-| f8829d6f-b8b6-4ff2-9c1a-e55dd3ab300e | Phase 3: Stories data model | PENDING | — |
-| c8a0dc6c-0605-4acb-a789-1155ad8824cb | Phase 4: Story execution and deploy | PENDING | Phase 3 |
-| faf5a371-8f1c-46a3-bb74-b0df1f062dee | Phase 5: Story elaboration | PENDING | Phase 3 |
-| f39af70f-72c5-4ac1-9522-83c2e11b37c9 | Phase 6: Doot — Claudomator integration | PENDING | Phase 3 |
-
-Instruction files: `scripts/.claude/phase{3,4,5,6}-*-instructions.txt`
-
-### Phase 3: Stories data model (claudomator repo)
-- `internal/task/story.go` — Story struct + ValidStoryTransition
-- `internal/storage/db.go` — stories table + story_id on tasks, CRUD + ListTasksByStory
-- `internal/api/stories.go` — story API endpoints
-- Tests: ValidStoryTransition, CRUD, depends_on auto-wire
+### Completed Items
-### Phase 4: Story execution and deploy (claudomator repo, depends Phase 3)
-- `internal/executor/executor.go` — checkStoryCompletion → SHIPPABLE
-- `internal/executor/container.go` — checkout story branch after clone
-- `internal/api/stories.go` — POST /api/stories/{id}/branch
-
-### Phase 5: Story elaboration (claudomator repo, depends Phase 3)
-- `internal/api/elaborate.go` — POST /api/stories/elaborate + approve
-- SeedProjects called at server startup
-
-### Phase 6: Doot — Claudomator integration (doot repo, depends Phase 3)
-- `internal/api/claudomator.go` — ClaudomatorClient
-- `internal/models/atom.go` — StoryToAtom, SourceClaudomator
-- `internal/handlers/atoms.go` — BuildUnifiedAtomList extended
-- `cmd/dashboard/main.go` — wire ClaudomatorURL config
+| Step | Description | Test / Verification |
+|------|-------------|---------------------|
+| Phase 1 | Doot dead code removal: Bug struct, BugToAtom, bug store methods, bug handlers, bug routes, bugs.html template, TypeNote, AddMealToPlanner stub | `go test ./...` in /workspace/doot — all pass |
+| Phase 2 | Claudomator project registry: `task.Project` type, storage CRUD + UpsertProject, seed.go, API endpoints, legacy fields removed | `TestCreateProject`, `TestListProjects`, `TestUpdateProject`, `TestProjects_CRUD` |
+| Phase 3 | Stories data model: Story struct + ValidStoryTransition, stories table, CRUD, story API endpoints | committed 5081b0c |
+| Phase 4 | Story execution and deploy: checkStoryCompletion → SHIPPABLE, story branch checkout, POST /api/stories/{id}/branch | committed 15a46b0 |
+| Phase 5 | Story elaboration: POST /api/stories/elaborate + approve, SeedProjects at startup, GetProject on executor Store interface | committed bc62c35 |
----
+### Pending (Claudomator tasks queued)
-## Key Files Changed (Phases 1-2)
+| Task ID | Phase | Status |
+|---------|-------|--------|
+| f39af70f-72c5-4ac1-9522-83c2e11b37c9 | Phase 6: Doot — Claudomator integration | QUEUED |
-### Claudomator
-- `internal/task/project.go` — new Project struct
-- `internal/task/task.go` — removed Agent.ProjectDir, Agent.RepositoryURL, Agent.SkipPlanning
-- `internal/storage/db.go` — projects table migration + CRUD
-- `internal/storage/seed.go` — SeedProjects upserts claudomator + nav on startup
-- `internal/api/projects.go` — project CRUD handlers
-- `internal/api/server.go` — project routes; processResult no longer extracts changestats
-- `internal/api/deployment.go` + `task_view.go` — use tk.RepositoryURL (was tk.Agent.ProjectDir)
-- `internal/executor/container.go` — fallback logic removed; requires t.RepositoryURL
+### Key Files Changed (Phases 1–5)
-### Doot
+#### Claudomator
+- `internal/task/project.go` — Project struct
+- `internal/task/story.go` — Story struct + ValidStoryTransition
+- `internal/task/task.go` — removed Agent.ProjectDir/RepositoryURL/SkipPlanning
+- `internal/storage/db.go` — projects + stories tables, CRUD
+- `internal/storage/seed.go` — SeedProjects
+- `internal/api/projects.go`, `stories.go`, `elaborate.go` — handlers
+- `internal/executor/executor.go` — GetProject on Store interface, RepositoryURL resolution
+- `internal/cli/serve.go` — SeedProjects at startup
+
+#### Doot
- Bug feature removed entirely (models, handlers, store, routes, template, migration)
-- `migrations/018_drop_bugs.sql` — DROP TABLE IF EXISTS bugs
-- `internal/api/interfaces.go` — AddMealToPlanner removed from PlanToEatAPI
-- `internal/api/plantoeat.go` — AddMealToPlanner stub removed
- `internal/models/atom.go` — SourceBug, TypeBug, TypeNote, BugToAtom removed
diff --git a/internal/api/server.go b/internal/api/server.go
index bb23f46..fc9bd63 100644
--- a/internal/api/server.go
+++ b/internal/api/server.go
@@ -147,6 +147,7 @@ func (s *Server) routes() {
s.mux.HandleFunc("GET /api/stories/{id}/tasks", s.handleListStoryTasks)
s.mux.HandleFunc("POST /api/stories/{id}/tasks", s.handleAddTaskToStory)
s.mux.HandleFunc("PUT /api/stories/{id}/status", s.handleUpdateStoryStatus)
+ s.mux.HandleFunc("GET /api/stories/{id}/deployment-status", s.handleStoryDeploymentStatus)
s.mux.HandleFunc("GET /api/health", s.handleHealth)
s.mux.HandleFunc("GET /api/version", s.handleVersion)
s.mux.HandleFunc("POST /api/webhooks/github", s.handleGitHubWebhook)
diff --git a/internal/api/stories.go b/internal/api/stories.go
index 459d0db..640bb0e 100644
--- a/internal/api/stories.go
+++ b/internal/api/stories.go
@@ -1,6 +1,7 @@
package api
import (
+ "database/sql"
"encoding/json"
"fmt"
"net/http"
@@ -9,22 +10,35 @@ import (
"time"
"github.com/google/uuid"
+ "github.com/thepeterstone/claudomator/internal/deployment"
"github.com/thepeterstone/claudomator/internal/task"
)
-// createStoryBranch creates a new git branch in localPath and pushes it to origin.
+// createStoryBranch creates a new git branch in localPath from origin/master (or main)
+// and pushes it to origin. Idempotent: treats "already exists" as success.
func createStoryBranch(localPath, branchName string) error {
- out, err := exec.Command("git", "-C", localPath, "checkout", "-b", branchName).CombinedOutput()
+ // Fetch latest from origin so origin/master is up to date.
+ if out, err := exec.Command("git", "-C", localPath, "fetch", "origin").CombinedOutput(); err != nil {
+ return fmt.Errorf("git fetch: %w (output: %s)", err, string(out))
+ }
+ // Try to create the branch from origin/master; fall back to origin/main.
+ base := "origin/master"
+ if _, err := exec.Command("git", "-C", localPath, "rev-parse", "--verify", "origin/master").CombinedOutput(); err != nil {
+ // rev-parse failed: origin/master doesn't exist, so assume origin/main.
+ base = "origin/main"
+ }
+ out, err := exec.Command("git", "-C", localPath, "checkout", "-b", branchName, base).CombinedOutput()
if err != nil {
if !strings.Contains(string(out), "already exists") {
return fmt.Errorf("git checkout -b: %w (output: %s)", err, string(out))
}
- // Branch exists; switch to it.
+ // Branch exists; switch to it — idempotent.
if out2, err2 := exec.Command("git", "-C", localPath, "checkout", branchName).CombinedOutput(); err2 != nil {
return fmt.Errorf("git checkout: %w (output: %s)", err2, string(out2))
}
}
- if out, err := exec.Command("git", "-C", localPath, "push", "-u", "origin", branchName).CombinedOutput(); err != nil {
+ if out, err := exec.Command("git", "-C", localPath, "push", "origin", branchName).CombinedOutput(); err != nil {
return fmt.Errorf("git push: %w (output: %s)", err, string(out))
}
return nil
@@ -306,3 +320,46 @@ func (s *Server) handleApproveStory(w http.ResponseWriter, r *http.Request) {
"task_ids": taskIDs,
})
}
+
+// handleStoryDeploymentStatus aggregates the deployment status across all tasks in a story.
+// GET /api/stories/{id}/deployment-status
+func (s *Server) handleStoryDeploymentStatus(w http.ResponseWriter, r *http.Request) {
+ id := r.PathValue("id")
+
+ story, err := s.store.GetStory(id)
+ if err != nil {
+ writeJSON(w, http.StatusNotFound, map[string]string{"error": "story not found"})
+ return
+ }
+
+ tasks, err := s.store.ListTasksByStory(id)
+ if err != nil {
+ writeJSON(w, http.StatusInternalServerError, map[string]string{"error": err.Error()})
+ return
+ }
+
+ // Collect all commits from the latest execution of each task.
+ var allCommits []task.GitCommit
+ for _, t := range tasks {
+ exec, err := s.store.GetLatestExecution(t.ID)
+ if err != nil {
+ if err == sql.ErrNoRows {
+ continue
+ }
+ writeJSON(w, http.StatusInternalServerError, map[string]string{"error": err.Error()})
+ return
+ }
+ allCommits = append(allCommits, exec.Commits...)
+ }
+
+ // Determine project remote URL for the deployment check.
+ projectRemoteURL := ""
+ if story.ProjectID != "" {
+ if proj, err := s.store.GetProject(story.ProjectID); err == nil {
+ projectRemoteURL = proj.RemoteURL
+ }
+ }
+
+ status := deployment.Check(allCommits, projectRemoteURL)
+ writeJSON(w, http.StatusOK, status)
+}
diff --git a/internal/api/stories_test.go b/internal/api/stories_test.go
index cf522e1..17bea07 100644
--- a/internal/api/stories_test.go
+++ b/internal/api/stories_test.go
@@ -7,7 +7,9 @@ import (
"net/http/httptest"
"strings"
"testing"
+ "time"
+ "github.com/thepeterstone/claudomator/internal/deployment"
"github.com/thepeterstone/claudomator/internal/task"
)
@@ -202,3 +204,46 @@ func TestHandleStoryApprove_WiresDepends(t *testing.T) {
t.Errorf("task3.DependsOn: want [%s], got %v", task2.ID, task3.DependsOn)
}
}
+
+func TestHandleStoryDeploymentStatus(t *testing.T) {
+ srv, store := testServer(t)
+
+ // Create a story.
+ now := time.Now().UTC()
+ story := &task.Story{
+ ID: "deploy-story-1",
+ Name: "Deploy Status Story",
+ Status: task.StoryInProgress,
+ CreatedAt: now,
+ UpdatedAt: now,
+ }
+ if err := store.CreateStory(story); err != nil {
+ t.Fatalf("CreateStory: %v", err)
+ }
+
+ // Request deployment status — no tasks yet.
+ req := httptest.NewRequest("GET", "/api/stories/deploy-story-1/deployment-status", nil)
+ w := httptest.NewRecorder()
+ srv.mux.ServeHTTP(w, req)
+
+ if w.Code != http.StatusOK {
+ t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
+ }
+
+ var status deployment.Status
+ if err := json.NewDecoder(w.Body).Decode(&status); err != nil {
+ t.Fatalf("decode: %v", err)
+ }
+ // No tasks → no commits → IncludesFix = false (nothing to check).
+ if status.IncludesFix {
+ t.Error("expected IncludesFix=false when no commits")
+ }
+
+ // 404 for unknown story.
+ req2 := httptest.NewRequest("GET", "/api/stories/nonexistent/deployment-status", nil)
+ w2 := httptest.NewRecorder()
+ srv.mux.ServeHTTP(w2, req2)
+ if w2.Code != http.StatusNotFound {
+ t.Errorf("expected 404 for unknown story, got %d", w2.Code)
+ }
+}
diff --git a/internal/cli/serve.go b/internal/cli/serve.go
index 3850ca9..644392e 100644
--- a/internal/cli/serve.go
+++ b/internal/cli/serve.go
@@ -91,6 +91,7 @@ func serve(addr string) error {
SSHAuthSock: cfg.SSHAuthSock,
ClaudeConfigDir: claudeConfigDir,
CredentialSyncCmd: filepath.Join(repoDir, "scripts", "sync-credentials"),
+ Store: store,
},
"gemini": &executor.ContainerRunner{
Image: cfg.GeminiImage,
@@ -101,6 +102,7 @@ func serve(addr string) error {
SSHAuthSock: cfg.SSHAuthSock,
ClaudeConfigDir: claudeConfigDir,
CredentialSyncCmd: filepath.Join(repoDir, "scripts", "sync-credentials"),
+ Store: store,
},
"container": &executor.ContainerRunner{
Image: "claudomator-agent:latest",
@@ -111,6 +113,7 @@ func serve(addr string) error {
SSHAuthSock: cfg.SSHAuthSock,
ClaudeConfigDir: claudeConfigDir,
CredentialSyncCmd: filepath.Join(repoDir, "scripts", "sync-credentials"),
+ Store: store,
},
}
diff --git a/internal/executor/container.go b/internal/executor/container.go
index d270e20..8b244c6 100644
--- a/internal/executor/container.go
+++ b/internal/executor/container.go
@@ -28,6 +28,7 @@ type ContainerRunner struct {
GeminiBinary string // optional path to gemini binary in container
ClaudeConfigDir string // host path to ~/.claude; mounted into container for auth credentials
CredentialSyncCmd string // optional path to sync-credentials script for auth-error auto-recovery
+ Store Store // optional; used to look up stories and projects for story-aware cloning
// Command allows mocking exec.CommandContext for tests.
Command func(ctx context.Context, name string, arg ...string) *exec.Cmd
}
@@ -95,21 +96,46 @@ func (r *ContainerRunner) Run(ctx context.Context, t *task.Task, e *storage.Exec
}
}()
+ // Resolve story branch and project local path if this is a story task.
+ var storyBranch string
+ var storyLocalPath string
+ if t.StoryID != "" && r.Store != nil {
+ if story, err := r.Store.GetStory(t.StoryID); err == nil && story != nil {
+ storyBranch = story.BranchName
+ if story.ProjectID != "" {
+ if proj, err := r.Store.GetProject(story.ProjectID); err == nil && proj != nil {
+ storyLocalPath = proj.LocalPath
+ }
+ }
+ }
+ }
+ // Fall back to task-level BranchName (e.g. set explicitly by executor or tests).
+ if storyBranch == "" {
+ storyBranch = t.BranchName
+ }
+
// 2. Clone repo into workspace if not resuming.
// git clone requires the target directory to not exist; remove the MkdirTemp-created dir first.
if !isResume {
if err := os.Remove(workspace); err != nil {
return fmt.Errorf("removing workspace before clone: %w", err)
}
- r.Logger.Info("cloning repository", "url", repoURL, "workspace", workspace, "branch", t.BranchName)
- cloneArgs := []string{"clone"}
- if t.BranchName != "" {
- cloneArgs = append(cloneArgs, "--branch", t.BranchName)
+ r.Logger.Info("cloning repository", "url", repoURL, "workspace", workspace)
+ var cloneArgs []string
+ if storyLocalPath != "" {
+ cloneArgs = []string{"clone", "--reference", storyLocalPath, repoURL, workspace}
+ } else {
+ cloneArgs = []string{"clone", repoURL, workspace}
}
- cloneArgs = append(cloneArgs, repoURL, workspace)
if out, err := r.command(ctx, "git", cloneArgs...).CombinedOutput(); err != nil {
return fmt.Errorf("git clone failed: %w\n%s", err, string(out))
}
+ if storyBranch != "" {
+ r.Logger.Info("checking out story branch", "branch", storyBranch)
+ if out, err := r.command(ctx, "git", "-C", workspace, "checkout", storyBranch).CombinedOutput(); err != nil {
+ return fmt.Errorf("git checkout story branch %q failed: %w\n%s", storyBranch, err, string(out))
+ }
+ }
if err = os.Chmod(workspace, 0755); err != nil {
return fmt.Errorf("chmod cloned workspace: %w", err)
}
@@ -150,7 +176,7 @@ func (r *ContainerRunner) Run(ctx context.Context, t *task.Task, e *storage.Exec
}
// Run container (with auth retry on failure).
- runErr := r.runContainer(ctx, t, e, workspace, agentHome, isResume)
+ runErr := r.runContainer(ctx, t, e, workspace, agentHome, isResume, storyBranch)
if runErr != nil && isAuthError(runErr) && r.CredentialSyncCmd != "" {
r.Logger.Warn("auth failure detected, syncing credentials and retrying once", "taskID", t.ID)
syncOut, syncErr := r.command(ctx, r.CredentialSyncCmd).CombinedOutput()
@@ -164,7 +190,7 @@ func (r *ContainerRunner) Run(ctx context.Context, t *task.Task, e *storage.Exec
if srcData, readErr := os.ReadFile(filepath.Join(r.ClaudeConfigDir, ".claude.json")); readErr == nil {
_ = os.WriteFile(filepath.Join(agentHome, ".claude.json"), srcData, 0644)
}
- runErr = r.runContainer(ctx, t, e, workspace, agentHome, isResume)
+ runErr = r.runContainer(ctx, t, e, workspace, agentHome, isResume, storyBranch)
}
if runErr == nil {
@@ -180,7 +206,7 @@ func (r *ContainerRunner) Run(ctx context.Context, t *task.Task, e *storage.Exec
// runContainer runs the docker container for the given task and handles log setup,
// environment files, instructions, and post-execution git operations.
-func (r *ContainerRunner) runContainer(ctx context.Context, t *task.Task, e *storage.Execution, workspace, agentHome string, isResume bool) error {
+func (r *ContainerRunner) runContainer(ctx context.Context, t *task.Task, e *storage.Execution, workspace, agentHome string, isResume bool, storyBranch string) error {
repoURL := t.RepositoryURL
image := t.Agent.ContainerImage
@@ -327,8 +353,12 @@ func (r *ContainerRunner) runContainer(ctx context.Context, t *task.Task, e *sto
}
if hasCommits {
- r.Logger.Info("pushing changes back to remote", "url", repoURL)
- if out, err := r.command(ctx, "git", "-C", workspace, "push", "origin", "HEAD").CombinedOutput(); err != nil {
+ pushRef := "HEAD"
+ if storyBranch != "" {
+ pushRef = storyBranch
+ }
+ r.Logger.Info("pushing changes back to remote", "url", repoURL, "ref", pushRef)
+ if out, err := r.command(ctx, "git", "-C", workspace, "push", "origin", pushRef).CombinedOutput(); err != nil {
r.Logger.Warn("git push failed", "error", err, "output", string(out))
return fmt.Errorf("git push failed: %w\n%s", err, string(out))
}
diff --git a/internal/executor/container_test.go b/internal/executor/container_test.go
index c56d1b2..15c147f 100644
--- a/internal/executor/container_test.go
+++ b/internal/executor/container_test.go
@@ -518,18 +518,23 @@ func TestContainerRunner_AuthError_SyncsAndRetries(t *testing.T) {
func TestContainerRunner_ClonesStoryBranch(t *testing.T) {
logger := slog.New(slog.NewTextHandler(io.Discard, nil))
- var cloneArgs []string
+ var checkoutArgs []string
runner := &ContainerRunner{
Logger: logger,
Image: "busybox",
Command: func(ctx context.Context, name string, arg ...string) *exec.Cmd {
if name == "git" && len(arg) > 0 && arg[0] == "clone" {
- cloneArgs = append([]string{}, arg...)
dir := arg[len(arg)-1]
os.MkdirAll(dir, 0755)
return exec.Command("true")
}
- // docker run fails so the test exits quickly
+ // Capture checkout calls: both "git checkout <branch>" and "git -C <dir> checkout <branch>"
+ for i, a := range arg {
+ if a == "checkout" {
+ checkoutArgs = append([]string{}, arg[i:]...)
+ break
+ }
+ }
if name == "docker" {
return exec.Command("sh", "-c", "exit 1")
}
@@ -548,19 +553,19 @@ func TestContainerRunner_ClonesStoryBranch(t *testing.T) {
runner.Run(context.Background(), tk, e)
os.RemoveAll(e.SandboxDir)
- // Assert git clone was called with --branch <branchName>
- if len(cloneArgs) < 3 {
- t.Fatalf("expected clone args, got %v", cloneArgs)
+ // Assert git checkout was called with the story branch name.
+ if len(checkoutArgs) == 0 {
+ t.Fatal("expected git checkout to be called for story branch, but it was not")
}
found := false
- for i, a := range cloneArgs {
- if a == "--branch" && i+1 < len(cloneArgs) && cloneArgs[i+1] == "story/my-feature" {
+ for _, a := range checkoutArgs {
+ if a == "story/my-feature" {
found = true
break
}
}
if !found {
- t.Errorf("expected git clone --branch story/my-feature, got args: %v", cloneArgs)
+ t.Errorf("expected git checkout story/my-feature, got args: %v", checkoutArgs)
}
}
diff --git a/internal/executor/executor.go b/internal/executor/executor.go
index 6489060..8dfb196 100644
--- a/internal/executor/executor.go
+++ b/internal/executor/executor.go
@@ -34,6 +34,8 @@ type Store interface {
RecordAgentEvent(e storage.AgentEvent) error
GetProject(id string) (*task.Project, error)
GetStory(id string) (*task.Story, error)
+ ListTasksByStory(storyID string) ([]*task.Task, error)
+ UpdateStoryStatus(id string, status task.StoryState) error
}
// LogPather is an optional interface runners can implement to provide the log
@@ -406,6 +408,9 @@ func (p *Pool) handleRunResult(ctx context.Context, t *task.Task, exec *storage.
}
p.maybeUnblockParent(t.ParentTaskID)
}
+ if t.StoryID != "" {
+ go p.checkStoryCompletion(ctx, t.StoryID)
+ }
}
summary := exec.Summary
@@ -437,6 +442,29 @@ func (p *Pool) handleRunResult(ctx context.Context, t *task.Task, exec *storage.
p.resultCh <- &Result{TaskID: t.ID, Execution: exec, Err: err}
}
+// checkStoryCompletion checks whether all tasks in a story have reached a terminal
+// success state and transitions the story to SHIPPABLE if so.
+func (p *Pool) checkStoryCompletion(ctx context.Context, storyID string) {
+ tasks, err := p.store.ListTasksByStory(storyID)
+ if err != nil {
+ p.logger.Error("checkStoryCompletion: failed to list tasks", "storyID", storyID, "error", err)
+ return
+ }
+ if len(tasks) == 0 {
+ return
+ }
+ for _, t := range tasks {
+ if t.State != task.StateCompleted && t.State != task.StateReady {
+ return // not all tasks done
+ }
+ }
+ if err := p.store.UpdateStoryStatus(storyID, task.StoryShippable); err != nil {
+ p.logger.Error("checkStoryCompletion: failed to update story status", "storyID", storyID, "error", err)
+ return
+ }
+ p.logger.Info("story transitioned to SHIPPABLE", "storyID", storyID)
+}
+
// UndrainingAgent resets the drain state and failure counter for the given agent type.
func (p *Pool) UndrainingAgent(agentType string) {
p.mu.Lock()
diff --git a/internal/executor/executor_test.go b/internal/executor/executor_test.go
index 2e01230..b93e819 100644
--- a/internal/executor/executor_test.go
+++ b/internal/executor/executor_test.go
@@ -1056,9 +1056,11 @@ func (m *minimalMockStore) UpdateExecutionChangestats(execID string, stats *task
m.mu.Unlock()
return nil
}
-func (m *minimalMockStore) RecordAgentEvent(_ storage.AgentEvent) error { return nil }
-func (m *minimalMockStore) GetProject(_ string) (*task.Project, error) { return nil, nil }
-func (m *minimalMockStore) GetStory(_ string) (*task.Story, error) { return nil, nil }
+func (m *minimalMockStore) RecordAgentEvent(_ storage.AgentEvent) error { return nil }
+func (m *minimalMockStore) GetProject(_ string) (*task.Project, error) { return nil, nil }
+func (m *minimalMockStore) GetStory(_ string) (*task.Story, error) { return nil, nil }
+func (m *minimalMockStore) ListTasksByStory(_ string) ([]*task.Task, error) { return nil, nil }
+func (m *minimalMockStore) UpdateStoryStatus(_ string, _ task.StoryState) error { return nil }
func (m *minimalMockStore) lastStateUpdate() (string, task.State, bool) {
m.mu.Lock()
@@ -1625,6 +1627,94 @@ func TestPool_ConsecutiveFailures_ResetOnSuccess(t *testing.T) {
}
}
+func TestPool_CheckStoryCompletion_AllComplete(t *testing.T) {
+ store := testStore(t)
+ logger := slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{Level: slog.LevelError}))
+ pool := NewPool(2, map[string]Runner{"claude": &mockRunner{}}, store, logger)
+
+ // Create a story in IN_PROGRESS state.
+ now := time.Now().UTC()
+ story := &task.Story{
+ ID: "story-comp-1",
+ Name: "Completion Test",
+ Status: task.StoryInProgress,
+ CreatedAt: now,
+ UpdatedAt: now,
+ }
+ if err := store.CreateStory(story); err != nil {
+ t.Fatalf("CreateStory: %v", err)
+ }
+
+ // Create two story tasks and drive them through valid transitions to COMPLETED.
+ for i, id := range []string{"sctask-1", "sctask-2"} {
+ tk := makeTask(id)
+ tk.StoryID = "story-comp-1"
+ tk.ParentTaskID = "fake-parent" // so it goes to COMPLETED
+ tk.State = task.StatePending
+ if err := store.CreateTask(tk); err != nil {
+ t.Fatalf("CreateTask %d: %v", i, err)
+ }
+ for _, s := range []task.State{task.StateQueued, task.StateRunning, task.StateCompleted} {
+ if err := store.UpdateTaskState(id, s); err != nil {
+ t.Fatalf("UpdateTaskState %s → %s: %v", id, s, err)
+ }
+ }
+ }
+
+ pool.checkStoryCompletion(context.Background(), "story-comp-1")
+
+ got, err := store.GetStory("story-comp-1")
+ if err != nil {
+ t.Fatalf("GetStory: %v", err)
+ }
+ if got.Status != task.StoryShippable {
+ t.Errorf("story status: want SHIPPABLE, got %v", got.Status)
+ }
+}
+
+func TestPool_CheckStoryCompletion_PartialComplete(t *testing.T) {
+ store := testStore(t)
+ logger := slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{Level: slog.LevelError}))
+ pool := NewPool(2, map[string]Runner{"claude": &mockRunner{}}, store, logger)
+
+ now := time.Now().UTC()
+ story := &task.Story{
+ ID: "story-partial-1",
+ Name: "Partial Test",
+ Status: task.StoryInProgress,
+ CreatedAt: now,
+ UpdatedAt: now,
+ }
+ if err := store.CreateStory(story); err != nil {
+ t.Fatalf("CreateStory: %v", err)
+ }
+
+ // First task driven to COMPLETED.
+ tk1 := makeTask("sptask-1")
+ tk1.StoryID = "story-partial-1"
+ tk1.ParentTaskID = "fake-parent"
+ store.CreateTask(tk1)
+ for _, s := range []task.State{task.StateQueued, task.StateRunning, task.StateCompleted} {
+ store.UpdateTaskState("sptask-1", s)
+ }
+
+ // Second task still in PENDING (not done).
+ tk2 := makeTask("sptask-2")
+ tk2.StoryID = "story-partial-1"
+ tk2.ParentTaskID = "fake-parent"
+ store.CreateTask(tk2)
+
+ pool.checkStoryCompletion(context.Background(), "story-partial-1")
+
+ got, err := store.GetStory("story-partial-1")
+ if err != nil {
+ t.Fatalf("GetStory: %v", err)
+ }
+ if got.Status != task.StoryInProgress {
+ t.Errorf("story status: want IN_PROGRESS (no transition), got %v", got.Status)
+ }
+}
+
func TestPool_Undrain_ResumesExecution(t *testing.T) {
store := testStore(t)