-rw-r--r--  docs/adr/006-containerized-execution.md  51
-rw-r--r--  images/agent-base/Dockerfile  58
-rw-r--r--  images/agent-base/tools/ct  210
-rw-r--r--  internal/api/server.go  7
-rw-r--r--  internal/api/server_test.go  82
-rw-r--r--  internal/api/webhook.go  17
-rw-r--r--  internal/api/webhook_test.go  8
-rw-r--r--  internal/cli/run.go  32
-rw-r--r--  internal/cli/serve.go  46
-rw-r--r--  internal/config/config.go  6
-rw-r--r--  internal/executor/claude.go  714
-rw-r--r--  internal/executor/claude_test.go  882
-rw-r--r--  internal/executor/container.go  380
-rw-r--r--  internal/executor/container_test.go  244
-rw-r--r--  internal/executor/executor_test.go  11
-rw-r--r--  internal/executor/gemini.go  228
-rw-r--r--  internal/executor/gemini_test.go  179
-rw-r--r--  internal/executor/helpers.go  174
-rw-r--r--  internal/executor/stream_test.go  25
-rw-r--r--  internal/notify/vapid.go  16
-rw-r--r--  internal/notify/vapid_test.go  21
-rw-r--r--  internal/storage/db.go  19
-rw-r--r--  internal/task/task.go  7
-rw-r--r--  scripts/drain-failed-tasks  22
-rw-r--r--  web/app.js  350
-rw-r--r--  web/index.html  13
-rw-r--r--  web/test/tab-persistence.test.mjs  58
-rw-r--r--  web/test/task-panel-summary.test.mjs  143
30 files changed, 1777 insertions, 2303 deletions
diff --git a/CLAUDE.md b/CLAUDE.md
index 2cb37a8..d804a96 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -53,14 +53,14 @@ Config defaults to `~/.claudomator/config.toml`. Data is stored in `~/.claudomat
## Architecture
-**Pipeline:** CLI/API → `executor.Pool` → `executor.ClaudeRunner` → `claude -p` subprocess → SQLite + log files
+**Pipeline:** CLI/API → `executor.Pool` → `executor.ContainerRunner` → Docker container → SQLite + log files
### Packages
| Package | Role |
|---|---|
| `internal/task` | `Task` struct, YAML parsing, state machine, validation |
-| `internal/executor` | `Pool` (bounded goroutine pool) + `ClaudeRunner` (subprocess manager) |
+| `internal/executor` | `Pool` (bounded goroutine pool) + `ContainerRunner` (Docker-based executor) |
| `internal/storage` | SQLite wrapper; stores tasks and execution records |
| `internal/api` | HTTP server (REST + WebSocket via `internal/api.Hub`) |
| `internal/reporter` | Formats and emits execution results |
@@ -72,9 +72,9 @@ Config defaults to `~/.claudomator/config.toml`. Data is stored in `~/.claudomat
**Task execution:**
1. Task created via `POST /api/tasks` or YAML file (`task.ParseFile`)
2. `POST /api/tasks/{id}/run` → `executor.Pool.Submit()` → goroutine in pool
-3. `ClaudeRunner.Run()` invokes `claude -p <instructions> --output-format stream-json`
-4. stdout streamed to `~/.claudomator/executions/<exec-id>/stdout.log`; cost parsed from stream-json
-5. Execution result written to SQLite; broadcast via WebSocket to connected clients
+3. `ContainerRunner.Run()` clones `repository_url`, runs `docker run claudomator-agent:latest`
+4. Agent runs `claude -p` inside the container; stdout streamed to `executions/<exec-id>/stdout.log`
+5. On success, runner pushes commits back to the remote; execution result written to SQLite + WebSocket broadcast
**State machine** (`task.ValidTransition`):
`PENDING` → `QUEUED` → `RUNNING` → `COMPLETED | FAILED | TIMED_OUT | CANCELLED | BUDGET_EXCEEDED`
@@ -166,6 +166,48 @@ A task is created for:
Tasks are tagged `["ci", "auto"]`, capped at $3 USD, and use tools: Read, Edit, Bash, Glob, Grep.
+## Agent Tooling (`ct` CLI)
+
+Agents running inside containers have access to `ct`, a pre-built CLI for interacting with the Claudomator API. It is installed at `/usr/local/bin/ct` in the container image. **Use `ct` to create and manage subtasks — do not attempt raw `curl` API calls.**
+
+### Environment (injected automatically)
+
+| Variable | Purpose |
+|---|---|
+| `CLAUDOMATOR_API_URL` | Base URL of the Claudomator API (e.g. `http://host.docker.internal:8484`) |
+| `CLAUDOMATOR_TASK_ID` | ID of the currently-running task; used as the default `parent_task_id` for new subtasks |
+
+### Commands
+
+```bash
+# Create a subtask and immediately queue it (returns task ID)
+ct task submit --name "Fix tests" --instructions "Run tests and fix any failures." [--model sonnet] [--budget 3.0]
+
+# Create, queue, and wait for completion (exits 0=COMPLETED, 1=any other terminal state, 2=BLOCKED)
+ct task submit --name "Fix tests" --instructions "..." --wait
+
+# Read instructions from a file instead of inline
+ct task submit --name "Fix tests" --file /workspace/subtask-instructions.txt --wait
+
+# Lower-level: create only (returns task ID), then run separately
+TASK_ID=$(ct task create --name "..." --instructions "...")
+ct task run "$TASK_ID"
+ct task wait "$TASK_ID" --timeout 600
+
+# Check status of any task
+ct task status <task-id>
+
+# List recent tasks
+ct task list
+```
+
+### Notes
+
+- Default model is `sonnet`; default budget is `$3.00` USD. Override with `--model` / `--budget`.
+- `ct task wait` polls every 5 seconds, prints the task's terminal state to stdout, and exits 0 (COMPLETED), 2 (BLOCKED), or 1 (any other terminal state).
+- Subtasks inherit the current task as their parent automatically (via `$CLAUDOMATOR_TASK_ID`).
+- Override parent with `--parent <task-id>` if needed.
+
## ADRs
See `docs/adr/001-language-and-architecture.md` for the Go + SQLite + WebSocket rationale.
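The exit-code contract documented above for `ct task wait` / `ct task submit --wait` can be mirrored by callers. A minimal sketch in Go; `stateExitCode` is a hypothetical helper for illustration, not part of the codebase:

```go
package main

import "fmt"

// stateExitCode maps a terminal task state to the exit code that
// `ct task wait` is documented to return. Hypothetical helper.
func stateExitCode(state string) int {
	switch state {
	case "COMPLETED":
		return 0
	case "BLOCKED":
		return 2 // the agent asked a question; resume after answering
	default:
		return 1 // FAILED, TIMED_OUT, CANCELLED, BUDGET_EXCEEDED
	}
}

func main() {
	for _, s := range []string{"COMPLETED", "BLOCKED", "FAILED"} {
		fmt.Printf("%s -> %d\n", s, stateExitCode(s))
	}
}
```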
diff --git a/docs/adr/005-sandbox-execution-model.md b/docs/adr/005-sandbox-execution-model.md
index b374561..0c9ef14 100644
--- a/docs/adr/005-sandbox-execution-model.md
+++ b/docs/adr/005-sandbox-execution-model.md
@@ -1,7 +1,7 @@
# ADR-005: Git Sandbox Execution Model
## Status
-Accepted
+Superseded by [ADR-006](006-containerized-execution.md)
## Context
@@ -69,9 +69,13 @@ state), the sandbox is **not** torn down. The preserved sandbox allows the
resumed execution to pick up the same working tree state, including any
in-progress file changes made before the agent asked its question.
-Resume executions (`SubmitResume`) skip sandbox setup entirely and run
-directly in `project_dir`, passing `--resume <session-id>` to the agent
-so Claude can continue its previous conversation.
+**Known Risk: Resume skips sandbox.** The current implementation of resume
+executions (`SubmitResume`) skips sandbox setup entirely and runs directly
+in `project_dir`. This is a significant behavioral divergence: if a
+resumed task makes further changes, they land directly in the canonical working
+copy, reintroducing the concurrent corruption and partial-work leak risks
+identified in the Context section. A future iteration should ensure resumed
+tasks pick up the preserved sandbox instead.
### Session ID propagation on resume
@@ -113,10 +117,15 @@ The fix is in `ClaudeRunner.Run`: if `e.ResumeSessionID != ""`, use it as
directory the server process inherited.
- If a sandbox's push repeatedly fails (e.g. due to a bare repo that is
itself broken), the task is failed with the sandbox preserved.
-- If `/tmp` runs out of space (many large sandboxes), tasks will fail at
- clone time. This is a known operational risk with no current mitigation.
-- The `project_dir` field in task YAML must point to a git repository with
- a configured `"local"` or `"origin"` remote that accepts pushes.
+- **If `/tmp` runs out of space** (many large sandboxes), tasks will fail at
+  clone time. This is a known operational risk. Mitigations such as periodic
+  cleanup of old sandboxes (cron) or pre-clone disk-space checks remain
+  follow-up items.
+- **The `project_dir` field in task YAML** must point to a git repository with
+ a configured `"local"` or `"origin"` remote that accepts pushes. If neither
+ remote exists or the push is rejected for other reasons, the task will be
+ marked as `FAILED` and the sandbox will be preserved for manual recovery.
+
## Relevant Code Locations
diff --git a/docs/adr/006-containerized-execution.md b/docs/adr/006-containerized-execution.md
new file mode 100644
index 0000000..cdd1cc2
--- /dev/null
+++ b/docs/adr/006-containerized-execution.md
@@ -0,0 +1,51 @@
+# ADR-006: Containerized Repository-Based Execution Model
+
+## Status
+Accepted (Supersedes ADR-005)
+
+## Context
+ADR-005 introduced a sandbox execution model based on local git clones and pushes back to a local project directory. While this provided isolation, it had several flaws identified during early adoption:
+1. **Host pollution**: Build dependencies (Node, Go, etc.) had to be installed on the host and were subject to permission issues (e.g., `/root/.nvm` access for `www-data`).
+2. **Fragile pushes**: Pushing to a checked-out local branch is non-standard and requires risky git configuration.
+3. **Resume divergence**: Resumed tasks bypassed the sandbox, reintroducing corruption risks.
+4. **Scale**: Local directory-based "project selection" is a hack that does not scale to multiple repositories or environments.
+
+## Decision
+We will move to a containerized execution model where projects are defined by canonical repository URLs and executed in isolated containers.
+
+### 1. Repository-Based Projects
+- The `Task` model now uses `RepositoryURL` as the source of truth for the codebase.
+- This replaces the fragile reliance on local `ProjectDir` paths.
+
+### 2. Containerized Sandboxes
+- Each task execution runs in a fresh container (Docker/Podman).
+- The runner clones the repository into a host-side temporary workspace and mounts it into the container.
+- The container provides a "bare system" with the full build stack (Node, Go, etc.) pre-installed, isolating the host from build dependencies.
+
+### 3. Unified Workspace Management (including RESUME)
+- Unlike ADR-005, the containerized model is designed to handle **Resume** by re-attaching to or re-mounting the same host-side workspace.
+- This ensures that resumed tasks **do not** bypass the sandbox and never land directly in a production directory.
+
+### 4. Push to Actual Remotes
+- Agents commit changes within the sandbox.
+- The runner pushes these commits directly to the `RepositoryURL` (actual remote).
+- If the remote is missing or the push fails, the task is marked `FAILED` and the host-side workspace is preserved for inspection.
+
+## Rationale
+- **Isolation**: Containers prevent host pollution and ensure a consistent build environment.
+- **Safety**: Repository URLs provide a standard way to manage codebases across environments.
+- **Consistency**: Unified workspace management for initial runs and resumes eliminates the behavioral divergence found in ADR-005.
+
+## Consequences
+- Requires a container runtime (Docker) on the host.
+- Requires pre-built agent images (e.g., `claudomator-agent:latest`).
+- **Disk Space Risk**: Host-side clones still consume `/tmp` space. Mitigation requires periodic cleanup of old workspaces or disk-space monitoring.
+- **Git Config**: Repositories no longer require `receive.denyCurrentBranch = updateInstead` because we push to the remote, not a local worktree.
+
+## Relevant Code Locations
+| Concern | File |
+|---|---|
+| Container Lifecycle | `internal/executor/container.go` |
+| Runner Registration | `internal/cli/serve.go` |
+| Task Model | `internal/task/task.go` |
+| API Integration | `internal/api/server.go` |
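The clone-then-mount flow in Decision point 2 implies a container invocation that bind-mounts the host-side workspace. A sketch of how such an invocation might be assembled; the flag choices and `buildRunArgs` name are assumptions for illustration, not the actual `ContainerRunner` behavior in `internal/executor/container.go`:

```go
package main

import "fmt"

// buildRunArgs assembles a hypothetical `docker run` argument list for
// the model described above: the host-side repository clone is mounted
// at /workspace inside a throwaway container.
func buildRunArgs(image, hostWorkspace string) []string {
	return []string{
		"run", "--rm", // remove the container when the agent exits
		"-v", hostWorkspace + ":/workspace", // bind-mount the clone
		"-w", "/workspace", // agent starts in the repo root
		image,
	}
}

func main() {
	fmt.Println(buildRunArgs("claudomator-agent:latest", "/tmp/ws-123"))
}
```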
diff --git a/images/agent-base/Dockerfile b/images/agent-base/Dockerfile
new file mode 100644
index 0000000..0e8057c
--- /dev/null
+++ b/images/agent-base/Dockerfile
@@ -0,0 +1,58 @@
+# Claudomator Agent Base Image
+FROM ubuntu:24.04
+
+ENV DEBIAN_FRONTEND=noninteractive
+
+# Base system tools
+RUN apt-get update && apt-get install -y \
+ git \
+ curl \
+ make \
+ wget \
+ sqlite3 \
+ jq \
+ sudo \
+ ca-certificates \
+ && rm -rf /var/lib/apt/lists/*
+
+# Node.js 22 via NodeSource
+RUN curl -fsSL https://deb.nodesource.com/setup_22.x | bash - \
+ && apt-get install -y nodejs \
+ && rm -rf /var/lib/apt/lists/*
+
+# Go 1.24
+RUN wget -q https://go.dev/dl/go1.24.1.linux-amd64.tar.gz && \
+ tar -C /usr/local -xzf go1.24.1.linux-amd64.tar.gz && \
+ rm go1.24.1.linux-amd64.tar.gz
+ENV PATH=$PATH:/usr/local/go/bin
+
+# Claude Code CLI
+RUN npm install -g @anthropic-ai/claude-code
+
+# Gemini CLI
+RUN npm install -g @google/gemini-cli
+
+# CSS build tools (for claudomator itself)
+RUN npm install -g postcss-cli tailwindcss autoprefixer
+
+# Git: allow operations on any directory (agents clone into /workspace/*)
+RUN git config --system safe.directory '*'
+
+# Claudomator agent CLI tools (ct)
+COPY tools/ct /usr/local/bin/ct
+RUN chmod +x /usr/local/bin/ct
+
+# Setup workspace
+WORKDIR /workspace
+
+# Agent user with passwordless sudo
+RUN useradd -m claudomator-agent && \
+ echo "claudomator-agent ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
+
+USER claudomator-agent
+
+# Create a default empty config to satisfy the CLI if no mount is provided
+RUN mkdir -p /home/claudomator-agent/.claude && \
+ echo '{}' > /home/claudomator-agent/.claude.json
+
+CMD ["/bin/bash"]
diff --git a/images/agent-base/tools/ct b/images/agent-base/tools/ct
new file mode 100644
index 0000000..46d9613
--- /dev/null
+++ b/images/agent-base/tools/ct
@@ -0,0 +1,210 @@
+#!/bin/bash
+# ct - Claudomator CLI for agents running inside containers
+#
+# Usage:
+# ct task create --name "..." --instructions "..." # create subtask (parent auto-set)
+# ct task run <task-id> # queue a task for execution
+# ct task wait <task-id> [--timeout 300] # poll until done, print status
+# ct task status <task-id> # print current state
+# ct task list # list recent tasks
+#
+# Environment (injected by ContainerRunner):
+# CLAUDOMATOR_API_URL base URL of the Claudomator API
+# CLAUDOMATOR_TASK_ID ID of the currently running task (used as default parent)
+
+set -euo pipefail
+
+API="${CLAUDOMATOR_API_URL:-http://host.docker.internal:8484}"
+PARENT="${CLAUDOMATOR_TASK_ID:-}"
+
+_api() {
+ local method="$1"; shift
+ local path="$1"; shift
+ curl -sf -X "$method" "${API}${path}" \
+ -H "Content-Type: application/json" \
+ "$@"
+}
+
+_require() {
+ if ! command -v "$1" &>/dev/null; then
+ echo "ct: required tool '$1' not found" >&2
+ exit 1
+ fi
+}
+
+_require curl
+_require jq
+
+cmd_task_create() {
+ local name="" instructions="" instructions_file="" model="" budget="" parent="$PARENT"
+
+ while [[ $# -gt 0 ]]; do
+ case "$1" in
+ --name) name="$2"; shift 2 ;;
+ --instructions) instructions="$2"; shift 2 ;;
+ --file) instructions_file="$2"; shift 2 ;;
+ --model) model="$2"; shift 2 ;;
+ --budget) budget="$2"; shift 2 ;;
+ --parent) parent="$2"; shift 2 ;;
+ *) echo "ct task create: unknown flag $1" >&2; exit 1 ;;
+ esac
+ done
+
+ if [[ -z "$name" ]]; then
+ echo "ct task create: --name is required" >&2; exit 1
+ fi
+
+ if [[ -n "$instructions_file" ]]; then
+ instructions=$(cat "$instructions_file")
+ fi
+
+ if [[ -z "$instructions" ]]; then
+ echo "ct task create: --instructions or --file is required" >&2; exit 1
+ fi
+
+ local payload
+ payload=$(jq -n \
+ --arg name "$name" \
+ --arg instructions "$instructions" \
+ --arg parent "$parent" \
+ --arg model "${model:-sonnet}" \
+ --argjson budget "${budget:-3.0}" \
+ '{
+ name: $name,
+ parent_task_id: $parent,
+ agent: {
+ type: "claude",
+ model: $model,
+ instructions: $instructions,
+ max_budget_usd: $budget
+ }
+ }')
+
+ local response
+ response=$(_api POST /api/tasks -d "$payload")
+ local task_id
+ task_id=$(echo "$response" | jq -r '.id // empty')
+
+ if [[ -z "$task_id" ]]; then
+ echo "ct task create: API error: $(echo "$response" | jq -r '.error // .')" >&2
+ exit 1
+ fi
+
+ echo "$task_id"
+}
+
+cmd_task_run() {
+ local task_id="${1:-}"
+ if [[ -z "$task_id" ]]; then
+ echo "ct task run: task-id required" >&2; exit 1
+ fi
+
+ local response
+ response=$(_api POST "/api/tasks/${task_id}/run")
+ echo "$response" | jq -r '.message // .error // .'
+}
+
+cmd_task_wait() {
+ local task_id="${1:-}"
+ local timeout=300
+ shift || true
+
+ while [[ $# -gt 0 ]]; do
+ case "$1" in
+ --timeout) timeout="$2"; shift 2 ;;
+ *) echo "ct task wait: unknown flag $1" >&2; exit 1 ;;
+ esac
+ done
+
+ if [[ -z "$task_id" ]]; then
+ echo "ct task wait: task-id required" >&2; exit 1
+ fi
+
+ local deadline=$(( $(date +%s) + timeout ))
+ local interval=5
+
+ while true; do
+ local response
+ response=$(_api GET "/api/tasks/${task_id}" 2>/dev/null) || true
+
+    local state
+    # Treat an empty or unparseable response as UNKNOWN and keep polling.
+    state=$(echo "$response" | jq -r '.state // "UNKNOWN"' 2>/dev/null || true)
+    [[ -z "$state" ]] && state="UNKNOWN"
+
+ case "$state" in
+ COMPLETED|FAILED|TIMED_OUT|CANCELLED|BUDGET_EXCEEDED)
+ echo "$state"
+ [[ "$state" == "COMPLETED" ]] && exit 0 || exit 1
+ ;;
+ BLOCKED)
+ echo "BLOCKED"
+ exit 2
+ ;;
+ esac
+
+ if [[ $(date +%s) -ge $deadline ]]; then
+ echo "ct task wait: timed out after ${timeout}s (state: $state)" >&2
+ exit 1
+ fi
+
+ sleep "$interval"
+ done
+}
+
+cmd_task_status() {
+ local task_id="${1:-}"
+ if [[ -z "$task_id" ]]; then
+ echo "ct task status: task-id required" >&2; exit 1
+ fi
+ _api GET "/api/tasks/${task_id}" | jq -r '.state'
+}
+
+cmd_task_list() {
+ _api GET "/api/tasks" | jq -r '.[] | "\(.state)\t\(.id)\t\(.name)"' | sort
+}
+
+# create-and-run shorthand: create a subtask and immediately queue it, then optionally wait
+cmd_task_submit() {
+ local wait=false
+ local args=()
+
+ while [[ $# -gt 0 ]]; do
+ case "$1" in
+ --wait) wait=true; shift ;;
+ *) args+=("$1"); shift ;;
+ esac
+ done
+
+  local task_id
+  # ${args[@]+...} guards against empty-array expansion under `set -u` on older bash
+  task_id=$(cmd_task_create ${args[@]+"${args[@]}"})
+ cmd_task_run "$task_id" >/dev/null
+ echo "$task_id"
+
+ if $wait; then
+ cmd_task_wait "$task_id"
+ fi
+}
+
+# Dispatch
+if [[ $# -lt 2 ]]; then
+ echo "Usage: ct <resource> <command> [args...]"
+ echo " ct task create --name NAME --instructions TEXT [--file FILE] [--model MODEL] [--budget N]"
+ echo " ct task submit --name NAME --instructions TEXT [--wait]"
+ echo " ct task run <id>"
+ echo " ct task wait <id> [--timeout 300]"
+ echo " ct task status <id>"
+ echo " ct task list"
+ exit 1
+fi
+
+resource="$1"; shift
+command="$1"; shift
+
+case "${resource}/${command}" in
+ task/create) cmd_task_create "$@" ;;
+ task/run) cmd_task_run "$@" ;;
+ task/wait) cmd_task_wait "$@" ;;
+ task/status) cmd_task_status "$@" ;;
+ task/list) cmd_task_list ;;
+ task/submit) cmd_task_submit "$@" ;;
+ *) echo "ct: unknown command: ${resource} ${command}" >&2; exit 1 ;;
+esac
diff --git a/internal/api/server.go b/internal/api/server.go
index 48440e1..e5d0ba6 100644
--- a/internal/api/server.go
+++ b/internal/api/server.go
@@ -424,6 +424,7 @@ func (s *Server) handleCreateTask(w http.ResponseWriter, r *http.Request) {
Description string `json:"description"`
ElaborationInput string `json:"elaboration_input"`
Project string `json:"project"`
+ RepositoryURL string `json:"repository_url"`
Agent task.AgentConfig `json:"agent"`
Claude task.AgentConfig `json:"claude"` // legacy alias
Timeout string `json:"timeout"`
@@ -448,6 +449,7 @@ func (s *Server) handleCreateTask(w http.ResponseWriter, r *http.Request) {
Description: input.Description,
ElaborationInput: input.ElaborationInput,
Project: input.Project,
+ RepositoryURL: input.RepositoryURL,
Agent: input.Agent,
Priority: task.Priority(input.Priority),
Tags: input.Tags,
@@ -458,6 +460,11 @@ func (s *Server) handleCreateTask(w http.ResponseWriter, r *http.Request) {
UpdatedAt: now,
ParentTaskID: input.ParentTaskID,
}
+
+	// Legacy fallback: if repository_url is unset, derive it from the agent's project_dir
+ if t.RepositoryURL == "" && input.Agent.ProjectDir != "" {
+ t.RepositoryURL = input.Agent.ProjectDir
+ }
if t.Agent.Type == "" {
t.Agent.Type = "claude"
}
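The fallback added in `handleCreateTask` above is a simple precedence rule: the explicit `repository_url` field wins, and the legacy `agent.project_dir` is used only when it is absent. Isolated as a sketch; `resolveRepoURL` is a hypothetical helper, not a function in the codebase:

```go
package main

import "fmt"

// resolveRepoURL mirrors the fallback in handleCreateTask: prefer the
// explicit repository_url, else fall back to the legacy project_dir.
func resolveRepoURL(repositoryURL, agentProjectDir string) string {
	if repositoryURL != "" {
		return repositoryURL
	}
	return agentProjectDir
}

func main() {
	fmt.Println(resolveRepoURL("", "/srv/repo"))      // legacy fallback path
	fmt.Println(resolveRepoURL("git@host:a.git", "")) // explicit field wins
}
```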
diff --git a/internal/api/server_test.go b/internal/api/server_test.go
index 696aca3..8ff4227 100644
--- a/internal/api/server_test.go
+++ b/internal/api/server_test.go
@@ -16,6 +16,7 @@ import (
"context"
+ "github.com/google/uuid"
"github.com/thepeterstone/claudomator/internal/executor"
"github.com/thepeterstone/claudomator/internal/notify"
"github.com/thepeterstone/claudomator/internal/storage"
@@ -89,6 +90,9 @@ func testServerWithRunner(t *testing.T, runner executor.Runner) (*Server, *stora
t.Cleanup(func() { store.Close() })
logger := slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{Level: slog.LevelError}))
+ if mr, ok := runner.(*mockRunner); ok {
+ mr.logDir = t.TempDir()
+ }
runners := map[string]executor.Runner{
"claude": runner,
"gemini": runner,
@@ -99,11 +103,39 @@ func testServerWithRunner(t *testing.T, runner executor.Runner) (*Server, *stora
}
type mockRunner struct {
- err error
- sleep time.Duration
+ err error
+ sleep time.Duration
+ logDir string
+ onRun func(*task.Task, *storage.Execution) error
}
-func (m *mockRunner) Run(ctx context.Context, _ *task.Task, _ *storage.Execution) error {
+func (m *mockRunner) ExecLogDir(execID string) string {
+ if m.logDir == "" {
+ return ""
+ }
+ return filepath.Join(m.logDir, execID)
+}
+
+func (m *mockRunner) Run(ctx context.Context, t *task.Task, e *storage.Execution) error {
+ if e.ID == "" {
+ e.ID = uuid.New().String()
+ }
+ if m.logDir != "" {
+ dir := m.ExecLogDir(e.ID)
+ if err := os.MkdirAll(dir, 0755); err != nil {
+ return err
+ }
+ e.StdoutPath = filepath.Join(dir, "stdout.log")
+ e.StderrPath = filepath.Join(dir, "stderr.log")
+ e.ArtifactDir = dir
+		// Create at least an empty stdout log so readers don't fail.
+		os.WriteFile(e.StdoutPath, []byte(""), 0644)
+ }
+ if m.onRun != nil {
+ if err := m.onRun(t, e); err != nil {
+ return err
+ }
+ }
if m.sleep > 0 {
select {
case <-time.After(m.sleep):
@@ -143,40 +175,26 @@ func testServerWithGeminiMockRunner(t *testing.T) (*Server, *storage.DB) {
logger := slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{Level: slog.LevelDebug}))
- // Create the mock gemini binary script.
- mockBinDir := t.TempDir()
- mockGeminiPath := filepath.Join(mockBinDir, "mock-gemini-binary.sh")
- mockScriptContent := `#!/bin/bash
-OUTPUT_FILE=$(mktemp)
-echo "` + "```json" + `" > "$OUTPUT_FILE"
-echo "{\"type\":\"content_block_start\",\"content_block\":{\"text\":\"Hello, Gemini!\",\"type\":\"text\"}}" >> "$OUTPUT_FILE"
-echo "{\"type\":\"content_block_delta\",\"content_block\":{\"text\":\" How are you?\"}}" >> "$OUTPUT_FILE"
-echo "{\"type\":\"content_block_end\"}" >> "$OUTPUT_FILE"
-echo "{\"type\":\"message_delta\",\"message\":{\"role\":\"model\"}}" >> "$OUTPUT_FILE"
-echo "{\"type\":\"message_end\"}" >> "$OUTPUT_FILE"
-echo "` + "```" + `" >> "$OUTPUT_FILE"
-cat "$OUTPUT_FILE"
-rm "$OUTPUT_FILE"
-exit 0
-`
- if err := os.WriteFile(mockGeminiPath, []byte(mockScriptContent), 0755); err != nil {
- t.Fatalf("writing mock gemini script: %v", err)
- }
-
- // Configure GeminiRunner to use the mock script.
- geminiRunner := &executor.GeminiRunner{
- BinaryPath: mockGeminiPath,
- Logger: logger,
- LogDir: t.TempDir(), // Ensure log directory is temporary for test
- APIURL: "http://localhost:8080", // Placeholder, not used by this mock
+ mr := &mockRunner{
+ logDir: t.TempDir(),
+ onRun: func(t *task.Task, e *storage.Execution) error {
+ lines := []string{
+ `{"type":"content_block_start","content_block":{"text":"Hello, Gemini!","type":"text"}}`,
+ `{"type":"content_block_delta","content_block":{"text":" How are you?"}}`,
+ `{"type":"content_block_end"}`,
+ `{"type":"message_delta","message":{"role":"model"}}`,
+ `{"type":"message_end"}`,
+ }
+ return os.WriteFile(e.StdoutPath, []byte(strings.Join(lines, "\n")), 0644)
+ },
}
runners := map[string]executor.Runner{
- "claude": &mockRunner{}, // Keep mock for claude to not interfere
- "gemini": geminiRunner,
+ "claude": mr,
+ "gemini": mr,
}
pool := executor.NewPool(2, runners, store, logger)
- srv := NewServer(store, pool, logger, "claude", "gemini") // Pass original binary paths
+ srv := NewServer(store, pool, logger, "claude", "gemini")
return srv, store
}
diff --git a/internal/api/webhook.go b/internal/api/webhook.go
index 0530f3e..141224f 100644
--- a/internal/api/webhook.go
+++ b/internal/api/webhook.go
@@ -210,16 +210,17 @@ func (s *Server) createCIFailureTask(w http.ResponseWriter, repoName, fullName,
MaxBudgetUSD: 3.0,
AllowedTools: []string{"Read", "Edit", "Bash", "Glob", "Grep"},
},
- Priority: task.PriorityNormal,
- Tags: []string{"ci", "auto"},
- DependsOn: []string{},
- Retry: task.RetryConfig{MaxAttempts: 1, Backoff: "exponential"},
- State: task.StatePending,
- CreatedAt: now,
- UpdatedAt: now,
+ Priority: task.PriorityNormal,
+ Tags: []string{"ci", "auto"},
+ DependsOn: []string{},
+ Retry: task.RetryConfig{MaxAttempts: 1, Backoff: "exponential"},
+ State: task.StatePending,
+ CreatedAt: now,
+ UpdatedAt: now,
+ RepositoryURL: fmt.Sprintf("https://github.com/%s.git", fullName),
}
if project != nil {
- t.Agent.ProjectDir = project.Dir
+ t.Project = project.Name
}
if err := s.store.CreateTask(t); err != nil {
diff --git a/internal/api/webhook_test.go b/internal/api/webhook_test.go
index 1bc4aaa..0fc9664 100644
--- a/internal/api/webhook_test.go
+++ b/internal/api/webhook_test.go
@@ -124,8 +124,8 @@ func TestGitHubWebhook_CheckRunFailure_CreatesTask(t *testing.T) {
if !strings.Contains(tk.Name, "main") {
t.Errorf("task name %q does not contain branch", tk.Name)
}
- if tk.Agent.ProjectDir != "/workspace/myrepo" {
- t.Errorf("task project dir = %q, want /workspace/myrepo", tk.Agent.ProjectDir)
+ if tk.RepositoryURL != "https://github.com/owner/myrepo.git" {
+ t.Errorf("task repository url = %q, want https://github.com/owner/myrepo.git", tk.RepositoryURL)
}
if !contains(tk.Tags, "ci") || !contains(tk.Tags, "auto") {
t.Errorf("task tags %v missing expected ci/auto tags", tk.Tags)
@@ -375,8 +375,8 @@ func TestGitHubWebhook_FallbackToSingleProject(t *testing.T) {
if err != nil {
t.Fatalf("task not found: %v", err)
}
- if tk.Agent.ProjectDir != "/workspace/someapp" {
- t.Errorf("expected fallback to /workspace/someapp, got %q", tk.Agent.ProjectDir)
+ if tk.RepositoryURL != "https://github.com/owner/myrepo.git" {
+ t.Errorf("expected fallback repository url, got %q", tk.RepositoryURL)
}
}
diff --git a/internal/cli/run.go b/internal/cli/run.go
index 49aa28e..cfac893 100644
--- a/internal/cli/run.go
+++ b/internal/cli/run.go
@@ -72,18 +72,34 @@ func runTasks(file string, parallel int, dryRun bool) error {
logger := newLogger(verbose)
+ apiURL := "http://localhost" + cfg.ServerAddr
+ if len(cfg.ServerAddr) > 0 && cfg.ServerAddr[0] != ':' {
+ apiURL = "http://" + cfg.ServerAddr
+ }
+
runners := map[string]executor.Runner{
- "claude": &executor.ClaudeRunner{
- BinaryPath: cfg.ClaudeBinaryPath,
- Logger: logger,
- LogDir: cfg.LogDir,
+ "claude": &executor.ContainerRunner{
+ Image: cfg.ClaudeImage,
+ Logger: logger,
+ LogDir: cfg.LogDir,
+ APIURL: apiURL,
+ DropsDir: cfg.DropsDir,
+ SSHAuthSock: cfg.SSHAuthSock,
+ ClaudeBinary: cfg.ClaudeBinaryPath,
+ GeminiBinary: cfg.GeminiBinaryPath,
},
- "gemini": &executor.GeminiRunner{
- BinaryPath: cfg.GeminiBinaryPath,
- Logger: logger,
- LogDir: cfg.LogDir,
+ "gemini": &executor.ContainerRunner{
+ Image: cfg.GeminiImage,
+ Logger: logger,
+ LogDir: cfg.LogDir,
+ APIURL: apiURL,
+ DropsDir: cfg.DropsDir,
+ SSHAuthSock: cfg.SSHAuthSock,
+ ClaudeBinary: cfg.ClaudeBinaryPath,
+ GeminiBinary: cfg.GeminiBinaryPath,
},
}
+
pool := executor.NewPool(parallel, runners, store, logger)
if cfg.GeminiBinaryPath != "" {
pool.Classifier = &executor.Classifier{GeminiBinaryPath: cfg.GeminiBinaryPath}
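The address-to-URL derivation added at the top of `runTasks` can be isolated as a pure function. A sketch; `apiURLFor` is a hypothetical name for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// apiURLFor mirrors the derivation in runTasks: a bare ":port" listen
// address becomes a localhost URL, while a "host:port" address is used
// verbatim. Hypothetical helper.
func apiURLFor(serverAddr string) string {
	if serverAddr == "" || strings.HasPrefix(serverAddr, ":") {
		return "http://localhost" + serverAddr
	}
	return "http://" + serverAddr
}

func main() {
	fmt.Println(apiURLFor(":8484"))        // http://localhost:8484
	fmt.Println(apiURLFor("10.0.0.5:8484")) // http://10.0.0.5:8484
}
```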
diff --git a/internal/cli/serve.go b/internal/cli/serve.go
index efac719..98e7524 100644
--- a/internal/cli/serve.go
+++ b/internal/cli/serve.go
@@ -35,6 +35,8 @@ func newServeCmd() *cobra.Command {
cmd.Flags().StringVar(&addr, "addr", ":8484", "listen address")
cmd.Flags().StringVar(&workspaceRoot, "workspace-root", "/workspace", "root directory for listing workspaces")
+ cmd.Flags().StringVar(&cfg.ClaudeImage, "claude-image", cfg.ClaudeImage, "docker image for claude agents")
+ cmd.Flags().StringVar(&cfg.GeminiImage, "gemini-image", cfg.GeminiImage, "docker image for gemini agents")
return cmd
}
@@ -54,7 +56,7 @@ func serve(addr string) error {
if cfg.VAPIDPublicKey == "" || cfg.VAPIDPrivateKey == "" {
pub, _ := store.GetSetting("vapid_public_key")
priv, _ := store.GetSetting("vapid_private_key")
- if pub == "" || priv == "" {
+ if pub == "" || priv == "" || !notify.ValidateVAPIDPublicKey(pub) {
pub, priv, err = notify.GenerateVAPIDKeys()
if err != nil {
return fmt.Errorf("generating VAPID keys: %w", err)
@@ -73,20 +75,38 @@ func serve(addr string) error {
apiURL = "http://" + addr
}
+ // Resolve the claude config dir from HOME so the container can mount credentials.
+ claudeConfigDir := filepath.Join(os.Getenv("HOME"), ".claude")
+
runners := map[string]executor.Runner{
- "claude": &executor.ClaudeRunner{
- BinaryPath: cfg.ClaudeBinaryPath,
- Logger: logger,
- LogDir: cfg.LogDir,
- APIURL: apiURL,
- DropsDir: cfg.DropsDir,
+ // ContainerRunner: binaries are resolved via PATH inside the container image,
+ // so ClaudeBinary/GeminiBinary are left empty (host paths would not exist inside).
+ "claude": &executor.ContainerRunner{
+ Image: cfg.ClaudeImage,
+ Logger: logger,
+ LogDir: cfg.LogDir,
+ APIURL: apiURL,
+ DropsDir: cfg.DropsDir,
+ SSHAuthSock: cfg.SSHAuthSock,
+ ClaudeConfigDir: claudeConfigDir,
+ },
+ "gemini": &executor.ContainerRunner{
+ Image: cfg.GeminiImage,
+ Logger: logger,
+ LogDir: cfg.LogDir,
+ APIURL: apiURL,
+ DropsDir: cfg.DropsDir,
+ SSHAuthSock: cfg.SSHAuthSock,
+ ClaudeConfigDir: claudeConfigDir,
},
- "gemini": &executor.GeminiRunner{
- BinaryPath: cfg.GeminiBinaryPath,
- Logger: logger,
- LogDir: cfg.LogDir,
- APIURL: apiURL,
- DropsDir: cfg.DropsDir,
+ "container": &executor.ContainerRunner{
+ Image: "claudomator-agent:latest",
+ Logger: logger,
+ LogDir: cfg.LogDir,
+ APIURL: apiURL,
+ DropsDir: cfg.DropsDir,
+ SSHAuthSock: cfg.SSHAuthSock,
+ ClaudeConfigDir: claudeConfigDir,
},
}
diff --git a/internal/config/config.go b/internal/config/config.go
index a3c37fb..fa76b1b 100644
--- a/internal/config/config.go
+++ b/internal/config/config.go
@@ -20,8 +20,11 @@ type Config struct {
DBPath string `toml:"-"`
LogDir string `toml:"-"`
DropsDir string `toml:"-"`
+ SSHAuthSock string `toml:"ssh_auth_sock"`
ClaudeBinaryPath string `toml:"claude_binary_path"`
GeminiBinaryPath string `toml:"gemini_binary_path"`
+ ClaudeImage string `toml:"claude_image"`
+ GeminiImage string `toml:"gemini_image"`
MaxConcurrent int `toml:"max_concurrent"`
DefaultTimeout string `toml:"default_timeout"`
ServerAddr string `toml:"server_addr"`
@@ -48,8 +51,11 @@ func Default() (*Config, error) {
DBPath: filepath.Join(dataDir, "claudomator.db"),
LogDir: filepath.Join(dataDir, "executions"),
DropsDir: filepath.Join(dataDir, "drops"),
+ SSHAuthSock: os.Getenv("SSH_AUTH_SOCK"),
ClaudeBinaryPath: "claude",
GeminiBinaryPath: "gemini",
+ ClaudeImage: "claudomator-agent:latest",
+ GeminiImage: "claudomator-agent:latest",
MaxConcurrent: 3,
DefaultTimeout: "15m",
ServerAddr: ":8484",
diff --git a/internal/executor/claude.go b/internal/executor/claude.go
deleted file mode 100644
index 6346aa8..0000000
--- a/internal/executor/claude.go
+++ /dev/null
@@ -1,714 +0,0 @@
-package executor
-
-import (
- "bufio"
- "context"
- "encoding/json"
- "fmt"
- "io"
- "log/slog"
- "os"
- "os/exec"
- "path/filepath"
- "strings"
- "sync"
- "syscall"
- "time"
-
- "github.com/thepeterstone/claudomator/internal/storage"
- "github.com/thepeterstone/claudomator/internal/task"
-)
-
-// ClaudeRunner spawns the `claude` CLI in non-interactive mode.
-type ClaudeRunner struct {
- BinaryPath string // defaults to "claude"
- Logger *slog.Logger
- LogDir string // base directory for execution logs
- APIURL string // base URL of the Claudomator API, passed to subprocesses
- DropsDir string // path to the drops directory, passed to subprocesses
-}
-
-// BlockedError is returned by Run when the agent wrote a question file and exited.
-// The pool transitions the task to BLOCKED and stores the question for the user.
-type BlockedError struct {
- QuestionJSON string // raw JSON from the question file
- SessionID string // claude session to resume once the user answers
- SandboxDir string // preserved sandbox path; resume must run here so Claude finds its session files
-}
-
-func (e *BlockedError) Error() string { return fmt.Sprintf("task blocked: %s", e.QuestionJSON) }
-
-// ExecLogDir returns the log directory for the given execution ID.
-// Implements LogPather so the pool can persist paths before execution starts.
-func (r *ClaudeRunner) ExecLogDir(execID string) string {
- if r.LogDir == "" {
- return ""
- }
- return filepath.Join(r.LogDir, execID)
-}
-
-func (r *ClaudeRunner) binaryPath() string {
- if r.BinaryPath != "" {
- return r.BinaryPath
- }
- return "claude"
-}
-
-// Run executes a claude -p invocation, streaming output to log files.
-// It retries up to 3 times on rate-limit errors using exponential backoff.
-// If the agent writes a question file and exits, Run returns *BlockedError.
-//
-// When project_dir is set and this is not a resume execution, Run clones the
-// project into a temp sandbox, runs the agent there, then merges committed
-// changes back to project_dir. On failure the sandbox is preserved and its
-// path is included in the error.
-func (r *ClaudeRunner) Run(ctx context.Context, t *task.Task, e *storage.Execution) error {
- projectDir := t.Agent.ProjectDir
-
- // Validate project_dir exists when set.
- if projectDir != "" {
- if _, err := os.Stat(projectDir); err != nil {
- return fmt.Errorf("project_dir %q: %w", projectDir, err)
- }
- }
-
- // Setup log directory once; retries overwrite the log files.
- logDir := r.ExecLogDir(e.ID)
- if logDir == "" {
- logDir = e.ID // fallback for tests without LogDir set
- }
- if err := os.MkdirAll(logDir, 0700); err != nil {
- return fmt.Errorf("creating log dir: %w", err)
- }
- if e.StdoutPath == "" {
- e.StdoutPath = filepath.Join(logDir, "stdout.log")
- e.StderrPath = filepath.Join(logDir, "stderr.log")
- e.ArtifactDir = logDir
- }
-
- // Pre-assign session ID so we can resume after a BLOCKED state.
- // For resume executions, the claude session continues under the original
- // session ID (the one passed to --resume). Using the new exec's own UUID
- // would cause a second block-and-resume cycle to pass the wrong --resume
- // argument.
- if e.SessionID == "" {
- if e.ResumeSessionID != "" {
- e.SessionID = e.ResumeSessionID
- } else {
- e.SessionID = e.ID // reuse execution UUID as session UUID (both are UUIDs)
- }
- }
-
- // For new (non-resume) executions with a project_dir, clone into a sandbox.
- // Resume executions run in the preserved sandbox (e.SandboxDir) so Claude
- // finds its session files under the same project slug. If no sandbox was
- // preserved (e.g. task had no project_dir), fall back to project_dir.
- var sandboxDir string
- var startHEAD string
- effectiveWorkingDir := projectDir
- if e.ResumeSessionID != "" {
- if e.SandboxDir != "" {
- if _, statErr := os.Stat(e.SandboxDir); statErr == nil {
- effectiveWorkingDir = e.SandboxDir
- } else {
- // Preserved sandbox was cleaned up (e.g. /tmp purge after reboot).
- // Clone a fresh sandbox so the task can run rather than fail immediately.
- r.Logger.Warn("preserved sandbox missing, cloning fresh", "sandbox", e.SandboxDir, "project_dir", projectDir)
- e.SandboxDir = ""
- if projectDir != "" {
- var err error
- sandboxDir, err = setupSandbox(t.Agent.ProjectDir, r.Logger)
- if err != nil {
- return fmt.Errorf("setting up sandbox: %w", err)
- }
-
- effectiveWorkingDir = sandboxDir
- r.Logger.Info("fresh sandbox created for resume", "sandbox", sandboxDir, "project_dir", projectDir)
- }
- }
- }
- } else if projectDir != "" {
- var err error
- sandboxDir, err = setupSandbox(t.Agent.ProjectDir, r.Logger)
- if err != nil {
- return fmt.Errorf("setting up sandbox: %w", err)
- }
-
- effectiveWorkingDir = sandboxDir
- r.Logger.Info("sandbox created", "sandbox", sandboxDir, "project_dir", projectDir)
- }
-
- if effectiveWorkingDir != "" {
- // Capture the initial HEAD so we can identify new commits later.
- headOut, _ := exec.Command("git", gitSafe("-C", effectiveWorkingDir, "rev-parse", "HEAD")...).Output()
- startHEAD = strings.TrimSpace(string(headOut))
- }
-
- questionFile := filepath.Join(logDir, "question.json")
- args := r.buildArgs(t, e, questionFile)
-
- attempt := 0
- err := runWithBackoff(ctx, 3, 5*time.Second, func() error {
- if attempt > 0 {
- delay := 5 * time.Second * (1 << (attempt - 1))
- r.Logger.Warn("rate-limited by Claude API, retrying",
- "attempt", attempt,
- "delay", delay,
- )
- }
- attempt++
- return r.execOnce(ctx, args, effectiveWorkingDir, projectDir, e)
- })
- if err != nil {
- if sandboxDir != "" {
- return fmt.Errorf("%w (sandbox preserved at %s)", err, sandboxDir)
- }
- return err
- }
-
- // Check whether the agent left a question before exiting.
- data, readErr := os.ReadFile(questionFile)
- if readErr == nil {
- os.Remove(questionFile) // consumed
- questionJSON := strings.TrimSpace(string(data))
- // If the agent wrote a completion report instead of a real question,
- // extract the text as the summary and fall through to normal completion.
- if isCompletionReport(questionJSON) {
- r.Logger.Info("treating question file as completion report", "taskID", e.TaskID)
- e.Summary = extractQuestionText(questionJSON)
- } else {
- // Preserve sandbox on BLOCKED — agent may have partial work and its
- // Claude session files are stored under the sandbox's project slug.
- // The resume execution must run in the same directory.
- return &BlockedError{QuestionJSON: questionJSON, SessionID: e.SessionID, SandboxDir: sandboxDir}
- }
- }
-
- // Read agent summary if written.
- summaryFile := filepath.Join(logDir, "summary.txt")
- if summaryData, readErr := os.ReadFile(summaryFile); readErr == nil {
- os.Remove(summaryFile) // consumed
- e.Summary = strings.TrimSpace(string(summaryData))
- }
-
- // Merge sandbox back to project_dir and clean up.
- if sandboxDir != "" {
- if mergeErr := teardownSandbox(projectDir, sandboxDir, startHEAD, r.Logger, e); mergeErr != nil {
- return fmt.Errorf("sandbox teardown: %w (sandbox preserved at %s)", mergeErr, sandboxDir)
- }
- }
- return nil
-}
-
-// isCompletionReport returns true when a question-file JSON looks like a
-// completion report rather than a real user question. Heuristic: no options
-// (or empty options) and no "?" anywhere in the text.
-func isCompletionReport(questionJSON string) bool {
- var q struct {
- Text string `json:"text"`
- Options []string `json:"options"`
- }
- if err := json.Unmarshal([]byte(questionJSON), &q); err != nil {
- return false
- }
- return len(q.Options) == 0 && !strings.Contains(q.Text, "?")
-}
-
-// extractQuestionText returns the "text" field from a question-file JSON, or
-// the raw string if parsing fails.
-func extractQuestionText(questionJSON string) string {
- var q struct {
- Text string `json:"text"`
- }
- if err := json.Unmarshal([]byte(questionJSON), &q); err != nil {
- return questionJSON
- }
- return strings.TrimSpace(q.Text)
-}
-
-// gitSafe returns git arguments that prepend "-c safe.directory=*" so that
-// commands succeed regardless of the repository owner. This is needed when
-// claudomator operates on project directories owned by a different OS user.
-func gitSafe(args ...string) []string {
- return append([]string{"-c", "safe.directory=*"}, args...)
-}
-
-// sandboxCloneSource returns the URL to clone the sandbox from. It prefers a
-// remote named "local" (a local bare repo that accepts pushes cleanly), then
-// falls back to "origin", then to the working copy path itself.
-func sandboxCloneSource(projectDir string) string {
- // Prefer "local" remote, but only if it points to a local path (accepts pushes).
- if out, err := exec.Command("git", gitSafe("-C", projectDir, "remote", "get-url", "local")...).Output(); err == nil {
- u := strings.TrimSpace(string(out))
- if u != "" && (strings.HasPrefix(u, "/") || strings.HasPrefix(u, "file://")) {
- return u
- }
- }
- // Fall back to "origin" — any URL scheme is acceptable for cloning.
- if out, err := exec.Command("git", gitSafe("-C", projectDir, "remote", "get-url", "origin")...).Output(); err == nil {
- if u := strings.TrimSpace(string(out)); u != "" {
- return u
- }
- }
- return projectDir
-}
-
-// setupSandbox prepares a temporary git clone of projectDir.
-// If projectDir is not a git repo it is initialised with an initial commit first.
-func setupSandbox(projectDir string, logger *slog.Logger) (string, error) {
- // Ensure projectDir is a git repo; initialise if not.
- if err := exec.Command("git", gitSafe("-C", projectDir, "rev-parse", "--git-dir")...).Run(); err != nil {
- cmds := [][]string{
- gitSafe("-C", projectDir, "init"),
- gitSafe("-C", projectDir, "add", "-A"),
- gitSafe("-C", projectDir, "commit", "--allow-empty", "-m", "chore: initial commit"),
- }
- for _, args := range cmds {
- if out, err := exec.Command("git", args...).CombinedOutput(); err != nil { //nolint:gosec
- return "", fmt.Errorf("git init %s: %w\n%s", projectDir, err, out)
- }
- }
- }
-
- src := sandboxCloneSource(projectDir)
-
- tempDir, err := os.MkdirTemp("", "claudomator-sandbox-*")
- if err != nil {
- return "", fmt.Errorf("creating sandbox dir: %w", err)
- }
- // git clone requires the target to not exist; remove the placeholder first.
- if err := os.Remove(tempDir); err != nil {
- return "", fmt.Errorf("removing temp dir placeholder: %w", err)
- }
- out, err := exec.Command("git", gitSafe("clone", "--no-hardlinks", src, tempDir)...).CombinedOutput()
- if err != nil {
- return "", fmt.Errorf("git clone: %w\n%s", err, out)
- }
- return tempDir, nil
-}
-
-// teardownSandbox verifies the sandbox is clean and pushes new commits to the
-// canonical bare repo. If the push is rejected because another task pushed
-// concurrently, it fetches and rebases then retries once.
-//
-// The working copy (projectDir) is NOT updated automatically — it is the
-// developer's workspace and is pulled manually. This avoids permission errors
-// from mixed-owner .git/objects directories.
-func teardownSandbox(projectDir, sandboxDir, startHEAD string, logger *slog.Logger, execRecord *storage.Execution) error {
- // Automatically commit uncommitted changes.
- out, err := exec.Command("git", "-C", sandboxDir, "status", "--porcelain").Output()
- if err != nil {
- return fmt.Errorf("git status: %w", err)
- }
- if len(strings.TrimSpace(string(out))) > 0 {
- logger.Info("autocommitting uncommitted changes", "sandbox", sandboxDir)
-
- // Run build before autocommitting.
- if _, err := os.Stat(filepath.Join(sandboxDir, "Makefile")); err == nil {
- logger.Info("running 'make build' before autocommit", "sandbox", sandboxDir)
- if buildOut, buildErr := exec.Command("make", "-C", sandboxDir, "build").CombinedOutput(); buildErr != nil {
- return fmt.Errorf("build failed before autocommit: %w\n%s", buildErr, buildOut)
- }
- } else if _, err := os.Stat(filepath.Join(sandboxDir, "gradlew")); err == nil {
- logger.Info("running './gradlew build' before autocommit", "sandbox", sandboxDir)
- cmd := exec.Command("./gradlew", "build")
- cmd.Dir = sandboxDir
- if buildOut, buildErr := cmd.CombinedOutput(); buildErr != nil {
- return fmt.Errorf("build failed before autocommit: %w\n%s", buildErr, buildOut)
- }
- } else if _, err := os.Stat(filepath.Join(sandboxDir, "go.mod")); err == nil {
- logger.Info("running 'go build ./...' before autocommit", "sandbox", sandboxDir)
- cmd := exec.Command("go", "build", "./...")
- cmd.Dir = sandboxDir
- if buildOut, buildErr := cmd.CombinedOutput(); buildErr != nil {
- return fmt.Errorf("build failed before autocommit: %w\n%s", buildErr, buildOut)
- }
- }
-
- cmds := [][]string{
- gitSafe("-C", sandboxDir, "add", "-A"),
- gitSafe("-C", sandboxDir, "commit", "-m", "chore: autocommit uncommitted changes"),
- }
- for _, args := range cmds {
- if out, err := exec.Command("git", args...).CombinedOutput(); err != nil {
- return fmt.Errorf("autocommit failed (%v): %w\n%s", args, err, out)
- }
- }
- }
-
- // Capture commits before pushing/deleting.
- // Use startHEAD..HEAD to find all commits made during this execution.
- logRange := "origin/HEAD..HEAD"
- if startHEAD != "" && startHEAD != "HEAD" {
- logRange = startHEAD + "..HEAD"
- }
-
- logCmd := exec.Command("git", gitSafe("-C", sandboxDir, "log", logRange, "--pretty=format:%H|%s")...)
- logOut, logErr := logCmd.CombinedOutput()
- if logErr == nil {
- lines := strings.Split(strings.TrimSpace(string(logOut)), "\n")
- logger.Debug("captured commits", "count", len(lines), "range", logRange)
- for _, line := range lines {
- if line == "" {
- continue
- }
- parts := strings.SplitN(line, "|", 2)
- if len(parts) == 2 {
- execRecord.Commits = append(execRecord.Commits, task.GitCommit{
- Hash: parts[0],
- Message: parts[1],
- })
- }
- }
- } else {
- logger.Warn("failed to capture commits", "err", logErr, "range", logRange, "output", string(logOut))
- }
-
- // Check whether there are any new commits to push.
- ahead, err := exec.Command("git", gitSafe("-C", sandboxDir, "rev-list", "--count", logRange)...).Output()
- if err != nil {
- logger.Warn("could not determine commits ahead of origin; proceeding", "err", err, "range", logRange)
- }
- if strings.TrimSpace(string(ahead)) == "0" {
- os.RemoveAll(sandboxDir)
- return nil
- }
-
- // Push from sandbox → bare repo (sandbox's origin is the bare repo).
- if out, err := exec.Command("git", "-C", sandboxDir, "push", "origin", "HEAD").CombinedOutput(); err != nil {
- // If rejected due to concurrent push, fetch+rebase and retry once.
- if strings.Contains(string(out), "fetch first") || strings.Contains(string(out), "non-fast-forward") {
- logger.Info("push rejected (concurrent task); rebasing and retrying", "sandbox", sandboxDir)
- if out2, err2 := exec.Command("git", "-C", sandboxDir, "pull", "--rebase", "origin", "master").CombinedOutput(); err2 != nil {
- return fmt.Errorf("git rebase before retry push: %w\n%s", err2, out2)
- }
- // Re-capture commits after rebase (hashes might have changed)
- execRecord.Commits = nil
- logOut, logErr = exec.Command("git", "-C", sandboxDir, "log", logRange, "--pretty=format:%H|%s").Output()
- if logErr == nil {
- lines := strings.Split(strings.TrimSpace(string(logOut)), "\n")
- for _, line := range lines {
- parts := strings.SplitN(line, "|", 2)
- if len(parts) == 2 {
- execRecord.Commits = append(execRecord.Commits, task.GitCommit{
- Hash: parts[0],
- Message: parts[1],
- })
- }
- }
- }
-
- if out3, err3 := exec.Command("git", "-C", sandboxDir, "push", "origin", "HEAD").CombinedOutput(); err3 != nil {
- return fmt.Errorf("git push to origin (after rebase): %w\n%s", err3, out3)
- }
- } else {
- return fmt.Errorf("git push to origin: %w\n%s", err, out)
- }
- }
-
- logger.Info("sandbox pushed to bare repo", "sandbox", sandboxDir)
- os.RemoveAll(sandboxDir)
- return nil
-}
-
-// execOnce runs the claude subprocess once, streaming output to e's log paths.
-func (r *ClaudeRunner) execOnce(ctx context.Context, args []string, workingDir, projectDir string, e *storage.Execution) error {
- cmd := exec.CommandContext(ctx, r.binaryPath(), args...)
- cmd.Env = append(os.Environ(),
- "CLAUDOMATOR_API_URL="+r.APIURL,
- "CLAUDOMATOR_TASK_ID="+e.TaskID,
- "CLAUDOMATOR_PROJECT_DIR="+projectDir,
- "CLAUDOMATOR_QUESTION_FILE="+filepath.Join(e.ArtifactDir, "question.json"),
- "CLAUDOMATOR_SUMMARY_FILE="+filepath.Join(e.ArtifactDir, "summary.txt"),
- "CLAUDOMATOR_DROP_DIR="+r.DropsDir,
- )
- // Put the subprocess in its own process group so we can SIGKILL the entire
- // group (MCP servers, bash children, etc.) on cancellation.
- cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
- if workingDir != "" {
- cmd.Dir = workingDir
- }
-
- stdoutFile, err := os.Create(e.StdoutPath)
- if err != nil {
- return fmt.Errorf("creating stdout log: %w", err)
- }
- defer stdoutFile.Close()
-
- stderrFile, err := os.Create(e.StderrPath)
- if err != nil {
- return fmt.Errorf("creating stderr log: %w", err)
- }
- defer stderrFile.Close()
-
- // Use os.Pipe for stdout so we own the read-end lifetime.
- // cmd.StdoutPipe() would add the read-end to closeAfterWait, causing
- // cmd.Wait() to close it before our goroutine finishes reading.
- stdoutR, stdoutW, err := os.Pipe()
- if err != nil {
- return fmt.Errorf("creating stdout pipe: %w", err)
- }
- cmd.Stdout = stdoutW // *os.File — not added to closeAfterStart/Wait
- cmd.Stderr = stderrFile
-
- if err := cmd.Start(); err != nil {
- stdoutW.Close()
- stdoutR.Close()
- return fmt.Errorf("starting claude: %w", err)
- }
- // Close our write-end immediately; the subprocess holds its own copy.
- // The goroutine below gets EOF when the subprocess exits.
- stdoutW.Close()
-
- // killDone is closed when cmd.Wait() returns, stopping the pgid-kill goroutine.
- //
- // Safety: this goroutine cannot block indefinitely. The select has two arms:
- // • ctx.Done() — fires if the caller cancels (e.g. timeout, user cancel).
- // The goroutine sends SIGKILL and exits immediately.
- // • killDone — closed by close(killDone) below, immediately after cmd.Wait()
- // returns. This fires when the process exits for any reason (natural exit,
- // SIGKILL from the ctx arm, or any other signal). The goroutine exits without
- // doing anything.
- //
- // Therefore: for a task that completes normally with a long-lived (non-cancelled)
- // context, the killDone arm fires and the goroutine exits. There is no path where
- // this goroutine outlives execOnce().
- killDone := make(chan struct{})
- go func() {
- select {
- case <-ctx.Done():
- // SIGKILL the entire process group to reap orphan children.
- syscall.Kill(-cmd.Process.Pid, syscall.SIGKILL)
- case <-killDone:
- }
- }()
-
- // Stream stdout to the log file and parse cost/errors.
- // wg ensures costUSD and streamErr are fully written before we read them after cmd.Wait().
- var costUSD float64
- var streamErr error
- var wg sync.WaitGroup
- wg.Add(1)
- go func() {
- defer wg.Done()
- costUSD, streamErr = parseStream(stdoutR, stdoutFile, r.Logger)
- stdoutR.Close()
- }()
-
- waitErr := cmd.Wait()
- close(killDone) // stop the pgid-kill goroutine
- wg.Wait() // drain remaining stdout before reading costUSD/streamErr
-
- e.CostUSD = costUSD
-
- if waitErr != nil {
- if exitErr, ok := waitErr.(*exec.ExitError); ok {
- e.ExitCode = exitErr.ExitCode()
- }
- // If the stream captured a rate-limit or quota message, return it
- // so callers can distinguish it from a generic exit-status failure.
- if isRateLimitError(streamErr) || isQuotaExhausted(streamErr) {
- return streamErr
- }
- if tail := tailFile(e.StderrPath, 20); tail != "" {
- return fmt.Errorf("claude exited with error: %w\nstderr:\n%s", waitErr, tail)
- }
- return fmt.Errorf("claude exited with error: %w", waitErr)
- }
-
- e.ExitCode = 0
- if streamErr != nil {
- return streamErr
- }
- return nil
-}
-
-func (r *ClaudeRunner) buildArgs(t *task.Task, e *storage.Execution, questionFile string) []string {
- // Resume execution: the agent already has context; just deliver the answer.
- if e.ResumeSessionID != "" {
- args := []string{
- "-p", e.ResumeAnswer,
- "--resume", e.ResumeSessionID,
- "--output-format", "stream-json",
- "--verbose",
- }
- permMode := t.Agent.PermissionMode
- if permMode == "" {
- permMode = "bypassPermissions"
- }
- args = append(args, "--permission-mode", permMode)
- if t.Agent.Model != "" {
- args = append(args, "--model", t.Agent.Model)
- }
- return args
- }
-
- instructions := t.Agent.Instructions
- allowedTools := t.Agent.AllowedTools
-
- if !t.Agent.SkipPlanning {
- instructions = withPlanningPreamble(instructions)
- // Ensure Bash is available so the agent can POST subtasks and ask questions.
- hasBash := false
- for _, tool := range allowedTools {
- if tool == "Bash" {
- hasBash = true
- break
- }
- }
- if !hasBash {
- allowedTools = append(allowedTools, "Bash")
- }
- }
-
- args := []string{
- "-p", instructions,
- "--session-id", e.SessionID,
- "--output-format", "stream-json",
- "--verbose",
- }
-
- if t.Agent.Model != "" {
- args = append(args, "--model", t.Agent.Model)
- }
- if t.Agent.MaxBudgetUSD > 0 {
- args = append(args, "--max-budget-usd", fmt.Sprintf("%.2f", t.Agent.MaxBudgetUSD))
- }
- // Default to bypassPermissions — claudomator runs tasks unattended, so
- // prompting for write access would always stall execution. Tasks that need
- // a more restrictive mode can set permission_mode explicitly.
- permMode := t.Agent.PermissionMode
- if permMode == "" {
- permMode = "bypassPermissions"
- }
- args = append(args, "--permission-mode", permMode)
- if t.Agent.SystemPromptAppend != "" {
- args = append(args, "--append-system-prompt", t.Agent.SystemPromptAppend)
- }
- for _, tool := range allowedTools {
- args = append(args, "--allowedTools", tool)
- }
- for _, tool := range t.Agent.DisallowedTools {
- args = append(args, "--disallowedTools", tool)
- }
- for _, f := range t.Agent.ContextFiles {
- args = append(args, "--add-dir", f)
- }
- args = append(args, t.Agent.AdditionalArgs...)
-
- return args
-}
-
-// parseStream reads streaming JSON from claude, writes to w, and returns
-// (costUSD, error). error is non-nil if the stream signals task failure:
-// - result message has is_error:true
-// - a tool_result was denied due to missing permissions
-func parseStream(r io.Reader, w io.Writer, logger *slog.Logger) (float64, error) {
- tee := io.TeeReader(r, w)
- scanner := bufio.NewScanner(tee)
- scanner.Buffer(make([]byte, 1024*1024), 1024*1024) // 1MB buffer for large lines
-
- var totalCost float64
- var streamErr error
-
- for scanner.Scan() {
- line := scanner.Bytes()
- var msg map[string]interface{}
- if err := json.Unmarshal(line, &msg); err != nil {
- continue
- }
-
- msgType, _ := msg["type"].(string)
- switch msgType {
- case "rate_limit_event":
- if info, ok := msg["rate_limit_info"].(map[string]interface{}); ok {
- status, _ := info["status"].(string)
- if status == "rejected" {
- streamErr = fmt.Errorf("claude rate limit reached (rejected): %v", msg)
- // Immediately break since we can't continue anyway
- break
- }
- }
- case "assistant":
- if errStr, ok := msg["error"].(string); ok && errStr == "rate_limit" {
- streamErr = fmt.Errorf("claude rate limit reached: %v", msg)
- }
- case "result":
- if isErr, _ := msg["is_error"].(bool); isErr {
- result, _ := msg["result"].(string)
- if result != "" {
- streamErr = fmt.Errorf("claude task failed: %s", result)
- } else {
- streamErr = fmt.Errorf("claude task failed (is_error=true in result)")
- }
- }
- // Prefer total_cost_usd from result message; fall through to legacy check below.
- if cost, ok := msg["total_cost_usd"].(float64); ok {
- totalCost = cost
- }
- case "user":
- // Detect permission-denial tool_results. These occur when permission_mode
- // is not bypassPermissions and claude exits 0 without completing its task.
- if err := permissionDenialError(msg); err != nil && streamErr == nil {
- streamErr = err
- }
- }
-
- // Legacy cost field used by older claude versions.
- if cost, ok := msg["cost_usd"].(float64); ok {
- totalCost = cost
- }
- }
-
- return totalCost, streamErr
-}
-
-// permissionDenialError inspects a "user" stream message for tool_result entries
-// that were denied due to missing permissions. Returns an error if found.
-func permissionDenialError(msg map[string]interface{}) error {
- message, ok := msg["message"].(map[string]interface{})
- if !ok {
- return nil
- }
- content, ok := message["content"].([]interface{})
- if !ok {
- return nil
- }
- for _, item := range content {
- itemMap, ok := item.(map[string]interface{})
- if !ok {
- continue
- }
- if itemMap["type"] != "tool_result" {
- continue
- }
- if isErr, _ := itemMap["is_error"].(bool); !isErr {
- continue
- }
- text, _ := itemMap["content"].(string)
- if strings.Contains(text, "requested permissions") || strings.Contains(text, "haven't granted") {
- return fmt.Errorf("permission denied by host: %s", text)
- }
- }
- return nil
-}
-
-// tailFile returns the last n lines of the file at path, or empty string if
-// the file cannot be read. Used to surface subprocess stderr on failure.
-func tailFile(path string, n int) string {
- f, err := os.Open(path)
- if err != nil {
- return ""
- }
- defer f.Close()
-
- var lines []string
- scanner := bufio.NewScanner(f)
- for scanner.Scan() {
- lines = append(lines, scanner.Text())
- if len(lines) > n {
- lines = lines[1:]
- }
- }
- return strings.Join(lines, "\n")
-}
diff --git a/internal/executor/claude_test.go b/internal/executor/claude_test.go
deleted file mode 100644
index e76fbf2..0000000
--- a/internal/executor/claude_test.go
+++ /dev/null
@@ -1,882 +0,0 @@
-package executor
-
-import (
- "context"
- "errors"
- "fmt"
- "io"
- "log/slog"
- "os"
- "os/exec"
- "path/filepath"
- "runtime"
- "strings"
- "testing"
- "time"
-
- "github.com/thepeterstone/claudomator/internal/storage"
- "github.com/thepeterstone/claudomator/internal/task"
-)
-
-func TestClaudeRunner_BuildArgs_BasicTask(t *testing.T) {
- r := &ClaudeRunner{}
- tk := &task.Task{
- Agent: task.AgentConfig{
- Type: "claude",
- Instructions: "fix the bug",
- Model: "sonnet",
- SkipPlanning: true,
- },
- }
-
- args := r.buildArgs(tk, &storage.Execution{ID: "test-exec"}, "/tmp/q.json")
-
- argMap := make(map[string]bool)
- for _, a := range args {
- argMap[a] = true
- }
- for _, want := range []string{"-p", "fix the bug", "--output-format", "stream-json", "--verbose", "--model", "sonnet"} {
- if !argMap[want] {
- t.Errorf("missing arg %q in %v", want, args)
- }
- }
-}
-
-func TestClaudeRunner_BuildArgs_FullConfig(t *testing.T) {
- r := &ClaudeRunner{}
- tk := &task.Task{
- Agent: task.AgentConfig{
- Type: "claude",
- Instructions: "implement feature",
- Model: "opus",
- MaxBudgetUSD: 5.0,
- PermissionMode: "bypassPermissions",
- SystemPromptAppend: "Follow TDD",
- AllowedTools: []string{"Bash", "Edit"},
- DisallowedTools: []string{"Write"},
- ContextFiles: []string{"/src"},
- AdditionalArgs: []string{"--verbose"},
- SkipPlanning: true,
- },
- }
-
- args := r.buildArgs(tk, &storage.Execution{ID: "test-exec"}, "/tmp/q.json")
-
- // Check key args are present.
- argMap := make(map[string]bool)
- for _, a := range args {
- argMap[a] = true
- }
-
- requiredArgs := []string{
- "-p", "implement feature", "--output-format", "stream-json",
- "--model", "opus", "--max-budget-usd", "5.00",
- "--permission-mode", "bypassPermissions",
- "--append-system-prompt", "Follow TDD",
- "--allowedTools", "Bash", "Edit",
- "--disallowedTools", "Write",
- "--add-dir", "/src",
- "--verbose",
- }
- for _, req := range requiredArgs {
- if !argMap[req] {
- t.Errorf("missing arg %q in %v", req, args)
- }
- }
-}
-
-func TestClaudeRunner_BuildArgs_DefaultsToBypassPermissions(t *testing.T) {
- r := &ClaudeRunner{}
- tk := &task.Task{
- Agent: task.AgentConfig{
- Type: "claude",
- Instructions: "do work",
- SkipPlanning: true,
- // PermissionMode intentionally not set
- },
- }
-
- args := r.buildArgs(tk, &storage.Execution{ID: "test-exec"}, "/tmp/q.json")
-
- found := false
- for i, a := range args {
- if a == "--permission-mode" && i+1 < len(args) && args[i+1] == "bypassPermissions" {
- found = true
- }
- }
- if !found {
- t.Errorf("expected --permission-mode bypassPermissions when PermissionMode is empty, args: %v", args)
- }
-}
-
-func TestClaudeRunner_BuildArgs_RespectsExplicitPermissionMode(t *testing.T) {
- r := &ClaudeRunner{}
- tk := &task.Task{
- Agent: task.AgentConfig{
- Type: "claude",
- Instructions: "do work",
- PermissionMode: "default",
- SkipPlanning: true,
- },
- }
-
- args := r.buildArgs(tk, &storage.Execution{ID: "test-exec"}, "/tmp/q.json")
-
- for i, a := range args {
- if a == "--permission-mode" && i+1 < len(args) {
- if args[i+1] != "default" {
- t.Errorf("expected --permission-mode default, got %q", args[i+1])
- }
- return
- }
- }
- t.Errorf("--permission-mode flag not found in args: %v", args)
-}
-
-func TestClaudeRunner_BuildArgs_AlwaysIncludesVerbose(t *testing.T) {
- r := &ClaudeRunner{}
- tk := &task.Task{
- Agent: task.AgentConfig{
- Type: "claude",
- Instructions: "do something",
- SkipPlanning: true,
- },
- }
-
- args := r.buildArgs(tk, &storage.Execution{ID: "test-exec"}, "/tmp/q.json")
-
- found := false
- for _, a := range args {
- if a == "--verbose" {
- found = true
- break
- }
- }
- if !found {
- t.Errorf("--verbose missing from args: %v", args)
- }
-}
-
-func TestClaudeRunner_BuildArgs_PreamblePrepended(t *testing.T) {
- r := &ClaudeRunner{}
- tk := &task.Task{
- Agent: task.AgentConfig{
- Type: "claude",
- Instructions: "fix the bug",
- SkipPlanning: false,
- },
- }
-
- args := r.buildArgs(tk, &storage.Execution{ID: "test-exec"}, "/tmp/q.json")
-
- // The -p value should start with the preamble and end with the original instructions.
- if len(args) < 2 || args[0] != "-p" {
- t.Fatalf("expected -p as first arg, got: %v", args)
- }
- if !strings.HasPrefix(args[1], "## Runtime Environment") {
- t.Errorf("instructions should start with planning preamble, got prefix: %q", args[1][:min(len(args[1]), 20)])
- }
- if !strings.Contains(args[1], "$CLAUDOMATOR_PROJECT_DIR") {
- t.Errorf("preamble should mention $CLAUDOMATOR_PROJECT_DIR")
- }
- if !strings.HasSuffix(args[1], "fix the bug") {
- t.Errorf("instructions should end with original instructions")
- }
-}
-
-func TestClaudeRunner_BuildArgs_PreambleAddsBash(t *testing.T) {
- r := &ClaudeRunner{}
- tk := &task.Task{
- Agent: task.AgentConfig{
- Type: "claude",
- Instructions: "do work",
- AllowedTools: []string{"Read"},
- SkipPlanning: false,
- },
- }
-
- args := r.buildArgs(tk, &storage.Execution{ID: "test-exec"}, "/tmp/q.json")
-
- // Bash should be appended to allowed tools.
- foundBash := false
- for i, a := range args {
- if a == "--allowedTools" && i+1 < len(args) && args[i+1] == "Bash" {
- foundBash = true
- }
- }
- if !foundBash {
- t.Errorf("Bash should be added to --allowedTools when preamble is active: %v", args)
- }
-}
-
-func TestClaudeRunner_BuildArgs_PreambleBashNotDuplicated(t *testing.T) {
- r := &ClaudeRunner{}
- tk := &task.Task{
- Agent: task.AgentConfig{
- Type: "claude",
- Instructions: "do work",
- AllowedTools: []string{"Bash", "Read"},
- SkipPlanning: false,
- },
- }
-
- args := r.buildArgs(tk, &storage.Execution{ID: "test-exec"}, "/tmp/q.json")
-
- // Count Bash occurrences in --allowedTools values.
- bashCount := 0
- for i, a := range args {
- if a == "--allowedTools" && i+1 < len(args) && args[i+1] == "Bash" {
- bashCount++
- }
- }
- if bashCount != 1 {
- t.Errorf("Bash should appear exactly once in --allowedTools, got %d: %v", bashCount, args)
- }
-}
-
-// TestClaudeRunner_Run_ResumeSetsSessionIDFromResumeSession verifies that when a
-// resume execution is itself blocked again, the stored SessionID is the original
-// resumed session, not the new execution's own UUID. Without this, a second
-// block-and-resume cycle passes the wrong --resume session ID and fails.
-func TestClaudeRunner_Run_ResumeSetsSessionIDFromResumeSession(t *testing.T) {
- logDir := t.TempDir()
- r := &ClaudeRunner{
- BinaryPath: "true", // exits 0, no output
- Logger: slog.New(slog.NewTextHandler(io.Discard, nil)),
- LogDir: logDir,
- }
- tk := &task.Task{
- Agent: task.AgentConfig{
- Type: "claude",
- Instructions: "continue",
- SkipPlanning: true,
- },
- }
- exec := &storage.Execution{
- ID: "resume-exec-uuid",
- TaskID: "task-1",
- ResumeSessionID: "original-session-uuid",
- ResumeAnswer: "yes",
- }
-
- // Run completes successfully (binary is "true").
- _ = r.Run(context.Background(), tk, exec)
-
- // SessionID must be the original session (ResumeSessionID), not the new
- // exec's own ID. If it were exec.ID, a second blocked-then-resumed cycle
- // would use the wrong --resume argument and fail.
- if exec.SessionID != "original-session-uuid" {
- t.Errorf("SessionID after resume Run: want %q, got %q", "original-session-uuid", exec.SessionID)
- }
-}
-
-func TestClaudeRunner_Run_InaccessibleWorkingDir_ReturnsError(t *testing.T) {
- r := &ClaudeRunner{
- BinaryPath: "true", // would succeed if it ran
- Logger: slog.New(slog.NewTextHandler(io.Discard, nil)),
- LogDir: t.TempDir(),
- }
- tk := &task.Task{
- Agent: task.AgentConfig{
- Type: "claude",
- ProjectDir: "/nonexistent/path/does/not/exist",
- SkipPlanning: true,
- },
- }
- exec := &storage.Execution{ID: "test-exec"}
-
- err := r.Run(context.Background(), tk, exec)
-
- if err == nil {
- t.Fatal("expected error for inaccessible working_dir, got nil")
- }
- if !strings.Contains(err.Error(), "project_dir") {
- t.Errorf("expected 'project_dir' in error, got: %v", err)
- }
-}
-
-func TestClaudeRunner_BinaryPath_Default(t *testing.T) {
- r := &ClaudeRunner{}
- if r.binaryPath() != "claude" {
- t.Errorf("want 'claude', got %q", r.binaryPath())
- }
-}
-
-func TestClaudeRunner_BinaryPath_Custom(t *testing.T) {
- r := &ClaudeRunner{BinaryPath: "/usr/local/bin/claude"}
- if r.binaryPath() != "/usr/local/bin/claude" {
- t.Errorf("want custom path, got %q", r.binaryPath())
- }
-}
-
-// TestExecOnce_NoGoroutineLeak_OnNaturalExit verifies that execOnce does not
-// leave behind any goroutines when the subprocess exits normally (no context
-// cancellation). Both the pgid-kill goroutine and the parseStream goroutine
-// must have exited before execOnce returns.
-func TestExecOnce_NoGoroutineLeak_OnNaturalExit(t *testing.T) {
- logDir := t.TempDir()
- r := &ClaudeRunner{
- BinaryPath: "true", // exits immediately with status 0, produces no output
- Logger: slog.New(slog.NewTextHandler(io.Discard, nil)),
- LogDir: logDir,
- }
- e := &storage.Execution{
- ID: "goroutine-leak-test",
- TaskID: "task-id",
- StdoutPath: filepath.Join(logDir, "stdout.log"),
- StderrPath: filepath.Join(logDir, "stderr.log"),
- ArtifactDir: logDir,
- }
-
- // Let any goroutines from test infrastructure settle before sampling.
- runtime.Gosched()
- baseline := runtime.NumGoroutine()
-
- if err := r.execOnce(context.Background(), []string{}, "", "", e); err != nil {
- t.Fatalf("execOnce failed: %v", err)
- }
-
- // Give the scheduler a moment to let any leaked goroutines actually exit.
- // In correct code the goroutines exit before execOnce returns, so this is
- // just a safety buffer for the scheduler.
- time.Sleep(10 * time.Millisecond)
- runtime.Gosched()
-
- after := runtime.NumGoroutine()
- if after > baseline {
- t.Errorf("goroutine leak: %d goroutines before execOnce, %d after (leaked %d)",
- baseline, after, after-baseline)
- }
-}
-
-// initGitRepo creates a git repo in dir with one commit so it is clonable.
-func initGitRepo(t *testing.T, dir string) {
- t.Helper()
- cmds := [][]string{
- {"git", "-c", "safe.directory=*", "-C", dir, "init", "-b", "main"},
- {"git", "-c", "safe.directory=*", "-C", dir, "config", "user.email", "test@test"},
- {"git", "-c", "safe.directory=*", "-C", dir, "config", "user.name", "test"},
- }
- for _, args := range cmds {
- if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
- t.Fatalf("%v: %v\n%s", args, err, out)
- }
- }
- if err := os.WriteFile(filepath.Join(dir, "init.txt"), []byte("init"), 0644); err != nil {
- t.Fatal(err)
- }
- if out, err := exec.Command("git", "-c", "safe.directory=*", "-C", dir, "add", ".").CombinedOutput(); err != nil {
- t.Fatalf("git add: %v\n%s", err, out)
- }
- if out, err := exec.Command("git", "-c", "safe.directory=*", "-C", dir, "commit", "-m", "init").CombinedOutput(); err != nil {
- t.Fatalf("git commit: %v\n%s", err, out)
- }
-}
-
-func TestSandboxCloneSource_PrefersLocalRemote(t *testing.T) {
- dir := t.TempDir()
- initGitRepo(t, dir)
- // Add a "local" remote pointing to a bare repo.
- bare := t.TempDir()
- exec.Command("git", "init", "--bare", bare).Run()
- exec.Command("git", "-C", dir, "remote", "add", "local", bare).Run()
- exec.Command("git", "-C", dir, "remote", "add", "origin", "https://example.com/repo").Run()
-
- got := sandboxCloneSource(dir)
- if got != bare {
- t.Errorf("expected bare repo path %q, got %q", bare, got)
- }
-}
-
-func TestSandboxCloneSource_FallsBackToOrigin(t *testing.T) {
- dir := t.TempDir()
- initGitRepo(t, dir)
- originURL := "https://example.com/origin-repo"
- exec.Command("git", "-C", dir, "remote", "add", "origin", originURL).Run()
-
- got := sandboxCloneSource(dir)
- if got != originURL {
- t.Errorf("expected origin URL %q, got %q", originURL, got)
- }
-}
-
-func TestSandboxCloneSource_FallsBackToProjectDir(t *testing.T) {
- dir := t.TempDir()
- initGitRepo(t, dir)
- // No remotes configured.
- got := sandboxCloneSource(dir)
- if got != dir {
- t.Errorf("expected projectDir %q (no remotes), got %q", dir, got)
- }
-}
-
-func TestSetupSandbox_ClonesGitRepo(t *testing.T) {
- src := t.TempDir()
- initGitRepo(t, src)
-
- sandbox, err := setupSandbox(src, slog.New(slog.NewTextHandler(io.Discard, nil)))
- if err != nil {
- t.Fatalf("setupSandbox: %v", err)
- }
- t.Cleanup(func() { os.RemoveAll(sandbox) })
-
-	// Older git versions may use "master" as the default branch name; switch
-	// to it (best-effort) so the log check below works either way.
-	exec.Command("git", gitSafe("-C", sandbox, "checkout", "master")...).Run()
-
-	// Log the latest commit to aid debugging if the assertion below fails.
-	logOut, _ := exec.Command("git", "-C", sandbox, "log", "-1").CombinedOutput()
-	t.Logf("sandbox log: %s", logOut)
-
- // Verify sandbox is a git repo with at least one commit.
- out, err := exec.Command("git", "-C", sandbox, "log", "--oneline").Output()
- if err != nil {
- t.Fatalf("git log in sandbox: %v", err)
- }
- if len(strings.TrimSpace(string(out))) == 0 {
- t.Error("expected at least one commit in sandbox, got empty log")
- }
-}
-
-func TestSetupSandbox_InitialisesNonGitDir(t *testing.T) {
- // A plain directory (not a git repo) should be initialised then cloned.
- src := t.TempDir()
-
- sandbox, err := setupSandbox(src, slog.New(slog.NewTextHandler(io.Discard, nil)))
- if err != nil {
- t.Fatalf("setupSandbox on plain dir: %v", err)
- }
- t.Cleanup(func() { os.RemoveAll(sandbox) })
-
- if _, err := os.Stat(filepath.Join(sandbox, ".git")); err != nil {
- t.Errorf("sandbox should be a git repo: %v", err)
- }
-}
-
-func TestTeardownSandbox_AutocommitsChanges(t *testing.T) {
- // Create a bare repo as origin so push succeeds.
- bare := t.TempDir()
- if out, err := exec.Command("git", "init", "--bare", bare).CombinedOutput(); err != nil {
- t.Fatalf("git init bare: %v\n%s", err, out)
- }
-
- // Create a sandbox directly.
- sandbox := t.TempDir()
- initGitRepo(t, sandbox)
- if out, err := exec.Command("git", "-c", "safe.directory=*", "-C", sandbox, "remote", "add", "origin", bare).CombinedOutput(); err != nil {
- t.Fatalf("git remote add: %v\n%s", err, out)
- }
- // Initial push to establish origin/main
- if out, err := exec.Command("git", "-c", "safe.directory=*", "-C", sandbox, "push", "origin", "main").CombinedOutput(); err != nil {
- t.Fatalf("git push initial: %v\n%s", err, out)
- }
-
- // Capture startHEAD
- headOut, err := exec.Command("git", "-c", "safe.directory=*", "-C", sandbox, "rev-parse", "HEAD").Output()
- if err != nil {
- t.Fatalf("rev-parse HEAD: %v", err)
- }
- startHEAD := strings.TrimSpace(string(headOut))
-
- // Leave an uncommitted file in the sandbox.
- if err := os.WriteFile(filepath.Join(sandbox, "dirty.txt"), []byte("autocommit me"), 0644); err != nil {
- t.Fatal(err)
- }
-
- logger := slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{Level: slog.LevelDebug}))
- execRecord := &storage.Execution{}
-
- err = teardownSandbox("", sandbox, startHEAD, logger, execRecord)
- if err != nil {
- t.Fatalf("expected autocommit to succeed, got error: %v", err)
- }
-
- // Sandbox should be removed after successful autocommit and push.
- if _, statErr := os.Stat(sandbox); !os.IsNotExist(statErr) {
- t.Error("sandbox should have been removed after successful autocommit and push")
- }
-
- // Verify the commit exists in the bare repo.
- out, err := exec.Command("git", "-C", bare, "log", "-1", "--pretty=%B").Output()
- if err != nil {
- t.Fatalf("git log in bare repo: %v", err)
- }
- if !strings.Contains(string(out), "chore: autocommit uncommitted changes") {
- t.Errorf("expected autocommit message in log, got: %q", string(out))
- }
-
- // Verify the commit was captured in execRecord.
- if len(execRecord.Commits) == 0 {
- t.Error("expected at least one commit in execRecord")
- } else if !strings.Contains(execRecord.Commits[0].Message, "chore: autocommit uncommitted changes") {
- t.Errorf("unexpected commit message: %q", execRecord.Commits[0].Message)
- }
-}
-
-func TestTeardownSandbox_BuildFailure_BlocksAutocommit(t *testing.T) {
- bare := t.TempDir()
- if out, err := exec.Command("git", "init", "--bare", bare).CombinedOutput(); err != nil {
- t.Fatalf("git init bare: %v\n%s", err, out)
- }
-
- sandbox := t.TempDir()
- initGitRepo(t, sandbox)
- if out, err := exec.Command("git", "-c", "safe.directory=*", "-C", sandbox, "remote", "add", "origin", bare).CombinedOutput(); err