Compare commits


52 commits

Author SHA1 Message Date
Janpeter Visser
fba2d67796
fix(update_job_status): status-driven lifecycle timestamps (#51)
A job could go CLAIMED -> done/failed/skipped without ever reporting
`running`, so started_at stayed NULL while finished_at was set. That
broke the invariant claimed_at <= started_at <= finished_at and any
duration analysis.

The new pure helper resolveJobTimestamps sets the lifecycle timestamps
set-once based on the status: started_at is backfilled on a terminal
transition, and claimed_at is defensively filled when missing. The
running branch is now set-once instead of overwriting on every call.
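The set-once rule can be sketched roughly like this; the field names follow the commit text, but the exact signature and status values of the real resolveJobTimestamps are assumptions:

```typescript
// Hypothetical sketch of a set-once lifecycle-timestamp resolver.
type JobStatus = "running" | "done" | "failed" | "skipped";

interface JobTimestamps {
  claimed_at: Date | null;
  started_at: Date | null;
  finished_at: Date | null;
}

function resolveJobTimestamps(
  current: JobTimestamps,
  status: JobStatus,
  now: Date,
): JobTimestamps {
  const terminal = status !== "running";
  const next: JobTimestamps = { ...current };
  // Defensive: a job reaching this point should always have been claimed.
  if (next.claimed_at === null) next.claimed_at = now;
  // Set-once: 'running' records started_at only the first time; a terminal
  // transition backfills it, so claimed_at <= started_at <= finished_at holds
  // even when the worker never reported 'running'.
  if (next.started_at === null) next.started_at = now;
  if (terminal && next.finished_at === null) next.finished_at = now;
  return next;
}
```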

Co-authored-by: Madhura68 <ID+Madhura68@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-14 23:21:44 +02:00
Janpeter Visser
51fc65e715
fix(update_idea_plan_reviewed): never approve silently (IDEA-066) (#50)
The status logic contradicted its own tool description. The code did:
  approved  -> PLAN_REVIEWED
  rejected  -> PLAN_REVIEW_FAILED
  else      -> PLAN_REVIEWED   // "Default to approved if not specified"

A review that returned 'pending' (needs manual approval) or no
approval_status at all therefore marked the idea as PLAN_REVIEWED
(approved), exactly the opposite of what the description promises.

Fix: only an explicit approval_status='approved' moves the idea to
PLAN_REVIEWED; 'rejected', 'pending', and an omitted approval_status
all go to PLAN_REVIEW_FAILED (a human decides). Never approve
silently.
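The decision described above fits in one guard; a minimal sketch, assuming the enum names from the commit (resolveReviewStatus is an illustrative name, not the real handler):

```typescript
// Only an explicit 'approved' reaches PLAN_REVIEWED; 'rejected', 'pending',
// and an omitted approval_status all fall through to PLAN_REVIEW_FAILED,
// so a human decides. Never approve silently.
type ReviewOutcome = "PLAN_REVIEWED" | "PLAN_REVIEW_FAILED";

function resolveReviewStatus(approvalStatus?: string): ReviewOutcome {
  return approvalStatus === "approved" ? "PLAN_REVIEWED" : "PLAN_REVIEW_FAILED";
}
```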

Also:
- Handler extracted to handleUpdateIdeaPlanReviewed and inputSchema
  exported, following the create-sprint/update-sprint pattern, so the
  logic is testable without the McpServer wrapper.
- Tool description + header comment tightened so code and docs no
  longer diverge.
- New test file: 6 tests (approved/rejected/pending/omitted status
  transition, not-found, log persistence).

Build green, 379 tests green.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-14 19:46:31 +02:00
Janpeter Visser
84c194d4e5
fix(cross-repo): per-repo worktree branch + PR resolution (IDEA-062) (#49)
Cross-repo sprints (sprint product = repo X, but a task has
task.repo_url pointing at repo Y) failed in two places because
sprint-wide decisions were applied to per-repo git state.

1. createWorktreeForJob (src/git/worktree.ts)
   reuseBranch is determined sprint-wide in wait-for-job.ts. The first
   job targeting repo Y gets reuseBranch=true even though the branch
   was never created there -> `git worktree add <path> <branch>` fails
   with "invalid reference" -> job stuck, worker UNHEALTHY. Same after
   a container recreate (the clone is fresh then).
   Fix: 3-way fallback in the reuseBranch path:
   - local branch exists   -> reuse it
   - only on origin        -> recreate locally from origin/<branch>
   - nowhere               -> fresh from baseRef
   This also fixes the loss after a container recreate.
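The three-way fallback can be sketched as a pure decision function; the helper shape and the two existence flags are assumptions (the real worktree.ts inspects git directly):

```typescript
// Decide how to obtain the worktree branch: reuse a local branch, recreate
// it from origin, or start fresh from the base ref.
type BranchPlan =
  | { kind: "reuse-local"; ref: string }
  | { kind: "recreate-from-origin"; ref: string }
  | { kind: "fresh-from-base"; ref: string };

function planWorktreeBranch(
  branch: string,
  baseRef: string,
  existsLocally: boolean,
  existsOnOrigin: boolean,
): BranchPlan {
  if (existsLocally) return { kind: "reuse-local", ref: branch };
  if (existsOnOrigin) return { kind: "recreate-from-origin", ref: `origin/${branch}` };
  return { kind: "fresh-from-base", ref: baseRef };
}
```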

2. maybeCreateAutoPr (src/tools/update-job-status.ts)
   The sprint/story sibling lookup for pr_url reuse did not filter by
   repo. A repo-Y job inherited the pr_url of a repo-X sibling ->
   job.pr_url pointed at the wrong repo and no PR was ever created for
   the repo-Y branch (branch pushed, but PR-less).
   Fix: group siblings per repo bucket ((task.repo_url ?? null)); only
   a sibling from the same bucket yields a reusable pr_url. Applies to
   SPRINT and STORY mode. createPullRequest itself was already
   repo-correct (gh pr create runs inside the worktree).
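The bucketing rule can be sketched like this; the record shapes are assumptions, with `task.repo_url ?? null` meaning "the sprint product's own repo":

```typescript
// A sibling's pr_url is only reusable when its task targets the same
// repo bucket as the current job.
interface SiblingJob {
  pr_url: string | null;
  task: { repo_url: string | null };
}

function findReusablePrUrl(
  siblings: SiblingJob[],
  repoUrl: string | null,
): string | null {
  const bucket = repoUrl ?? null;
  for (const s of siblings) {
    // Same bucket and a PR already recorded -> reuse it.
    if ((s.task.repo_url ?? null) === bucket && s.pr_url) return s.pr_url;
  }
  return null; // no sibling in this repo bucket has a PR yet
}
```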

Tests: 3 new in worktree.test.ts (reuse-local / recreate-from-origin
/ fresh-fallback), 2 new in update-job-status-auto-pr.test.ts
(cross-repo story + sprint). update-job-status-mock converted to
findMany. All 373 tests green, build green.

package-lock.json: version 0.7.0 -> 0.8.0 (was not synced along in
the v0.8.0 bump commit 55fa133).

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-14 19:16:15 +02:00
Janpeter Visser
55fa133150
feat: IDEA_REVIEW_PLAN wiring + create_story sprint_id (v0.8.0) (#48)
* feat(PBI-12 T-51): add create_sprint tool

Creates a sprint with status=OPEN. If no code is supplied, one is
auto-generated as S-{YYYY-MM-DD}-{N} per product per date, with a
retry on a race conflict against @@unique([product_id, code]).
Follows the create-pbi.ts template.
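The auto-code scheme can be sketched as follows; the function name and the in-memory list of existing codes are illustrative stand-ins for the real generateNextSprintCode, which queries the DB and retries on a P2002 unique-constraint race:

```typescript
// Generate the next sprint code as S-{YYYY-MM-DD}-{N}, per product per date.
function nextSprintCode(date: Date, existingCodes: string[]): string {
  const day = date.toISOString().slice(0, 10); // UTC date, e.g. "2026-05-11"
  const prefix = `S-${day}-`;
  let n = 1;
  for (const code of existingCodes) {
    if (!code.startsWith(prefix)) continue; // other dates don't count
    const num = Number(code.slice(prefix.length));
    if (Number.isInteger(num) && num >= n) n = num + 1;
  }
  return `${prefix}${n}`;
}
```

In the real tool a concurrent insert can still win the race, which is why the create is wrapped in a retry on the unique-constraint error instead of trusting this counter alone.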

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(PBI-12 T-52): add update_sprint tool

Generic update for status, sprint_goal, start_date and end_date. No
state-machine validation: last-write-wins. On a status change to
CLOSED/FAILED/ARCHIVED without an explicit end_date, end_date is
automatically set to today. At least one field is required (a manual
check in the handler instead of a zod refine, because
McpServer.inputSchema does not accept ZodEffects).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(PBI-12 T-53): register sprint tools + unit tests

- Imports + register calls added in src/index.ts (grouped with the
  other authoring tools, comment "PBI-12: sprint lifecycle tools")
- Refactor: create-sprint and update-sprint now export handleX +
  inputSchema separately (pattern from set-pbi-pr.ts) so the logic is
  testable without the McpServer wrapper
- 6 unit tests for create_sprint (happy path, custom code,
  auto-increment, P2002 retry, access-denied, explicit start_date)
- 11 unit tests for update_sprint (no-fields error, status-only,
  auto end_date for CLOSED/FAILED/ARCHIVED, no auto for OPEN,
  explicit end_date respected, multi-field, not-found, access-denied,
  any-status-transition)
- Defensive date check in generateNextSprintCode against filter
  changes or mock-data anomalies
- 363 tests green (was 346, + 17 new)

DB smoke test (MCP server vs dev DB) skipped because unit coverage
fully covers the behaviour; mock-free integration follows
automatically on the next production call of create_sprint through a
real agent.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: untrack .claude/worktrees gitlinks + ignore the path

Accidentally included in adbea3f via 'git add -A'; these embedded
worktree clones do not belong in the repo. Also extended .gitignore so
this does not happen again.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(PBI-12): update_sprint sets completed_at on CLOSED, parity with cascade

Codex review on #47: on a status change to CLOSED only end_date was
set, not completed_at. That diverges from src/lib/tasks-status-update.ts,
which sets completed_at = new Date() on automatic closing via the
task-status cascade. Reporting and UI filtering on completed_at saw
manually closed sprints as 'never completed'.

Fix:
- update_sprint now sets data.completed_at = new Date() when status === 'CLOSED'
- FAILED/ARCHIVED do NOT touch completed_at (parity with the existing pattern)
- Test coverage extended:
  - CLOSED sets end_date AND completed_at
  - FAILED sets end_date, completed_at stays undefined
  - ARCHIVED sets end_date, completed_at stays undefined
  - OPEN sets neither end_date nor completed_at
  - An explicit end_date is respected, completed_at is still set
- Tool description now mentions the completed_at side effect
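The terminal-status side effects can be sketched as one small table-like function; a minimal sketch assuming the status names from the commit (the real handler also merges an explicit end_date from the input, which takes precedence over the automatic one):

```typescript
// Which lifecycle fields a status transition sets automatically:
// CLOSED sets both end_date and completed_at; FAILED/ARCHIVED set only
// end_date; OPEN sets neither.
type SprintStatus = "OPEN" | "CLOSED" | "FAILED" | "ARCHIVED";

function terminalFields(
  status: SprintStatus,
  now: Date,
): { end_date?: Date; completed_at?: Date } {
  if (status === "OPEN") return {};
  return status === "CLOSED"
    ? { end_date: now, completed_at: now }
    : { end_date: now };
}
```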

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* PBI-67 Phase 2: Add update-idea-plan-reviewed MCP tool

- Create src/tools/update-idea-plan-reviewed.ts: saves review-log and transitions idea status
- Register tool in src/index.ts
- Update Prisma schema: add plan_review_log and reviewed_at fields to Idea model
- Add PLAN_REVIEW_RESULT to IdeaLogType enum
- Add REVIEWING_PLAN, PLAN_REVIEW_FAILED, PLAN_REVIEWED to IdeaStatus enum
- Add IDEA_REVIEW_PLAN to ClaudeJobKind enum
- Build successful with all type checks passing

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>

* feat(PBI-67): wire IDEA_REVIEW_PLAN prompt + job context

- src/prompts/idea/review-plan.md: prompt for IDEA_REVIEW_PLAN jobs:
  iterative 3-round plan review with convergence detection
- kind-prompts.ts: hook IDEA_REVIEW_PLAN up to the prompt + getIdeaPromptText
- wait-for-job.ts: getFullJobContext handles IDEA_REVIEW_PLAN jobs

* feat(create_story): optional sprint_id to attach a story to a sprint

create_story now accepts an optional sprint_id; when supplied, the
story is created with status=IN_SPRINT (the sprint must belong to the
same product as the PBI). Handler extracted to handleCreateStory for
testability; new unit tests in __tests__/create-story.test.ts.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(test): make the create-sprint auto-code test date-independent

The test hardcoded 2026-05-11 dates but computed "today" dynamically,
so it only passed on that date. Mock codes are now relative to today.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: bump version 0.7.0 -> 0.8.0

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: bump vendor/scrum4me submodule to app-main (7bb252c)

The submodule was 27 commits behind (3c77342, v1.0.0-147), so
sync-schema.sh reverted prisma/schema.prisma to a version without
IDEA_REVIEW_PLAN. Bumps to the current app-main + re-syncs the schema;
the only substantive change is the new User.settings field.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Madhura68 <ID+Madhura68@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-14 16:30:17 +02:00
Janpeter Visser
93d881318d
feat(PBI-12): create_sprint + update_sprint MCP tools (#47)
* feat(PBI-12 T-51): add create_sprint tool

Creates a sprint with status=OPEN. If no code is supplied, one is
auto-generated as S-{YYYY-MM-DD}-{N} per product per date, with a
retry on a race conflict against @@unique([product_id, code]).
Follows the create-pbi.ts template.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(PBI-12 T-52): add update_sprint tool

Generic update for status, sprint_goal, start_date and end_date. No
state-machine validation: last-write-wins. On a status change to
CLOSED/FAILED/ARCHIVED without an explicit end_date, end_date is
automatically set to today. At least one field is required (a manual
check in the handler instead of a zod refine, because
McpServer.inputSchema does not accept ZodEffects).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(PBI-12 T-53): register sprint tools + unit tests

- Imports + register calls added in src/index.ts (grouped with the
  other authoring tools, comment "PBI-12: sprint lifecycle tools")
- Refactor: create-sprint and update-sprint now export handleX +
  inputSchema separately (pattern from set-pbi-pr.ts) so the logic is
  testable without the McpServer wrapper
- 6 unit tests for create_sprint (happy path, custom code,
  auto-increment, P2002 retry, access-denied, explicit start_date)
- 11 unit tests for update_sprint (no-fields error, status-only,
  auto end_date for CLOSED/FAILED/ARCHIVED, no auto for OPEN,
  explicit end_date respected, multi-field, not-found, access-denied,
  any-status-transition)
- Defensive date check in generateNextSprintCode against filter
  changes or mock-data anomalies
- 363 tests green (was 346, + 17 new)

DB smoke test (MCP server vs dev DB) skipped because unit coverage
fully covers the behaviour; mock-free integration follows
automatically on the next production call of create_sprint through a
real agent.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: untrack .claude/worktrees gitlinks + ignore the path

Accidentally included in adbea3f via 'git add -A'; these embedded
worktree clones do not belong in the repo. Also extended .gitignore so
this does not happen again.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(PBI-12): update_sprint sets completed_at on CLOSED, parity with cascade

Codex review on #47: on a status change to CLOSED only end_date was
set, not completed_at. That diverges from src/lib/tasks-status-update.ts,
which sets completed_at = new Date() on automatic closing via the
task-status cascade. Reporting and UI filtering on completed_at saw
manually closed sprints as 'never completed'.

Fix:
- update_sprint now sets data.completed_at = new Date() when status === 'CLOSED'
- FAILED/ARCHIVED do NOT touch completed_at (parity with the existing pattern)
- Test coverage extended:
  - CLOSED sets end_date AND completed_at
  - FAILED sets end_date, completed_at stays undefined
  - ARCHIVED sets end_date, completed_at stays undefined
  - OPEN sets neither end_date nor completed_at
  - An explicit end_date is respected, completed_at is still set
- Tool description now mentions the completed_at side effect

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Madhura68 <ID+Madhura68@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 21:37:05 +02:00
Janpeter Visser
9ffa25f053
fix(verify/classify): ignore pseudo-paths in plan (no more PARTIAL for delete-only) (#46)
extractPlanPaths treated tokens like `data-debug-label="..."` as file
paths because they contain a dot and no spaces. Result: the pseudo-path
was never found in the diff → coverage < 1 → PARTIAL → with
verify_required=ALIGNED the job failed, even though the work was fully
done.

Concrete incident T-815 (sprint cmoyiu4yd, 2026-05-09):
- data-debug-label removed from 17/17 files, grep 0 hits, typecheck green
- Verifier said PARTIAL → Claude reported failed → propagateStatusUpwards
  + cancelPbiOnFailure cancelled 12 siblings + deleted feat/sprint-acq9twtr
- T-814's already-pushed work was lost

Fix: new `looksLikePath` helper that rejects backtick tokens when they
contain operator/quote/bracket chars, have an ellipsis (`..`/`...`),
or have neither a `/` nor a recognizable file extension. The bullet
extractor is unchanged; it already parses explicitly on `.ext`.
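The rejection rules can be sketched like this; the character class and the extension list are illustrative assumptions, not the real helper's exact sets:

```typescript
// Reject backtick tokens that cannot plausibly be file paths: tokens with
// operator/quote/bracket characters, tokens containing an ellipsis, and
// tokens with neither a '/' nor a recognizable file extension.
function looksLikePath(token: string): boolean {
  if (/[=<>()"'{}[\]|;,]/.test(token)) return false; // e.g. data-debug-label="..."
  if (token.includes("..")) return false; // ellipsis / range markers
  const hasSlash = token.includes("/");
  const hasExt = /\.(ts|tsx|js|jsx|json|md|css|html|yml|yaml|sh|prisma)$/.test(token);
  return hasSlash || hasExt;
}
```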

Tests: 5 new regression cases + all 18 existing ones stay green.

Co-authored-by: Madhura68 <ID+Madhura68@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-09 20:30:17 +02:00
Janpeter Visser
da1fe415c4
fix(cleanup): keepBranch + sprint-scope siblings for SPRINT pr_strategy (#45)
Symptom: in a sprint with pr_strategy=SPRINT (5 tasks, 3 stories) the
first two tasks were SKIPPED by Claude (the work was already in main
after an external PR). The third task crashed on:

  git worktree add /home/agent/.scrum4me-agent-worktrees/<id> feat/sprint-uhrbtc8z
  fatal: invalid reference: feat/sprint-uhrbtc8z

Root cause: cleanupWorktreeForTerminalStatus checked for active
siblings within the same **story** + deleted the branch when
keepBranch=false. For SPRINT pr_strategy all stories in the sprint
share one branch (feat/sprint-<id>). First task SKIPPED, story ST-1304
had no active siblings left (T-807 was also already SKIPPED), so the
branch was deleted. T-808 in story ST-1305 wanted to reuse it, but the
branch no longer existed.

Fix:
1. Widen the sibling check for SPRINT pr_strategy: look at all active
   jobs in the same sprint_run_id (not just story_id).
2. keepBranch=true for SKIPPED under SPRINT pr_strategy: other stories
   in the same sprint still need the branch.
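Both rules can be sketched as small pure predicates; the record shapes and function names are assumptions, not the real cleanup API:

```typescript
// For pr_strategy=SPRINT the sibling scope is the whole sprint run, not just
// the story, and a SKIPPED job keeps the shared branch alive.
interface ActiveJob {
  story_id: string;
  sprint_run_id: string;
}

function hasActiveSiblings(
  activeJobs: ActiveJob[],
  self: ActiveJob,
  prStrategy: "SPRINT" | "STORY",
): boolean {
  return prStrategy === "SPRINT"
    ? activeJobs.some((j) => j !== self && j.sprint_run_id === self.sprint_run_id)
    : activeJobs.some((j) => j !== self && j.story_id === self.story_id);
}

function keepBranchOnSkip(prStrategy: "SPRINT" | "STORY"): boolean {
  // Other stories in the same sprint share feat/sprint-<id>, so a SKIPPED
  // task must not delete it.
  return prStrategy === "SPRINT";
}
```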

Tests: 341 passed (38 files). Typecheck OK.

Co-authored-by: Madhura68 <ID+Madhura68@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-09 16:29:41 +02:00
Janpeter Visser
7d217cf443
Merge pull request #44 from madhura68/fix/attach-worktree-writes-branch
fix(attachWorktreeToJob): write branch to claudeJob.branch in DB
2026-05-09 14:07:32 +02:00
Madhura68
51533cf48e fix(attachWorktreeToJob): write branch to claudeJob.branch in DB
Symptom: TASK_IMPLEMENTATION jobs in a sprint run with pr_strategy=
SPRINT got branch=null in claudeJob.branch, even though
attachWorktreeToJob created the correct worktree branch
(feat/sprint-<id>) and returned it in the payload response.

Consequence: update_job_status (after the PR #43 fix) reads
claudeJob.branch from the DB → null → falls back to the legacy
`feat/job-<8>` → `git push` fails with "src refspec feat/job-xxx does
not match any" → job FAILED → cascade-cancel of sibling tasks in the
same sprint run. Observed live for sprint run
cmoy9irr8000ci017fvy30lvv (T-806 FAILED, T-807-T-811 CANCELLED) even
though Claude had created PR #174 on feat/sprint-fvy30lvv.

Root cause: attachWorktreeToJob (wait-for-job.ts:205-209) only updated
base_sha. For the SPRINT_IMPLEMENTATION kind the branch is written to
the DB (line 655), but the TASK_IMPLEMENTATION path had that gap.

Fix: always write branch + (when present) base_sha to claudeJob in
the update at the end of attachWorktreeToJob.

Tests: __tests__/wait-for-job-worktree.test.ts mock prisma extended
with `claudeJob.update`. 341 tests in 38 files passed.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-09 14:05:59 +02:00
Janpeter Visser
ed94d5b7e1
Merge pull request #43 from madhura68/fix/prepare-done-uses-db-branch
fix(update_job_status): use DB branch instead of legacy feat/job-<8> fallback
2026-05-09 13:56:25 +02:00
Madhura68
0a18f565d2 fix(update_job_status): use DB branch instead of legacy feat/job-<8> fallback
Symptom: TASK_IMPLEMENTATION job T-806 in a SPRINT-strategy sprint
failed with:

  push failed (unknown): error: src refspec feat/job-us3aqoup does not
  match any
  error: failed to push some refs to 'https://github.com/.../Scrum4Me.git'

But the PR had been created successfully by Claude (PR #174 on
feat/sprint-fvy30lvv): Claude committed on the correct worktree
branch, while update_job_status's prepareDoneUpdate tried to push a
non-existent branch.

Root cause: prepareDoneUpdate(jobId, branch) accepts a branch arg
(usually undefined, because Claude does not pass it) and falls back to
`feat/job-${jobId.slice(-8)}`. That is the legacy pre-PBI-50 path; for
sprint jobs the actual branch is `feat/sprint-<id>` (pr_strategy=SPRINT)
or `feat/story-<id>` (STORY), stored in ClaudeJob.branch by
attachWorktreeToJob.

Fix:
- prepareDoneUpdate now first reads ClaudeJob.branch from the DB when
  the explicit branch arg is missing.
- Only then does it fall back to `feat/job-<8>` (should not occur
  after PBI-50).
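The resolution order reduces to a three-step fallback; a minimal sketch with the DB value passed in as a plain argument (the real code does a Prisma findUnique, and resolveBranch is an illustrative name):

```typescript
// Branch resolution after the fix: explicit arg first, then the branch
// stored in ClaudeJob.branch, and only then the legacy pre-PBI-50 fallback.
function resolveBranch(
  jobId: string,
  explicitBranch: string | undefined,
  dbBranch: string | null,
): string {
  if (explicitBranch) return explicitBranch;
  if (dbBranch) return dbBranch;
  return `feat/job-${jobId.slice(-8)}`; // legacy fallback, last resort
}
```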

Tests: vi.mock('../src/prisma.js') added for the findUnique stub. The
existing test "derives branchName from jobId when branch is undefined"
renamed to "reads branchName from DB" with a DB mock returning
'feat/sprint-fvy30lvv'. Plus an extra test for the legacy fallback
when DB.branch is also null.

341 tests in 38 files passed (was 340, +1).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-09 13:53:43 +02:00
Janpeter Visser
32929eee93
Merge pull request #42 from madhura68/chore/sync-schema
chore: sync schema + adapt to ACTIVE→OPEN, COMPLETED→CLOSED, EXCLUDED-task
2026-05-09 13:40:16 +02:00
Madhura68
233e0ef3b6 chore: sync schema + adapt to ACTIVE→OPEN, COMPLETED→CLOSED, EXCLUDED-task
The webapp's Prisma schema had been migrated: SprintStatus.ACTIVE→OPEN,
SprintStatus.COMPLETED→CLOSED, plus a new SprintStatus.ARCHIVED and a
new TaskStatus.EXCLUDED. The scrum4me-mcp Prisma client was 110 commits
behind, causing runtime errors when reading sprint.status from the live DB:

  Value 'OPEN' not found in enum 'SprintStatus'

Symptom: TASK_IMPLEMENTATION jobs in QUEUED status were claimed by
tryClaimJob (raw SQL succeeds), then getFullJobContext crashed on the
findUnique with the enum error → rollbackClaim → loop forever until
UNHEALTHY (5 consecutive failures).

Fix:
- Updated vendor/scrum4me submodule to current main (3c77342).
- Re-ran sync-schema.sh → prisma/schema.prisma now has
  SprintStatus { OPEN, CLOSED, ARCHIVED, FAILED } and
  TaskStatus including EXCLUDED.
- src/lib/tasks-status-update.ts: ACTIVE→OPEN, COMPLETED→CLOSED.
- src/status.ts: TASK_DB_TO_API + TASK_API_TO_DB krijgen EXCLUDED entry.
- src/tools/get-claude-context.ts: status: 'ACTIVE' → status: 'OPEN'.

Tests: 340 passed (38 files). Typecheck OK.

After merge + docker rebuild with cache bust, the runner picks up
sprint tasks again without the enum error.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-09 13:39:13 +02:00
Janpeter Visser
69fabc58f6
Merge pull request #41 from madhura68/fix/idea-prompts-payload-path
fix(prompts): idea prompts use $PAYLOAD_PATH instead of unsubstituted placeholders
2026-05-09 11:59:59 +02:00
Madhura68
ae017b8644 fix(prompts): idea prompts use $PAYLOAD_PATH instead of unsubstituted placeholders
Symptom: IDEA_GRILL and IDEA_MAKE_PLAN jobs hung for 11+ minutes
without ever calling update_job_status. Claude saw in the prompt:

- "Je bent een grill-agent voor Scrum4Me-idee {idea_code}": a literal
  string, because run-one-job.ts only substitutes $PAYLOAD_PATH, not
  {idea_*} vars.
- "context (meegegeven in wait_for_job-payload)": but Claude gets no
  wait_for_job response, because that tool is no longer in
  allowed_tools for idea kinds (the runner already claims).
- No instruction to read $PAYLOAD_PATH: the placeholder was missing in
  both idea prompts (only task/sprint/plan-chat had it).

Result: Claude did not know what to do, could not discover an idea_id
or job_id, and ran until the natural session cap without ever calling
the right tools.

Fix:
- grill.md and make-plan.md: replace `wait_for_job` references with
  `scrum4me-docker/bin/run-one-job.ts` (the actual runner).
- Both prompts now start with "Lees $PAYLOAD_PATH met de Read-tool"
  as the mandatory first action, plus a list of fields that must be
  kept from the payload (idea.id, idea.code, job_id, product.id, etc.).
- {idea_code} / {idea_title} placeholders removed: all required fields
  come from the payload, so no runner-side substitution is needed.
- The update_job_status step is explicitly "mandatory, even on failure".

Tests: kind-prompts.test.ts extended:
- All 5 kinds must contain $PAYLOAD_PATH (was only task/sprint/
  plan-chat).
- IDEA_GRILL and IDEA_MAKE_PLAN may no longer mention wait_for_job.
- IDEA_GRILL and IDEA_MAKE_PLAN may no longer contain {idea_*}
  placeholders.

19 tests in kind-prompts.test.ts passed (was 13).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-09 11:55:27 +02:00
Janpeter Visser
e13a470024
Merge pull request #40 from madhura68/fix/idea-kinds-permission-mode
fix(KIND_DEFAULTS): permission_mode acceptEdits for idea kinds + PLAN_CHAT
2026-05-09 11:30:18 +02:00
Madhura68
e64ece3d41 fix(KIND_DEFAULTS): permission_mode acceptEdits for idea kinds + PLAN_CHAT
Symptom: IDEA_GRILL job IDEA-047 was claimed 3 times; Claude ran
successfully each time (exit 0 after 600-900s) but never did
update_job_status('done'). The lease expired, retry_count >= 2 →
status FAILED with "agent did not complete job within 2 attempts".

Root cause: KIND_DEFAULTS.permission_mode='plan' for idea kinds and
PLAN_CHAT. In autonomous batch mode, plan mode waits for a human "go"
after each planning phase; there is no human in the loop to approve,
so Claude stalls and exits cleanly but incompletely.

Fix:
- IDEA_GRILL.permission_mode: plan → acceptEdits
- IDEA_MAKE_PLAN.permission_mode: plan → acceptEdits
- PLAN_CHAT.permission_mode: plan → acceptEdits

The allowed_tools lists do the real sandboxing (no Bash, no Edit for
IDEA_GRILL/PLAN_CHAT, only Write for IDEA_MAKE_PLAN). The "safety" of
plan mode is thus already provided by the tool allowlists; acceptEdits
is here purely to let Claude get through its own update_job_status
loop without waiting for approval.

Plus: PLAN_CHAT.allowed_tools now also gets update_job_status (it was
missing, so the kind could not finish even in acceptEdits mode).

Tests: KIND_EXPECTED in __tests__/job-config.test.ts updated.
334 tests in 38 files passed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-09 11:28:31 +02:00
Janpeter Visser
52c167c0b3
Merge pull request #39 from madhura68/feat/queue-loop-extraction
feat(PBI-4/ST-004): public API + KIND_DEFAULTS + per-kind prompts
2026-05-09 07:10:34 +02:00
Madhura68
96f5b0dd03 feat(PBI-4/ST-004): public API + KIND_DEFAULTS + per-kind prompts
Preparatory changes for the queue-loop refactor (see
docs/plans/queue-loop-extraction.md in the Scrum4Me repo). Makes
scrum4me-mcp usable as a shared library for the new scrum4me-docker
runner.

- T-13: export getFullJobContext from src/tools/wait-for-job.ts
- T-14: mapBudgetToEffort(budget) → --effort {medium,high,xhigh,max}
  mapping for Claude CLI 2.1.x (which has no --thinking-budget). A
  comment in the header documents that max_turns is audit-only and
  describes the CLI flag mapping.
- T-15: KIND_DEFAULTS.allowed_tools from null → explicit lists without
  wait_for_job/check_queue_empty/get_idea_context. Guardrail against
  recursive claims. SPRINT_IMPLEMENTATION deliberately omits
  job_heartbeat (the runner does lease renewal).
- T-16: src/lib/idea-prompts.ts → src/lib/kind-prompts.ts. New export
  getKindPromptText for all 5 kinds. Back-compat re-export
  getIdeaPromptText kept so wait-for-job.ts:508 works unchanged.
- T-17: new prompts src/prompts/task/implementation.md,
  sprint/implementation.md, plan-chat/chat.md. Idea prompts (M12)
  unchanged.

Tests: 334 passed (38 files). 27 new asserts: mapBudgetToEffort
boundary values (14), KIND_DEFAULTS.allowed_tools structural checks
(6), kind-prompts loading + forbidden-tool mentions (13).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 17:15:21 +02:00
Janpeter Visser
2fbb36bdbe
Merge pull request #38 from madhura68/feat/pbi-67-job-config
feat(PBI-67): job-config resolver + wait_for_job-config + thinking-tokens
2026-05-08 11:21:06 +02:00
Madhura68
1c0f41687b feat(PBI-67/ST-1300/T-791): persist actual_thinking_tokens in update_job_status
Workers can now report the actually consumed thinking budget via
`actual_thinking_tokens`. Identical to the existing
input/output/cache_* fields: optional + conditional update.

Backwards-compatible: old workers without this field keep working.
57 update-job-status tests green.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 11:12:31 +02:00
Madhura68
e2963d58fb feat(PBI-67/ST-1299/T-788): wait_for_job returns config
Calls resolveJobConfig after claiming a job and adds the result as
`config: JobConfig` to the response payload. Works for all 3 return
paths (IDEA_*, SPRINT_IMPLEMENTATION, default TASK_IMPLEMENTATION).

Schema fields added locally to support the Prisma include
(preferred_*, requires_opus, requested_*, actual_thinking_tokens). The
sync-schema.sh flow refreshes them later from the scrum4me submodule
once PBI-67/ST-1297 lands in main.

Purely additive: old clients ignore `config` and keep working on
Claude Code defaults from ~/.claude/settings.json.

301 tests pass unchanged.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 11:11:29 +02:00
Madhura68
070c039740 feat(PBI-67/ST-1298): job-config resolver + kind-default matrix
New central resolver `resolveJobConfig(job, product, task?)` that
determines per ClaudeJob which model + thinking budget + permission
mode + max_turns + allowed_tools the worker must use.

Override cascade (first match wins):
  task.requires_opus → job.requested_* → product.preferred_* → kind-default

Kind-defaults:
  IDEA_GRILL            sonnet-4-6  thinking 12k  plan
  IDEA_MAKE_PLAN        opus-4-7    thinking 24k  plan
  PLAN_CHAT             sonnet-4-6  thinking 6k   plan (max 5 turns)
  TASK_IMPLEMENTATION   sonnet-4-6  thinking 6k   bypassPermissions
  SPRINT_IMPLEMENTATION sonnet-4-6  thinking 6k   bypassPermissions
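The cascade can be sketched for the model field alone; the record shapes are assumptions, and mapping requires_opus to the opus model is an inference from the table above, not confirmed by the commit:

```typescript
// First-match-wins override cascade for the model choice. The real
// resolveJobConfig applies the same pattern to thinking budget,
// permission mode, max_turns and allowed_tools as well.
const KIND_DEFAULT_MODEL = {
  IDEA_GRILL: "sonnet-4-6",
  IDEA_MAKE_PLAN: "opus-4-7",
  PLAN_CHAT: "sonnet-4-6",
  TASK_IMPLEMENTATION: "sonnet-4-6",
  SPRINT_IMPLEMENTATION: "sonnet-4-6",
} as const;

interface Job {
  kind: keyof typeof KIND_DEFAULT_MODEL;
  requested_model?: string;
}
interface Product {
  preferred_model?: string;
}
interface Task {
  requires_opus?: boolean;
}

function resolveModel(job: Job, product: Product, task?: Task): string {
  if (task?.requires_opus) return "opus-4-7"; // assumed: requires_opus forces opus
  if (job.requested_model) return job.requested_model;
  if (product.preferred_model) return product.preferred_model;
  return KIND_DEFAULT_MODEL[job.kind];
}
```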

19 unit tests (all 5 kinds × cascade levels). No external deps.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-08 11:03:15 +02:00
Janpeter Visser
85a95e5bba
Merge pull request #37 from madhura68/feat/sprint-aolrn6ui
Sprint: pbi-55
2026-05-07 21:46:51 +02:00
Scrum4Me Agent
6aa43ff7dd PBI-55: .env.example descriptive push placeholders + README push-integration section
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-07 21:44:41 +02:00
Scrum4Me Agent
ab32a72ce0 PBI-55: update-job-status – NOTIFY payload-fix (kind/idea_id) + triggerPush on done/failed
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-07 21:42:19 +02:00
Scrum4Me Agent
4c476464ec PBI-55: ask-user-question – triggerPush after claudeQuestion.create
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-07 21:37:20 +02:00
Scrum4Me Agent
18c34b63de PBI-55: src/lib/push-trigger.ts – fire-and-forget push helper with 5s AbortController timeout
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-07 21:34:41 +02:00
Janpeter Visser
0d6bf8dd0d
Merge pull request #36 from madhura68/feat/skipped-exit-route
PBI-57: 'skipped' no-op exit + cascade preserves original error
2026-05-07 17:37:52 +02:00
Madhura68
458b7a7d45 PBI-57: 'skipped' no-op exit + cascade preserves original error
When verify_task_against_plan returns EMPTY because the requested changes
already live in origin/main (parallel work, earlier PR, race between
siblings), the worker had no clean exit: update_job_status only accepted
running|done|failed. 'failed' triggered the PBI fail-cascade which then
overwrote the error column with 'cancelled_by_self' and cancelled all
sibling tasks of the PBI — see Scrum4Me job cmovkur8 / T-695 for the
reference incident.

This change introduces a fourth status and tightens the cascade:

ST-1273 — 'skipped' exit in update_job_status (T-706 + T-707)
- src/tools/update-job-status.ts: status enum + DB_STATUS_MAP +
  resolveNextAction now include 'skipped'. cleanupWorktreeForTerminalStatus
  signature widened to ('done'|'failed'|'skipped'); SKIPPED uses keepBranch
  semantics identical to FAILED (no push, no branch keep). New input guard:
  'skipped' is only valid for TASK_IMPLEMENTATION jobs and requires a
  non-empty error (≥10 chars) explaining the reason — it bypasses the
  verify-gate, the auto-PR, the SprintRun finalize/fail paths and the
  PBI fail-cascade. Locks are still released on terminal exit.
- Tool description spells out when to pick 'skipped' so MCP clients see it.
- New __tests__/update-job-status-skipped.test.ts: resolveNextAction with
  'skipped' (wait_for_job_again / queue_empty), and cleanupWorktreeForTerminalStatus
  with status='skipped' (keepBranch=false even with a branch reported,
  defers cleanup with active siblings).

ST-1274 — cascade ignores SKIPPED + appends trace (T-708 + T-709)
- src/cancel/pbi-cascade.ts: runCascade reads job.status, returns EMPTY
  when status === 'SKIPPED' (no sibling cancel). Trace persistence now
  reads the current error first and writes `${original}\n---\n${trace}`
  (truncated at 1900 chars), so the original failure cause is preserved
  for forensics instead of being overwritten.
- New cases in __tests__/cancel-pbi-cascade.test.ts: SKIPPED entry-guard
  (no findMany / updateMany / update), original error preserved with
  trace appended after '---', trace-only fallback when no original
  error, 1900-char truncation keeps the head of the original.

All 282 scrum4me-mcp tests pass; tsc build clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 17:10:02 +02:00
Janpeter Visser
8ffb680a1a
Merge pull request #35 from madhura68/feat/sprint-batch-mcp
PBI-50: SPRINT_IMPLEMENTATION single-session sprint runner (MCP-side)
2026-05-07 13:01:40 +02:00
Madhura68
98786f763f PBI-50 F5: README — verify_sprint_task, update_task_execution, job_heartbeat
Added the three new SPRINT_IMPLEMENTATION-flow tools to the tool table.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 12:56:22 +02:00
Madhura68
b80264c26c PBI-50 F5: tests for the SPRINT_IMPLEMENTATION tools
- update-job-status-sprint-gate: checkSprintVerifyGate per-row
  blockers, SKIPPED policy, finalizeSprintRunOnDone idempotency.
- update-task-execution: token coupling, lifecycle (RUNNING sets
  started_at, DONE/FAILED/SKIPPED sets finished_at), skip_reason.
- job-heartbeat: token-mismatch error, non-SPRINT vs SPRINT
  response shape, tolerance for pause_context=null.
- verify-sprint-task: PARTIAL+summary gate-pass, PARTIAL without
  summary gate-fail, DIVERGENT with ALIGNED gate-fail, base_sha
  auto-fill via the previous DONE execution's head_sha + persistence,
  MISSING_BASE_SHA error.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 12:53:04 +02:00
Madhura68
876a7ad5d9 PBI-50 F4: SPRINT_IMPLEMENTATION DONE/FAILED paths + quota-pause
- checkSprintVerifyGate: aggregate verify gate over SprintTaskExecution.
  Per row: DONE → checkVerifyGate with snapshot fields, SKIPPED →
  only allowed when verify_required=ANY, FAILED/PENDING/RUNNING →
  blocker. ToolError listing the blockers on failure.
- finalizeSprintRunOnDone: idempotent SprintRun → DONE once all
  stories are DONE/FAILED.
- maybeCreateSprintBatchPr: one draft PR per sprint with sprint_goal
  as title. Reuses an existing PR via SprintRunChain on resume.
- DONE path: after the update, markPullRequestReady once the SprintRun is DONE.
- FAILED path: detect the QUOTA_PAUSE: prefix → SprintRun PAUSED with
  pause_context (resume instructions + last completed task); otherwise
  → FAILED with failure_reason + failed_task_id (parsed from the error string).
- Skip cancelPbiOnFailure for SPRINT jobs (no task_id).
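The FAILED-path routing can be sketched as a small classifier. The QUOTA_PAUSE: prefix comes from the commit; the result shape and function name are assumptions for illustration:

```typescript
// Illustrative sketch: a QUOTA_PAUSE: prefix in the error routes to a PAUSED
// SprintRun with pause context; anything else fails the run.
type FailureRoute =
  | { action: 'pause'; pause_context: string }
  | { action: 'fail'; failure_reason: string };

function routeSprintFailure(error: string): FailureRoute {
  if (error.startsWith('QUOTA_PAUSE:')) {
    return { action: 'pause', pause_context: error.slice('QUOTA_PAUSE:'.length).trim() };
  }
  return { action: 'fail', failure_reason: error };
}
```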

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 12:48:04 +02:00
Madhura68
25ab68073a PBI-50 F3: new MCP tools for the SPRINT_IMPLEMENTATION flow
Four new tools + a propagateStatusUpwards extension:

T1 — verify_sprint_task (src/tools/verify-sprint-task.ts):
  Execution-aware verify against the frozen plan_snapshot. Input: execution_id +
  worktree_path + an optional summary (rationale for PARTIAL/DIVERGENT).
  Fills base_sha dynamically for task[1..N] from the previous DONE execution's
  head_sha. Writes verify_result + verify_summary on the execution row.
  Returns { result, reasoning, base_sha, allowed_for_done, reason? } —
  allowed_for_done via the standard checkVerifyGate with snapshot fields.

T2 — update_task_execution (src/tools/update-task-execution.ts):
  Lifecycle tool for SprintTaskExecution: PENDING/RUNNING/DONE/FAILED/SKIPPED
  + base_sha/head_sha/skip_reason. Idempotent. Token check via
  execution.sprint_job.claimed_by_token_id. started_at/finished_at set automatically.
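The automatic timestamp handling can be sketched as a pure transition (field layout and function name are assumptions; the real tool also persists via Prisma):

```typescript
// Sketch: RUNNING sets started_at once; any terminal status sets finished_at
// once. Re-applying the same status never overwrites an existing timestamp.
type ExecStatus = 'PENDING' | 'RUNNING' | 'DONE' | 'FAILED' | 'SKIPPED';
interface ExecTimestamps { started_at: Date | null; finished_at: Date | null }

function applyLifecycleTimestamps(status: ExecStatus, row: ExecTimestamps, now: Date): ExecTimestamps {
  const terminal = status === 'DONE' || status === 'FAILED' || status === 'SKIPPED';
  return {
    started_at: status === 'RUNNING' && !row.started_at ? now : row.started_at,
    finished_at: terminal && !row.finished_at ? now : row.finished_at,
  };
}
```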

T3 — job_heartbeat (src/tools/job-heartbeat.ts):
  Extends ClaudeJob.lease_until by 5 min via an atomic conditional UPDATE
  (token check + status check in the WHERE clause). For SPRINT jobs the
  response includes sprint_run_status + sprint_run_pause_reason so the worker
  can break on a UI-side cancel or MERGE_CONFLICT pause without an extra query.
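The conditional UPDATE can be modelled in memory to show which checks live in the WHERE clause (an illustrative model, not the actual SQL path):

```typescript
// In-memory model of the atomic conditional UPDATE: a mismatched token or a
// job that is no longer CLAIMED/RUNNING matches zero rows, so nothing is
// extended. Names are illustrative.
interface LeaseRow { claimed_by_token_id: string | null; status: string; lease_until: Date | null }
const LEASE_MS = 5 * 60_000; // NOW() + INTERVAL '5 minutes'

function tryExtendLease(row: LeaseRow, tokenId: string, now: Date): boolean {
  if (row.claimed_by_token_id !== tokenId) return false;                     // token check
  if (row.status !== 'CLAIMED' && row.status !== 'RUNNING') return false;   // status check
  row.lease_until = new Date(now.getTime() + LEASE_MS);
  return true;
}
```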

T4 — update_task_status sprint_run_id arg + token coupling
  (src/tools/update-task-status.ts):
  Optional sprint_run_id arg for an explicit cascade. Validations: the SprintRun
  exists + is active, the task belongs to this sprint, and the current token has
  claimed an active ClaudeJob in this run (403 otherwise). Response extended with
  sprint_run_status_change.

T5 — propagateStatusUpwards sprintRunId param
  (src/lib/tasks-status-update.ts):
  Optional sprintRunId parameter. Resolve order: explicit arg →
  ClaudeJob.task_id lookup → Story → Sprint → SprintRun.findFirst({active}).
  The third fallback covers SPRINT_IMPLEMENTATION (no task_id coupling) and
  manual task-status changes via the UI. cancelExceptJobId for sibling-cancel;
  null for a SPRINT job means there are no siblings to cancel.
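The three-step resolve order reduces to a short chain. The callbacks here stand in for the Prisma queries and are illustrative:

```typescript
// Sketch of the resolve order: explicit arg wins, then the task-based lookup,
// then the active-run fallback. ?? short-circuits, so later lookups stay lazy.
function resolveSprintRunId(
  explicitId: string | null,
  lookupViaTask: () => string | null,   // ClaudeJob.task_id → Story → Sprint
  findActiveRun: () => string | null,   // SprintRun.findFirst({ active })
): string | null {
  return explicitId ?? lookupViaTask() ?? findActiveRun();
}
```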

src/index.ts: three new tools registered.

Tests: 31 files, 243 passing (no tests for the new tools yet — F5).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 12:40:18 +02:00
Madhura68
35601e8e4b PBI-50 F2-T2/T3: SPRINT_IMPLEMENTATION path in getFullJobContext + lease-driven stale-reset
F2-T2 — getFullJobContext branch for `kind === 'SPRINT_IMPLEMENTATION'`:
- Fetch sprint_run with a deep include (sprint → product + stories → pbi + tasks).
- resolveRepoRoot via product; rollbackClaim on failure.
- Branch resolution: previous_run_id + branch → reuse (resume path), otherwise
  a fresh `feat/sprint-<run_id-suffix>`. createWorktreeForJob with the matching
  reuseBranch flag.
- Capture base_sha via `git rev-parse HEAD` after the worktree add.
- Frozen scope snapshot: SprintTaskExecution.createMany with plan_snapshot,
  verify_required_snapshot, verify_only_snapshot per task in scope. Order
  is PBI→Story→Task. base_sha only on task[0] (the verify tool fills the rest).
- Update job.branch + job.base_sha + sprint_run.branch in one transaction.
- Look up execution_ids for the response shape.

F2-T3 — resetStaleClaimedJobs lease-driven:
- WHERE clause widened to `status IN ('CLAIMED','RUNNING')` with the OR clause
  `lease_until < NOW() OR (lease_until IS NULL AND claimed_at < NOW() - 30min)`.
  Legacy jobs without a lease keep working via the claimed_at path; new jobs
  via lease_until.
- RETURNING extended with kind, sprint_run_id, branch.
- For a stale FAILED SPRINT_IMPLEMENTATION: push the branch (no mark-ready,
  no PR promotion) so work is not lost. Fill SprintRun.failure_reason with the
  last-RUNNING execution for diagnosis.
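The widened WHERE clause can be expressed as a predicate (an in-memory sketch of the SQL, with illustrative names):

```typescript
// Sketch of the lease-driven staleness check: jobs with a lease expire via
// lease_until; legacy jobs without one fall back to the 30-minute claimed_at
// window. Terminal/queued statuses are never stale.
interface StaleCheckRow { status: string; lease_until: Date | null; claimed_at: Date }
const LEGACY_STALE_MS = 30 * 60_000;

function isStaleJob(row: StaleCheckRow, now: Date): boolean {
  if (row.status !== 'CLAIMED' && row.status !== 'RUNNING') return false;
  if (row.lease_until) return row.lease_until.getTime() < now.getTime();
  return row.claimed_at.getTime() < now.getTime() - LEGACY_STALE_MS;
}
```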

Imports: getWorktreeRoot from worktree-paths.js, pushBranchForJob from push.js.

Tests: 31 files, 243 passing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 12:33:55 +02:00
Madhura68
de6bbd4edd PBI-50 F2-T1: kind-based claim filter + persist lease_until
Schema sync from Scrum4Me (PBI-50 F1):
- PrStrategy.SPRINT_BATCH, ClaudeJobKind.SPRINT_IMPLEMENTATION
- enum SprintTaskExecutionStatus, model SprintTaskExecution
- ClaudeJob.lease_until + status_lease_until index
- SprintRun.previous_run_id (self-relation)

tryClaimJob in src/tools/wait-for-job.ts:
- WHERE clause refactored to kind-based discrimination. NULL checks replaced
  by an explicit `cj.kind IN (...)`. SPRINT_IMPLEMENTATION and TASK_IMPLEMENTATION
  both require an active SprintRun (QUEUED/RUNNING). Idea kinds remain
  claimable standalone.
- The claim UPDATE sets `lease_until = NOW() + INTERVAL '5 minutes'`.

Tests: 19 wait-for-job tests green.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 12:27:48 +02:00
Janpeter Visser
7dbc9fe249
Merge pull request #34 from madhura68/feat/sprint-worker
PBI-49: review-fixes (primary_worktree order, idea rollback, sprint mark-ready fallback)
2026-05-07 12:19:45 +02:00
Madhura68
d2f43fe8e6 PBI-49: review-fixes — primary_worktree order, idea-claim rollback, sprint mark-ready fallback
Three findings from PBI-47 review:

P1 — decouple primary_worktree_path from the lock order
  setupProductWorktrees acquired locks in alphabetical order (deadlock prevention)
  but also returned worktrees in that order, so worktrees[0] could point at a
  secondary product when its id sorted before the primary's. Lock-acquire stays
  sorted; output now preserves caller's input order so worktrees[0] is always
  the primary.
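The fix separates the acquisition order from the result order, which a generic sketch makes concrete (names are illustrative; the real helper is setupProductWorktrees):

```typescript
// Sketch: acquire locks in sorted id order (consistent global order prevents
// deadlocks) but return results in the caller's input order, so index 0 is
// always the primary product.
function setupInInputOrder<T>(
  productIds: string[],
  acquireLock: (id: string) => T,
): T[] {
  const byId = new Map<string, T>();
  for (const id of [...productIds].sort()) {
    byId.set(id, acquireLock(id)); // sorted acquisition
  }
  return productIds.map((id) => byId.get(id)!); // caller's order preserved
}
```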

P1 — idea-claim rollback on worktree setup failure
  setupProductWorktrees runs after tryClaimJob has already flipped the job to
  CLAIMED. A failure in lock-acquire/git-fetch/reset/sync left the job hanging
  until the 30-min stale-reset, with the lock map still populated. Now wrapped
  in try/catch with releaseLocksOnTerminal + rollbackClaim, mirroring the
  task-path behaviour.

P2 — SPRINT mark-ready fallback when last task didn't push
  The mark-ready path used updated.pr_url, which is null when the closing task
  was verify-only or had no diff. Now falls back to a Prisma findFirst on the
  SprintRun's earliest job with pr_url IS NOT NULL.

Tests: 31 files, 243 passing (incl. new input-order regression for setupProductWorktrees).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 11:02:23 +02:00
Janpeter Visser
eccc75ca56
Merge pull request #33 from madhura68/feat/sprint-worker
PBI-9 + PBI-47: worktree foundation, product-worktrees, P0 fixes, PAUSED flow
2026-05-06 21:35:47 +02:00
Madhura68
f7f5a487ec PBI-9 + PBI-47: worktree foundation, product-worktrees, P0 fixes, PAUSED flow
Adds two interlocking PBIs:

PBI-9 — Worktree foundation + persistent product-worktrees for idea-jobs
  - src/git/worktree-paths.ts: centralised root + skip-set + lock-path helpers
  - src/git/file-lock.ts: proper-lockfile wrapper, deadlock-safe ordered acquire
  - src/git/product-worktree.ts: detached-HEAD worktree per product, .scratch/
    excluded via git rev-parse --git-path (handles linked .git file)
  - src/git/job-locks.ts: setupProductWorktrees + releaseLocksOnTerminal
  - wait-for-job.ts: idea-branch wires product-worktrees for IDEA_GRILL/MAKE_PLAN
  - update-job-status.ts + pbi-cascade.ts + stale-reset: release on all four
    server-side terminal transitions (DONE/FAILED/CANCELLED/stale)
  - cleanup-my-worktrees: skip _products/ + *.lock
  - README: worktrees section with single-host invariant + advisory-lock path

PBI-47 — Sprint-flow P0 corrections + PAUSED flow with rich pause_context
  - prisma schema: ClaudeJob.{base_sha,head_sha} + SprintRun.pause_context
  - tryClaimJob captures base_sha; prepareDoneUpdate captures head_sha
  - verify-task-against-plan diffs vs base_sha (no more origin/main fallback);
    rejects with MISSING_BASE_SHA when null — fixes per-task verify-scope P0
  - pr.ts: createPullRequest enableAutoMerge default false; new
    enableAutoMergeOnPr with --match-head-commit guard + 5-category typed
    EnableAutoMergeResult — fixes STORY auto-merge timing P0
  - src/flow/{effects,worktree-lease,pr-flow,sprint-run}.ts: pure transition
    modules + idempotent declarative effects executor
  - update-job-status: STORY auto-merge fires only on the last task of the
    story (story.status === DONE), with head_sha as merge guard; MERGE_CONFLICT
    routes to sprint-run flow which produces CREATE_CLAUDE_QUESTION +
    SET_SPRINT_RUN_STATUS effects with rich pause_context

Tests: 31 test files, 242 passing. Pure-transition tests cover STORY 3-tasks
auto-merge timing, SPRINT draft→ready, MERGE_CONFLICT pause/resume, file-lock
deadlock prevention, worktree-lease lifecycle, delete-only verify (ALIGNED),
per-job verify scope (base_sha isolation), 5-category auto-merge errors.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 21:09:48 +02:00
Janpeter Visser
4598efde58
Merge pull request #32 from madhura68/feat/sprint-worker
PBI-8 (worker): sprint-aware branch + SPRINT-mode draft-PR
2026-05-06 17:18:07 +02:00
Madhura68
454d96ee04 PBI-8 (continued): sprint-aware branch + SPRINT-mode draft PR
T-22 — sprint-aware branch resolution (resolveBranchForJob):
  - SPRINT mode  → feat/sprint-<sprint_run_id-suffix> (one branch for the whole run)
  - STORY mode   → feat/story-<story_id-suffix>       (one per story)
  - Legacy (no sprint_run_id): existing behaviour
  Sibling detection reuses the branch when an earlier job in the same
  scope already has it.
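The naming scheme reduces to a one-liner per mode. The suffix length (8) is an assumption; the commit only says "<sprint_run_id-suffix>":

```typescript
// Illustrative sketch of the branch-name scheme; suffix length is assumed.
function resolveBranchName(mode: 'SPRINT' | 'STORY', id: string): string {
  const suffix = id.slice(-8);
  return mode === 'SPRINT' ? `feat/sprint-${suffix}` : `feat/story-${suffix}`;
}
```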

T-24 — SPRINT-mode draft PR + ready-on-DONE:
  - createPullRequest now accepts draft + enableAutoMerge flags
  - New markPullRequestReady helper for the draft → ready transition
  - maybeCreateAutoPr in SPRINT mode: opens one draft PR per SprintRun with
    sprint_goal as title; no auto-merge; sibling tasks reuse the PR
  - update-job-status detects sprint DONE via PropagationResult and flips the
    draft PR to ready-for-review via markPullRequestReady (a human reviews
    and merges)

T-23 — STORY-mode coverage: existing createPullRequest + auto-merge behaviour
unchanged. Tests extended with sprint-aware mocks; 6 new
branch-resolution tests + 2 sprint-mode auto-pr tests + 4
markPullRequestReady/draft-PR tests.

Tests: 195/195 green (180 → 195; 15 new scenarios for sprint-aware
branch + SPRINT-mode draft PR + markPullRequestReady).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 17:15:04 +02:00
Janpeter Visser
7b135e12dd
Merge pull request #31 from madhura68/feat/sprint-flow
PBI-8: Sprint-flow MCP-orkestratie + verifier delete-only fix
2026-05-06 17:02:27 +02:00
Madhura68
5c5ae20f10 PBI-8: sprint-flow MCP orchestration + verifier fix
Schema sync from upstream Scrum4Me (v77617e8): FAILED added to
Task/Story/Pbi/SprintStatus, new SprintRunStatus + PrStrategy enums,
SprintRun model, ClaudeJob.sprint_run_id, Product.pr_strategy.

T-18 — propagateStatusUpwards in src/lib/tasks-status-update.ts.
Real-time cascade Task → Story → PBI → Sprint → SprintRun on every
task-status change. On FAILED, cancels sibling jobs in the same
SprintRun. PBI status BLOCKED stays manual. Keep this helper bit-for-bit
in sync with Scrum4Me/lib/tasks-status-update.ts.
updateTaskStatusWithStoryPromotion remains as a BC wrapper.

T-19 — wait-for-job.ts claim filter. Task jobs are only claimed when
their SprintRun has status QUEUED or RUNNING. Idea jobs stay
unfiltered. The first claim of a QUEUED SprintRun flips it to RUNNING
within the same tx (race-safe).

T-20 — update-job-status.ts calls propagateStatusUpwards after every
task DONE/FAILED. The existing cancelPbiOnFailure call stays for
PR cleanup; the sibling-cancellation overlap is harmless (idempotent).

T-21 — classify.ts (verifier) now also reads "--- a/<path>" so
delete-only commits are no longer classified as EMPTY. The bug had
previously led to wrongly-FAILED statuses on cmotto5h and cmotto5i
(2026-05-06); with the cascade flow it would have failed an entire
sprint.
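The fix is easiest to see in a small sketch: collect touched paths from both diff headers, so a delete-only diff (only "--- a/" lines, "+++ /dev/null") is no longer empty. This is an illustrative reduction, not the real classify.ts:

```typescript
// Sketch: a delete-only diff has no '+++ b/<path>' header, so reading only
// new-file headers misclassifies it as EMPTY. Reading '--- a/<path>' too
// recovers the deleted path.
function touchedPaths(diff: string): string[] {
  const paths = new Set<string>();
  for (const line of diff.split('\n')) {
    if (line.startsWith('+++ b/')) paths.add(line.slice('+++ b/'.length));
    if (line.startsWith('--- a/')) paths.add(line.slice('--- a/'.length));
  }
  return [...paths];
}
```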

Cleanup: removed create-todo.ts and open_todos in get-claude-context.ts
(the Todo model was dropped on main). The endpoint now returns
open_ideas — ideas that are not yet PLANNED.

Status mappers (src/status.ts) extended with failed.

Tests: 184/184 green (180 → 184; four new delete-only classify tests
and reworked propagate-status tests).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 16:59:58 +02:00
Janpeter Visser
c63e2c6730
Merge pull request #29 from madhura68/feat/pbi-fail-cascade
feat: PBI fail-cascade — cancel siblings + undo commits
2026-05-06 10:10:26 +02:00
Janpeter Visser
b2318aa910
Merge pull request #28 from madhura68/chore/sync-idea-prompts
chore: sync make-plan.md prompt from scrum4me upstream
2026-05-06 10:10:09 +02:00
Madhura68
70e58f8b28 feat: PBI fail-cascade — cancel siblings + undo commits
When a TASK_IMPLEMENTATION job goes FAILED, cancelPbiOnFailure
cancels all queued/claimed/running siblings within the same PBI
(across all stories) and undoes pushed commits:

- Open PR → gh pr close --delete-branch (PR close + remote-branch
  delete in one).
- Merged PR → revert PR via git revert -m 1 <mergeSha> in a
  short-lived worktree, pushed to revert/<orig>-<jobid>, gh pr create
  without auto-merge (a human reviews).
- Branch without a PR → best-effort git push origin --delete.

Race protection: update_job_status now refuses a status change on a
job that is already CANCELLED with a specific JOB_CANCELLED error, so
a parallel worker discards its local work instead of forcing a DONE.
Idempotent — a second cascade for the same PBI is a no-op.
Non-blocking — every error becomes a warning in the trace on the
original failed job's error field; the cascade never throws to the
caller.

Out of scope: per-product opt-out, sprint-level cascade,
idea-job cascade.

11 new vitest cases cover the DB cascade, branch grouping, open/
merged/no-PR paths, repo-root mismatch and the never-throws guarantee.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 10:08:31 +02:00
Madhura68
3bee6e5080 chore: sync make-plan.md prompt from scrum4me upstream
A source change in scrum4me/lib/idea-prompts/make-plan.md (PR
madhura68/Scrum4Me#130) is synced here into the embedded copy that the
worker actually reads via getIdeaPromptText.

Substantive change: a new mandatory step 3 in the workflow ("On
removal/refactor: run a dependency-cascade grep") + a complete section
with a grep protocol per change type (Prisma model, component, type,
rename, field) + a closing `npm run typecheck` task.

Background: during ST-1236 (removing the Todo application layer) the
plan missed the cascade to 4 consumer files + the v3 landing. Lint and
tests passed, next build broke. The upstream prompt update prevents
this for future IDEA_MAKE_PLAN jobs — this sync makes sure workers
actually see it.

Verified: tsc + vitest (153/153) green.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 10:05:03 +02:00
Janpeter Visser
066a9acc48
Merge pull request #27 from madhura68/chore/heartbeat-10s
chore(presence): heartbeat interval 5s -> 10s
2026-05-06 08:07:28 +02:00
Madhura68
7d5fcde10c chore(presence): heartbeat interval 5s -> 10s
Halves the write volume to claude_workers. Added a CLAUDE.md note that
the Scrum4Me NavBar threshold (last_seen_at < now() - 15s) is tight at
a 10s interval — 25-30s may be a safer margin there.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 08:03:14 +02:00
Janpeter Visser
800f4135d1
Merge pull request #26 from madhura68/claude/hardcore-shtern-3d6db0
feat: per-job token-usage capture via PostToolUse hook
2026-05-06 07:55:42 +02:00
82 changed files with 8122 additions and 547 deletions


@@ -3,3 +3,9 @@ DATABASE_URL="postgresql://user:pass@host:5432/dbname"
# API token from Scrum4Me → /settings/tokens
SCRUM4ME_TOKEN=""
# Internal push endpoint (main-app) for web-push notifications
# Set to the main-app /api/internal/push/send URL; leave empty to disable push (feature-gated).
INTERNAL_PUSH_URL="https://scrum4me.example.com/api/internal/push/send"
# Shared secret (≥32 chars) — must match INTERNAL_PUSH_SECRET in the main-app env.
INTERNAL_PUSH_SECRET="<generate-with: openssl rand -hex 32>"

.gitignore vendored

@@ -12,3 +12,6 @@ prisma/generated
# Editor
.vscode
.idea
# Claude Code worktrees (per-session, never tracked)
.claude/worktrees/


@@ -26,6 +26,16 @@ MCP server that exposes the Scrum4Me dev-flow as native tools for Claude Code.
A story with 3 sub-tasks lands as **1 branch** with 3 commits and **1 PR** (assuming `auto_pr=true`). Sibling sub-tasks share the same `pr_url` — `maybeCreateAutoPr` reuses an existing PR from a sibling job instead of opening duplicates. Story-level PR title (`<story-code>: <story-title>`) so the GitHub view reads as one logical change rather than per-task fragments.
### PBI fail-cascade
When a `TASK_IMPLEMENTATION` job ends in `FAILED`, `cancelPbiOnFailure` (`src/cancel/pbi-cascade.ts`) cancels every queued/claimed/running sibling under the **same PBI** (across all stories) and undoes already-pushed commits:
- **Open PR** → `gh pr close --delete-branch` with a cascade comment.
- **Merged PR** → revert-PR opened against the base branch via `git revert -m 1 <mergeSha>`. **No** auto-merge on the revert PR — review by hand.
- **Branch without PR** → best-effort `git push origin --delete <branch>`.
A trace (cancelled job count, closed/reverted PRs, deleted branches) is written to the original failed job's `error` column. Race-protection: if a parallel worker tries to `update_job_status` on a job that the cascade already set to `CANCELLED`, the call is rejected with a `JOB_CANCELLED` error so the agent discards local work and calls `wait_for_job` again. The cascade is idempotent and never throws — failures become warnings on the failed-job's trace.
### Required configuration
Set env var per product:
@@ -69,12 +79,12 @@ Run `cleanup_my_worktrees` (no arguments) to scan `~/.scrum4me-agent-worktrees/`
## Worker presence
Server-startup registers a `ClaudeWorker` record + starts a 10 s heartbeat; SIGTERM/SIGINT cleans it up. The Scrum4Me NavBar counts active workers via `last_seen_at < now() - 15s` — at 10 s interval one missed tick + jitter can flicker the indicator; bump that threshold in Scrum4Me to ≥ 25 s if needed.
| File | Purpose |
|---|---|
| `src/presence/worker.ts` | `registerWorker` (upsert + pg_notify worker_connected) + `unregisterWorker` |
| `src/presence/heartbeat.ts` | `startHeartbeat` — 10 s interval, self-heals by re-registering when record disappears |
| `src/presence/shutdown.ts` | `registerShutdownHandlers` — SIGTERM/SIGINT → stop heartbeat + unregister |
| `src/index.ts` | Bootstrap: calls `getAuth` → `registerWorker` → `startHeartbeat` → `registerShutdownHandlers` |


@@ -32,6 +32,9 @@ activity and create todos via native tool calls instead of curl.
| `check_queue_empty` | Synchronous, non-blocking count of active jobs (QUEUED/CLAIMED/RUNNING); optional `product_id` scope | no |
| `set_pbi_pr` | Write `pr_url` on a PBI and clear `pr_merged_at`. Idempotent: re-calling overwrites `pr_url` and resets `pr_merged_at` to null | no |
| `mark_pbi_pr_merged` | Set `pr_merged_at = now()` on a PBI. Requires `pr_url` to already be set. Idempotent: re-calling overwrites the timestamp | no |
| `verify_sprint_task` | SPRINT_IMPLEMENTATION-flow: compare a `SprintTaskExecution`'s frozen `plan_snapshot` against `git diff <base_sha>...HEAD`. Returns `verify_result` + `allowed_for_done`. For `task[1..N]` without a base_sha, the tool fills it in from the previous DONE execution's head_sha | yes (read-only) |
| `update_task_execution` | SPRINT_IMPLEMENTATION-flow: mutate `SprintTaskExecution.status` (PENDING/RUNNING/DONE/FAILED/SKIPPED). Token must own the parent SPRINT-job. Idempotent | no |
| `job_heartbeat` | Extend `claude_jobs.lease_until` by 5 min. For SPRINT-jobs: response includes `sprint_run_status` + `sprint_run_pause_reason` so the worker can break its task-loop on UI-side cancel/pause | no |
Demo accounts may read but writes return `PERMISSION_DENIED`.
@@ -293,6 +296,10 @@ Minimal agent prompt (no CLAUDE.md context needed):
> *Grab the next job from the Scrum4Me queue.*
## Web-push integration
When `INTERNAL_PUSH_URL` and `INTERNAL_PUSH_SECRET` are set, the MCP server fires a fire-and-forget push notification to the main-app's internal endpoint (`/api/internal/push/send`) on two events: when `ask_user_question` creates a new question (tag `claude-q-<id>`), and when `update_job_status` transitions a job to `done` or `failed` (tag `job-<id>`). Both calls are wrapped in a 5 s `AbortController` timeout and a `try/catch` so a push failure never interrupts the tool response. Omitting the env vars disables the feature entirely. The `INTERNAL_PUSH_SECRET` value must match the one configured in the main-app; generate a fresh secret with `openssl rand -hex 32`.
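The fire-and-forget pattern (5 s abort + swallow-all) can be sketched as follows. The injectable `fetchImpl` parameter is an illustration for testability, not the actual implementation's signature:

```typescript
// Sketch: a 5 s AbortController timeout plus a catch-all so a push failure
// never interrupts the tool response. fetchImpl defaults to global fetch.
async function sendPushSafe(
  url: string,
  body: unknown,
  fetchImpl: typeof fetch = fetch,
): Promise<void> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 5_000);
  try {
    await fetchImpl(url, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify(body),
      signal: controller.signal,
    });
  } catch {
    // swallow: push is best-effort
  } finally {
    clearTimeout(timer);
  }
}
```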
## Schema sync
The Prisma schema is the source of truth in the upstream Scrum4Me
@@ -335,3 +342,33 @@ npx @modelcontextprotocol/inspector node dist/index.js
- **Production database** — verify against a preview database before
  running against prod. The token check enforces user scope but does
  not gate reads of unrelated products you happen to be a member of.
## Worktrees
Scrum4Me-mcp uses git worktrees rooted at `~/.scrum4me-agent-worktrees/` (override via `SCRUM4ME_AGENT_WORKTREE_DIR`).
### Two kinds of worktrees
- **Per-job task-worktrees** (`<jobId>/`) — one per `TASK_IMPLEMENTATION` job. Created at claim, cleaned up on `DONE`/`FAILED`/`CANCELLED` via `cleanup_my_worktrees`.
- **Persistent product-worktrees** (`_products/<productId>/`) — one per product with `repo_url`, used by `IDEA_GRILL` and `IDEA_MAKE_PLAN`. **Detached HEAD on `origin/main`**, hard-reset at every job start. `.scratch/` holds throw-away work and is wiped on each claim.
### Concurrency: file-locks
Product-worktrees are serialised via `proper-lockfile` on `_products/<productId>.lock`. Two parallel idea-jobs on the same product wait for each other. For multi-product idea-jobs, locks are acquired in alphabetical order to prevent deadlocks.
### Single-host invariant
`proper-lockfile` only works when all MCP-server processes run on the same host. Migrate to Postgres `pg_advisory_lock` when:
- multiple MCP instances on different machines serve workers, or
- the worktree directory is shared over NFS/CIFS.
Migration path: replace `acquireFileLock` in `src/git/file-lock.ts` with a `pg_try_advisory_lock(hashtext(path)::bigint)` wrapper via the existing Prisma connection. The API stays identical.
### Manual cleanup
`cleanup_my_worktrees` skips `_products/` and `*.lock` automatically. To clean up a product-worktree manually (after archive or repo-rename):
```bash
git worktree remove --force ~/.scrum4me-agent-worktrees/_products/<productId>
rm ~/.scrum4me-agent-worktrees/_products/<productId>.lock # if still present
```


@@ -0,0 +1,350 @@
import { describe, it, expect, vi, beforeEach } from 'vitest'
vi.mock('../src/prisma.js', () => ({
  prisma: {
    claudeJob: {
      findUnique: vi.fn(),
      findMany: vi.fn(),
      updateMany: vi.fn(),
      update: vi.fn(),
    },
  },
}))
vi.mock('../src/tools/wait-for-job.js', async (importOriginal) => {
  const original = await importOriginal<typeof import('../src/tools/wait-for-job.js')>()
  return { ...original, resolveRepoRoot: vi.fn() }
})
vi.mock('../src/git/worktree.js', () => ({
  removeWorktreeForJob: vi.fn(),
}))
vi.mock('../src/git/pr.js', () => ({
  closePullRequest: vi.fn(),
  getPullRequestState: vi.fn(),
  createRevertPullRequest: vi.fn(),
}))
vi.mock('../src/git/push.js', () => ({
  deleteRemoteBranch: vi.fn(),
}))
import { prisma } from '../src/prisma.js'
import { resolveRepoRoot } from '../src/tools/wait-for-job.js'
import { removeWorktreeForJob } from '../src/git/worktree.js'
import {
  closePullRequest,
  getPullRequestState,
  createRevertPullRequest,
} from '../src/git/pr.js'
import { deleteRemoteBranch } from '../src/git/push.js'
import { cancelPbiOnFailure } from '../src/cancel/pbi-cascade.js'
const mockPrisma = prisma as unknown as {
  claudeJob: {
    findUnique: ReturnType<typeof vi.fn>
    findMany: ReturnType<typeof vi.fn>
    updateMany: ReturnType<typeof vi.fn>
    update: ReturnType<typeof vi.fn>
  }
}
const mockResolveRepoRoot = resolveRepoRoot as ReturnType<typeof vi.fn>
const mockRemoveWorktree = removeWorktreeForJob as ReturnType<typeof vi.fn>
const mockClosePr = closePullRequest as ReturnType<typeof vi.fn>
const mockGetPrState = getPullRequestState as ReturnType<typeof vi.fn>
const mockCreateRevertPr = createRevertPullRequest as ReturnType<typeof vi.fn>
const mockDeleteBranch = deleteRemoteBranch as ReturnType<typeof vi.fn>
beforeEach(() => {
  vi.clearAllMocks()
  mockPrisma.claudeJob.update.mockResolvedValue({})
  mockPrisma.claudeJob.updateMany.mockResolvedValue({ count: 0 })
  mockResolveRepoRoot.mockResolvedValue('/repos/proj')
  mockRemoveWorktree.mockResolvedValue(undefined)
  // Sensible defaults so an un-stubbed branch in a test doesn't throw on
  // `result.deleted` / `result.ok` access. Tests that care override these.
  mockDeleteBranch.mockResolvedValue({ deleted: true })
  mockClosePr.mockResolvedValue({ ok: true })
})
const FAILED_JOB = {
  id: 'job-failed',
  kind: 'TASK_IMPLEMENTATION',
  product_id: 'prod-1',
  task_id: 'task-failed',
  branch: 'feat/story-aaaabbbb',
  pr_url: null,
  task: { story: { pbi: { id: 'pbi-1', code: 'PBI-7' } } },
}
describe('cancelPbiOnFailure', () => {
  it('no-ops for non-TASK_IMPLEMENTATION jobs', async () => {
    mockPrisma.claudeJob.findUnique.mockResolvedValue({ ...FAILED_JOB, kind: 'IDEA_GRILL' })
    const out = await cancelPbiOnFailure('job-failed')
    expect(out.cancelled_job_ids).toEqual([])
    expect(mockPrisma.claudeJob.findMany).not.toHaveBeenCalled()
    expect(mockPrisma.claudeJob.updateMany).not.toHaveBeenCalled()
  })
  it('no-ops when failed job has no PBI parent', async () => {
    mockPrisma.claudeJob.findUnique.mockResolvedValue({
      ...FAILED_JOB,
      task: null,
    })
    const out = await cancelPbiOnFailure('job-failed')
    expect(out).toEqual({
      cancelled_job_ids: [],
      closed_prs: [],
      reverted_prs: [],
      deleted_branches: [],
      warnings: [],
    })
  })
  it('cancels eligible siblings and writes a trace to the failed job', async () => {
    mockPrisma.claudeJob.findUnique.mockResolvedValue(FAILED_JOB)
    mockPrisma.claudeJob.findMany.mockResolvedValue([
      { id: 'job-sib1', branch: 'feat/story-aaaabbbb', pr_url: null, status: 'QUEUED', task_id: 't2' },
      { id: 'job-sib2', branch: 'feat/story-ccccdddd', pr_url: null, status: 'CLAIMED', task_id: 't3' },
    ])
    const out = await cancelPbiOnFailure('job-failed')
    expect(mockPrisma.claudeJob.updateMany).toHaveBeenCalledWith(
      expect.objectContaining({
        where: { id: { in: ['job-sib1', 'job-sib2'] } },
        data: expect.objectContaining({
          status: 'CANCELLED',
          error: 'cancelled_by_pbi_failure',
        }),
      }),
    )
    expect(out.cancelled_job_ids).toEqual(['job-sib1', 'job-sib2'])
    expect(mockPrisma.claudeJob.update).toHaveBeenCalledWith(
      expect.objectContaining({
        where: { id: 'job-failed' },
        data: expect.objectContaining({ error: expect.stringContaining('cancelled_by_self') }),
      }),
    )
  })
  it('idempotent: empty eligible set means no updateMany call', async () => {
    mockPrisma.claudeJob.findUnique.mockResolvedValue(FAILED_JOB)
    mockPrisma.claudeJob.findMany.mockResolvedValue([])
    await cancelPbiOnFailure('job-failed')
    expect(mockPrisma.claudeJob.updateMany).not.toHaveBeenCalled()
  })
  it('closes an open PR with the cascade comment', async () => {
    mockPrisma.claudeJob.findUnique.mockResolvedValue({
      ...FAILED_JOB,
      pr_url: 'https://github.com/o/r/pull/1',
    })
    mockPrisma.claudeJob.findMany.mockResolvedValue([])
    mockGetPrState.mockResolvedValue({
      state: 'OPEN',
      mergeCommit: null,
      baseRefName: 'main',
      title: 'feat: x',
    })
    mockClosePr.mockResolvedValue({ ok: true })
    const out = await cancelPbiOnFailure('job-failed')
    expect(mockClosePr).toHaveBeenCalledWith(
      expect.objectContaining({
        prUrl: 'https://github.com/o/r/pull/1',
        comment: expect.stringContaining('PBI PBI-7'),
      }),
    )
    expect(out.closed_prs).toEqual(['https://github.com/o/r/pull/1'])
  })
  it('creates a revert-PR when an affected PR is already merged', async () => {
    mockPrisma.claudeJob.findUnique.mockResolvedValue({
      ...FAILED_JOB,
      pr_url: 'https://github.com/o/r/pull/9',
    })
    mockPrisma.claudeJob.findMany.mockResolvedValue([])
    mockGetPrState.mockResolvedValue({
      state: 'MERGED',
      mergeCommit: 'abc123def',
      baseRefName: 'main',
      title: 'feat: shipped',
    })
    mockCreateRevertPr.mockResolvedValue({ url: 'https://github.com/o/r/pull/10' })
    const out = await cancelPbiOnFailure('job-failed')
    expect(mockCreateRevertPr).toHaveBeenCalledWith(
      expect.objectContaining({
        repoRoot: '/repos/proj',
        mergeSha: 'abc123def',
        baseRef: 'main',
        originalTitle: 'feat: shipped',
        originalBranch: 'feat/story-aaaabbbb',
        jobId: 'job-failed',
        pbiCode: 'PBI-7',
      }),
    )
    expect(out.reverted_prs).toEqual([
      { original: 'https://github.com/o/r/pull/9', revertPr: 'https://github.com/o/r/pull/10' },
    ])
    expect(mockClosePr).not.toHaveBeenCalled()
  })
it('skips revert when no repo root is configured + emits a warning', async () => {
mockResolveRepoRoot.mockResolvedValue(null)
mockPrisma.claudeJob.findUnique.mockResolvedValue({
...FAILED_JOB,
pr_url: 'https://github.com/o/r/pull/9',
})
mockPrisma.claudeJob.findMany.mockResolvedValue([])
mockGetPrState.mockResolvedValue({
state: 'MERGED',
mergeCommit: 'abc',
baseRefName: 'main',
title: 'x',
})
const out = await cancelPbiOnFailure('job-failed')
expect(mockCreateRevertPr).not.toHaveBeenCalled()
expect(out.warnings.some((w) => /no repo root/i.test(w))).toBe(true)
})
it('deletes a remote branch when there is no PR for it', async () => {
mockPrisma.claudeJob.findUnique.mockResolvedValue({
...FAILED_JOB,
pr_url: null,
})
mockPrisma.claudeJob.findMany.mockResolvedValue([])
mockDeleteBranch.mockResolvedValue({ deleted: true })
const out = await cancelPbiOnFailure('job-failed')
expect(mockDeleteBranch).toHaveBeenCalledWith({
repoRoot: '/repos/proj',
branch: 'feat/story-aaaabbbb',
})
expect(out.deleted_branches).toEqual(['feat/story-aaaabbbb'])
})
it('groups siblings sharing a branch so the PR is only closed once', async () => {
mockPrisma.claudeJob.findUnique.mockResolvedValue({
...FAILED_JOB,
branch: 'feat/story-shared',
pr_url: 'https://github.com/o/r/pull/1',
})
mockPrisma.claudeJob.findMany.mockResolvedValue([
{
id: 'job-sib',
branch: 'feat/story-shared',
pr_url: 'https://github.com/o/r/pull/1',
status: 'QUEUED',
task_id: 't2',
},
])
mockGetPrState.mockResolvedValue({
state: 'OPEN',
mergeCommit: null,
baseRefName: 'main',
title: 't',
})
mockClosePr.mockResolvedValue({ ok: true })
await cancelPbiOnFailure('job-failed')
expect(mockClosePr).toHaveBeenCalledTimes(1)
})
it('removes worktrees of cancelled siblings', async () => {
mockPrisma.claudeJob.findUnique.mockResolvedValue(FAILED_JOB)
mockPrisma.claudeJob.findMany.mockResolvedValue([
{ id: 'job-sib1', branch: null, pr_url: null, status: 'QUEUED', task_id: 't2' },
])
await cancelPbiOnFailure('job-failed')
expect(mockRemoveWorktree).toHaveBeenCalledWith({
repoRoot: '/repos/proj',
jobId: 'job-sib1',
keepBranch: false,
})
})
it('never throws — wraps unexpected errors into warnings', async () => {
mockPrisma.claudeJob.findUnique.mockRejectedValue(new Error('boom'))
const out = await cancelPbiOnFailure('job-failed')
expect(out.warnings.some((w) => w.includes('boom'))).toBe(true)
})
it('no-ops when failed job has status SKIPPED (no-op exit, not a real failure)', async () => {
mockPrisma.claudeJob.findUnique.mockResolvedValue({ ...FAILED_JOB, status: 'SKIPPED' })
const out = await cancelPbiOnFailure('job-failed')
expect(out.cancelled_job_ids).toEqual([])
expect(mockPrisma.claudeJob.findMany).not.toHaveBeenCalled()
expect(mockPrisma.claudeJob.updateMany).not.toHaveBeenCalled()
expect(mockPrisma.claudeJob.update).not.toHaveBeenCalled()
})
it('appends the cascade trace to an existing error (preserves original cause)', async () => {
// findUnique is called twice: first for failedJob (status FAILED + original error),
// then by the append-trace to read the current error before updating.
mockPrisma.claudeJob.findUnique
.mockResolvedValueOnce({ ...FAILED_JOB, status: 'FAILED' })
.mockResolvedValueOnce({ error: 'timeout: agent died after 5min' })
mockPrisma.claudeJob.findMany.mockResolvedValue([])
await cancelPbiOnFailure('job-failed')
expect(mockPrisma.claudeJob.update).toHaveBeenCalledWith(
expect.objectContaining({
where: { id: 'job-failed' },
data: expect.objectContaining({
error: expect.stringMatching(/timeout: agent died after 5min[\s\S]*---[\s\S]*cancelled_by_self/),
}),
}),
)
})
it('falls back to trace-only when there is no existing error', async () => {
mockPrisma.claudeJob.findUnique
.mockResolvedValueOnce({ ...FAILED_JOB, status: 'FAILED' })
.mockResolvedValueOnce({ error: null })
mockPrisma.claudeJob.findMany.mockResolvedValue([])
await cancelPbiOnFailure('job-failed')
const updateCall = mockPrisma.claudeJob.update.mock.calls[0]?.[0] as
| { data: { error: string } }
| undefined
expect(updateCall?.data.error).toMatch(/^cancelled_by_self/)
expect(updateCall?.data.error).not.toContain('---')
})
it('truncates the merged error at 1900 chars while preserving the head of the original', async () => {
const longOriginal = 'X'.repeat(1800)
mockPrisma.claudeJob.findUnique
.mockResolvedValueOnce({ ...FAILED_JOB, status: 'FAILED' })
.mockResolvedValueOnce({ error: longOriginal })
mockPrisma.claudeJob.findMany.mockResolvedValue([])
await cancelPbiOnFailure('job-failed')
const updateCall = mockPrisma.claudeJob.update.mock.calls[0]?.[0] as
| { data: { error: string } }
| undefined
expect(updateCall?.data.error.length).toBeLessThanOrEqual(1900)
expect(updateCall?.data.error.startsWith('X')).toBe(true)
})
})
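The three trace tests above pin down the merge policy for job errors: keep the head of the original cause, separate the cascade trace with `---`, and cap the stored string. A minimal sketch of that policy (hypothetical helper name and constant; the real code lives in the cascade module):

```typescript
// Hypothetical sketch of the error-trace merge policy the tests above pin down.
// MAX_ERROR_LEN and the helper name are assumptions, not the real identifiers.
const MAX_ERROR_LEN = 1900

function mergeErrorWithTrace(existing: string | null, trace: string): string {
  // No prior error: store the trace alone, with no '---' separator.
  if (!existing) return trace.slice(0, MAX_ERROR_LEN)
  // Keep the head of the original cause, append the cascade trace after a separator,
  // then truncate so the original's start always survives.
  const merged = `${existing}\n---\n${trace}`
  return merged.slice(0, MAX_ERROR_LEN)
}
```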


@@ -73,6 +73,17 @@ describe('listWorktreeJobIds', () => {
mockReaddir.mockRejectedValue(Object.assign(new Error('ENOENT'), { code: 'ENOENT' }))
expect(await listWorktreeJobIds(WORKTREE_PARENT)).toEqual([])
})
it('skips _products/ system dir and *.lock files (PBI-9)', async () => {
mockReaddir.mockResolvedValue([
makeDirent('job-aaa'),
makeDirent('_products'),
makeDirent('product-abc.lock'),
makeDirent('job-bbb'),
])
const ids = await listWorktreeJobIds(WORKTREE_PARENT)
expect(ids).toEqual(['job-aaa', 'job-bbb'])
})
})
describe('cleanupWorktrees', () => {
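The new PBI-9 test encodes the filter rule: `_products/` is a shared system dir and `*.lock` entries are lock markers, so neither counts as a job worktree. A standalone sketch of that rule (hypothetical function name; the real `listWorktreeJobIds` also reads the directory):

```typescript
// Hypothetical standalone version of the PBI-9 filter: keep plain job dirs,
// drop the _products/ system dir and *.lock marker entries.
function filterJobIds(entries: Array<{ name: string; isDir: boolean }>): string[] {
  return entries
    .filter((e) => e.isDir) // worktrees are directories
    .filter((e) => e.name !== '_products') // shared product-worktree parent
    .filter((e) => !e.name.endsWith('.lock')) // lock markers are not jobs
    .map((e) => e.name)
}
```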


@@ -0,0 +1,165 @@
import { describe, it, expect, vi, beforeEach } from 'vitest'
import { Prisma } from '@prisma/client'
vi.mock('../src/prisma.js', () => ({
prisma: {
sprint: {
findMany: vi.fn(),
create: vi.fn(),
},
},
}))
vi.mock('../src/auth.js', () => ({
requireWriteAccess: vi.fn(),
PermissionDeniedError: class PermissionDeniedError extends Error {
constructor(message = 'Demo accounts cannot perform write operations') {
super(message)
this.name = 'PermissionDeniedError'
}
},
}))
vi.mock('../src/access.js', () => ({
userCanAccessProduct: vi.fn(),
}))
import { prisma } from '../src/prisma.js'
import { requireWriteAccess } from '../src/auth.js'
import { userCanAccessProduct } from '../src/access.js'
import { handleCreateSprint } from '../src/tools/create-sprint.js'
const mockPrisma = prisma as unknown as {
sprint: {
findMany: ReturnType<typeof vi.fn>
create: ReturnType<typeof vi.fn>
}
}
const mockRequireWriteAccess = requireWriteAccess as ReturnType<typeof vi.fn>
const mockUserCanAccessProduct = userCanAccessProduct as ReturnType<typeof vi.fn>
const PRODUCT_ID = 'prod-1'
const USER_ID = 'user-1'
beforeEach(() => {
vi.clearAllMocks()
mockRequireWriteAccess.mockResolvedValue({ userId: USER_ID, tokenId: 'tok-1', username: 'alice', isDemo: false })
mockUserCanAccessProduct.mockResolvedValue(true)
mockPrisma.sprint.findMany.mockResolvedValue([])
})
function parseResult(result: Awaited<ReturnType<typeof handleCreateSprint>>) {
const text = result.content?.[0]?.type === 'text' ? result.content[0].text : ''
try { return JSON.parse(text) } catch { return text }
}
describe('handleCreateSprint', () => {
it('happy path: creates sprint with auto-generated code', async () => {
mockPrisma.sprint.create.mockResolvedValue({
id: 'spr-1',
code: 'S-2026-05-11-1',
sprint_goal: 'My goal',
status: 'OPEN',
start_date: new Date('2026-05-11'),
created_at: new Date('2026-05-11T10:00:00Z'),
})
const result = await handleCreateSprint({
product_id: PRODUCT_ID,
sprint_goal: 'My goal',
})
expect(mockPrisma.sprint.create).toHaveBeenCalledTimes(1)
const callArgs = mockPrisma.sprint.create.mock.calls[0][0]
expect(callArgs.data.product_id).toBe(PRODUCT_ID)
expect(callArgs.data.status).toBe('OPEN')
expect(callArgs.data.sprint_goal).toBe('My goal')
expect(callArgs.data.code).toMatch(/^S-\d{4}-\d{2}-\d{2}-1$/)
expect(callArgs.data.start_date).toBeInstanceOf(Date)
const parsed = parseResult(result)
expect(parsed.id).toBe('spr-1')
expect(parsed.status).toBe('OPEN')
})
it('uses user-provided code when given', async () => {
mockPrisma.sprint.create.mockResolvedValue({
id: 'spr-2',
code: 'CUSTOM-CODE',
sprint_goal: 'g',
status: 'OPEN',
start_date: new Date(),
created_at: new Date(),
})
await handleCreateSprint({
product_id: PRODUCT_ID,
code: 'CUSTOM-CODE',
sprint_goal: 'g',
})
expect(mockPrisma.sprint.create).toHaveBeenCalledTimes(1)
expect(mockPrisma.sprint.findMany).not.toHaveBeenCalled()
expect(mockPrisma.sprint.create.mock.calls[0][0].data.code).toBe('CUSTOM-CODE')
})
it('auto-code increments past existing same-day sprints', async () => {
// Codes must be relative to "today": generateNextSprintCode only counts
// same-day sprints. Hardcoded dates made this test date-flaky.
const today = new Date().toISOString().slice(0, 10)
mockPrisma.sprint.findMany.mockResolvedValue([
{ code: `S-${today}-1` },
{ code: `S-${today}-3` },
{ code: 'S-2020-01-01-7' },
])
mockPrisma.sprint.create.mockResolvedValue({
id: 'spr-3', code: 'X', sprint_goal: 'g', status: 'OPEN', start_date: new Date(), created_at: new Date(),
})
await handleCreateSprint({ product_id: PRODUCT_ID, sprint_goal: 'g' })
expect(mockPrisma.sprint.create.mock.calls[0][0].data.code).toBe(`S-${today}-4`)
})
it('retries on P2002 unique conflict', async () => {
const conflict = new Prisma.PrismaClientKnownRequestError('unique', {
code: 'P2002', clientVersion: 'x', meta: { target: ['product_id', 'code'] },
})
mockPrisma.sprint.create
.mockRejectedValueOnce(conflict)
.mockResolvedValueOnce({
id: 'spr-r', code: 'S-2026-05-11-2', sprint_goal: 'g', status: 'OPEN',
start_date: new Date(), created_at: new Date(),
})
const result = await handleCreateSprint({ product_id: PRODUCT_ID, sprint_goal: 'g' })
expect(mockPrisma.sprint.create).toHaveBeenCalledTimes(2)
expect(parseResult(result).id).toBe('spr-r')
})
it('returns error when user cannot access product', async () => {
mockUserCanAccessProduct.mockResolvedValue(false)
const result = await handleCreateSprint({ product_id: PRODUCT_ID, sprint_goal: 'g' })
expect(mockPrisma.sprint.create).not.toHaveBeenCalled()
const text = result.content?.[0]?.type === 'text' ? result.content[0].text : ''
expect(text).toMatch(/not found or not accessible/)
})
it('uses provided start_date when given', async () => {
mockPrisma.sprint.create.mockResolvedValue({
id: 'spr-d', code: 'X', sprint_goal: 'g', status: 'OPEN',
start_date: new Date('2026-01-01'), created_at: new Date(),
})
await handleCreateSprint({
product_id: PRODUCT_ID,
sprint_goal: 'g',
start_date: '2026-01-01',
})
const callArgs = mockPrisma.sprint.create.mock.calls[0][0]
expect(callArgs.data.start_date.toISOString().slice(0, 10)).toBe('2026-01-01')
})
})
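The auto-code tests above imply the counting rule: only sprints whose code carries today's date participate, and the next suffix is max + 1. A minimal sketch under that assumption (hypothetical helper name, intended to mirror what `generateNextSprintCode` does):

```typescript
// Hypothetical sketch of same-day sprint-code generation: codes look like
// S-YYYY-MM-DD-N; only today's codes are counted, next suffix is max + 1.
function nextSprintCode(existingCodes: string[], today: string): string {
  const prefix = `S-${today}-`
  const max = existingCodes
    .filter((c) => c.startsWith(prefix)) // ignore sprints from other days
    .map((c) => Number(c.slice(prefix.length)))
    .filter((n) => Number.isInteger(n))
    .reduce((a, b) => Math.max(a, b), 0)
  return `${prefix}${max + 1}`
}
```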


@@ -0,0 +1,141 @@
import { describe, it, expect, vi, beforeEach } from 'vitest'
vi.mock('../src/prisma.js', () => ({
prisma: {
pbi: { findUnique: vi.fn() },
sprint: { findUnique: vi.fn() },
story: {
findFirst: vi.fn(),
findMany: vi.fn(),
create: vi.fn(),
},
},
}))
vi.mock('../src/auth.js', () => ({
requireWriteAccess: vi.fn(),
PermissionDeniedError: class PermissionDeniedError extends Error {
constructor(message = 'Demo accounts cannot perform write operations') {
super(message)
this.name = 'PermissionDeniedError'
}
},
}))
vi.mock('../src/access.js', () => ({
userCanAccessProduct: vi.fn(),
}))
import { prisma } from '../src/prisma.js'
import { requireWriteAccess } from '../src/auth.js'
import { userCanAccessProduct } from '../src/access.js'
import { handleCreateStory } from '../src/tools/create-story.js'
const mockPrisma = prisma as unknown as {
pbi: { findUnique: ReturnType<typeof vi.fn> }
sprint: { findUnique: ReturnType<typeof vi.fn> }
story: {
findFirst: ReturnType<typeof vi.fn>
findMany: ReturnType<typeof vi.fn>
create: ReturnType<typeof vi.fn>
}
}
const mockRequireWriteAccess = requireWriteAccess as ReturnType<typeof vi.fn>
const mockUserCanAccessProduct = userCanAccessProduct as ReturnType<typeof vi.fn>
const PRODUCT_ID = 'prod-1'
const PBI_ID = 'pbi-1'
const SPRINT_ID = 'spr-1'
const USER_ID = 'user-1'
beforeEach(() => {
vi.clearAllMocks()
mockRequireWriteAccess.mockResolvedValue({ userId: USER_ID, tokenId: 'tok-1', username: 'alice', isDemo: false })
mockUserCanAccessProduct.mockResolvedValue(true)
mockPrisma.pbi.findUnique.mockResolvedValue({ product_id: PRODUCT_ID })
mockPrisma.story.findMany.mockResolvedValue([])
mockPrisma.story.findFirst.mockResolvedValue(null)
mockPrisma.story.create.mockImplementation((args: { data: Record<string, unknown> }) =>
Promise.resolve({ id: 'story-1', created_at: new Date('2026-05-14T10:00:00Z'), ...args.data }),
)
})
function parseResult(result: Awaited<ReturnType<typeof handleCreateStory>>) {
const text = result.content?.[0]?.type === 'text' ? result.content[0].text : ''
try { return JSON.parse(text) } catch { return text }
}
function errorText(result: Awaited<ReturnType<typeof handleCreateStory>>): string {
return result.content?.[0]?.type === 'text' ? result.content[0].text : ''
}
describe('handleCreateStory', () => {
it('without sprint_id: creates story with status OPEN and no sprint', async () => {
const result = await handleCreateStory({ pbi_id: PBI_ID, title: 'A story', priority: 2 })
expect(mockPrisma.sprint.findUnique).not.toHaveBeenCalled()
const data = mockPrisma.story.create.mock.calls[0][0].data
expect(data.status).toBe('OPEN')
expect(data.sprint_id).toBeNull()
expect(data.product_id).toBe(PRODUCT_ID)
expect(parseResult(result).status).toBe('OPEN')
})
it('with valid sprint_id: links story to sprint with status IN_SPRINT', async () => {
mockPrisma.sprint.findUnique.mockResolvedValue({ product_id: PRODUCT_ID })
const result = await handleCreateStory({
pbi_id: PBI_ID,
title: 'A story',
priority: 2,
sprint_id: SPRINT_ID,
})
expect(mockPrisma.sprint.findUnique).toHaveBeenCalledWith({
where: { id: SPRINT_ID },
select: { product_id: true },
})
const data = mockPrisma.story.create.mock.calls[0][0].data
expect(data.status).toBe('IN_SPRINT')
expect(data.sprint_id).toBe(SPRINT_ID)
expect(parseResult(result).sprint_id).toBe(SPRINT_ID)
})
it('rejects a non-existent sprint_id', async () => {
mockPrisma.sprint.findUnique.mockResolvedValue(null)
const result = await handleCreateStory({
pbi_id: PBI_ID,
title: 'A story',
priority: 2,
sprint_id: 'missing',
})
expect(mockPrisma.story.create).not.toHaveBeenCalled()
expect(errorText(result)).toMatch(/Sprint missing not found/)
})
it('rejects a sprint from a different product', async () => {
mockPrisma.sprint.findUnique.mockResolvedValue({ product_id: 'other-product' })
const result = await handleCreateStory({
pbi_id: PBI_ID,
title: 'A story',
priority: 2,
sprint_id: SPRINT_ID,
})
expect(mockPrisma.story.create).not.toHaveBeenCalled()
expect(errorText(result)).toMatch(/different product/)
})
it('returns error when PBI not found', async () => {
mockPrisma.pbi.findUnique.mockResolvedValue(null)
const result = await handleCreateStory({ pbi_id: 'missing', title: 'A story', priority: 2 })
expect(mockPrisma.sprint.findUnique).not.toHaveBeenCalled()
expect(mockPrisma.story.create).not.toHaveBeenCalled()
expect(errorText(result)).toMatch(/PBI missing not found/)
})
})


@@ -0,0 +1,22 @@
import { describe, it, expect } from 'vitest'
import { executeEffects } from '../../src/flow/effects.js'
describe('effects executor', () => {
it('RELEASE_WORKTREE_LOCKS for unknown jobId is a no-op (no throw)', async () => {
const out = await executeEffects([{ type: 'RELEASE_WORKTREE_LOCKS', jobId: 'no-such-job' }])
expect(out).toEqual([])
})
it('multiple effects execute in order; failure in one is logged but does not abort', async () => {
const out = await executeEffects([
{ type: 'RELEASE_WORKTREE_LOCKS', jobId: 'a' },
{ type: 'RELEASE_WORKTREE_LOCKS', jobId: 'b' },
])
expect(out).toEqual([])
})
it('empty effects array returns empty outcomes', async () => {
const out = await executeEffects([])
expect(out).toEqual([])
})
})


@@ -0,0 +1,78 @@
import { describe, it, expect } from 'vitest'
import { transition, type PrFlowState } from '../../src/flow/pr-flow.js'
describe('pr-flow STORY-mode 3-tasks scenario', () => {
it('opens PR early; auto-merge only fires on the last task', () => {
let state: PrFlowState = { kind: 'none', strategy: 'STORY' }
const allEffects: Array<Record<string, unknown>> = []
// Task 1 DONE → PR_CREATED
let r = transition(state, { type: 'PR_CREATED', prUrl: 'https://github.com/o/r/pull/1' })
state = r.nextState
allEffects.push(...r.effects)
expect(state.kind).toBe('pr_opened')
expect(allEffects.filter((e) => e.type === 'ENABLE_AUTO_MERGE')).toHaveLength(0)
// Task 2 DONE → no STORY_COMPLETED yet, no transition emitted
r = transition(state, { type: 'TASK_DONE', taskId: 't2', headSha: 'abc123' })
state = r.nextState
allEffects.push(...r.effects)
expect(state.kind).toBe('pr_opened')
expect(allEffects.filter((e) => e.type === 'ENABLE_AUTO_MERGE')).toHaveLength(0)
// Task 3 DONE = STORY_COMPLETED → ENABLE_AUTO_MERGE with head guard
r = transition(state, { type: 'STORY_COMPLETED', storyId: 's1', headSha: 'def456' })
state = r.nextState
allEffects.push(...r.effects)
expect(state.kind).toBe('waiting_for_checks')
const enableEffects = allEffects.filter((e) => e.type === 'ENABLE_AUTO_MERGE')
expect(enableEffects).toHaveLength(1)
expect(enableEffects[0]).toMatchObject({ expectedHeadSha: 'def456' })
// CI green + merge OK
r = transition(state, { type: 'MERGE_RESULT' })
state = r.nextState
expect(state.kind).toBe('auto_merge_enabled')
})
it('CHECKS_FAILED → checks_failed (no pause)', () => {
const state: PrFlowState = {
kind: 'waiting_for_checks',
strategy: 'STORY',
prUrl: 'x',
headSha: 'y',
}
const r = transition(state, { type: 'MERGE_RESULT', reason: 'CHECKS_FAILED' })
expect(r.nextState.kind).toBe('checks_failed')
})
it('MERGE_CONFLICT → merge_conflict_paused', () => {
const state: PrFlowState = {
kind: 'waiting_for_checks',
strategy: 'STORY',
prUrl: 'x',
headSha: 'y',
}
const r = transition(state, { type: 'MERGE_RESULT', reason: 'MERGE_CONFLICT' })
expect(r.nextState.kind).toBe('merge_conflict_paused')
})
})
describe('pr-flow SPRINT-mode', () => {
it('draft stays draft until SPRINT_COMPLETED → MARK_PR_READY effect', () => {
let state: PrFlowState = { kind: 'none', strategy: 'SPRINT' }
let r = transition(state, { type: 'PR_CREATED', prUrl: 'x' })
expect(r.nextState.kind).toBe('draft_opened')
expect(r.effects).toHaveLength(0)
state = r.nextState
r = transition(state, { type: 'TASK_DONE', taskId: 't1', headSha: 'a' })
expect(r.nextState.kind).toBe('draft_opened')
expect(r.effects).toHaveLength(0)
state = r.nextState
r = transition(state, { type: 'SPRINT_COMPLETED', sprintRunId: 'sr1' })
expect(r.nextState.kind).toBe('ready_for_review')
expect(r.effects.filter((e) => e.type === 'MARK_PR_READY')).toHaveLength(1)
})
})


@@ -0,0 +1,82 @@
import { describe, it, expect } from 'vitest'
import { transition, type SprintRunState } from '../../src/flow/sprint-run.js'
describe('sprint-run pure transitions', () => {
it('queued + CLAIM_FIRST_JOB → running with SET_SPRINT_RUN_STATUS effect', () => {
const state: SprintRunState = { kind: 'queued', sprintRunId: 'sr1' }
const r = transition(state, { type: 'CLAIM_FIRST_JOB' })
expect(r.nextState.kind).toBe('running')
expect(r.effects).toEqual([
{ type: 'SET_SPRINT_RUN_STATUS', sprintRunId: 'sr1', status: 'RUNNING' },
])
})
it('running + MERGE_CONFLICT → paused_merge_conflict + 2 effects in order', () => {
const state: SprintRunState = { kind: 'running', sprintRunId: 'sr1' }
const r = transition(state, {
type: 'MERGE_CONFLICT',
prUrl: 'https://github.com/o/r/pull/1',
prHeadSha: 'abc123',
conflictFiles: ['a.ts', 'b.ts'],
resumeInstructions: 'Resolve and push',
})
expect(r.nextState.kind).toBe('paused_merge_conflict')
expect(r.effects).toHaveLength(2)
expect(r.effects[0].type).toBe('CREATE_CLAUDE_QUESTION')
expect(r.effects[1].type).toBe('SET_SPRINT_RUN_STATUS')
if (r.effects[1].type === 'SET_SPRINT_RUN_STATUS') {
expect(r.effects[1].status).toBe('PAUSED')
expect(r.effects[1].pauseContextDraft).toMatchObject({
pause_reason: 'MERGE_CONFLICT',
pr_url: 'https://github.com/o/r/pull/1',
pr_head_sha: 'abc123',
conflict_files: ['a.ts', 'b.ts'],
})
}
})
it('paused + USER_RESUMED → running + CLOSE_CLAUDE_QUESTION + clear pause_context', () => {
const state: SprintRunState = {
kind: 'paused_merge_conflict',
sprintRunId: 'sr1',
pauseContext: {
pause_reason: 'MERGE_CONFLICT',
pr_url: 'x',
pr_head_sha: 'y',
conflict_files: [],
claude_question_id: 'q1',
resume_instructions: 'r',
paused_at: new Date().toISOString(),
},
}
const r = transition(state, { type: 'USER_RESUMED' })
expect(r.nextState.kind).toBe('running')
expect(r.effects[0]).toEqual({ type: 'CLOSE_CLAUDE_QUESTION', questionId: 'q1' })
expect(r.effects[1]).toMatchObject({
type: 'SET_SPRINT_RUN_STATUS',
status: 'RUNNING',
clearPauseContext: true,
})
})
it('running + TASK_FAILED → failed (no PAUSE)', () => {
const state: SprintRunState = { kind: 'running', sprintRunId: 'sr1' }
const r = transition(state, { type: 'TASK_FAILED', taskId: 't1', error: 'CI red' })
expect(r.nextState.kind).toBe('failed')
expect(r.effects[0]).toMatchObject({ status: 'FAILED' })
})
it('running + ALL_DONE → done + SET_SPRINT_RUN_STATUS DONE', () => {
const state: SprintRunState = { kind: 'running', sprintRunId: 'sr1' }
const r = transition(state, { type: 'ALL_DONE' })
expect(r.nextState.kind).toBe('done')
expect(r.effects[0]).toMatchObject({ status: 'DONE' })
})
it('forbidden transition (running + CLAIM_FIRST_JOB) keeps state and emits no effects', () => {
const state: SprintRunState = { kind: 'running', sprintRunId: 'sr1' }
const r = transition(state, { type: 'CLAIM_FIRST_JOB' })
expect(r.nextState).toEqual(state)
expect(r.effects).toEqual([])
})
})
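The forbidden-transition test fixes the machine's default behaviour: an unhandled (state, event) pair returns the same state with no effects instead of throwing, so callers never wrap `transition()` in try/catch. A reduced sketch of that pattern (two states only; types here are hypothetical, not the real `SprintRunState`):

```typescript
// Reduced sketch of the "ignore forbidden transitions" pattern: every
// (state, event) pair not explicitly handled keeps the state and emits nothing.
type State = { kind: 'queued' } | { kind: 'running' }
type Event = { type: 'CLAIM_FIRST_JOB' } | { type: 'ALL_DONE' }
type Effect = { type: 'SET_STATUS'; status: string }

function transition(state: State, event: Event): { nextState: State; effects: Effect[] } {
  if (state.kind === 'queued' && event.type === 'CLAIM_FIRST_JOB') {
    return { nextState: { kind: 'running' }, effects: [{ type: 'SET_STATUS', status: 'RUNNING' }] }
  }
  if (state.kind === 'running' && event.type === 'ALL_DONE') {
    return { nextState: { kind: 'queued' }, effects: [{ type: 'SET_STATUS', status: 'DONE' }] }
  }
  // Default: forbidden transition — keep state, emit no effects.
  return { nextState: state, effects: [] }
}
```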


@@ -0,0 +1,82 @@
import { describe, it, expect } from 'vitest'
import { transition, type WorktreeLeaseState } from '../../src/flow/worktree-lease.js'
describe('worktree-lease pure transitions', () => {
it('idle + JOB_CLAIMED → acquiring_lock, no effects', () => {
const r = transition({ kind: 'idle' }, { type: 'JOB_CLAIMED', jobId: 'j1', productIds: ['p1'] })
expect(r.nextState.kind).toBe('acquiring_lock')
expect(r.effects).toEqual([])
})
it('acquiring_lock + LOCK_ACQUIRED → creating_or_reusing', () => {
const state: WorktreeLeaseState = {
kind: 'acquiring_lock',
jobId: 'j1',
productIds: ['p1'],
}
const r = transition(state, { type: 'LOCK_ACQUIRED' })
expect(r.nextState.kind).toBe('creating_or_reusing')
expect(r.effects).toEqual([])
})
it('acquiring_lock + LOCK_TIMEOUT → lock_timeout', () => {
const state: WorktreeLeaseState = {
kind: 'acquiring_lock',
jobId: 'j1',
productIds: ['p1'],
}
const r = transition(state, { type: 'LOCK_TIMEOUT' })
expect(r.nextState.kind).toBe('lock_timeout')
})
it('creating_or_reusing + WORKTREE_READY → syncing', () => {
const r = transition(
{ kind: 'creating_or_reusing', jobId: 'j1', productIds: ['p1'] },
{ type: 'WORKTREE_READY' },
)
expect(r.nextState.kind).toBe('syncing')
})
it('syncing + SYNC_DONE → ready (no release effect yet)', () => {
const r = transition(
{ kind: 'syncing', jobId: 'j1', productIds: ['p1'] },
{ type: 'SYNC_DONE' },
)
expect(r.nextState.kind).toBe('ready')
expect(r.effects).toEqual([])
})
it('syncing + SYNC_FAILED → sync_failed + RELEASE_WORKTREE_LOCKS effect', () => {
const r = transition(
{ kind: 'syncing', jobId: 'j1', productIds: ['p1'] },
{ type: 'SYNC_FAILED', error: 'boom' },
)
expect(r.nextState.kind).toBe('sync_failed')
expect(r.effects).toEqual([{ type: 'RELEASE_WORKTREE_LOCKS', jobId: 'j1' }])
})
it('ready + JOB_TERMINAL → releasing + RELEASE_WORKTREE_LOCKS effect', () => {
const r = transition(
{ kind: 'ready', jobId: 'j1', productIds: ['p1'] },
{ type: 'JOB_TERMINAL', jobId: 'j1' },
)
expect(r.nextState.kind).toBe('releasing')
expect(r.effects).toEqual([{ type: 'RELEASE_WORKTREE_LOCKS', jobId: 'j1' }])
})
it('ready + STALE_RESET → stale_released + RELEASE_WORKTREE_LOCKS effect', () => {
const r = transition(
{ kind: 'ready', jobId: 'j1', productIds: ['p1'] },
{ type: 'STALE_RESET', jobId: 'j1' },
)
expect(r.nextState.kind).toBe('stale_released')
expect(r.effects).toEqual([{ type: 'RELEASE_WORKTREE_LOCKS', jobId: 'j1' }])
})
it('forbidden transition (idle + LOCK_ACQUIRED) keeps state, no effects', () => {
const state: WorktreeLeaseState = { kind: 'idle' }
const r = transition(state, { type: 'LOCK_ACQUIRED' })
expect(r.nextState).toEqual(state)
expect(r.effects).toEqual([])
})
})


@@ -4,12 +4,12 @@ const {
mockProductFindFirst,
mockSprintFindFirst,
mockStoryFindFirst,
mockIdeaFindMany,
} = vi.hoisted(() => ({
mockProductFindFirst: vi.fn(),
mockSprintFindFirst: vi.fn(),
mockStoryFindFirst: vi.fn(),
mockIdeaFindMany: vi.fn(),
}))
vi.mock('../src/auth.js', () => ({
@@ -21,7 +21,7 @@ vi.mock('../src/prisma.js', () => ({
product: { findFirst: mockProductFindFirst },
sprint: { findFirst: mockSprintFindFirst },
story: { findFirst: mockStoryFindFirst },
idea: { findMany: mockIdeaFindMany },
},
}))
@@ -55,7 +55,7 @@ beforeEach(() => {
})
mockSprintFindFirst.mockResolvedValue({ id: 'sprint-1', sprint_goal: 'Goal', status: 'ACTIVE' })
mockStoryFindFirst.mockResolvedValue(null)
mockIdeaFindMany.mockResolvedValue([])
})
describe('get_claude_context safety-net filter', () => {


@@ -0,0 +1,96 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest'
import * as fs from 'node:fs/promises'
import * as os from 'node:os'
import * as path from 'node:path'
import { acquireFileLock, acquireFileLocksOrdered } from '../../src/git/file-lock.js'
describe('file-lock', () => {
let tmpDir: string
beforeEach(async () => {
tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), 'file-lock-'))
})
afterEach(async () => {
await fs.rm(tmpDir, { recursive: true, force: true })
})
it('acquires and releases a lock; lockfile is gone after release', async () => {
const lockPath = path.join(tmpDir, 'a.lock')
const release = await acquireFileLock(lockPath)
// proper-lockfile creates a directory at <lockPath>.lock for the actual lock
const stat = await fs.stat(`${lockPath}.lock`).catch(() => null)
expect(stat).not.toBeNull()
await release()
// After release, the .lock dir should be gone
const after = await fs.stat(`${lockPath}.lock`).catch(() => null)
expect(after).toBeNull()
})
it('release is idempotent (second call is no-op)', async () => {
const lockPath = path.join(tmpDir, 'b.lock')
const release = await acquireFileLock(lockPath)
await release()
await expect(release()).resolves.toBeUndefined()
})
it('second acquire blocks until first release', async () => {
const lockPath = path.join(tmpDir, 'c.lock')
const release1 = await acquireFileLock(lockPath)
let secondAcquired = false
const second = acquireFileLock(lockPath).then((r) => {
secondAcquired = true
return r
})
// Give the second acquire a moment to attempt
await new Promise((r) => setTimeout(r, 200))
expect(secondAcquired).toBe(false)
await release1()
const release2 = await second
expect(secondAcquired).toBe(true)
await release2()
}, 10_000)
it('acquireFileLocksOrdered sorts paths alphabetically (deadlock-free for crossed sets)', async () => {
const a = path.join(tmpDir, 'A.lock')
const b = path.join(tmpDir, 'B.lock')
// Two concurrent multi-locks with crossed orders both sort to [A, B]
const r1Promise = acquireFileLocksOrdered([b, a])
// First should grab both since paths sort the same
const r1 = await r1Promise
let secondAcquired = false
const r2Promise = acquireFileLocksOrdered([a, b]).then((r) => {
secondAcquired = true
return r
})
await new Promise((r) => setTimeout(r, 200))
expect(secondAcquired).toBe(false)
await r1()
const r2 = await r2Promise
expect(secondAcquired).toBe(true)
await r2()
}, 15_000)
it('partial failure releases held locks', async () => {
// Force the second acquire to fail by writing a regular file at the lockfile
// location proper-lockfile wants to create as a directory.
const a = path.join(tmpDir, 'A.lock')
const bPath = path.join(tmpDir, 'B.lock')
// Create a regular file at `${bPath}.lock` so proper-lockfile's mkdir fails with EEXIST
await fs.writeFile(`${bPath}.lock`, 'blocked')
await expect(acquireFileLocksOrdered([a, bPath])).rejects.toThrow()
// After failure, A's lock should be released — re-acquire immediately
const r = await acquireFileLock(a)
await r()
}, 90_000)
})
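The ordered-lock tests above rest on two rules: sort the paths so concurrent callers with crossed lock sets acquire in one global order (no deadlock), and roll back already-held locks when a later acquire fails. A minimal sketch under those assumptions (hypothetical signature; the `proper-lockfile` details are abstracted behind the injected `acquire`):

```typescript
// Hypothetical sketch of acquireFileLocksOrdered's core: sort, acquire in
// order, and release everything already held if any acquire fails.
type Release = () => Promise<void>

async function acquireOrdered(
  paths: string[],
  acquire: (p: string) => Promise<Release>,
): Promise<Release> {
  const held: Release[] = []
  for (const p of [...paths].sort()) {
    try {
      held.push(await acquire(p))
    } catch (err) {
      // Partial failure: release in reverse acquisition order, then rethrow.
      for (const r of held.reverse()) await r().catch(() => {})
      throw err
    }
  }
  // Combined release, also in reverse acquisition order.
  return async () => {
    for (const r of held.reverse()) await r()
  }
}
```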


@@ -0,0 +1,136 @@
import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest'
import * as fs from 'node:fs/promises'
import * as os from 'node:os'
import * as path from 'node:path'
import { execFile } from 'node:child_process'
import { promisify } from 'node:util'
import {
registerJobLockReleases,
releaseLocksOnTerminal,
setupProductWorktrees,
_resetJobReleasesForTest,
} from '../../src/git/job-locks.js'
const exec = promisify(execFile)
describe('job-locks: registerJobLockReleases + releaseLocksOnTerminal', () => {
beforeEach(() => _resetJobReleasesForTest())
it('releaseLocksOnTerminal for unknown job is a no-op', async () => {
await expect(releaseLocksOnTerminal('nonexistent')).resolves.toBeUndefined()
})
it('runs registered releases and clears the entry', async () => {
const release = vi.fn().mockResolvedValue(undefined)
registerJobLockReleases('job-1', [release])
await releaseLocksOnTerminal('job-1')
expect(release).toHaveBeenCalledTimes(1)
// Second call → no-op (cleared)
await releaseLocksOnTerminal('job-1')
expect(release).toHaveBeenCalledTimes(1)
})
it('failures in one release do not abort others', async () => {
const r1 = vi.fn().mockRejectedValue(new Error('boom'))
const r2 = vi.fn().mockResolvedValue(undefined)
registerJobLockReleases('job-2', [r1, r2])
await expect(releaseLocksOnTerminal('job-2')).resolves.toBeUndefined()
expect(r1).toHaveBeenCalled()
expect(r2).toHaveBeenCalled()
})
it('append-mode: multiple registers accumulate', async () => {
const r1 = vi.fn().mockResolvedValue(undefined)
const r2 = vi.fn().mockResolvedValue(undefined)
registerJobLockReleases('job-3', [r1])
registerJobLockReleases('job-3', [r2])
await releaseLocksOnTerminal('job-3')
expect(r1).toHaveBeenCalledTimes(1)
expect(r2).toHaveBeenCalledTimes(1)
})
})
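The registry behaviour tested above (registrations append, terminal release runs all and clears, individual failures are swallowed) can be sketched as a standalone version of the module state (hypothetical names; the real module also wires this into job lifecycle hooks):

```typescript
// Hypothetical standalone sketch of the per-job lock-release registry.
const releasesByJob = new Map<string, Array<() => Promise<void>>>()

function register(jobId: string, releases: Array<() => Promise<void>>): void {
  // Append-mode: later registrations accumulate instead of replacing.
  const existing = releasesByJob.get(jobId) ?? []
  releasesByJob.set(jobId, [...existing, ...releases])
}

async function releaseAll(jobId: string): Promise<void> {
  const releases = releasesByJob.get(jobId)
  if (!releases) return // unknown or already-released job: no-op
  releasesByJob.delete(jobId) // clear first so a second call is a no-op
  for (const release of releases) {
    await release().catch(() => {}) // one failure must not abort the rest
  }
}
```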
describe('job-locks: setupProductWorktrees', () => {
let tmpRoot: string
let originalEnv: string | undefined
let bareRepo: string
let originRepo: string
beforeEach(async () => {
_resetJobReleasesForTest()
tmpRoot = await fs.mkdtemp(path.join(os.tmpdir(), 'job-locks-'))
originalEnv = process.env.SCRUM4ME_AGENT_WORKTREE_DIR
process.env.SCRUM4ME_AGENT_WORKTREE_DIR = path.join(tmpRoot, 'agent-worktrees')
// Set up a bare repo as origin and a clone with origin/main
bareRepo = path.join(tmpRoot, 'origin.git')
await exec('git', ['init', '--bare', '-b', 'main', bareRepo])
originRepo = path.join(tmpRoot, 'work')
await exec('git', ['init', '-b', 'main', originRepo])
await exec('git', ['config', 'user.email', 't@t.local'], { cwd: originRepo })
await exec('git', ['config', 'user.name', 'Test'], { cwd: originRepo })
await exec('git', ['remote', 'add', 'origin', bareRepo], { cwd: originRepo })
await fs.writeFile(path.join(originRepo, 'README.md'), '# init\n')
await exec('git', ['add', '-A'], { cwd: originRepo })
await exec('git', ['commit', '-m', 'init'], { cwd: originRepo })
await exec('git', ['push', '-u', 'origin', 'main'], { cwd: originRepo })
})
afterEach(async () => {
if (originalEnv) process.env.SCRUM4ME_AGENT_WORKTREE_DIR = originalEnv
else delete process.env.SCRUM4ME_AGENT_WORKTREE_DIR
await fs.rm(tmpRoot, { recursive: true, force: true })
})
it('returns empty when productIds is empty', async () => {
const result = await setupProductWorktrees('j1', [], async () => null)
expect(result).toEqual([])
})
it('creates a product-worktree, registers a lock-release, and releases it', async () => {
const result = await setupProductWorktrees('j2', ['prod-a'], async () => originRepo)
expect(result).toHaveLength(1)
expect(result[0].productId).toBe('prod-a')
expect(result[0].worktreePath).toContain('_products/prod-a')
// Worktree dir exists with detached HEAD on origin/main
const stat = await fs.stat(result[0].worktreePath)
expect(stat.isDirectory()).toBe(true)
// Lockfile is held during the job (proper-lockfile creates a .lock dir)
const lockDir = path.join(
process.env.SCRUM4ME_AGENT_WORKTREE_DIR!,
'_products',
'prod-a.lock.lock',
)
const lockStat = await fs.stat(lockDir).catch(() => null)
expect(lockStat).not.toBeNull()
await releaseLocksOnTerminal('j2')
const lockAfter = await fs.stat(lockDir).catch(() => null)
expect(lockAfter).toBeNull()
})
it('skips products where resolveRepoRoot returns null', async () => {
const result = await setupProductWorktrees('j3', ['no-repo'], async () => null)
expect(result).toEqual([])
// Lock was still acquired and registered — release cleans up
await releaseLocksOnTerminal('j3')
})
it('output preserves input order regardless of alphabetical lock-acquire order', async () => {
// 'z-primary' sorts AFTER 'a-secondary' alphabetically, but caller passes
// primary first → output[0] must be 'z-primary' so wait_for_job's
// primary_worktree_path = worktrees[0]?.worktreePath points at the right repo.
const result = await setupProductWorktrees(
'j4',
['z-primary', 'a-secondary'],
async () => originRepo,
)
expect(result).toHaveLength(2)
expect(result[0].productId).toBe('z-primary')
expect(result[1].productId).toBe('a-secondary')
await releaseLocksOnTerminal('j4')
})
})
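The ordering contract the last test pins down can be sketched as follows. `setupInInputOrder` and `setupOne` are hypothetical names, assuming (as the test title suggests) that the real `setupProductWorktrees` acquires locks in sorted order but reports results in the caller's input order:

```typescript
// Hypothetical sketch of the ordering contract the tests above exercise:
// acquire per-product locks in sorted order (stable, deadlock-avoiding),
// but return results in input order so index 0 stays the primary product.
type Worktree = { productId: string; worktreePath: string }

async function setupInInputOrder(
  productIds: string[],
  setupOne: (id: string) => Promise<string>,
): Promise<Worktree[]> {
  const pathById = new Map<string, string>()
  // Sorted acquisition order, independent of how the caller listed products.
  for (const id of [...productIds].sort()) {
    pathById.set(id, await setupOne(id))
  }
  // Emit in input order: worktrees[0] must match the first requested product.
  return productIds.map((productId) => ({
    productId,
    worktreePath: pathById.get(productId)!,
  }))
}
```

This mirrors the 'z-primary' / 'a-secondary' case: the lock for 'a-secondary' is taken first, yet the result array still leads with 'z-primary'.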


@@ -0,0 +1,75 @@
import { describe, it, expect, vi, beforeEach } from 'vitest'
// Mock node:child_process before importing the module under test
vi.mock('node:child_process', () => ({
execFile: vi.fn(),
}))
import { execFile } from 'node:child_process'
import { enableAutoMergeOnPr } from '../../src/git/pr.js'
const mockExecFile = vi.mocked(execFile) as unknown as ReturnType<typeof vi.fn>
function mockGhFailure(stderr: string) {
mockExecFile.mockImplementation(((_cmd: string, _args: string[], _opts: unknown, cb: any) => {
cb(Object.assign(new Error('gh exit'), { stderr }))
}) as never)
}
function mockGhSuccess() {
mockExecFile.mockImplementation(((_cmd: string, _args: string[], _opts: unknown, cb: any) => {
cb(null, { stdout: '', stderr: '' })
}) as never)
}
describe('enableAutoMergeOnPr — typed errors (PBI-47 C2 layer 1)', () => {
beforeEach(() => {
mockExecFile.mockReset()
})
it('returns ok:true on green merge', async () => {
mockGhSuccess()
const result = await enableAutoMergeOnPr({ prUrl: 'x', expectedHeadSha: 'sha1' })
expect(result.ok).toBe(true)
})
it('classifies GH_AUTH_ERROR for 401/403 / permission strings', async () => {
mockGhFailure('gh: HTTP 403: permission denied')
const result = await enableAutoMergeOnPr({ prUrl: 'x', expectedHeadSha: 'sha1' })
expect(result.ok).toBe(false)
if (!result.ok) expect(result.reason).toBe('GH_AUTH_ERROR')
})
it('classifies AUTO_MERGE_NOT_ALLOWED for repo-setting refusal', async () => {
mockGhFailure('auto-merge is not allowed for this repository')
const result = await enableAutoMergeOnPr({ prUrl: 'x', expectedHeadSha: 'sha1' })
expect(result.ok).toBe(false)
if (!result.ok) expect(result.reason).toBe('AUTO_MERGE_NOT_ALLOWED')
})
it('classifies MERGE_CONFLICT for dirty merge state', async () => {
mockGhFailure('pull request is not in a mergeable state (dirty)')
const result = await enableAutoMergeOnPr({ prUrl: 'x', expectedHeadSha: 'sha1' })
expect(result.ok).toBe(false)
if (!result.ok) expect(result.reason).toBe('MERGE_CONFLICT')
})
it('classifies UNKNOWN for unrecognised stderr', async () => {
mockGhFailure('unexpected gh error')
const result = await enableAutoMergeOnPr({ prUrl: 'x', expectedHeadSha: 'sha1' })
expect(result.ok).toBe(false)
if (!result.ok) expect(result.reason).toBe('UNKNOWN')
})
it('passes --match-head-commit when expectedHeadSha provided', async () => {
mockGhSuccess()
await enableAutoMergeOnPr({ prUrl: 'pr-url', expectedHeadSha: 'abc123' })
const callArgs = mockExecFile.mock.calls[0]
expect(callArgs[0]).toBe('gh')
const args = callArgs[1] as string[]
expect(args).toContain('--match-head-commit')
expect(args).toContain('abc123')
expect(args).toContain('--auto')
expect(args).toContain('--squash')
})
})
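The classification asserted above can be condensed into a small matcher. `classifyGhStderr` is an illustrative stand-in, not the actual code in `src/git/pr.ts`; the patterns are assumptions derived from the fixtures in these tests:

```typescript
type AutoMergeFailureReason =
  | 'GH_AUTH_ERROR'
  | 'AUTO_MERGE_NOT_ALLOWED'
  | 'MERGE_CONFLICT'
  | 'UNKNOWN'

// Map gh's stderr to a typed reason, checked in rough specificity order.
function classifyGhStderr(stderr: string): AutoMergeFailureReason {
  const s = stderr.toLowerCase()
  if (/http 40[13]|permission denied/.test(s)) return 'GH_AUTH_ERROR'
  if (s.includes('auto-merge is not allowed')) return 'AUTO_MERGE_NOT_ALLOWED'
  if (s.includes('not in a mergeable state') || s.includes('(dirty)')) return 'MERGE_CONFLICT'
  return 'UNKNOWN'
}
```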


@@ -12,7 +12,7 @@ vi.mock('node:util', () => ({
),
}))
import { createPullRequest, markPullRequestReady } from '../../src/git/pr.js'
beforeEach(() => {
vi.clearAllMocks()
@@ -66,4 +66,80 @@ describe('createPullRequest', () => {
expect(result).toMatchObject({ error: expect.stringContaining('gh pr create failed') })
})
it('passes --draft when draft=true and skips auto-merge', async () => {
const calls: string[][] = []
mockExecFile.mockImplementation(
(
_cmd: string,
args: string[],
_opts: unknown,
cb: (err: null, res: { stdout: string; stderr: string }) => void,
) => {
calls.push(args)
cb(null, {
stdout: 'Creating draft pull request...\nhttps://github.com/org/repo/pull/100\n',
stderr: '',
})
},
)
const result = await createPullRequest({
worktreePath: '/wt/sprint-1',
branchName: 'feat/sprint-12345678',
title: 'Sprint: Cascade-flow live',
body: 'Sprint draft',
draft: true,
enableAutoMerge: false,
})
expect(result).toEqual({ url: 'https://github.com/org/repo/pull/100' })
expect(calls.some((a) => a.includes('--draft'))).toBe(true)
// gh pr merge --auto must NOT have been started for draft + auto-merge=false
expect(calls.some((a) => a[0] === 'pr' && a[1] === 'merge')).toBe(false)
})
})
describe('markPullRequestReady', () => {
it('calls gh pr ready with the PR URL', async () => {
const calls: string[][] = []
mockExecFile.mockImplementation(
(
_cmd: string,
args: string[],
_opts: unknown,
cb: (err: null, res: { stdout: string; stderr: string }) => void,
) => {
calls.push(args)
cb(null, { stdout: '', stderr: '' })
},
)
const result = await markPullRequestReady({ prUrl: 'https://github.com/org/repo/pull/100' })
expect(result).toEqual({ ok: true })
expect(calls[0]).toEqual(['pr', 'ready', 'https://github.com/org/repo/pull/100'])
})
it('treats "already ready" as success', async () => {
mockExecFile.mockImplementation(
(_cmd: string, _args: string[], _opts: unknown, cb: (err: Error) => void) =>
cb(Object.assign(new Error(''), { stderr: 'Pull request is not in draft state' })),
)
const result = await markPullRequestReady({ prUrl: 'https://github.com/org/repo/pull/100' })
expect(result).toEqual({ ok: true })
})
it('returns an error on an unexpected gh failure', async () => {
mockExecFile.mockImplementation(
(_cmd: string, _args: string[], _opts: unknown, cb: (err: Error) => void) =>
cb(new Error('rate limit exceeded')),
)
const result = await markPullRequestReady({ prUrl: 'https://github.com/org/repo/pull/100' })
expect(result).toMatchObject({ error: expect.stringContaining('gh pr ready failed') })
})
})


@@ -113,6 +113,71 @@ describe('createWorktreeForJob', () => {
}),
).rejects.toThrow('Worktree path already exists')
})
it('reuseBranch: reuses an existing local branch', async () => {
const { repoDir, originDir } = await setupRepo()
tmpDirs.push(repoDir, originDir)
await makeWorktreeParent()
// Sibling already created the branch locally.
await git(['branch', 'feat/sprint-abc', 'origin/main'], repoDir)
const result = await createWorktreeForJob({
repoRoot: repoDir,
jobId: 'job-reuse-local',
branchName: 'feat/sprint-abc',
baseRef: 'origin/main',
reuseBranch: true,
})
const { stdout } = await git(['rev-parse', '--abbrev-ref', 'HEAD'], result.worktreePath)
expect(stdout.trim()).toBe('feat/sprint-abc')
expect(result.branchName).toBe('feat/sprint-abc')
})
it('reuseBranch: recreates a local branch from origin when only the remote has it', async () => {
const { repoDir, originDir } = await setupRepo()
tmpDirs.push(repoDir, originDir)
await makeWorktreeParent()
// Branch exists on origin (a sibling pushed it, or the container was
// recreated and the local clone is fresh) but not as a local branch.
await git(['branch', 'feat/sprint-xyz', 'origin/main'], repoDir)
await git(['push', 'origin', 'feat/sprint-xyz'], repoDir)
await git(['branch', '-D', 'feat/sprint-xyz'], repoDir)
const result = await createWorktreeForJob({
repoRoot: repoDir,
jobId: 'job-reuse-origin',
branchName: 'feat/sprint-xyz',
baseRef: 'origin/main',
reuseBranch: true,
})
const { stdout } = await git(['rev-parse', '--abbrev-ref', 'HEAD'], result.worktreePath)
expect(stdout.trim()).toBe('feat/sprint-xyz')
})
it('reuseBranch: falls back to a fresh branch when it exists nowhere (cross-repo sprint)', async () => {
const { repoDir, originDir } = await setupRepo()
tmpDirs.push(repoDir, originDir)
await makeWorktreeParent()
// reuseBranch is decided sprint-wide; for the first job targeting THIS
// repo the branch exists neither locally nor on origin. Must not throw
// "invalid reference" — should create it fresh from baseRef.
const result = await createWorktreeForJob({
repoRoot: repoDir,
jobId: 'job-reuse-fresh',
branchName: 'feat/sprint-newrepo',
baseRef: 'origin/main',
reuseBranch: true,
})
const { stdout } = await git(['rev-parse', '--abbrev-ref', 'HEAD'], result.worktreePath)
expect(stdout.trim()).toBe('feat/sprint-newrepo')
expect(result.branchName).toBe('feat/sprint-newrepo')
})
})
describe('removeWorktreeForJob', () => {


@@ -0,0 +1,166 @@
import { describe, it, expect } from 'vitest'
import { getKindDefault, resolveJobConfig, mapBudgetToEffort } from '../src/lib/job-config.js'
const KIND_EXPECTED = {
IDEA_GRILL: { model: 'claude-sonnet-4-6', thinking_budget: 12000, permission_mode: 'acceptEdits', max_turns: 15 },
IDEA_MAKE_PLAN: { model: 'claude-opus-4-7', thinking_budget: 24000, permission_mode: 'acceptEdits', max_turns: 20 },
PLAN_CHAT: { model: 'claude-sonnet-4-6', thinking_budget: 6000, permission_mode: 'acceptEdits', max_turns: 5 },
TASK_IMPLEMENTATION: { model: 'claude-sonnet-4-6', thinking_budget: 6000, permission_mode: 'bypassPermissions', max_turns: 50 },
SPRINT_IMPLEMENTATION: { model: 'claude-sonnet-4-6', thinking_budget: 6000, permission_mode: 'bypassPermissions', max_turns: null },
} as const
describe('getKindDefault', () => {
for (const [kind, expected] of Object.entries(KIND_EXPECTED)) {
it(`returns the correct defaults for ${kind}`, () => {
const cfg = getKindDefault(kind)
expect(cfg.model).toBe(expected.model)
expect(cfg.thinking_budget).toBe(expected.thinking_budget)
expect(cfg.permission_mode).toBe(expected.permission_mode)
expect(cfg.max_turns).toBe(expected.max_turns)
})
}
it('falls back to a safe default for unknown kinds', () => {
const cfg = getKindDefault('SOMETHING_NEW')
expect(cfg.model).toBe('claude-sonnet-4-6')
expect(cfg.permission_mode).toBe('default')
})
})
describe('resolveJobConfig — no overrides', () => {
for (const kind of Object.keys(KIND_EXPECTED)) {
it(`returns the kind default for ${kind} without overrides`, () => {
const cfg = resolveJobConfig({ kind }, {})
expect(cfg).toEqual(getKindDefault(kind))
})
}
})
describe('resolveJobConfig — cascade', () => {
it('product.preferred_model overrides the kind default', () => {
const cfg = resolveJobConfig({ kind: 'TASK_IMPLEMENTATION' }, { preferred_model: 'claude-haiku-4-5-20251001' })
expect(cfg.model).toBe('claude-haiku-4-5-20251001')
})
it('job.requested_model overrides product.preferred_model', () => {
const cfg = resolveJobConfig(
{ kind: 'TASK_IMPLEMENTATION', requested_model: 'claude-opus-4-7' },
{ preferred_model: 'claude-haiku-4-5-20251001' },
)
expect(cfg.model).toBe('claude-opus-4-7')
})
it('task.requires_opus overrides product.preferred_model', () => {
const cfg = resolveJobConfig(
{ kind: 'TASK_IMPLEMENTATION' },
{ preferred_model: 'claude-sonnet-4-6' },
{ requires_opus: true },
)
expect(cfg.model).toBe('claude-opus-4-7')
})
it('task.requires_opus also overrides job.requested_model = haiku', () => {
const cfg = resolveJobConfig(
{ kind: 'TASK_IMPLEMENTATION', requested_model: 'claude-haiku-4-5-20251001' },
{},
{ requires_opus: true },
)
expect(cfg.model).toBe('claude-opus-4-7')
})
it('job.requested_thinking_budget overrides the kind default', () => {
const cfg = resolveJobConfig({ kind: 'PLAN_CHAT', requested_thinking_budget: 1024 }, {})
expect(cfg.thinking_budget).toBe(1024)
})
it('product.thinking_budget_default overrides the kind default', () => {
const cfg = resolveJobConfig({ kind: 'IDEA_GRILL' }, { thinking_budget_default: 0 })
expect(cfg.thinking_budget).toBe(0)
})
it('product.preferred_permission_mode = acceptEdits overrides bypassPermissions for TASK_IMPLEMENTATION', () => {
const cfg = resolveJobConfig(
{ kind: 'TASK_IMPLEMENTATION' },
{ preferred_permission_mode: 'acceptEdits' },
)
expect(cfg.permission_mode).toBe('acceptEdits')
})
it('max_turns stays at the kind default even with product and job overrides (no V1 cascade)', () => {
const cfg = resolveJobConfig(
{ kind: 'IDEA_GRILL', requested_model: 'claude-haiku-4-5-20251001' },
{ preferred_model: 'claude-sonnet-4-6' },
)
expect(cfg.max_turns).toBe(15)
})
})
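The precedence these cascade tests establish for the model field can be summarized in a few lines. `resolveModelSketch` is a hypothetical condensation, not the exported `resolveJobConfig`:

```typescript
// Precedence (highest wins): task.requires_opus > job.requested_model
// > product.preferred_model > kind default. max_turns has no such cascade.
function resolveModelSketch(
  kindDefault: string,
  job: { requested_model?: string },
  product: { preferred_model?: string },
  task?: { requires_opus?: boolean },
): string {
  if (task?.requires_opus) return 'claude-opus-4-7'
  return job.requested_model ?? product.preferred_model ?? kindDefault
}
```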
describe('KIND_DEFAULTS.allowed_tools', () => {
it('TASK_IMPLEMENTATION contains no claim tools', () => {
const cfg = getKindDefault('TASK_IMPLEMENTATION')
expect(cfg.allowed_tools).not.toContain('mcp__scrum4me__wait_for_job')
expect(cfg.allowed_tools).not.toContain('mcp__scrum4me__check_queue_empty')
expect(cfg.allowed_tools).not.toContain('mcp__scrum4me__get_idea_context')
})
it('TASK_IMPLEMENTATION contains the essential task tools', () => {
const cfg = getKindDefault('TASK_IMPLEMENTATION')
expect(cfg.allowed_tools).toContain('mcp__scrum4me__update_task_status')
expect(cfg.allowed_tools).toContain('mcp__scrum4me__update_job_status')
expect(cfg.allowed_tools).toContain('mcp__scrum4me__verify_task_against_plan')
expect(cfg.allowed_tools).toContain('Bash')
expect(cfg.allowed_tools).toContain('Edit')
expect(cfg.allowed_tools).toContain('Write')
})
it('SPRINT_IMPLEMENTATION contains sprint-specific tools but NOT job_heartbeat (the runner does that)', () => {
const cfg = getKindDefault('SPRINT_IMPLEMENTATION')
expect(cfg.allowed_tools).toContain('mcp__scrum4me__update_task_execution')
expect(cfg.allowed_tools).toContain('mcp__scrum4me__verify_sprint_task')
expect(cfg.allowed_tools).not.toContain('mcp__scrum4me__job_heartbeat')
})
it('IDEA_GRILL contains update_idea_grill_md and no wait_for_job', () => {
const cfg = getKindDefault('IDEA_GRILL')
expect(cfg.allowed_tools).toContain('mcp__scrum4me__update_idea_grill_md')
expect(cfg.allowed_tools).toContain('mcp__scrum4me__log_idea_decision')
expect(cfg.allowed_tools).toContain('mcp__scrum4me__update_job_status')
expect(cfg.allowed_tools).not.toContain('mcp__scrum4me__wait_for_job')
})
it('IDEA_MAKE_PLAN contains update_idea_plan_md and no wait_for_job', () => {
const cfg = getKindDefault('IDEA_MAKE_PLAN')
expect(cfg.allowed_tools).toContain('mcp__scrum4me__update_idea_plan_md')
expect(cfg.allowed_tools).toContain('mcp__scrum4me__log_idea_decision')
expect(cfg.allowed_tools).not.toContain('mcp__scrum4me__wait_for_job')
})
it('all kinds have non-null allowed_tools', () => {
for (const kind of ['IDEA_GRILL', 'IDEA_MAKE_PLAN', 'PLAN_CHAT', 'TASK_IMPLEMENTATION', 'SPRINT_IMPLEMENTATION']) {
const cfg = getKindDefault(kind)
expect(cfg.allowed_tools).not.toBeNull()
expect(Array.isArray(cfg.allowed_tools)).toBe(true)
}
})
})
describe('mapBudgetToEffort', () => {
it.each([
[0, null],
[-1, null],
[1, 'medium'],
[3000, 'medium'],
[6000, 'medium'],
[6001, 'high'],
[9000, 'high'],
[12000, 'high'],
[12001, 'xhigh'],
[18000, 'xhigh'],
[24000, 'xhigh'],
[24001, 'max'],
[50000, 'max'],
[100000, 'max'],
])('budget %i → %s', (budget, expected) => {
expect(mapBudgetToEffort(budget)).toBe(expected)
})
})
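The threshold table above implies a simple step function. This sketch restates it for readability; the real `mapBudgetToEffort` lives in `src/lib/job-config.ts`:

```typescript
type Effort = 'medium' | 'high' | 'xhigh' | 'max' | null

// Step thresholds taken directly from the it.each table above.
function mapBudgetToEffortSketch(budget: number): Effort {
  if (budget <= 0) return null // zero/negative budget: no effort hint
  if (budget <= 6000) return 'medium'
  if (budget <= 12000) return 'high'
  if (budget <= 24000) return 'xhigh'
  return 'max'
}
```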


@@ -0,0 +1,137 @@
import { describe, it, expect, vi, beforeEach } from 'vitest'
vi.mock('../src/prisma.js', () => ({
prisma: {
$queryRaw: vi.fn(),
sprintRun: { findUnique: vi.fn() },
},
}))
vi.mock('../src/auth.js', async (importOriginal) => {
const original = await importOriginal<typeof import('../src/auth.js')>()
return { ...original, requireWriteAccess: vi.fn() }
})
import { prisma } from '../src/prisma.js'
import { requireWriteAccess } from '../src/auth.js'
import { registerJobHeartbeatTool } from '../src/tools/job-heartbeat.js'
import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
const mockPrisma = prisma as unknown as {
$queryRaw: ReturnType<typeof vi.fn>
sprintRun: { findUnique: ReturnType<typeof vi.fn> }
}
const mockAuth = requireWriteAccess as ReturnType<typeof vi.fn>
const TOKEN_ID = 'tok-owner'
function makeServer() {
let handler: (args: Record<string, unknown>) => Promise<unknown>
const server = {
registerTool: vi.fn((_name: string, _meta: unknown, fn: typeof handler) => {
handler = fn
}),
call: (args: Record<string, unknown>) => handler(args),
}
registerJobHeartbeatTool(server as unknown as McpServer)
return server
}
beforeEach(() => {
vi.clearAllMocks()
mockAuth.mockResolvedValue({
userId: 'u-1',
tokenId: TOKEN_ID,
username: 'agent',
isDemo: false,
})
})
describe('job_heartbeat', () => {
it('returns 403-style error when no row matched (token mismatch / terminal)', async () => {
mockPrisma.$queryRaw.mockResolvedValue([])
const server = makeServer()
const result = (await server.call({ job_id: 'job-x' })) as {
content: { text: string }[]
isError?: boolean
}
expect(result.isError).toBe(true)
expect(result.content[0].text).toMatch(/not found|terminal|claimed_by/i)
})
it('non-SPRINT job returns ok + lease_until without sprint fields', async () => {
const lease = new Date()
mockPrisma.$queryRaw.mockResolvedValue([
{
id: 'job-1',
lease_until: lease,
kind: 'TASK_IMPLEMENTATION',
sprint_run_id: null,
},
])
const server = makeServer()
const result = (await server.call({ job_id: 'job-1' })) as {
content: { text: string }[]
}
const body = JSON.parse(result.content[0].text)
expect(body).toEqual({
ok: true,
job_id: 'job-1',
lease_until: lease.toISOString(),
sprint_run_status: null,
sprint_run_pause_reason: null,
})
expect(mockPrisma.sprintRun.findUnique).not.toHaveBeenCalled()
})
it('SPRINT job returns sprint_run_status from sibling lookup', async () => {
const lease = new Date()
mockPrisma.$queryRaw.mockResolvedValue([
{
id: 'job-2',
lease_until: lease,
kind: 'SPRINT_IMPLEMENTATION',
sprint_run_id: 'sr-1',
},
])
mockPrisma.sprintRun.findUnique.mockResolvedValue({
status: 'PAUSED',
pause_context: { pause_reason: 'QUOTA_DEPLETED' },
})
const server = makeServer()
const result = (await server.call({ job_id: 'job-2' })) as {
content: { text: string }[]
}
const body = JSON.parse(result.content[0].text)
expect(body).toMatchObject({
ok: true,
sprint_run_status: 'PAUSED',
sprint_run_pause_reason: 'QUOTA_DEPLETED',
})
})
it('SPRINT job tolerates missing pause_context', async () => {
const lease = new Date()
mockPrisma.$queryRaw.mockResolvedValue([
{
id: 'job-3',
lease_until: lease,
kind: 'SPRINT_IMPLEMENTATION',
sprint_run_id: 'sr-2',
},
])
mockPrisma.sprintRun.findUnique.mockResolvedValue({
status: 'RUNNING',
pause_context: null,
})
const server = makeServer()
const result = (await server.call({ job_id: 'job-3' })) as {
content: { text: string }[]
}
const body = JSON.parse(result.content[0].text)
expect(body.sprint_run_status).toBe('RUNNING')
expect(body.sprint_run_pause_reason).toBeNull()
})
})
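The response shape these heartbeat tests assert can be sketched as a pure builder. `buildHeartbeatBody` is a hypothetical helper, assuming the tool assembles the body from the matched job row plus an optional SprintRun lookup:

```typescript
type HeartbeatBody = {
  ok: true
  job_id: string
  lease_until: string
  sprint_run_status: string | null
  sprint_run_pause_reason: string | null
}

function buildHeartbeatBody(
  job: { id: string; lease_until: Date },
  run: { status: string; pause_context: { pause_reason?: string } | null } | null,
): HeartbeatBody {
  return {
    ok: true,
    job_id: job.id,
    lease_until: job.lease_until.toISOString(),
    // Non-SPRINT jobs (run === null) and a missing pause_context both collapse to null.
    sprint_run_status: run?.status ?? null,
    sprint_run_pause_reason: run?.pause_context?.pause_reason ?? null,
  }
}
```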


@@ -0,0 +1,64 @@
import { describe, it, expect } from 'vitest'
import type { ClaudeJobKind } from '@prisma/client'
import { getKindPromptText, getIdeaPromptText } from '../src/lib/kind-prompts.js'
const KINDS: ClaudeJobKind[] = [
'IDEA_GRILL',
'IDEA_MAKE_PLAN',
'TASK_IMPLEMENTATION',
'SPRINT_IMPLEMENTATION',
'PLAN_CHAT',
]
describe('getKindPromptText', () => {
it.each(KINDS)('returns non-empty content for %s', (kind) => {
const text = getKindPromptText(kind)
expect(text.length).toBeGreaterThan(0)
})
it('the TASK_IMPLEMENTATION prompt forbids wait_for_job', () => {
const text = getKindPromptText('TASK_IMPLEMENTATION')
expect(text).toMatch(/GEEN.*wait_for_job/)
})
it('the SPRINT_IMPLEMENTATION prompt forbids job_heartbeat', () => {
const text = getKindPromptText('SPRINT_IMPLEMENTATION')
expect(text).toMatch(/GEEN.*job_heartbeat/)
})
it.each(KINDS)(
'the %s prompt mentions $PAYLOAD_PATH as a variable (all kinds — the runner does the substitution)',
(kind) => {
const text = getKindPromptText(kind)
expect(text).toContain('$PAYLOAD_PATH')
},
)
it.each(['IDEA_GRILL', 'IDEA_MAKE_PLAN'] as const)(
'the %s prompt no longer references wait_for_job (refactor: the runner claims)',
(kind) => {
const text = getKindPromptText(kind)
expect(text).not.toContain('wait_for_job')
},
)
it.each(['IDEA_GRILL', 'IDEA_MAKE_PLAN'] as const)(
'the %s prompt contains no unreplaced {idea_*} placeholders',
(kind) => {
const text = getKindPromptText(kind)
expect(text).not.toMatch(/\{idea_code\}|\{idea_title\}/)
},
)
})
describe('getIdeaPromptText (back-compat)', () => {
it('returns content for IDEA_GRILL', () => {
expect(getIdeaPromptText('IDEA_GRILL').length).toBeGreaterThan(0)
})
it('returns content for IDEA_MAKE_PLAN', () => {
expect(getIdeaPromptText('IDEA_MAKE_PLAN').length).toBeGreaterThan(0)
})
it('returns an empty string for a non-idea kind', () => {
expect(getIdeaPromptText('TASK_IMPLEMENTATION')).toBe('')
})
})


@@ -8,6 +8,24 @@ vi.mock('../src/prisma.js', () => ({
},
story: {
findUniqueOrThrow: vi.fn(),
findMany: vi.fn(),
update: vi.fn(),
},
pbi: {
findUniqueOrThrow: vi.fn(),
findMany: vi.fn(),
update: vi.fn(),
},
sprint: {
findUniqueOrThrow: vi.fn(),
update: vi.fn(),
},
claudeJob: {
findFirst: vi.fn(),
updateMany: vi.fn(),
},
sprintRun: {
findUnique: vi.fn(),
update: vi.fn(),
},
$transaction: vi.fn(),
@@ -15,14 +33,47 @@ vi.mock('../src/prisma.js', () => ({
}))
import { prisma } from '../src/prisma.js'
import {
propagateStatusUpwards,
updateTaskStatusWithStoryPromotion,
} from '../src/lib/tasks-status-update.js'
type MockedPrisma = {
task: { update: ReturnType<typeof vi.fn>; findMany: ReturnType<typeof vi.fn> }
story: {
findUniqueOrThrow: ReturnType<typeof vi.fn>
findMany: ReturnType<typeof vi.fn>
update: ReturnType<typeof vi.fn>
}
pbi: {
findUniqueOrThrow: ReturnType<typeof vi.fn>
findMany: ReturnType<typeof vi.fn>
update: ReturnType<typeof vi.fn>
}
sprint: {
findUniqueOrThrow: ReturnType<typeof vi.fn>
update: ReturnType<typeof vi.fn>
}
claudeJob: {
findFirst: ReturnType<typeof vi.fn>
updateMany: ReturnType<typeof vi.fn>
}
sprintRun: {
findUnique: ReturnType<typeof vi.fn>
update: ReturnType<typeof vi.fn>
}
$transaction: ReturnType<typeof vi.fn>
}
const mockPrisma = prisma as unknown as MockedPrisma
const TASK_BASE = {
id: 'task-1',
title: 'Task',
story_id: 'story-1',
implementation_plan: null,
}
beforeEach(() => {
vi.clearAllMocks()
mockPrisma.$transaction.mockImplementation(
@@ -30,107 +81,181 @@ beforeEach(() => {
)
})
describe('propagateStatusUpwards — story level', () => {
it('sets the story to DONE when all siblings are DONE', async () => {
mockPrisma.task.update.mockResolvedValue({ ...TASK_BASE, status: 'DONE' })
mockPrisma.task.findMany.mockResolvedValue([{ status: 'DONE' }, { status: 'DONE' }])
mockPrisma.story.findUniqueOrThrow.mockResolvedValue({
id: 'story-1',
status: 'IN_SPRINT',
pbi_id: 'pbi-1',
sprint_id: null,
})
mockPrisma.pbi.findUniqueOrThrow.mockResolvedValue({ id: 'pbi-1', status: 'READY' })
mockPrisma.story.findMany.mockResolvedValue([{ status: 'DONE' }])
const result = await propagateStatusUpwards('task-1', 'DONE')
expect(result.storyChanged).toBe(true)
expect(mockPrisma.story.update).toHaveBeenCalledWith({
where: { id: 'story-1' },
data: { status: 'DONE' },
})
})
it('sets the story to FAILED when any task is FAILED, regardless of other tasks', async () => {
mockPrisma.task.update.mockResolvedValue({ ...TASK_BASE, status: 'FAILED' })
mockPrisma.task.findMany.mockResolvedValue([
{ status: 'FAILED' },
{ status: 'DONE' },
{ status: 'TO_DO' },
])
mockPrisma.story.findUniqueOrThrow.mockResolvedValue({
id: 'story-1',
status: 'IN_SPRINT',
pbi_id: 'pbi-1',
sprint_id: null,
})
mockPrisma.pbi.findUniqueOrThrow.mockResolvedValue({ id: 'pbi-1', status: 'READY' })
mockPrisma.story.findMany.mockResolvedValue([{ status: 'FAILED' }])
const result = await propagateStatusUpwards('task-1', 'FAILED')
expect(result.storyChanged).toBe(true)
expect(mockPrisma.story.update).toHaveBeenCalledWith({
where: { id: 'story-1' },
data: { status: 'FAILED' },
})
})
})
describe('propagateStatusUpwards — leave a BLOCKED PBI alone', () => {
it('does not overwrite a manually BLOCKED PBI', async () => {
mockPrisma.task.update.mockResolvedValue({ ...TASK_BASE, status: 'DONE' })
mockPrisma.task.findMany.mockResolvedValue([{ status: 'DONE' }])
mockPrisma.story.findUniqueOrThrow.mockResolvedValue({
id: 'story-1',
status: 'IN_SPRINT',
pbi_id: 'pbi-1',
sprint_id: null,
})
mockPrisma.pbi.findUniqueOrThrow.mockResolvedValue({ id: 'pbi-1', status: 'BLOCKED' })
const result = await propagateStatusUpwards('task-1', 'DONE')
expect(result.pbiChanged).toBe(false)
expect(mockPrisma.pbi.update).not.toHaveBeenCalled()
})
})
describe('propagateStatusUpwards — sprint cascade up to SprintRun', () => {
it('on FAILED sets the whole chain to FAILED and cancels sibling jobs', async () => {
mockPrisma.task.update.mockResolvedValue({ ...TASK_BASE, status: 'FAILED' })
mockPrisma.task.findMany.mockResolvedValue([{ status: 'FAILED' }, { status: 'DONE' }])
mockPrisma.story.findUniqueOrThrow.mockResolvedValue({
id: 'story-1',
status: 'IN_SPRINT',
pbi_id: 'pbi-1',
sprint_id: 'sprint-1',
})
mockPrisma.pbi.findUniqueOrThrow.mockResolvedValue({ id: 'pbi-1', status: 'READY' })
mockPrisma.story.findMany.mockImplementation(
async (args: { where?: { pbi_id?: string; sprint_id?: string } }) => {
if (args.where?.pbi_id) return [{ status: 'FAILED' }]
if (args.where?.sprint_id) return [{ pbi_id: 'pbi-1' }]
return []
},
)
mockPrisma.sprint.findUniqueOrThrow.mockResolvedValue({ id: 'sprint-1', status: 'ACTIVE' })
;(mockPrisma.pbi as unknown as { findMany: ReturnType<typeof vi.fn> }).findMany = vi
.fn()
.mockResolvedValue([{ status: 'FAILED' }])
mockPrisma.claudeJob.findFirst.mockResolvedValue({ id: 'job-1', sprint_run_id: 'run-1' })
mockPrisma.sprintRun.findUnique.mockResolvedValue({ id: 'run-1', status: 'RUNNING' })
const result = await propagateStatusUpwards('task-1', 'FAILED')
expect(result.sprintChanged).toBe(true)
expect(result.sprintRunChanged).toBe(true)
expect(mockPrisma.sprintRun.update).toHaveBeenCalledWith(
expect.objectContaining({
where: { id: 'run-1' },
data: expect.objectContaining({ status: 'FAILED', failed_task_id: 'task-1' }),
}),
)
expect(mockPrisma.claudeJob.updateMany).toHaveBeenCalledWith(
expect.objectContaining({
where: expect.objectContaining({
sprint_run_id: 'run-1',
status: { in: ['QUEUED', 'CLAIMED', 'RUNNING'] },
id: { not: 'job-1' },
}),
data: expect.objectContaining({ status: 'CANCELLED' }),
}),
)
})
})
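The propagation rule these describe blocks keep re-checking reduces to one aggregation over child statuses. `aggregateChildStatus` is an illustrative reduction, assuming `propagateStatusUpwards` applies it at each level (tasks to story, stories to PBI, PBIs to sprint):

```typescript
// Any FAILED child forces FAILED; all children DONE promotes to DONE;
// anything else leaves the parent untouched.
function aggregateChildStatus(children: string[]): 'DONE' | 'FAILED' | null {
  if (children.includes('FAILED')) return 'FAILED'
  if (children.length > 0 && children.every((s) => s === 'DONE')) return 'DONE'
  return null // mixed/in-progress: no propagation
}
```

The BLOCKED-PBI test adds an exception on top of this rule: a manually BLOCKED parent is never overwritten, regardless of what the aggregation says.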
describe('updateTaskStatusWithStoryPromotion (BC wrapper)', () => {
it('maps storyChanged + a DONE newStatus to storyStatusChange="promoted"', async () => {
mockPrisma.task.update.mockResolvedValue({ ...TASK_BASE, status: 'DONE' })
mockPrisma.task.findMany.mockResolvedValue([{ status: 'DONE' }])
mockPrisma.story.findUniqueOrThrow.mockResolvedValue({
id: 'story-1',
status: 'IN_SPRINT',
pbi_id: 'pbi-1',
sprint_id: null,
})
mockPrisma.pbi.findUniqueOrThrow.mockResolvedValue({ id: 'pbi-1', status: 'READY' })
mockPrisma.story.findMany.mockResolvedValue([{ status: 'DONE' }])
const result = await updateTaskStatusWithStoryPromotion('task-1', 'DONE')
expect(result.storyStatusChange).toBe('promoted')
expect(result.storyId).toBe('story-1')
expect(mockPrisma.story.update).toHaveBeenCalledWith({
where: { id: 'story-1' },
data: { status: 'DONE' },
})
})
it('maps storyChanged + non-DONE to storyStatusChange="demoted"', async () => {
mockPrisma.task.update.mockResolvedValue({ ...TASK_BASE, status: 'IN_PROGRESS' })
mockPrisma.task.findMany.mockResolvedValue([{ status: 'IN_PROGRESS' }, { status: 'DONE' }])
mockPrisma.story.findUniqueOrThrow.mockResolvedValue({
id: 'story-1',
status: 'DONE',
pbi_id: 'pbi-1',
sprint_id: null,
})
mockPrisma.pbi.findUniqueOrThrow.mockResolvedValue({ id: 'pbi-1', status: 'DONE' })
mockPrisma.story.findMany.mockResolvedValue([{ status: 'OPEN' }])
const result = await updateTaskStatusWithStoryPromotion('task-1', 'IN_PROGRESS')
expect(result.storyStatusChange).toBe('demoted')
expect(mockPrisma.story.update).toHaveBeenCalledWith({
where: { id: 'story-1' },
data: { status: 'IN_SPRINT' },
})
})
it('null when story does not change', async () => {
mockPrisma.task.update.mockResolvedValue({ ...TASK_BASE, status: 'IN_PROGRESS' })
mockPrisma.task.findMany.mockResolvedValue([{ status: 'IN_PROGRESS' }, { status: 'TO_DO' }])
mockPrisma.story.findUniqueOrThrow.mockResolvedValue({
id: 'story-1',
status: 'IN_SPRINT',
pbi_id: 'pbi-1',
sprint_id: 'sprint-1',
})
mockPrisma.pbi.findUniqueOrThrow.mockResolvedValue({ id: 'pbi-1', status: 'READY' })
mockPrisma.story.findMany.mockImplementation(
async (args: { where?: { pbi_id?: string; sprint_id?: string } }) => {
if (args.where?.pbi_id) return [{ status: 'IN_SPRINT' }]
if (args.where?.sprint_id) return [{ pbi_id: 'pbi-1' }]
return []
},
)
mockPrisma.sprint.findUniqueOrThrow.mockResolvedValue({ id: 'sprint-1', status: 'ACTIVE' })
;(mockPrisma.pbi as unknown as { findMany: ReturnType<typeof vi.fn> }).findMany = vi
.fn()
.mockResolvedValue([{ status: 'READY' }])
const result = await updateTaskStatusWithStoryPromotion('task-1', 'IN_PROGRESS')
expect(result.storyStatusChange).toBe(null)
expect(mockPrisma.story.update).not.toHaveBeenCalled()
})
it('updates the task regardless of story-status change', async () => {
mockPrisma.task.update.mockResolvedValue({ ...TASK_BASE, status: 'IN_PROGRESS' })
mockPrisma.task.findMany.mockResolvedValue([{ status: 'IN_PROGRESS' }])
mockPrisma.story.findUniqueOrThrow.mockResolvedValue({ status: 'IN_SPRINT' })
await updateTaskStatusWithStoryPromotion('task-1', 'IN_PROGRESS')
expect(mockPrisma.task.update).toHaveBeenCalledWith({
where: { id: 'task-1' },
data: { status: 'IN_PROGRESS' },
select: expect.any(Object),
})
})
it('uses the provided transaction client when passed', async () => {
const tx = {
task: { update: vi.fn(), findMany: vi.fn() },
story: { findUniqueOrThrow: vi.fn(), update: vi.fn() },
}
tx.task.update.mockResolvedValue({ ...TASK_BASE, status: 'DONE' })
tx.task.findMany.mockResolvedValue([{ status: 'DONE' }])
tx.story.findUniqueOrThrow.mockResolvedValue({ status: 'IN_SPRINT' })
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const result = await updateTaskStatusWithStoryPromotion('task-1', 'DONE', tx as any)
expect(result.storyStatusChange).toBe('promoted')
expect(mockPrisma.$transaction).not.toHaveBeenCalled()
expect(tx.story.update).toHaveBeenCalledWith({
where: { id: 'story-1' },
data: { status: 'DONE' },
})
})
})
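Read together, the promotion/demotion tests above pin down one small decision function. A minimal TypeScript sketch of that decision, reconstructed from the assertions alone — the function name and the status unions here are assumptions, not the actual implementation:

```typescript
// Hypothetical sketch of the promote/demote decision the tests exercise:
// promote the story to DONE when the last sibling task finishes; demote a
// DONE story back to IN_SPRINT when a task leaves DONE; otherwise no change.
type TaskStatus = 'TO_DO' | 'IN_PROGRESS' | 'DONE'
type StoryStatus = 'IN_SPRINT' | 'DONE'

function resolveStoryStatusChange(
  siblingStatuses: TaskStatus[],
  storyStatus: StoryStatus,
): 'promoted' | 'demoted' | null {
  const allDone = siblingStatuses.every((s) => s === 'DONE')
  if (allDone && storyStatus !== 'DONE') return 'promoted'
  if (!allDone && storyStatus === 'DONE') return 'demoted'
  return null // already consistent: nothing to update (idempotent)
}
```

The `null` branch is what makes the "already DONE" and "not all siblings DONE" cases idempotent: no `story.update` call is issued at all.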


@@ -0,0 +1,140 @@
import { describe, it, expect, vi, beforeEach } from 'vitest'
vi.mock('../src/prisma.js', () => ({
prisma: {
idea: { update: vi.fn() },
ideaLog: { create: vi.fn() },
$transaction: vi.fn(),
},
}))
vi.mock('../src/auth.js', () => ({
requireWriteAccess: vi.fn(),
PermissionDeniedError: class PermissionDeniedError extends Error {
constructor(message = 'Demo accounts cannot perform write operations') {
super(message)
this.name = 'PermissionDeniedError'
}
},
}))
vi.mock('../src/access.js', () => ({
userOwnsIdea: vi.fn(),
}))
import { prisma } from '../src/prisma.js'
import { requireWriteAccess } from '../src/auth.js'
import { userOwnsIdea } from '../src/access.js'
import { handleUpdateIdeaPlanReviewed } from '../src/tools/update-idea-plan-reviewed.js'
const mockPrisma = prisma as unknown as {
idea: { update: ReturnType<typeof vi.fn> }
ideaLog: { create: ReturnType<typeof vi.fn> }
$transaction: ReturnType<typeof vi.fn>
}
const mockRequireWriteAccess = requireWriteAccess as ReturnType<typeof vi.fn>
const mockUserOwnsIdea = userOwnsIdea as ReturnType<typeof vi.fn>
const IDEA_ID = 'idea-1'
const USER_ID = 'user-1'
const REVIEW_LOG = {
rounds: [{ score: 88 }],
convergence: { stable_at_round: 2 },
approval: { status: 'approved' },
}
beforeEach(() => {
vi.clearAllMocks()
mockRequireWriteAccess.mockResolvedValue({
userId: USER_ID,
tokenId: 'tok-1',
username: 'alice',
isDemo: false,
})
mockUserOwnsIdea.mockResolvedValue(true)
// $transaction returns the array of its two operations' results; the handler
// only reads result[0] (the idea.update result).
mockPrisma.$transaction.mockImplementation(async () => [
{ id: IDEA_ID, status: 'PLACEHOLDER', code: 'IDEA-1' },
{},
])
})
function parseResult(result: Awaited<ReturnType<typeof handleUpdateIdeaPlanReviewed>>) {
const text = result.content?.[0]?.type === 'text' ? result.content[0].text : ''
try {
return JSON.parse(text)
} catch {
return text
}
}
// The handler builds `data.status` inside the idea.update call passed to
// $transaction. We capture it by inspecting the prisma.idea.update mock args.
function statusPassedToUpdate(): string | undefined {
const call = mockPrisma.idea.update.mock.calls[0]
return call?.[0]?.data?.status
}
describe('handleUpdateIdeaPlanReviewed — status transition', () => {
it('approval_status="approved" → PLAN_REVIEWED', async () => {
await handleUpdateIdeaPlanReviewed({
idea_id: IDEA_ID,
review_log: REVIEW_LOG,
approval_status: 'approved',
})
expect(statusPassedToUpdate()).toBe('PLAN_REVIEWED')
})
it('approval_status="rejected" → PLAN_REVIEW_FAILED', async () => {
await handleUpdateIdeaPlanReviewed({
idea_id: IDEA_ID,
review_log: REVIEW_LOG,
approval_status: 'rejected',
})
expect(statusPassedToUpdate()).toBe('PLAN_REVIEW_FAILED')
})
it('approval_status="pending" → PLAN_REVIEW_FAILED (needs manual approval, never silently approved)', async () => {
await handleUpdateIdeaPlanReviewed({
idea_id: IDEA_ID,
review_log: REVIEW_LOG,
approval_status: 'pending',
})
expect(statusPassedToUpdate()).toBe('PLAN_REVIEW_FAILED')
})
it('omitted approval_status → PLAN_REVIEW_FAILED (safe default, not PLAN_REVIEWED)', async () => {
await handleUpdateIdeaPlanReviewed({
idea_id: IDEA_ID,
review_log: REVIEW_LOG,
})
expect(statusPassedToUpdate()).toBe('PLAN_REVIEW_FAILED')
})
it('returns "Idea not found" when the user does not own the idea', async () => {
mockUserOwnsIdea.mockResolvedValue(false)
const result = await handleUpdateIdeaPlanReviewed({
idea_id: IDEA_ID,
review_log: REVIEW_LOG,
approval_status: 'approved',
})
expect(parseResult(result)).toContain('Idea not found')
expect(mockPrisma.idea.update).not.toHaveBeenCalled()
})
it('persists review_log + reviewed_at and logs a PLAN_REVIEW_RESULT entry', async () => {
await handleUpdateIdeaPlanReviewed({
idea_id: IDEA_ID,
review_log: REVIEW_LOG,
approval_status: 'approved',
})
const updateArg = mockPrisma.idea.update.mock.calls[0]?.[0]
expect(updateArg?.data?.plan_review_log).toEqual(REVIEW_LOG)
expect(updateArg?.data?.reviewed_at).toBeInstanceOf(Date)
const logArg = mockPrisma.ideaLog.create.mock.calls[0]?.[0]
expect(logArg?.data?.type).toBe('PLAN_REVIEW_RESULT')
expect(logArg?.data?.idea_id).toBe(IDEA_ID)
})
})
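The four status-transition tests above all reduce to one fail-closed mapping: only an explicit `'approved'` approves, everything else requires a human. A sketch of that mapping (the standalone function name is an assumption; in the handler this logic is inline):

```typescript
// Hypothetical sketch of the fail-closed status mapping pinned down above.
// Never silently approve: 'rejected', 'pending' and an omitted
// approval_status all land on PLAN_REVIEW_FAILED.
type ApprovalStatus = 'approved' | 'rejected' | 'pending'

function resolveIdeaStatus(
  approvalStatus?: ApprovalStatus,
): 'PLAN_REVIEWED' | 'PLAN_REVIEW_FAILED' {
  return approvalStatus === 'approved' ? 'PLAN_REVIEWED' : 'PLAN_REVIEW_FAILED'
}
```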


@@ -4,12 +4,13 @@ vi.mock('../src/prisma.js', () => ({
prisma: {
product: { findUnique: vi.fn() },
task: { findUnique: vi.fn() },
claudeJob: { findFirst: vi.fn(), findMany: vi.fn(), findUnique: vi.fn() },
},
}))
vi.mock('../src/git/pr.js', () => ({
createPullRequest: vi.fn(),
markPullRequestReady: vi.fn(),
}))
import { prisma } from '../src/prisma.js'
@@ -19,7 +20,11 @@ import { maybeCreateAutoPr } from '../src/tools/update-job-status.js'
const mockPrisma = prisma as unknown as {
product: { findUnique: ReturnType<typeof vi.fn> }
task: { findUnique: ReturnType<typeof vi.fn> }
claudeJob: {
findFirst: ReturnType<typeof vi.fn>
findMany: ReturnType<typeof vi.fn>
findUnique: ReturnType<typeof vi.fn>
}
}
const mockCreatePr = createPullRequest as ReturnType<typeof vi.fn>
@@ -37,9 +42,12 @@ beforeEach(() => {
mockPrisma.product.findUnique.mockResolvedValue({ auto_pr: true })
mockPrisma.task.findUnique.mockResolvedValue({
title: 'Add feature',
repo_url: null,
story: { id: 'story-1', code: 'SCRUM-42', title: 'Story title' },
})
mockPrisma.claudeJob.findMany.mockResolvedValue([]) // no sibling PRs by default
// Default: legacy job without sprint_run (STORY-mode path).
mockPrisma.claudeJob.findUnique.mockResolvedValue({ sprint_run_id: null, sprint_run: null })
mockCreatePr.mockResolvedValue({ url: 'https://github.com/org/repo/pull/99' })
})
@@ -56,12 +64,27 @@ describe('maybeCreateAutoPr', () => {
})
it('reuses sibling pr_url when another job in same story already opened a PR', async () => {
mockPrisma.claudeJob.findMany.mockResolvedValue([
{ pr_url: 'https://github.com/org/repo/pull/77', task: { repo_url: null } },
])
const url = await maybeCreateAutoPr(BASE_OPTS)
expect(url).toBe('https://github.com/org/repo/pull/77')
expect(mockCreatePr).not.toHaveBeenCalled()
})
it('does NOT reuse a sibling PR from a different repo (cross-repo story)', async () => {
// Sibling targeted another repo via task.repo_url — its PR must not leak in.
mockPrisma.claudeJob.findMany.mockResolvedValue([
{
pr_url: 'https://github.com/org/other-repo/pull/12',
task: { repo_url: 'https://github.com/org/other-repo' },
},
])
const url = await maybeCreateAutoPr(BASE_OPTS)
expect(url).toBe('https://github.com/org/repo/pull/99') // fresh PR, not the sibling's
expect(mockCreatePr).toHaveBeenCalledOnce()
})
it('returns null when auto_pr=false', async () => {
mockPrisma.product.findUnique.mockResolvedValue({ auto_pr: false })
const url = await maybeCreateAutoPr(BASE_OPTS)
@@ -72,6 +95,7 @@ describe('maybeCreateAutoPr', () => {
it('uses story title without code prefix when story has no code', async () => {
mockPrisma.task.findUnique.mockResolvedValue({
title: 'Add feature',
repo_url: null,
story: { id: 'story-1', code: null, title: 'Story title' },
})
await maybeCreateAutoPr(BASE_OPTS)
@@ -80,6 +104,66 @@
)
})
it('SPRINT mode: creates a draft PR with the sprint title, no auto-merge', async () => {
mockPrisma.claudeJob.findUnique.mockResolvedValue({
sprint_run_id: 'run-1',
sprint_run: {
id: 'run-1',
pr_strategy: 'SPRINT',
sprint: { sprint_goal: 'Cascade-flow live' },
},
})
const url = await maybeCreateAutoPr(BASE_OPTS)
expect(url).toBe('https://github.com/org/repo/pull/99')
expect(mockCreatePr).toHaveBeenCalledWith(
expect.objectContaining({
title: 'Sprint: Cascade-flow live',
draft: true,
enableAutoMerge: false,
}),
)
})
it('SPRINT mode: reuses a sibling PR within the same SprintRun', async () => {
mockPrisma.claudeJob.findUnique.mockResolvedValue({
sprint_run_id: 'run-1',
sprint_run: { id: 'run-1', pr_strategy: 'SPRINT', sprint: { sprint_goal: 'Goal' } },
})
mockPrisma.claudeJob.findMany.mockResolvedValue([
{ pr_url: 'https://github.com/org/repo/pull/55', task: { repo_url: null } },
])
const url = await maybeCreateAutoPr(BASE_OPTS)
expect(url).toBe('https://github.com/org/repo/pull/55')
expect(mockCreatePr).not.toHaveBeenCalled()
})
it('SPRINT mode: cross-repo: a sibling PR from another repo is not reused', async () => {
mockPrisma.claudeJob.findUnique.mockResolvedValue({
sprint_run_id: 'run-1',
sprint_run: { id: 'run-1', pr_strategy: 'SPRINT', sprint: { sprint_goal: 'Goal' } },
})
// This job targets a different repo via task.repo_url.
mockPrisma.task.findUnique.mockResolvedValue({
title: 'MCP task',
repo_url: 'https://github.com/org/scrum4me-mcp',
story: { id: 'story-1', code: 'SCRUM-9', title: 'Story title' },
})
// The sibling with pr_url belongs to the product repo (repo_url null) → a different bucket.
mockPrisma.claudeJob.findMany.mockResolvedValue([
{ pr_url: 'https://github.com/org/repo/pull/201', task: { repo_url: null } },
])
const url = await maybeCreateAutoPr(BASE_OPTS)
// No reuse of the product-repo PR → its own draft PR for the mcp repo.
expect(url).toBe('https://github.com/org/repo/pull/99')
expect(mockCreatePr).toHaveBeenCalledOnce()
})
it('returns null and does not throw when gh fails', async () => {
mockCreatePr.mockResolvedValue({ error: 'gh CLI not found' })
const url = await maybeCreateAutoPr(BASE_OPTS)


@@ -5,13 +5,26 @@ vi.mock('../src/git/push.js', () => ({
pushBranchForJob: vi.fn(),
}))
vi.mock('../src/prisma.js', () => ({
prisma: {
claudeJob: {
findUnique: vi.fn(),
},
},
}))
import { pushBranchForJob } from '../src/git/push.js'
import { prisma } from '../src/prisma.js'
import { prepareDoneUpdate } from '../src/tools/update-job-status.js'
const mockPush = pushBranchForJob as ReturnType<typeof vi.fn>
const mockFindUnique = (prisma as unknown as {
claudeJob: { findUnique: ReturnType<typeof vi.fn> }
}).claudeJob.findUnique
beforeEach(() => {
vi.clearAllMocks()
mockFindUnique.mockResolvedValue(null)
})
describe('prepareDoneUpdate', () => {
@@ -39,8 +52,25 @@ describe('prepareDoneUpdate', () => {
})
})
it('reads branchName from DB (claudeJob.branch) when branch arg is undefined', async () => {
process.env.SCRUM4ME_AGENT_WORKTREE_DIR = '/wt'
mockFindUnique.mockResolvedValue({ branch: 'feat/sprint-fvy30lvv' })
mockPush.mockResolvedValue({ pushed: true, remoteRef: 'refs/heads/feat/sprint-fvy30lvv' })
await prepareDoneUpdate('job-abc12345', undefined)
expect(mockFindUnique).toHaveBeenCalledWith({
where: { id: 'job-abc12345' },
select: { branch: true },
})
expect(mockPush).toHaveBeenCalledWith(
expect.objectContaining({ branchName: 'feat/sprint-fvy30lvv' }),
)
})
it('falls back to feat/job-<8> when neither branch arg nor DB.branch is set', async () => {
process.env.SCRUM4ME_AGENT_WORKTREE_DIR = '/wt'
mockFindUnique.mockResolvedValue({ branch: null })
mockPush.mockResolvedValue({ pushed: true, remoteRef: 'refs/heads/feat/job-abc12345' })
await prepareDoneUpdate('job-abc12345', undefined)


@@ -0,0 +1,95 @@
// Unit tests for the no-op SKIPPED exit route in update_job_status (PBI-57 ST-1273).
// Full handler integration is not tested here — that hangs off dozens of
// MCP/Prisma mocks. What we do test are the exported helpers that were made
// explicitly SKIPPED-aware: resolveNextAction and cleanupWorktreeForTerminalStatus.
import { describe, it, expect, vi, beforeEach } from 'vitest'
vi.mock('../src/prisma.js', () => ({
prisma: {
claudeJob: { findUnique: vi.fn(), count: vi.fn() },
},
}))
vi.mock('../src/git/worktree.js', () => ({
removeWorktreeForJob: vi.fn(),
}))
vi.mock('../src/tools/wait-for-job.js', async (importOriginal) => {
const original = await importOriginal<typeof import('../src/tools/wait-for-job.js')>()
return {
...original,
resolveRepoRoot: vi.fn(),
}
})
import { prisma } from '../src/prisma.js'
import { removeWorktreeForJob } from '../src/git/worktree.js'
import { resolveRepoRoot } from '../src/tools/wait-for-job.js'
import {
cleanupWorktreeForTerminalStatus,
resolveNextAction,
} from '../src/tools/update-job-status.js'
const mockRemove = removeWorktreeForJob as ReturnType<typeof vi.fn>
const mockResolve = resolveRepoRoot as ReturnType<typeof vi.fn>
const mockPrisma = prisma as unknown as {
claudeJob: {
findUnique: ReturnType<typeof vi.fn>
count: ReturnType<typeof vi.fn>
}
}
beforeEach(() => {
vi.clearAllMocks()
mockPrisma.claudeJob.findUnique.mockResolvedValue({ task: { story_id: 'story-default' } })
mockPrisma.claudeJob.count.mockResolvedValue(0)
})
describe('resolveNextAction — skipped path', () => {
it('returns wait_for_job_again when queue has jobs after skipped', () => {
expect(resolveNextAction(2, 'skipped')).toBe('wait_for_job_again')
})
it('returns queue_empty when queue is empty after skipped', () => {
expect(resolveNextAction(0, 'skipped')).toBe('queue_empty')
})
})
describe('cleanupWorktreeForTerminalStatus — skipped path', () => {
it('calls removeWorktreeForJob with keepBranch=false when skipped (no push happened)', async () => {
mockResolve.mockResolvedValue('/repos/my-project')
mockRemove.mockResolvedValue({ removed: true })
await cleanupWorktreeForTerminalStatus('prod-001', 'job-skip', 'skipped', undefined)
expect(mockRemove).toHaveBeenCalledWith({
repoRoot: '/repos/my-project',
jobId: 'job-skip',
keepBranch: false,
})
})
it('keeps keepBranch=false when skipped even if a branch is reported', async () => {
mockResolve.mockResolvedValue('/repos/my-project')
mockRemove.mockResolvedValue({ removed: true })
await cleanupWorktreeForTerminalStatus('prod-001', 'job-skip', 'skipped', 'feat/job-skip')
expect(mockRemove).toHaveBeenCalledWith({
repoRoot: '/repos/my-project',
jobId: 'job-skip',
keepBranch: false,
})
})
it('defers cleanup when sibling jobs in same story are still active (skipped path)', async () => {
mockResolve.mockResolvedValue('/repos/my-project')
mockPrisma.claudeJob.findUnique.mockResolvedValue({ task: { story_id: 'story-shared' } })
mockPrisma.claudeJob.count.mockResolvedValue(1)
await cleanupWorktreeForTerminalStatus('prod-001', 'job-skip', 'skipped', undefined)
expect(mockRemove).not.toHaveBeenCalled()
})
})
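The tests above only pin down the observable behavior of the two helpers; the real implementations also consult Prisma for sibling jobs, which is elided here. A minimal sketch under those assumptions (`keepBranchFor` is a hypothetical name for the branch-retention decision; its behavior for non-skipped statuses is an assumption):

```typescript
// Hypothetical sketch of the SKIPPED-aware routing and cleanup decisions.
type TerminalStatus = 'done' | 'failed' | 'skipped'

function resolveNextAction(
  queueCount: number,
  _status: TerminalStatus,
): 'wait_for_job_again' | 'queue_empty' {
  // SKIPPED routes exactly like other terminal statuses: only the queue counts.
  return queueCount > 0 ? 'wait_for_job_again' : 'queue_empty'
}

// A skipped job never pushed, so its branch is always discarded — even when
// a branch name is reported. (Non-skipped behavior is assumed, not tested above.)
function keepBranchFor(status: TerminalStatus, branch?: string): boolean {
  if (status === 'skipped') return false
  return branch !== undefined
}
```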


@@ -0,0 +1,192 @@
import { describe, it, expect, vi, beforeEach } from 'vitest'
vi.mock('../src/prisma.js', () => ({
prisma: {
sprintTaskExecution: {
findMany: vi.fn(),
},
sprintRun: {
findUnique: vi.fn(),
update: vi.fn(),
},
story: {
count: vi.fn(),
},
},
}))
import { prisma } from '../src/prisma.js'
import {
checkSprintVerifyGate,
finalizeSprintRunOnDone,
} from '../src/tools/update-job-status.js'
type MockedPrisma = {
sprintTaskExecution: { findMany: ReturnType<typeof vi.fn> }
sprintRun: {
findUnique: ReturnType<typeof vi.fn>
update: ReturnType<typeof vi.fn>
}
story: { count: ReturnType<typeof vi.fn> }
}
const mocked = prisma as unknown as MockedPrisma
const LONG_SUMMARY = 'Refactor touched extra files for type narrowing.'
function execRow(overrides: Record<string, unknown>) {
return {
id: 'exec-' + Math.random().toString(36).slice(2, 8),
task_id: 't1',
order: 0,
status: 'DONE',
verify_result: 'ALIGNED',
verify_summary: null,
verify_required_snapshot: 'ALIGNED_OR_PARTIAL',
verify_only_snapshot: false,
task: { code: 'TASK-1', title: 'Sample task' },
...overrides,
}
}
describe('checkSprintVerifyGate', () => {
beforeEach(() => {
vi.clearAllMocks()
})
it('rejects when no executions exist (claim-bug)', async () => {
mocked.sprintTaskExecution.findMany.mockResolvedValue([])
const r = await checkSprintVerifyGate('job-x')
expect(r.allowed).toBe(false)
if (!r.allowed) expect(r.error).toMatch(/geen SprintTaskExecution-rows/i)
})
it('blocks PENDING/RUNNING executions', async () => {
mocked.sprintTaskExecution.findMany.mockResolvedValue([
execRow({ status: 'PENDING' }),
execRow({ status: 'RUNNING' }),
])
const r = await checkSprintVerifyGate('job-x')
expect(r.allowed).toBe(false)
if (!r.allowed) {
expect(r.error).toMatch(/PENDING/)
expect(r.error).toMatch(/RUNNING/)
}
})
it('blocks FAILED executions', async () => {
mocked.sprintTaskExecution.findMany.mockResolvedValue([
execRow({ status: 'FAILED' }),
])
const r = await checkSprintVerifyGate('job-x')
expect(r.allowed).toBe(false)
if (!r.allowed) expect(r.error).toMatch(/FAILED/)
})
it('blocks SKIPPED unless verify_required_snapshot=ANY', async () => {
mocked.sprintTaskExecution.findMany.mockResolvedValue([
execRow({ status: 'SKIPPED', verify_required_snapshot: 'ALIGNED' }),
])
const r = await checkSprintVerifyGate('job-x')
expect(r.allowed).toBe(false)
if (!r.allowed) expect(r.error).toMatch(/SKIPPED/)
})
it('allows SKIPPED when verify_required_snapshot=ANY', async () => {
mocked.sprintTaskExecution.findMany.mockResolvedValue([
execRow({ status: 'SKIPPED', verify_required_snapshot: 'ANY' }),
])
expect((await checkSprintVerifyGate('job-x')).allowed).toBe(true)
})
it('runs per-row gate for DONE executions', async () => {
// PARTIAL without a summary under ALIGNED_OR_PARTIAL → blocker
mocked.sprintTaskExecution.findMany.mockResolvedValue([
execRow({
status: 'DONE',
verify_result: 'PARTIAL',
verify_summary: null,
verify_required_snapshot: 'ALIGNED_OR_PARTIAL',
}),
])
const r = await checkSprintVerifyGate('job-x')
expect(r.allowed).toBe(false)
if (!r.allowed) expect(r.error).toMatch(/DONE-gate/)
})
it('passes when all DONE rows pass per-row gate', async () => {
mocked.sprintTaskExecution.findMany.mockResolvedValue([
execRow({ verify_result: 'ALIGNED' }),
execRow({
verify_result: 'PARTIAL',
verify_summary: LONG_SUMMARY,
verify_required_snapshot: 'ALIGNED_OR_PARTIAL',
}),
])
expect((await checkSprintVerifyGate('job-x')).allowed).toBe(true)
})
it('aggregates multiple blockers in one error message', async () => {
mocked.sprintTaskExecution.findMany.mockResolvedValue([
execRow({ status: 'FAILED', task: { code: 'A', title: 'a' } }),
execRow({ status: 'PENDING', task: { code: 'B', title: 'b' } }),
])
const r = await checkSprintVerifyGate('job-x')
expect(r.allowed).toBe(false)
if (!r.allowed) {
expect(r.error).toMatch(/2 task\(s\) blokkeren/)
expect(r.error).toMatch(/A: a/)
expect(r.error).toMatch(/B: b/)
}
})
})
describe('finalizeSprintRunOnDone', () => {
beforeEach(() => {
vi.clearAllMocks()
})
it('no-op when SprintRun already DONE (idempotent)', async () => {
mocked.sprintRun.findUnique.mockResolvedValue({
id: 'sr-1',
status: 'DONE',
sprint_id: 's1',
})
await finalizeSprintRunOnDone('sr-1')
expect(mocked.sprintRun.update).not.toHaveBeenCalled()
})
it('no-op when SprintRun does not exist', async () => {
mocked.sprintRun.findUnique.mockResolvedValue(null)
await finalizeSprintRunOnDone('sr-x')
expect(mocked.sprintRun.update).not.toHaveBeenCalled()
})
it('no-op when stories still open', async () => {
mocked.sprintRun.findUnique.mockResolvedValue({
id: 'sr-1',
status: 'RUNNING',
sprint_id: 's1',
})
mocked.story.count.mockResolvedValue(2)
await finalizeSprintRunOnDone('sr-1')
expect(mocked.sprintRun.update).not.toHaveBeenCalled()
})
it('sets SprintRun → DONE when all stories DONE/FAILED', async () => {
mocked.sprintRun.findUnique.mockResolvedValue({
id: 'sr-1',
status: 'RUNNING',
sprint_id: 's1',
})
mocked.story.count.mockResolvedValue(0)
await finalizeSprintRunOnDone('sr-1')
expect(mocked.sprintRun.update).toHaveBeenCalledWith({
where: { id: 'sr-1' },
data: expect.objectContaining({
status: 'DONE',
finished_at: expect.any(Date),
}),
})
})
})
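The gate tests above imply a per-row predicate that decides whether an execution blocks the done-transition. A sketch consistent with those assertions — `isBlocker` is a hypothetical name, and the real gate additionally aggregates blockers into one error message:

```typescript
// Hypothetical per-row gate, reconstructed from the test expectations:
// PENDING/RUNNING/FAILED always block; SKIPPED blocks unless the snapshot is
// ANY; DONE rows must be ALIGNED, or PARTIAL with a summary under
// ALIGNED_OR_PARTIAL.
interface ExecRow {
  status: 'PENDING' | 'RUNNING' | 'DONE' | 'FAILED' | 'SKIPPED'
  verify_result: 'ALIGNED' | 'PARTIAL' | null
  verify_summary: string | null
  verify_required_snapshot: 'ALIGNED' | 'ALIGNED_OR_PARTIAL' | 'ANY'
}

function isBlocker(row: ExecRow): boolean {
  if (row.status === 'PENDING' || row.status === 'RUNNING' || row.status === 'FAILED') return true
  if (row.status === 'SKIPPED') return row.verify_required_snapshot !== 'ANY'
  // DONE rows: per-row verify gate.
  if (row.verify_required_snapshot === 'ANY') return false
  if (row.verify_result === 'ALIGNED') return false
  if (row.verify_result === 'PARTIAL') {
    return row.verify_required_snapshot !== 'ALIGNED_OR_PARTIAL' || !row.verify_summary
  }
  return true
}
```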


@@ -0,0 +1,74 @@
// Unit tests for resolveJobTimestamps — the status-driven timestamp helper
// of update_job_status. Pure function, no mocks (like update-job-status-gate).
import { describe, it, expect } from 'vitest'
import { resolveJobTimestamps } from '../src/tools/update-job-status.js'
const NOW = new Date('2026-05-14T12:00:00.000Z')
const EARLIER = new Date('2026-05-14T11:00:00.000Z')
describe('resolveJobTimestamps', () => {
describe('running', () => {
it('sets started_at when not yet set, no finished_at', () => {
const r = resolveJobTimestamps('running', { claimed_at: EARLIER, started_at: null }, NOW)
expect(r.started_at).toBe(NOW)
expect(r.finished_at).toBeUndefined()
expect(r.claimed_at).toBeUndefined()
})
it('is set-once: does not re-stamp started_at when already set', () => {
const r = resolveJobTimestamps('running', { claimed_at: EARLIER, started_at: EARLIER }, NOW)
expect(r.started_at).toBeUndefined()
expect(r.finished_at).toBeUndefined()
expect(r.claimed_at).toBeUndefined()
})
})
describe('terminal transitions (done/failed/skipped)', () => {
it.each(['done', 'failed', 'skipped'] as const)(
'backfills started_at and sets finished_at for %s when started_at is null',
(status) => {
const r = resolveJobTimestamps(status, { claimed_at: EARLIER, started_at: null }, NOW)
expect(r.started_at).toBe(NOW)
expect(r.finished_at).toBe(NOW)
expect(r.claimed_at).toBeUndefined()
},
)
it('only sets finished_at when started_at is already set', () => {
const r = resolveJobTimestamps('done', { claimed_at: EARLIER, started_at: EARLIER }, NOW)
expect(r.started_at).toBeUndefined()
expect(r.finished_at).toBe(NOW)
expect(r.claimed_at).toBeUndefined()
})
})
describe('claimed_at backfill', () => {
it.each(['running', 'done', 'failed', 'skipped'] as const)(
'backfills claimed_at for %s when it is null',
(status) => {
const r = resolveJobTimestamps(status, { claimed_at: null, started_at: null }, NOW)
expect(r.claimed_at).toBe(NOW)
},
)
it('never returns claimed_at when it is already set', () => {
const r = resolveJobTimestamps('done', { claimed_at: EARLIER, started_at: EARLIER }, NOW)
expect(r.claimed_at).toBeUndefined()
})
})
it('returns only finished_at when all timestamps are already set and status is terminal', () => {
const r = resolveJobTimestamps('failed', { claimed_at: EARLIER, started_at: EARLIER }, NOW)
expect(r).toEqual({ finished_at: NOW })
})
it('defaults now to a fresh Date when omitted', () => {
const before = Date.now()
const r = resolveJobTimestamps('running', { claimed_at: EARLIER, started_at: null })
const after = Date.now()
expect(r.started_at).toBeInstanceOf(Date)
expect(r.started_at!.getTime()).toBeGreaterThanOrEqual(before)
expect(r.started_at!.getTime()).toBeLessThanOrEqual(after)
})
})
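The contract these tests pin down can be satisfied by a small pure function. A sketch reconstructed from the assertions above — the shapes of the input and patch types are assumptions, not the actual implementation:

```typescript
// Hypothetical sketch of a status-driven, set-once timestamp resolver that
// preserves claimed_at <= started_at <= finished_at.
type JobStatus = 'running' | 'done' | 'failed' | 'skipped'

interface ExistingTimestamps {
  claimed_at: Date | null
  started_at: Date | null
}

interface TimestampPatch {
  claimed_at?: Date
  started_at?: Date
  finished_at?: Date
}

function resolveJobTimestamps(
  status: JobStatus,
  existing: ExistingTimestamps,
  now: Date = new Date(),
): TimestampPatch {
  const patch: TimestampPatch = {}
  // Defensive backfill: a job should never report progress without being claimed.
  if (existing.claimed_at === null) patch.claimed_at = now
  if (status === 'running') {
    // Set-once: never re-stamp started_at on repeated running updates.
    if (existing.started_at === null) patch.started_at = now
    return patch
  }
  // Terminal transition: backfill started_at if 'running' was never reported.
  if (existing.started_at === null) patch.started_at = now
  patch.finished_at = now
  return patch
}
```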


@@ -0,0 +1,174 @@
import { describe, it, expect, vi, beforeEach } from 'vitest'
vi.mock('../src/prisma.js', () => ({
prisma: {
sprint: {
findUnique: vi.fn(),
update: vi.fn(),
},
},
}))
vi.mock('../src/auth.js', () => ({
requireWriteAccess: vi.fn(),
PermissionDeniedError: class PermissionDeniedError extends Error {
constructor(message = 'Demo accounts cannot perform write operations') {
super(message)
this.name = 'PermissionDeniedError'
}
},
}))
vi.mock('../src/access.js', () => ({
userCanAccessProduct: vi.fn(),
}))
import { prisma } from '../src/prisma.js'
import { requireWriteAccess } from '../src/auth.js'
import { userCanAccessProduct } from '../src/access.js'
import { handleUpdateSprint } from '../src/tools/update-sprint.js'
const mockPrisma = prisma as unknown as {
sprint: {
findUnique: ReturnType<typeof vi.fn>
update: ReturnType<typeof vi.fn>
}
}
const mockRequireWriteAccess = requireWriteAccess as ReturnType<typeof vi.fn>
const mockUserCanAccessProduct = userCanAccessProduct as ReturnType<typeof vi.fn>
const SPRINT_ID = 'spr-1'
const PRODUCT_ID = 'prod-1'
const USER_ID = 'user-1'
beforeEach(() => {
vi.clearAllMocks()
mockRequireWriteAccess.mockResolvedValue({ userId: USER_ID, tokenId: 'tok-1', username: 'alice', isDemo: false })
mockUserCanAccessProduct.mockResolvedValue(true)
mockPrisma.sprint.findUnique.mockResolvedValue({ id: SPRINT_ID, product_id: PRODUCT_ID })
mockPrisma.sprint.update.mockResolvedValue({
id: SPRINT_ID,
code: 'S-2026-05-11-1',
sprint_goal: 'g',
status: 'OPEN',
start_date: new Date('2026-05-11'),
end_date: null,
completed_at: null,
})
})
function getText(result: Awaited<ReturnType<typeof handleUpdateSprint>>) {
return result.content?.[0]?.type === 'text' ? result.content[0].text : ''
}
describe('handleUpdateSprint', () => {
it('returns error when no fields provided', async () => {
const result = await handleUpdateSprint({ sprint_id: SPRINT_ID })
expect(mockPrisma.sprint.update).not.toHaveBeenCalled()
expect(getText(result)).toMatch(/Minstens één veld vereist/)
})
it('updates status only', async () => {
await handleUpdateSprint({ sprint_id: SPRINT_ID, status: 'OPEN' })
expect(mockPrisma.sprint.update).toHaveBeenCalledTimes(1)
const args = mockPrisma.sprint.update.mock.calls[0][0]
expect(args.where).toEqual({ id: SPRINT_ID })
expect(args.data).toEqual({ status: 'OPEN' })
})
it('auto-sets end_date AND completed_at when status → CLOSED without explicit end_date', async () => {
const before = Date.now()
await handleUpdateSprint({ sprint_id: SPRINT_ID, status: 'CLOSED' })
const after = Date.now()
const args = mockPrisma.sprint.update.mock.calls[0][0]
expect(args.data.status).toBe('CLOSED')
expect(args.data.end_date).toBeInstanceOf(Date)
expect(args.data.end_date.getTime()).toBeGreaterThanOrEqual(before)
expect(args.data.end_date.getTime()).toBeLessThanOrEqual(after)
expect(args.data.completed_at).toBeInstanceOf(Date)
expect(args.data.completed_at.getTime()).toBeGreaterThanOrEqual(before)
expect(args.data.completed_at.getTime()).toBeLessThanOrEqual(after)
})
it('auto-sets end_date when status → FAILED, but does NOT set completed_at', async () => {
await handleUpdateSprint({ sprint_id: SPRINT_ID, status: 'FAILED' })
const args = mockPrisma.sprint.update.mock.calls[0][0]
expect(args.data.end_date).toBeInstanceOf(Date)
expect(args.data.completed_at).toBeUndefined()
})
it('auto-sets end_date when status → ARCHIVED, but does NOT set completed_at', async () => {
await handleUpdateSprint({ sprint_id: SPRINT_ID, status: 'ARCHIVED' })
const args = mockPrisma.sprint.update.mock.calls[0][0]
expect(args.data.end_date).toBeInstanceOf(Date)
expect(args.data.completed_at).toBeUndefined()
})
it('still sets completed_at when status → CLOSED even with explicit end_date', async () => {
await handleUpdateSprint({
sprint_id: SPRINT_ID,
status: 'CLOSED',
end_date: '2025-12-31',
})
const args = mockPrisma.sprint.update.mock.calls[0][0]
expect(args.data.end_date.toISOString().slice(0, 10)).toBe('2025-12-31')
expect(args.data.completed_at).toBeInstanceOf(Date)
})
it('does NOT auto-set end_date or completed_at when status → OPEN', async () => {
await handleUpdateSprint({ sprint_id: SPRINT_ID, status: 'OPEN' })
const args = mockPrisma.sprint.update.mock.calls[0][0]
expect(args.data.end_date).toBeUndefined()
expect(args.data.completed_at).toBeUndefined()
})
it('updates multiple fields at once', async () => {
await handleUpdateSprint({
sprint_id: SPRINT_ID,
sprint_goal: 'New goal',
start_date: '2026-05-15',
})
const args = mockPrisma.sprint.update.mock.calls[0][0]
expect(args.data.sprint_goal).toBe('New goal')
expect(args.data.start_date.toISOString().slice(0, 10)).toBe('2026-05-15')
expect(args.data.status).toBeUndefined()
expect(args.data.end_date).toBeUndefined()
})
it('returns error when sprint not found', async () => {
mockPrisma.sprint.findUnique.mockResolvedValue(null)
const result = await handleUpdateSprint({ sprint_id: SPRINT_ID, status: 'CLOSED' })
expect(mockPrisma.sprint.update).not.toHaveBeenCalled()
expect(getText(result)).toMatch(/not found/)
})
it('returns error when user cannot access sprint product', async () => {
mockUserCanAccessProduct.mockResolvedValue(false)
const result = await handleUpdateSprint({ sprint_id: SPRINT_ID, status: 'CLOSED' })
expect(mockPrisma.sprint.update).not.toHaveBeenCalled()
expect(getText(result)).toMatch(/not accessible/)
})
it('allows any status transition (no state-machine)', async () => {
await handleUpdateSprint({ sprint_id: SPRINT_ID, status: 'OPEN' })
expect(mockPrisma.sprint.update).toHaveBeenCalledTimes(1)
await handleUpdateSprint({ sprint_id: SPRINT_ID, status: 'CLOSED' })
expect(mockPrisma.sprint.update).toHaveBeenCalledTimes(2)
await handleUpdateSprint({ sprint_id: SPRINT_ID, status: 'OPEN' })
expect(mockPrisma.sprint.update).toHaveBeenCalledTimes(3)
})
})
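The CLOSED/FAILED/ARCHIVED rules these tests pin down can be condensed into a pure helper. This is a hypothetical reconstruction for illustration; `resolveSprintDates` and its shape are assumptions, not the handler's actual code:

```typescript
// Hypothetical sketch of the auto-timestamp rules asserted above:
// any terminal status auto-sets end_date (unless the caller passed one),
// but only CLOSED also sets completed_at.
type SprintStatus = 'OPEN' | 'CLOSED' | 'FAILED' | 'ARCHIVED'

function resolveSprintDates(
  status?: SprintStatus,
  explicitEndDate?: Date,
  now: Date = new Date(),
): { end_date?: Date; completed_at?: Date } {
  const data: { end_date?: Date; completed_at?: Date } = {}
  if (explicitEndDate) data.end_date = explicitEndDate
  if (status === 'CLOSED' || status === 'FAILED' || status === 'ARCHIVED') {
    // auto-fill only when the caller did not pass an explicit end_date
    if (data.end_date === undefined) data.end_date = now
    if (status === 'CLOSED') data.completed_at = now
  }
  return data
}
```

Keeping this status-to-dates mapping pure is what makes the before/after bounds in the first test cheap to assert.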



@@ -0,0 +1,199 @@
import { describe, it, expect, vi, beforeEach } from 'vitest'
vi.mock('../src/prisma.js', () => ({
prisma: {
sprintTaskExecution: {
findUnique: vi.fn(),
update: vi.fn(),
},
},
}))
vi.mock('../src/auth.js', async (importOriginal) => {
const original = await importOriginal<typeof import('../src/auth.js')>()
return { ...original, requireWriteAccess: vi.fn() }
})
import { prisma } from '../src/prisma.js'
import { requireWriteAccess } from '../src/auth.js'
import { registerUpdateTaskExecutionTool } from '../src/tools/update-task-execution.js'
import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
const mockPrisma = prisma as unknown as {
sprintTaskExecution: {
findUnique: ReturnType<typeof vi.fn>
update: ReturnType<typeof vi.fn>
}
}
const mockAuth = requireWriteAccess as ReturnType<typeof vi.fn>
const TOKEN_ID = 'tok-owner'
function makeServer() {
let handler: (args: Record<string, unknown>) => Promise<unknown>
const server = {
registerTool: vi.fn((_name: string, _meta: unknown, fn: typeof handler) => {
handler = fn
}),
call: (args: Record<string, unknown>) => handler(args),
}
registerUpdateTaskExecutionTool(server as unknown as McpServer)
return server
}
function execRecord(overrides: Record<string, unknown> = {}) {
return {
id: 'exec-1',
sprint_job_id: 'job-1',
sprint_job: {
claimed_by_token_id: TOKEN_ID,
status: 'CLAIMED',
kind: 'SPRINT_IMPLEMENTATION',
},
...overrides,
}
}
beforeEach(() => {
vi.clearAllMocks()
mockAuth.mockResolvedValue({
userId: 'u-1',
tokenId: TOKEN_ID,
username: 'agent',
isDemo: false,
})
})
describe('update_task_execution', () => {
it('rejects when execution not found', async () => {
mockPrisma.sprintTaskExecution.findUnique.mockResolvedValue(null)
const server = makeServer()
const result = (await server.call({
execution_id: 'missing',
status: 'RUNNING',
})) as { content: { text: string }[]; isError?: boolean }
expect(result.isError).toBe(true)
expect(result.content[0].text).toMatch(/not found/i)
})
it('rejects wrong job-kind', async () => {
mockPrisma.sprintTaskExecution.findUnique.mockResolvedValue(
execRecord({
sprint_job: { claimed_by_token_id: TOKEN_ID, status: 'CLAIMED', kind: 'TASK_IMPLEMENTATION' },
}),
)
const server = makeServer()
const result = (await server.call({
execution_id: 'exec-1',
status: 'RUNNING',
})) as { content: { text: string }[]; isError?: boolean }
expect(result.isError).toBe(true)
expect(result.content[0].text).toMatch(/SPRINT_IMPLEMENTATION/)
})
it('rejects when token does not own the job', async () => {
mockPrisma.sprintTaskExecution.findUnique.mockResolvedValue(
execRecord({
sprint_job: { claimed_by_token_id: 'other-token', status: 'CLAIMED', kind: 'SPRINT_IMPLEMENTATION' },
}),
)
const server = makeServer()
const result = (await server.call({
execution_id: 'exec-1',
status: 'RUNNING',
})) as { content: { text: string }[]; isError?: boolean }
expect(result.isError).toBe(true)
expect(result.content[0].text).toMatch(/Forbidden/)
})
it('rejects when job is in terminal state', async () => {
mockPrisma.sprintTaskExecution.findUnique.mockResolvedValue(
execRecord({
sprint_job: { claimed_by_token_id: TOKEN_ID, status: 'DONE', kind: 'SPRINT_IMPLEMENTATION' },
}),
)
const server = makeServer()
const result = (await server.call({
execution_id: 'exec-1',
status: 'DONE',
})) as { content: { text: string }[]; isError?: boolean }
expect(result.isError).toBe(true)
expect(result.content[0].text).toMatch(/terminal/)
})
it('writes started_at on RUNNING', async () => {
mockPrisma.sprintTaskExecution.findUnique.mockResolvedValue(execRecord())
mockPrisma.sprintTaskExecution.update.mockResolvedValue({
id: 'exec-1',
status: 'RUNNING',
base_sha: null,
head_sha: null,
verify_result: null,
verify_summary: null,
skip_reason: null,
started_at: new Date(),
finished_at: null,
})
const server = makeServer()
await server.call({ execution_id: 'exec-1', status: 'RUNNING' })
const updateCall = mockPrisma.sprintTaskExecution.update.mock.calls[0][0]
expect(updateCall.data.status).toBe('RUNNING')
expect(updateCall.data.started_at).toBeInstanceOf(Date)
expect(updateCall.data.finished_at).toBeUndefined()
})
it('writes finished_at on DONE/FAILED/SKIPPED', async () => {
mockPrisma.sprintTaskExecution.findUnique.mockResolvedValue(execRecord())
mockPrisma.sprintTaskExecution.update.mockResolvedValue({
id: 'exec-1',
status: 'DONE',
base_sha: 'sha-base',
head_sha: 'sha-head',
verify_result: null,
verify_summary: null,
skip_reason: null,
started_at: new Date(),
finished_at: new Date(),
})
const server = makeServer()
await server.call({
execution_id: 'exec-1',
status: 'DONE',
head_sha: 'sha-head',
})
const updateCall = mockPrisma.sprintTaskExecution.update.mock.calls[0][0]
expect(updateCall.data.status).toBe('DONE')
expect(updateCall.data.finished_at).toBeInstanceOf(Date)
expect(updateCall.data.head_sha).toBe('sha-head')
})
it('persists skip_reason on SKIPPED', async () => {
mockPrisma.sprintTaskExecution.findUnique.mockResolvedValue(execRecord())
mockPrisma.sprintTaskExecution.update.mockResolvedValue({
id: 'exec-1',
status: 'SKIPPED',
base_sha: null,
head_sha: null,
verify_result: null,
verify_summary: null,
skip_reason: 'no-op task',
started_at: null,
finished_at: new Date(),
})
const server = makeServer()
await server.call({
execution_id: 'exec-1',
status: 'SKIPPED',
skip_reason: 'no-op task',
})
const updateCall = mockPrisma.sprintTaskExecution.update.mock.calls[0][0]
expect(updateCall.data.skip_reason).toBe('no-op task')
expect(updateCall.data.finished_at).toBeInstanceOf(Date)
})
})
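The started_at/finished_at assertions above reduce to one rule. The sketch below is an illustrative reconstruction, not the tool's actual code: RUNNING stamps `started_at`, any terminal status stamps `finished_at`, and everything else leaves both unset:

```typescript
// Illustrative stamp logic (assumed, not the shipped implementation):
// RUNNING sets started_at; DONE/FAILED/SKIPPED set finished_at.
type ExecStatus = 'PENDING' | 'RUNNING' | 'DONE' | 'FAILED' | 'SKIPPED'

function lifecycleStamps(
  status: ExecStatus,
  now: Date = new Date(),
): { started_at?: Date; finished_at?: Date } {
  const terminal = status === 'DONE' || status === 'FAILED' || status === 'SKIPPED'
  return {
    started_at: status === 'RUNNING' ? now : undefined,
    finished_at: terminal ? now : undefined,
  }
}
```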


@@ -0,0 +1,216 @@
import { describe, it, expect, vi, beforeEach } from 'vitest'
vi.mock('../src/prisma.js', () => ({
prisma: {
sprintTaskExecution: {
findUnique: vi.fn(),
findFirst: vi.fn(),
update: vi.fn(),
},
},
}))
vi.mock('../src/auth.js', async (importOriginal) => {
const original = await importOriginal<typeof import('../src/auth.js')>()
return { ...original, requireWriteAccess: vi.fn() }
})
vi.mock('../src/verify/classify.js', () => ({
classifyDiffAgainstPlan: vi.fn(),
}))
vi.mock('node:child_process', () => ({
execFile: vi.fn(),
}))
import { prisma } from '../src/prisma.js'
import { requireWriteAccess } from '../src/auth.js'
import { classifyDiffAgainstPlan } from '../src/verify/classify.js'
import { execFile } from 'node:child_process'
import { registerVerifySprintTaskTool } from '../src/tools/verify-sprint-task.js'
import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
const mockPrisma = prisma as unknown as {
sprintTaskExecution: {
findUnique: ReturnType<typeof vi.fn>
findFirst: ReturnType<typeof vi.fn>
update: ReturnType<typeof vi.fn>
}
}
const mockAuth = requireWriteAccess as ReturnType<typeof vi.fn>
const mockClassify = classifyDiffAgainstPlan as ReturnType<typeof vi.fn>
const mockExecFile = execFile as unknown as ReturnType<typeof vi.fn>
const TOKEN_ID = 'tok-owner'
function makeServer() {
let handler: (args: Record<string, unknown>) => Promise<unknown>
const server = {
registerTool: vi.fn((_name: string, _meta: unknown, fn: typeof handler) => {
handler = fn
}),
call: (args: Record<string, unknown>) => handler(args),
}
registerVerifySprintTaskTool(server as unknown as McpServer)
return server
}
function stubGitDiff(stdout: string) {
// promisify(execFile) calls (cmd, args, opts, cb)
mockExecFile.mockImplementation(
(
_cmd: string,
_args: string[],
_opts: unknown,
cb: (err: null, result: { stdout: string; stderr: string }) => void,
) => {
cb(null, { stdout, stderr: '' })
},
)
}
function execRecord(overrides: Record<string, unknown> = {}) {
return {
id: 'exec-1',
sprint_job_id: 'job-1',
order: 0,
base_sha: 'sha-base',
plan_snapshot: 'frozen plan',
verify_required_snapshot: 'ALIGNED_OR_PARTIAL',
verify_only_snapshot: false,
sprint_job: {
claimed_by_token_id: TOKEN_ID,
status: 'CLAIMED',
kind: 'SPRINT_IMPLEMENTATION',
},
...overrides,
}
}
beforeEach(() => {
vi.clearAllMocks()
mockAuth.mockResolvedValue({
userId: 'u-1',
tokenId: TOKEN_ID,
username: 'agent',
isDemo: false,
})
})
describe('verify_sprint_task', () => {
it('rejects when execution not found', async () => {
mockPrisma.sprintTaskExecution.findUnique.mockResolvedValue(null)
const server = makeServer()
const result = (await server.call({
execution_id: 'missing',
worktree_path: '/tmp/wt',
})) as { content: { text: string }[]; isError?: boolean }
expect(result.isError).toBe(true)
expect(result.content[0].text).toMatch(/not found/i)
})
it('rejects wrong token', async () => {
mockPrisma.sprintTaskExecution.findUnique.mockResolvedValue(
execRecord({
sprint_job: { claimed_by_token_id: 'other', status: 'CLAIMED', kind: 'SPRINT_IMPLEMENTATION' },
}),
)
const server = makeServer()
const result = (await server.call({
execution_id: 'exec-1',
worktree_path: '/tmp/wt',
})) as { content: { text: string }[]; isError?: boolean }
expect(result.isError).toBe(true)
expect(result.content[0].text).toMatch(/Forbidden/)
})
it('PARTIAL with summary returns allowed_for_done=true under ALIGNED_OR_PARTIAL', async () => {
mockPrisma.sprintTaskExecution.findUnique.mockResolvedValue(execRecord())
stubGitDiff('diff --git a/x b/x\n+ change\n')
mockClassify.mockReturnValue({ result: 'PARTIAL', reasoning: 'extra files' })
const server = makeServer()
const result = (await server.call({
execution_id: 'exec-1',
worktree_path: '/tmp/wt',
summary: 'Refactor touched extra files for type narrowing.',
})) as { content: { text: string }[] }
const body = JSON.parse(result.content[0].text)
expect(body.result).toBe('partial')
expect(body.allowed_for_done).toBe(true)
expect(body.reason).toBeNull()
})
it('PARTIAL without summary returns allowed_for_done=false', async () => {
mockPrisma.sprintTaskExecution.findUnique.mockResolvedValue(execRecord())
stubGitDiff('diff --git a/x b/x\n')
mockClassify.mockReturnValue({ result: 'PARTIAL', reasoning: 'r' })
const server = makeServer()
const result = (await server.call({
execution_id: 'exec-1',
worktree_path: '/tmp/wt',
})) as { content: { text: string }[] }
const body = JSON.parse(result.content[0].text)
expect(body.result).toBe('partial')
expect(body.allowed_for_done).toBe(false)
expect(body.reason).toMatch(/summary/i)
})
it('DIVERGENT with strict ALIGNED returns allowed_for_done=false', async () => {
mockPrisma.sprintTaskExecution.findUnique.mockResolvedValue(
execRecord({ verify_required_snapshot: 'ALIGNED' }),
)
stubGitDiff('diff --git a/x b/x\n')
mockClassify.mockReturnValue({ result: 'DIVERGENT', reasoning: 'r' })
const server = makeServer()
const result = (await server.call({
execution_id: 'exec-1',
worktree_path: '/tmp/wt',
summary: 'Long enough summary describing the deviation rationale clearly.',
})) as { content: { text: string }[] }
const body = JSON.parse(result.content[0].text)
expect(body.allowed_for_done).toBe(false)
expect(body.reason).toMatch(/ALIGNED/)
})
it('auto-fills base_sha from previous DONE execution head_sha', async () => {
mockPrisma.sprintTaskExecution.findUnique.mockResolvedValue(
execRecord({ order: 1, base_sha: null }),
)
mockPrisma.sprintTaskExecution.findFirst.mockResolvedValue({
head_sha: 'prev-head-sha',
})
stubGitDiff('diff\n')
mockClassify.mockReturnValue({ result: 'ALIGNED', reasoning: 'ok' })
const server = makeServer()
const result = (await server.call({
execution_id: 'exec-1',
worktree_path: '/tmp/wt',
})) as { content: { text: string }[] }
const body = JSON.parse(result.content[0].text)
expect(body.base_sha).toBe('prev-head-sha')
// Persisted back to row
const updateCalls = mockPrisma.sprintTaskExecution.update.mock.calls
const baseShaPersist = updateCalls.find((c) => c[0].data.base_sha === 'prev-head-sha')
expect(baseShaPersist).toBeDefined()
})
it('errors when base_sha cannot be derived (no prior DONE)', async () => {
mockPrisma.sprintTaskExecution.findUnique.mockResolvedValue(
execRecord({ order: 2, base_sha: null }),
)
mockPrisma.sprintTaskExecution.findFirst.mockResolvedValue(null)
const server = makeServer()
const result = (await server.call({
execution_id: 'exec-1',
worktree_path: '/tmp/wt',
})) as { content: { text: string }[]; isError?: boolean }
expect(result.isError).toBe(true)
expect(result.content[0].text).toMatch(/MISSING_BASE_SHA/)
})
})


@@ -0,0 +1,59 @@
import { describe, it, expect } from 'vitest'
import { classifyDiffAgainstPlan } from '../../src/verify/classify.js'
describe('classify — delete-only commits (PBI-47 C5)', () => {
it('returns ALIGNED when the deleted path is in the plan', () => {
const diff = `diff --git a/app/todos/page.tsx b/app/todos/page.tsx
deleted file mode 100644
index 1234567..0000000
--- a/app/todos/page.tsx
+++ /dev/null
@@ -1,3 +0,0 @@
-export default function TodosPage() {
- return null
-}`
const plan = '- Remove `app/todos/page.tsx`\n- Remove related imports'
const result = classifyDiffAgainstPlan({ diff, plan })
expect(result.result).toBe('ALIGNED')
})
it('returns ALIGNED for multi-file delete-only when both paths in plan', () => {
const diff = `diff --git a/app/todos/page.tsx b/app/todos/page.tsx
deleted file mode 100644
--- a/app/todos/page.tsx
+++ /dev/null
@@ -1,2 +0,0 @@
-line 1
-line 2
diff --git a/components/todo-list.tsx b/components/todo-list.tsx
deleted file mode 100644
--- a/components/todo-list.tsx
+++ /dev/null
@@ -1,1 +0,0 @@
-line`
const plan = '- `app/todos/page.tsx`\n- `components/todo-list.tsx`'
const result = classifyDiffAgainstPlan({ diff, plan })
expect(result.result).toBe('ALIGNED')
})
it('returns PARTIAL when only some plan deletes appear in the diff', () => {
const diff = `diff --git a/a.ts b/a.ts
deleted file mode 100644
--- a/a.ts
+++ /dev/null
@@ -1,1 +0,0 @@
-x`
const plan = '- `a.ts`\n- `b.ts`' // b.ts missing
const result = classifyDiffAgainstPlan({ diff, plan })
expect(result.result).toBe('PARTIAL')
})
it('returns EMPTY for a no-op diff', () => {
const result = classifyDiffAgainstPlan({ diff: '', plan: 'irrelevant' })
expect(result.result).toBe('EMPTY')
})
})
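These fixtures lean on one property of delete-only diffs: the `+++` side is `/dev/null`, so the path only survives in the `--- a/<path>` header line. A minimal extraction sketch (not the real classifier):

```typescript
// Sketch: recover deleted file paths from a git diff. For a deleted file the
// header pair is "--- a/<path>" immediately followed by "+++ /dev/null".
function deletedPaths(diff: string): string[] {
  const paths: string[] = []
  const lines = diff.split('\n')
  for (let i = 1; i < lines.length; i++) {
    if (lines[i] === '+++ /dev/null' && lines[i - 1].startsWith('--- a/')) {
      paths.push(lines[i - 1].slice('--- a/'.length))
    }
  }
  return paths
}
```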


@@ -124,3 +124,92 @@ describe('classifyDiffAgainstPlan — DIVERGENT (scope creep)', () => {
expect(r.reasoning).toMatch(/extra/i)
})
})
// Helper for pure-delete diffs: +++ /dev/null means the file was removed
// entirely. The path then only survives in the "--- a/<path>" line.
function makeDeleteDiff(files: string[], linesPerFile = 5): string {
return files
.map(
(f) =>
`diff --git a/${f} b/${f}\ndeleted file mode 100644\n--- a/${f}\n+++ /dev/null\n` +
Array.from({ length: linesPerFile }, (_, i) => `-removed line ${i}`).join('\n'),
)
.join('\n')
}
describe('classifyDiffAgainstPlan — delete-only commits', () => {
it('recognizes a delete-only diff (no +++ b/, only --- a/) as ALIGNED for a matching plan', () => {
const plan = 'Remove `src/old-helper.ts`; no longer used.'
const diff = makeDeleteDiff(['src/old-helper.ts'])
const r = classifyDiffAgainstPlan({ diff, plan })
expect(r.result).toBe('ALIGNED')
})
it('returns PARTIAL when the plan names more paths than were removed', () => {
const plan = 'Remove `src/a.ts` and `src/b.ts`.'
const diff = makeDeleteDiff(['src/a.ts'])
const r = classifyDiffAgainstPlan({ diff, plan })
expect(r.result).toBe('PARTIAL')
})
it('returns ALIGNED for a delete-only diff without a plan baseline', () => {
const diff = makeDeleteDiff(['src/old.ts'])
const r = classifyDiffAgainstPlan({ diff, plan: null })
expect(r.result).toBe('ALIGNED')
})
it('still returns EMPTY for a truly empty diff', () => {
const r = classifyDiffAgainstPlan({ diff: '', plan: 'Remove `src/x.ts`.' })
expect(r.result).toBe('EMPTY')
})
})
// Pseudo-paths in plans (code snippets, attribute syntax, ellipses) must not
// count as plan paths; otherwise the result is PARTIAL even though the work
// is fully done. Regression guard for the T-815 incident (sprint
// cmoyiu4yd000zf917acq9twtr, 2026-05-09).
describe('classifyDiffAgainstPlan — plan met pseudo-paths', () => {
it('ignores `data-debug-label="..."` as a pseudo-path and classifies ALIGNED', () => {
const plan = [
'Remove all occurrences of `data-debug-label="..."` from:',
'',
'- `app/components/shared/status-bar.tsx`',
'- `app/components/shared/header.tsx`',
].join('\n')
const diff = makeDiff([
'app/components/shared/status-bar.tsx',
'app/components/shared/header.tsx',
])
const r = classifyDiffAgainstPlan({ diff, plan })
expect(r.result).toBe('ALIGNED')
})
it('ignores ellipsis tokens (three or more dots) as paths', () => {
const plan = 'Refactor `foo(...)` to `bar()`. Files: `src/a.ts`.'
const diff = makeDiff(['src/a.ts'])
const r = classifyDiffAgainstPlan({ diff, plan })
expect(r.result).toBe('ALIGNED')
})
it('ignores tokens with operators/quotes as paths', () => {
const plan = 'Change `props={x: 1}` and `useState<string>()` in `src/c.tsx`.'
const diff = makeDiff(['src/c.tsx'])
const r = classifyDiffAgainstPlan({ diff, plan })
expect(r.result).toBe('ALIGNED')
})
it('accepts package.json and other extension-only paths', () => {
const plan = 'Update `package.json` and `tsconfig.json`.'
const diff = makeDiff(['package.json', 'tsconfig.json'])
const r = classifyDiffAgainstPlan({ diff, plan })
expect(r.result).toBe('ALIGNED')
})
it('still returns PARTIAL when a real plan path is missing', () => {
const plan = 'Change `src/foo.ts` and `src/bar.ts`. Remove `data-x="..."`.'
const diff = makeDiff(['src/foo.ts'])
const r = classifyDiffAgainstPlan({ diff, plan })
expect(r.result).toBe('PARTIAL')
expect(r.reasoning).toMatch(/bar\.ts/)
})
})
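One way to realize the filtering these tests demand is a predicate over backticked plan tokens. This is a guess at the rule implied by the cases above, not the shipped implementation:

```typescript
// Hypothetical predicate: a backticked plan token counts as a file path only
// if it is free of ellipses, quotes, and operator/code syntax, and either
// carries a dot extension or contains a slash.
function looksLikePlanPath(token: string): boolean {
  if (/\.{3,}/.test(token)) return false // ellipsis tokens like `foo(...)`
  if (/["'=<>(){}]/.test(token)) return false // attribute or code syntax
  return /^[\w@./-]+\.\w+$/.test(token) || token.includes('/')
}
```

Under this rule `package.json` and `app/components/shared/header.tsx` pass, while `data-debug-label="..."`, `foo(...)`, `props={x: 1}`, and `useState<string>()` are all rejected before path matching runs.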


@@ -0,0 +1,55 @@
import { describe, it, expect, beforeAll, afterAll } from 'vitest'
import * as fs from 'node:fs/promises'
import * as path from 'node:path'
import * as os from 'node:os'
import { execFile } from 'node:child_process'
import { promisify } from 'node:util'
import { getDiffInWorktree } from '../../src/tools/verify-task-against-plan.js'
const exec = promisify(execFile)
describe('verify scope per-job (PBI-47 P0)', () => {
let tmpRepo: string
let baseSha: string
let task1Sha: string
beforeAll(async () => {
tmpRepo = await fs.mkdtemp(path.join(os.tmpdir(), 'verify-scope-'))
await exec('git', ['init', '-b', 'main'], { cwd: tmpRepo })
await exec('git', ['config', 'user.email', 't@t.local'], { cwd: tmpRepo })
await exec('git', ['config', 'user.name', 'Test'], { cwd: tmpRepo })
await fs.writeFile(path.join(tmpRepo, 'README.md'), '# init\n')
await exec('git', ['add', '-A'], { cwd: tmpRepo })
await exec('git', ['commit', '-m', 'init'], { cwd: tmpRepo })
const baseRev = await exec('git', ['rev-parse', 'HEAD'], { cwd: tmpRepo })
baseSha = baseRev.stdout.trim()
// Simulate task 1: add a.ts
await fs.writeFile(path.join(tmpRepo, 'a.ts'), 'task 1\n')
await exec('git', ['add', '-A'], { cwd: tmpRepo })
await exec('git', ['commit', '-m', 'task 1'], { cwd: tmpRepo })
const t1Rev = await exec('git', ['rev-parse', 'HEAD'], { cwd: tmpRepo })
task1Sha = t1Rev.stdout.trim()
// Simulate task 2: add b.ts
await fs.writeFile(path.join(tmpRepo, 'b.ts'), 'task 2\n')
await exec('git', ['add', '-A'], { cwd: tmpRepo })
await exec('git', ['commit', '-m', 'task 2'], { cwd: tmpRepo })
})
afterAll(async () => {
await fs.rm(tmpRepo, { recursive: true, force: true })
})
it('diff vs base = origin/main → both task 1 and task 2 visible', async () => {
const diff = await getDiffInWorktree(tmpRepo, baseSha)
expect(diff).toContain('a.ts')
expect(diff).toContain('b.ts')
})
it('diff vs base = task1_sha → only task 2 visible', async () => {
const diff = await getDiffInWorktree(tmpRepo, task1Sha)
expect(diff).not.toContain('a.ts')
expect(diff).toContain('b.ts')
})
})
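The shape of `getDiffInWorktree` assumed by this test (worktree path plus base SHA) can be sketched as a thin `git diff` wrapper. The real signature lives in `verify-task-against-plan.ts` and may differ; `diffAgainstBase` here is an illustrative stand-in:

```typescript
import { execFile } from 'node:child_process'
import { promisify } from 'node:util'

const run = promisify(execFile)

// Diff the worktree's HEAD against a given base SHA, so a task's verify step
// only sees commits made after that base. Sketch under assumed semantics,
// not the project's actual helper.
async function diffAgainstBase(worktree: string, baseSha: string): Promise<string> {
  const { stdout } = await run('git', ['diff', `${baseSha}..HEAD`], { cwd: worktree })
  return stdout
}
```

Passing `task1_sha` instead of the original base is exactly what scopes verification per job: commits up to and including task 1 fall out of the diff.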


@@ -0,0 +1,91 @@
import { describe, it, expect, vi, beforeEach } from 'vitest'
vi.mock('../src/prisma.js', () => ({
prisma: {
claudeJob: {
findUnique: vi.fn(),
findFirst: vi.fn(),
},
},
}))
import { prisma } from '../src/prisma.js'
import { resolveBranchForJob } from '../src/tools/wait-for-job.js'
const mockPrisma = prisma as unknown as {
claudeJob: {
findUnique: ReturnType<typeof vi.fn>
findFirst: ReturnType<typeof vi.fn>
}
}
beforeEach(() => {
vi.clearAllMocks()
})
describe('resolveBranchForJob — sprint-aware', () => {
it('SPRINT mode: picks feat/sprint-<id-suffix> and marks reused=false for the first task', async () => {
mockPrisma.claudeJob.findUnique.mockResolvedValue({
sprint_run_id: 'run-cuid-12345678',
sprint_run: { id: 'run-cuid-12345678', pr_strategy: 'SPRINT' },
})
mockPrisma.claudeJob.findFirst.mockResolvedValue(null)
const result = await resolveBranchForJob('job-1', 'story-anything')
expect(result.branchName).toBe('feat/sprint-12345678')
expect(result.reused).toBe(false)
})
it('SPRINT mode: marks reused=true when a sibling already uses the branch', async () => {
mockPrisma.claudeJob.findUnique.mockResolvedValue({
sprint_run_id: 'run-cuid-12345678',
sprint_run: { id: 'run-cuid-12345678', pr_strategy: 'SPRINT' },
})
mockPrisma.claudeJob.findFirst.mockResolvedValue({ branch: 'feat/sprint-12345678' })
const result = await resolveBranchForJob('job-2', 'story-anything')
expect(result.branchName).toBe('feat/sprint-12345678')
expect(result.reused).toBe(true)
})
it('STORY mode (sprint flow): falls back to the story branch via the legacy path', async () => {
mockPrisma.claudeJob.findUnique.mockResolvedValue({
sprint_run_id: 'run-cuid-12345678',
sprint_run: { id: 'run-cuid-12345678', pr_strategy: 'STORY' },
})
mockPrisma.claudeJob.findFirst.mockResolvedValue(null)
const result = await resolveBranchForJob('job-1', 'story-cuid-87654321')
expect(result.branchName).toBe('feat/story-87654321')
expect(result.reused).toBe(false)
})
it('Legacy (no sprint_run): existing behavior, feat/story-<id-suffix>', async () => {
mockPrisma.claudeJob.findUnique.mockResolvedValue({
sprint_run_id: null,
sprint_run: null,
})
mockPrisma.claudeJob.findFirst.mockResolvedValue(null)
const result = await resolveBranchForJob('job-1', 'story-cuid-87654321')
expect(result.branchName).toBe('feat/story-87654321')
expect(result.reused).toBe(false)
})
it('Legacy: reuses the branch when a sibling job in the same story already has one', async () => {
mockPrisma.claudeJob.findUnique.mockResolvedValue({
sprint_run_id: null,
sprint_run: null,
})
mockPrisma.claudeJob.findFirst.mockResolvedValue({ branch: 'feat/story-87654321' })
const result = await resolveBranchForJob('job-2', 'story-cuid-87654321')
expect(result.branchName).toBe('feat/story-87654321')
expect(result.reused).toBe(true)
})
})
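The naming rule these cases encode condenses to one hypothetical helper. Names and shape are illustrative; the real `resolveBranchForJob` also handles sibling branch reuse via `findFirst`, which is omitted here:

```typescript
// Sketch of the branch-name derivation only: SPRINT strategy keys the branch
// on the last 8 characters of the sprint run id; everything else (STORY
// strategy or legacy jobs without a sprint_run) keys on the story id.
function branchNameFor(job: {
  sprintRunId: string | null
  prStrategy: 'SPRINT' | 'STORY' | null
  storyId: string
}): string {
  if (job.sprintRunId && job.prStrategy === 'SPRINT') {
    return `feat/sprint-${job.sprintRunId.slice(-8)}`
  }
  return `feat/story-${job.storyId.slice(-8)}`
}
```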


@@ -6,7 +6,7 @@ import * as fs from 'node:fs/promises'
 vi.mock('../src/prisma.js', () => ({
   prisma: {
     $executeRaw: vi.fn(),
-    claudeJob: { findFirst: vi.fn() },
+    claudeJob: { findFirst: vi.fn(), findUnique: vi.fn(), update: vi.fn() },
     product: { findUnique: vi.fn() },
   },
 }))
@@ -21,13 +21,15 @@ import { resolveRepoRoot, rollbackClaim, attachWorktreeToJob } from '../src/tool
 const mockPrisma = prisma as unknown as {
   $executeRaw: ReturnType<typeof vi.fn>
-  claudeJob: { findFirst: ReturnType<typeof vi.fn> }
+  claudeJob: { findFirst: ReturnType<typeof vi.fn>; findUnique: ReturnType<typeof vi.fn>; update: ReturnType<typeof vi.fn> }
   product: { findUnique: ReturnType<typeof vi.fn> }
 }
 const mockCreateWorktree = createWorktreeForJob as ReturnType<typeof vi.fn>
 beforeEach(() => {
   vi.clearAllMocks()
+  // Default: legacy job without sprint_run (old flow).
+  mockPrisma.claudeJob.findUnique.mockResolvedValue({ sprint_run_id: null, sprint_run: null })
 })
 describe('resolveRepoRoot', () => {

package-lock.json generated

@@ -1,19 +1,21 @@
 {
   "name": "scrum4me-mcp",
-  "version": "0.5.0",
+  "version": "0.8.0",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "scrum4me-mcp",
-      "version": "0.5.0",
+      "version": "0.8.0",
       "hasInstallScript": true,
       "license": "MIT",
       "dependencies": {
         "@modelcontextprotocol/sdk": "^1.29.0",
         "@prisma/adapter-pg": "^7.8.0",
         "@prisma/client": "^7.8.0",
+        "@types/proper-lockfile": "^4.1.4",
         "pg": "^8.13.1",
+        "proper-lockfile": "^4.1.2",
         "yaml": "^2.8.4",
         "zod": "^4.0.0"
       },
@@ -1327,6 +1329,15 @@
         "pg-types": "^2.2.0"
       }
     },
+    "node_modules/@types/proper-lockfile": {
+      "version": "4.1.4",
+      "resolved": "https://registry.npmjs.org/@types/proper-lockfile/-/proper-lockfile-4.1.4.tgz",
+      "integrity": "sha512-uo2ABllncSqg9F1D4nugVl9v93RmjxF6LJzQLMLDdPaXCUIDPeOJ21Gbqi43xNKzBi/WQ0Q0dICqufzQbMjipQ==",
+      "license": "MIT",
+      "dependencies": {
+        "@types/retry": "*"
+      }
+    },
     "node_modules/@types/react": {
       "version": "19.2.14",
       "resolved": "https://registry.npmjs.org/@types/react/-/react-19.2.14.tgz",
@@ -1338,6 +1349,12 @@
         "csstype": "^3.2.2"
       }
     },
+    "node_modules/@types/retry": {
+      "version": "0.12.5",
+      "resolved": "https://registry.npmjs.org/@types/retry/-/retry-0.12.5.tgz",
+      "integrity": "sha512-3xSjTp3v03X/lSQLkczaN9UIEwJMoMCA1+Nb5HfbJEQWogdeQIyVtTvxPXDQjZ5zws8rFQfVfRdz03ARihPJgw==",
+      "license": "MIT"
+    },
     "node_modules/@vitest/expect": {
       "version": "4.1.5",
       "resolved": "https://registry.npmjs.org/@vitest/expect/-/expect-4.1.5.tgz",
@@ -2332,7 +2349,6 @@
       "version": "4.2.11",
       "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.11.tgz",
       "integrity": "sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ==",
-      "devOptional": true,
       "license": "ISC"
     },
     "node_modules/grammex": {
@@ -3277,7 +3293,6 @@
       "version": "4.1.2",
       "resolved": "https://registry.npmjs.org/proper-lockfile/-/proper-lockfile-4.1.2.tgz",
       "integrity": "sha512-TjNPblN4BwAWMXU8s9AEz4JmQxnD1NNL7bNOY/AKUzyamc379FWASUhc/K1pL2noVb+XmZKLL68cjzLsiOAMaA==",
-      "devOptional": true,
       "license": "MIT",
       "dependencies": {
         "graceful-fs": "^4.2.4",
@@ -3289,7 +3304,6 @@
       "version": "3.0.7",
       "resolved": "https://registry.npmjs.org/signal-exit/-/signal-exit-3.0.7.tgz",
       "integrity": "sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ==",
-      "devOptional": true,
       "license": "ISC"
     },
     "node_modules/proxy-addr": {
@@ -3444,7 +3458,6 @@
       "version": "0.12.0",
       "resolved": "https://registry.npmjs.org/retry/-/retry-0.12.0.tgz",
       "integrity": "sha512-9LkiTwjUh6rT555DtE9rTX+BKByPfrMzEAtnlEtdEwr3Nkffwiihqe2bWADg+OQRjt9gl6ICdmB/ZFDCGAtSow==",
-      "devOptional": true,
       "license": "MIT",
       "engines": {
         "node": ">= 4"


@@ -1,6 +1,6 @@
 {
   "name": "scrum4me-mcp",
-  "version": "0.7.0",
+  "version": "0.8.0",
   "description": "MCP server for Scrum4Me — exposes dev-flow tools and prompts via the Model Context Protocol",
   "type": "module",
   "bin": {
@@ -32,7 +32,9 @@
     "@modelcontextprotocol/sdk": "^1.29.0",
     "@prisma/adapter-pg": "^7.8.0",
     "@prisma/client": "^7.8.0",
+    "@types/proper-lockfile": "^4.1.4",
     "pg": "^8.13.1",
+    "proper-lockfile": "^4.1.2",
     "yaml": "^2.8.4",
     "zod": "^4.0.0"
   },


@ -2,7 +2,6 @@ generator client {
provider = "prisma-client-js" provider = "prisma-client-js"
} }
datasource db { datasource db {
provider = "postgresql" provider = "postgresql"
} }
@ -18,11 +17,13 @@ enum StoryStatus {
OPEN OPEN
IN_SPRINT IN_SPRINT
DONE DONE
FAILED
} }
enum PbiStatus { enum PbiStatus {
READY READY
BLOCKED BLOCKED
FAILED
DONE DONE
} }
@ -54,6 +55,8 @@ enum TaskStatus {
IN_PROGRESS IN_PROGRESS
REVIEW REVIEW
DONE DONE
FAILED
EXCLUDED
} }
enum LogType { enum LogType {
@ -68,8 +71,25 @@ enum TestStatus {
} }
enum SprintStatus { enum SprintStatus {
ACTIVE OPEN
COMPLETED CLOSED
ARCHIVED
FAILED
}
enum SprintRunStatus {
QUEUED
RUNNING
PAUSED
DONE
FAILED
CANCELLED
}
enum PrStrategy {
SPRINT
STORY
SPRINT_BATCH
} }
enum IdeaStatus { enum IdeaStatus {
@ -80,6 +100,9 @@ enum IdeaStatus {
PLANNING PLANNING
PLAN_FAILED PLAN_FAILED
PLAN_READY PLAN_READY
REVIEWING_PLAN
PLAN_REVIEW_FAILED
PLAN_REVIEWED
PLANNED PLANNED
} }
@ -87,7 +110,17 @@ enum ClaudeJobKind {
TASK_IMPLEMENTATION TASK_IMPLEMENTATION
IDEA_GRILL IDEA_GRILL
IDEA_MAKE_PLAN IDEA_MAKE_PLAN
IDEA_REVIEW_PLAN
PLAN_CHAT PLAN_CHAT
SPRINT_IMPLEMENTATION
}
enum SprintTaskExecutionStatus {
PENDING
RUNNING
DONE
FAILED
SKIPPED
} }
enum IdeaLogType { enum IdeaLogType {
@ -95,6 +128,7 @@ enum IdeaLogType {
NOTE NOTE
GRILL_RESULT GRILL_RESULT
PLAN_RESULT PLAN_RESULT
PLAN_REVIEW_RESULT
STATUS_CHANGE STATUS_CHANGE
JOB_EVENT JOB_EVENT
} }
@@ -105,33 +139,35 @@ enum UserQuestionStatus {
}
model User {
id String @id @default(cuid())
username String @unique
email String? @unique
password_hash String
is_demo Boolean @default(false)
bio String? @db.VarChar(160)
bio_detail String? @db.VarChar(2000)
must_reset_password Boolean @default(false)
avatar_data Bytes?
active_product_id String?
active_product Product? @relation("UserActiveProduct", fields: [active_product_id], references: [id], onDelete: SetNull)
idea_code_counter Int @default(0)
min_quota_pct Int @default(20)
settings Json @default("{}")
created_at DateTime @default(now())
updated_at DateTime @updatedAt
roles UserRole[]
api_tokens ApiToken[]
products Product[]
ideas Idea[]
product_members ProductMember[]
assigned_stories Story[] @relation("StoryAssignee")
login_pairings LoginPairing[]
asked_questions ClaudeQuestion[] @relation("ClaudeQuestionAsker")
answered_questions ClaudeQuestion[] @relation("ClaudeQuestionAnswerer")
claude_jobs ClaudeJob[]
claude_workers ClaudeWorker[]
started_sprint_runs SprintRun[] @relation("SprintRunStartedBy")
push_subscriptions PushSubscription[]
@@index([active_product_id])
@@map("users")
@@ -172,6 +208,10 @@ model Product {
repo_url String?
definition_of_done String
auto_pr Boolean @default(false)
pr_strategy PrStrategy @default(SPRINT)
preferred_model String?
thinking_budget_default Int?
preferred_permission_mode String?
archived Boolean @default(false)
created_at DateTime @default(now())
updated_at DateTime @updatedAt
@@ -179,7 +219,6 @@ model Product {
sprints Sprint[]
stories Story[]
tasks Task[]
members ProductMember[]
active_for_users User[] @relation("UserActiveProduct")
claude_questions ClaudeQuestion[]
@@ -267,45 +306,79 @@ model Sprint {
id String @id @default(cuid())
product Product @relation(fields: [product_id], references: [id], onDelete: Cascade)
product_id String
code String @db.VarChar(30)
sprint_goal String
status SprintStatus @default(OPEN)
start_date DateTime? @db.Date
end_date DateTime? @db.Date
created_at DateTime @default(now())
completed_at DateTime?
stories Story[]
tasks Task[]
sprint_runs SprintRun[]
@@unique([product_id, code])
@@index([product_id, status])
@@map("sprints")
}
model SprintRun {
id String @id @default(cuid())
sprint Sprint @relation(fields: [sprint_id], references: [id], onDelete: Cascade)
sprint_id String
started_by User @relation("SprintRunStartedBy", fields: [started_by_id], references: [id])
started_by_id String
status SprintRunStatus @default(QUEUED)
pr_strategy PrStrategy
branch String?
pr_url String?
started_at DateTime?
finished_at DateTime?
failure_reason String?
failed_task Task? @relation("SprintRunFailedTask", fields: [failed_task_id], references: [id], onDelete: SetNull)
failed_task_id String?
pause_context Json?
previous_run_id String? @unique
previous_run SprintRun? @relation("SprintRunChain", fields: [previous_run_id], references: [id], onDelete: SetNull)
next_run SprintRun? @relation("SprintRunChain")
created_at DateTime @default(now())
updated_at DateTime @updatedAt
jobs ClaudeJob[]
@@index([sprint_id, status])
@@index([started_by_id, status])
@@map("sprint_runs")
}
model Task {
id String @id @default(cuid())
story Story @relation(fields: [story_id], references: [id], onDelete: Cascade)
story_id String
product Product @relation(fields: [product_id], references: [id], onDelete: Cascade)
product_id String
sprint Sprint? @relation(fields: [sprint_id], references: [id])
sprint_id String?
code String @db.VarChar(30)
title String
description String?
implementation_plan String?
priority Int
sort_order Float
status TaskStatus @default(TO_DO)
verify_only Boolean @default(false)
verify_required VerifyRequired @default(ALIGNED_OR_PARTIAL)
requires_opus Boolean @default(false)
// Override product.repo_url for branch/worktree/push purposes. Set when
// a task targets a different repo than its parent product (e.g. an
// MCP-server task tracked under the main product's PBI). Falls back to
// product.repo_url when null.
repo_url String?
created_at DateTime @default(now())
updated_at DateTime @updatedAt
claude_questions ClaudeQuestion[]
claude_jobs ClaudeJob[]
sprint_run_failures SprintRun[] @relation("SprintRunFailedTask")
sprint_task_executions SprintTaskExecution[]
@@unique([product_id, code])
@@index([story_id, priority, sort_order])
@@ -315,18 +388,20 @@ model Task {
}
model ClaudeJob {
id String @id @default(cuid())
user User @relation(fields: [user_id], references: [id], onDelete: Cascade)
user_id String
product Product @relation(fields: [product_id], references: [id], onDelete: Cascade)
product_id String
task Task? @relation(fields: [task_id], references: [id], onDelete: Cascade)
task_id String?
idea Idea? @relation(fields: [idea_id], references: [id], onDelete: Cascade)
idea_id String?
sprint_run SprintRun? @relation(fields: [sprint_run_id], references: [id], onDelete: SetNull)
sprint_run_id String?
kind ClaudeJobKind @default(TASK_IMPLEMENTATION)
status ClaudeJobStatus @default(QUEUED)
claimed_by_token ApiToken? @relation(fields: [claimed_by_token_id], references: [id], onDelete: SetNull)
claimed_by_token_id String?
claimed_at DateTime?
started_at DateTime?
@@ -338,43 +413,84 @@ model ClaudeJob {
output_tokens Int?
cache_read_tokens Int?
cache_write_tokens Int?
requested_model String?
requested_thinking_budget Int?
requested_permission_mode String?
actual_thinking_tokens Int?
plan_snapshot String?
base_sha String?
head_sha String?
branch String?
pr_url String?
summary String?
error String?
retry_count Int @default(0)
lease_until DateTime?
task_executions SprintTaskExecution[] @relation("SprintJobExecutions")
created_at DateTime @default(now())
updated_at DateTime @updatedAt
@@index([user_id, status])
@@index([task_id, status])
@@index([idea_id, status])
@@index([sprint_run_id, status])
@@index([status, claimed_at])
@@index([status, finished_at])
@@index([status, lease_until])
@@map("claude_jobs")
}
// PBI-50: frozen scope snapshot per SPRINT_IMPLEMENTATION claim. At claim
// time, one PENDING record is created for every TO_DO task in scope, with
// implementation_plan + verify_required snapshotted. Worker and gate work
// exclusively on these rows; later changes to Task have no effect on the
// running batch.
model SprintTaskExecution {
id String @id @default(cuid())
sprint_job ClaudeJob @relation("SprintJobExecutions", fields: [sprint_job_id], references: [id], onDelete: Cascade)
sprint_job_id String
task Task @relation(fields: [task_id], references: [id], onDelete: Cascade)
task_id String
order Int
plan_snapshot String @db.Text
verify_required_snapshot VerifyRequired
verify_only_snapshot Boolean @default(false)
base_sha String?
head_sha String?
status SprintTaskExecutionStatus @default(PENDING)
verify_result VerifyResult?
verify_summary String? @db.Text
skip_reason String? @db.Text
started_at DateTime?
finished_at DateTime?
created_at DateTime @default(now())
updated_at DateTime @updatedAt
@@unique([sprint_job_id, task_id])
@@index([sprint_job_id, order])
@@map("sprint_task_executions")
}
model ModelPrice {
id String @id @default(cuid())
model_id String @unique
input_price_per_1m Decimal @db.Decimal(12, 6)
output_price_per_1m Decimal @db.Decimal(12, 6)
cache_read_price_per_1m Decimal @db.Decimal(12, 6)
cache_write_price_per_1m Decimal @db.Decimal(12, 6)
currency String @default("USD")
created_at DateTime @default(now())
updated_at DateTime @updatedAt
@@map("model_prices")
}
model ClaudeWorker {
id String @id @default(cuid())
user User @relation(fields: [user_id], references: [id], onDelete: Cascade)
user_id String
token ApiToken @relation(fields: [token_id], references: [id], onDelete: Cascade)
token_id String
product_id String?
started_at DateTime @default(now())
last_seen_at DateTime @default(now())
@@ -399,41 +515,25 @@ model ProductMember {
@@map("product_members")
}
model Idea {
id String @id @default(cuid())
user User @relation(fields: [user_id], references: [id], onDelete: Cascade)
user_id String
product Product? @relation(fields: [product_id], references: [id], onDelete: SetNull)
product_id String?
code String @db.VarChar(30)
title String
description String? @db.VarChar(4000)
grill_md String? @db.Text
plan_md String? @db.Text
plan_review_log Json? // ReviewLog from orchestrator (all rounds, convergence metrics, approval status)
reviewed_at DateTime? // When last reviewed
pbi Pbi? @relation(fields: [pbi_id], references: [id], onDelete: SetNull)
pbi_id String? @unique
status IdeaStatus @default(DRAFT)
archived Boolean @default(false)
created_at DateTime @default(now())
updated_at DateTime @updatedAt
questions ClaudeQuestion[]
jobs ClaudeJob[]
@@ -453,8 +553,8 @@ model IdeaProduct {
product_id String
created_at DateTime @default(now())
idea Idea @relation(fields: [idea_id], references: [id], onDelete: Cascade)
product Product @relation(fields: [product_id], references: [id], onDelete: Cascade)
@@unique([idea_id, product_id])
@@index([product_id])
@@ -484,7 +584,7 @@ model UserQuestion {
created_at DateTime @default(now())
updated_at DateTime @updatedAt
idea Idea @relation(fields: [idea_id], references: [id], onDelete: Cascade)
@@index([idea_id, status])
@@index([user_id])
@@ -538,3 +638,18 @@ model ClaudeQuestion {
@@index([status, expires_at])
@@map("claude_questions")
}
model PushSubscription {
id String @id @default(cuid())
user User @relation(fields: [user_id], references: [id], onDelete: Cascade)
user_id String
endpoint String @unique
p256dh String
auth String
user_agent String?
created_at DateTime @default(now())
last_used_at DateTime @default(now())
@@index([user_id])
@@map("push_subscriptions")
}
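The frozen-scope semantics of `SprintTaskExecution` above can be sketched as a pure helper. This is a hypothetical illustration, not code from this repo: the `TaskRow`, `ExecutionDraft`, and `buildExecutionDrafts` names are invented for the example.

```typescript
type TaskRow = {
  id: string
  status: 'TO_DO' | 'IN_PROGRESS' | 'REVIEW' | 'DONE' | 'FAILED' | 'EXCLUDED'
  implementation_plan: string | null
  verify_required: string
  verify_only: boolean
  sort_order: number
}

type ExecutionDraft = {
  task_id: string
  order: number
  plan_snapshot: string
  verify_required_snapshot: string
  verify_only_snapshot: boolean
  status: 'PENDING'
}

// One PENDING row per TO_DO task in scope, ordered by sort_order. Everything
// the worker needs is copied into the row, so later edits to Task no longer
// influence the running batch.
function buildExecutionDrafts(tasks: TaskRow[]): ExecutionDraft[] {
  return tasks
    .filter((t) => t.status === 'TO_DO')
    .sort((a, b) => a.sort_order - b.sort_order)
    .map((t, i) => ({
      task_id: t.id,
      order: i,
      plan_snapshot: t.implementation_plan ?? '',
      verify_required_snapshot: t.verify_required,
      verify_only_snapshot: t.verify_only,
      status: 'PENDING' as const,
    }))
}

const drafts = buildExecutionDrafts([
  { id: 'b', status: 'TO_DO', implementation_plan: 'plan-b', verify_required: 'ALIGNED_OR_PARTIAL', verify_only: false, sort_order: 2 },
  { id: 'a', status: 'TO_DO', implementation_plan: null, verify_required: 'ALIGNED_OR_PARTIAL', verify_only: true, sort_order: 1 },
  { id: 'c', status: 'DONE', implementation_plan: 'x', verify_required: 'ALIGNED_OR_PARTIAL', verify_only: false, sort_order: 0 },
])
```

Note how the DONE task is excluded and the missing plan falls back to an empty snapshot; the `@@unique([sprint_job_id, task_id])` constraint would then make a double claim for the same task fail loudly.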

src/cancel/pbi-cascade.ts Normal file

@@ -0,0 +1,253 @@
// PBI fail-cascade: when a TASK_IMPLEMENTATION job goes FAILED, we cancel
// all queued/claimed/running siblings within the same PBI and undo
// previously pushed commits via a PR close or an auto-revert PR.
// Idempotent and non-blocking: every error is logged in the error field
// of the original failed job and does not stop the cascade.
import { prisma } from '../prisma.js'
import { resolveRepoRoot } from '../tools/wait-for-job.js'
import { removeWorktreeForJob } from '../git/worktree.js'
import {
closePullRequest,
createRevertPullRequest,
getPullRequestState,
} from '../git/pr.js'
import { deleteRemoteBranch } from '../git/push.js'
import { releaseLocksOnTerminal } from '../git/job-locks.js'
export type CascadeOutcome = {
cancelled_job_ids: string[]
closed_prs: string[]
reverted_prs: { original: string; revertPr: string }[]
deleted_branches: string[]
warnings: string[]
}
const EMPTY: CascadeOutcome = {
cancelled_job_ids: [],
closed_prs: [],
reverted_prs: [],
deleted_branches: [],
warnings: [],
}
// Public entry. Always returns; never throws.
export async function cancelPbiOnFailure(failedJobId: string): Promise<CascadeOutcome> {
try {
return await runCascade(failedJobId)
} catch (err) {
console.warn(`[pbi-cascade] unexpected error for failedJob=${failedJobId}:`, err)
return { ...EMPTY, warnings: [`unexpected: ${(err as Error).message}`] }
}
}
async function runCascade(failedJobId: string): Promise<CascadeOutcome> {
const failedJob = await prisma.claudeJob.findUnique({
where: { id: failedJobId },
select: {
id: true,
kind: true,
status: true,
product_id: true,
task_id: true,
branch: true,
pr_url: true,
task: {
select: {
story: {
select: {
pbi: { select: { id: true, code: true } },
},
},
},
},
},
})
if (!failedJob) return EMPTY
if (failedJob.kind !== 'TASK_IMPLEMENTATION') return EMPTY
// SKIPPED is a no-op exit (see update_job_status). No cascade to siblings.
if (failedJob.status === 'SKIPPED') return EMPTY
const pbi = failedJob.task?.story?.pbi
if (!pbi) return EMPTY
// 1. Atomic cascade: select + updateMany. Race-window between SELECT
// and UPDATE is harmless because the cascade is idempotent — a second
// invocation simply finds zero rows.
const eligible = await prisma.claudeJob.findMany({
where: {
id: { not: failedJobId },
status: { in: ['QUEUED', 'CLAIMED', 'RUNNING'] },
task: { story: { pbi_id: pbi.id } },
},
select: { id: true, branch: true, pr_url: true, status: true, task_id: true },
})
if (eligible.length > 0) {
await prisma.claudeJob.updateMany({
where: { id: { in: eligible.map((j) => j.id) } },
data: {
status: 'CANCELLED',
finished_at: new Date(),
error: 'cancelled_by_pbi_failure',
},
})
// PBI-9: release product-worktree locks for cancelled jobs.
// No-op for jobs without registered locks (TASK_IMPLEMENTATION).
for (const j of eligible) await releaseLocksOnTerminal(j.id)
}
const outcome: CascadeOutcome = {
cancelled_job_ids: eligible.map((j) => j.id),
closed_prs: [],
reverted_prs: [],
deleted_branches: [],
warnings: [],
}
// 2. Group affected jobs (cascade-set failed) by branch to avoid
// closing the same PR twice for siblings sharing a story-branch.
const branchSet = new Map<string, { prUrl: string | null }>()
const all = [...eligible, { branch: failedJob.branch, pr_url: failedJob.pr_url }]
for (const j of all) {
if (!j.branch) continue
const existing = branchSet.get(j.branch)
// Prefer a non-null pr_url if any sibling has one.
if (!existing) {
branchSet.set(j.branch, { prUrl: j.pr_url ?? null })
} else if (!existing.prUrl && j.pr_url) {
branchSet.set(j.branch, { prUrl: j.pr_url })
}
}
const repoRoot = await resolveRepoRoot(failedJob.product_id)
const cascadeComment = `PBI ${pbi.code ?? pbi.id} cascaded fail — see job ${failedJobId}`
for (const [branch, { prUrl }] of branchSet) {
if (prUrl) {
const info = await getPullRequestState({ prUrl, cwd: repoRoot ?? undefined })
if ('error' in info) {
outcome.warnings.push(`gh pr view ${prUrl}: ${info.error}`)
continue
}
if (info.state === 'CLOSED') {
// Already closed; nothing to do for the PR. Branch may still exist.
if (repoRoot) await tryDeleteBranch(repoRoot, branch, outcome)
continue
}
if (info.state === 'OPEN') {
const closed = await closePullRequest({
prUrl,
comment: cascadeComment,
cwd: repoRoot ?? undefined,
})
if ('error' in closed) {
outcome.warnings.push(`close ${prUrl}: ${closed.error}`)
} else {
outcome.closed_prs.push(prUrl)
}
continue
}
if (info.state === 'MERGED') {
if (!repoRoot) {
outcome.warnings.push(
`merged PR ${prUrl} not reverted: no repo root configured for product ${failedJob.product_id}`,
)
continue
}
if (!info.mergeCommit) {
outcome.warnings.push(`merged PR ${prUrl} has no mergeCommit — skipping revert`)
continue
}
const revert = await createRevertPullRequest({
repoRoot,
mergeSha: info.mergeCommit,
baseRef: info.baseRefName,
originalTitle: info.title,
originalBranch: branch,
jobId: failedJobId,
pbiCode: pbi.code,
})
if ('error' in revert) {
outcome.warnings.push(`revert ${prUrl}: ${revert.error}`)
} else {
outcome.reverted_prs.push({ original: prUrl, revertPr: revert.url })
}
continue
}
} else {
// Branch without PR: best-effort delete on remote.
if (repoRoot) await tryDeleteBranch(repoRoot, branch, outcome)
}
}
// 3. Worktree cleanup for every cancelled job (and the failed job itself
// is handled elsewhere by cleanupWorktreeForTerminalStatus). For
// cancelled jobs we always discard the branch locally — they did not
// succeed.
if (repoRoot) {
for (const j of eligible) {
try {
await removeWorktreeForJob({ repoRoot, jobId: j.id, keepBranch: false })
} catch (err) {
outcome.warnings.push(`worktree cleanup for ${j.id}: ${(err as Error).message}`)
}
}
}
// 4. Persist a trace on the failed-job's error field so the operator can
// follow up. Use a structured one-liner to keep the column readable.
// Append to the existing error (separated by '\n---\n') so the original
// failure reason is preserved instead of being overwritten by the trace.
const trace = formatTrace(outcome)
if (trace) {
try {
const fresh = await prisma.claudeJob.findUnique({
where: { id: failedJobId },
select: { error: true },
})
const merged = fresh?.error
? `${fresh.error}\n---\n${trace}`.slice(0, 1900)
: trace.slice(0, 1900)
await prisma.claudeJob.update({
where: { id: failedJobId },
data: { error: merged },
})
} catch (err) {
console.warn(`[pbi-cascade] failed to persist trace for ${failedJobId}:`, err)
}
}
return outcome
}
async function tryDeleteBranch(
repoRoot: string,
branch: string,
outcome: CascadeOutcome,
): Promise<void> {
const result = await deleteRemoteBranch({ repoRoot, branch })
if (result.deleted) {
outcome.deleted_branches.push(branch)
return
}
if (result.reason === 'not-found') {
// Already gone — silent no-op.
return
}
outcome.warnings.push(
`delete-branch ${branch} (${result.reason}): ${result.stderr.slice(0, 120)}`,
)
}
function formatTrace(o: CascadeOutcome): string {
const parts: string[] = ['cancelled_by_self']
if (o.cancelled_job_ids.length) parts.push(`siblings_cancelled=${o.cancelled_job_ids.length}`)
if (o.closed_prs.length) parts.push(`closed=${o.closed_prs.join(',')}`)
if (o.reverted_prs.length) {
parts.push(`reverted=${o.reverted_prs.map((r) => `${r.original}->${r.revertPr}`).join(';')}`)
}
if (o.deleted_branches.length) parts.push(`branches_deleted=${o.deleted_branches.join(',')}`)
if (o.warnings.length) parts.push(`warnings=${o.warnings.length}`)
return parts.join('; ')
}
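The trace that gets appended to the failed job's error column can be exercised in isolation. The snippet below repeats `formatTrace` and the `CascadeOutcome` shape from the file above so it runs standalone; the sample URLs and ids are made up.

```typescript
type CascadeOutcome = {
  cancelled_job_ids: string[]
  closed_prs: string[]
  reverted_prs: { original: string; revertPr: string }[]
  deleted_branches: string[]
  warnings: string[]
}

// Same logic as formatTrace above: a structured one-liner for the error column.
function formatTrace(o: CascadeOutcome): string {
  const parts: string[] = ['cancelled_by_self']
  if (o.cancelled_job_ids.length) parts.push(`siblings_cancelled=${o.cancelled_job_ids.length}`)
  if (o.closed_prs.length) parts.push(`closed=${o.closed_prs.join(',')}`)
  if (o.reverted_prs.length) {
    parts.push(`reverted=${o.reverted_prs.map((r) => `${r.original}->${r.revertPr}`).join(';')}`)
  }
  if (o.deleted_branches.length) parts.push(`branches_deleted=${o.deleted_branches.join(',')}`)
  if (o.warnings.length) parts.push(`warnings=${o.warnings.length}`)
  return parts.join('; ')
}

const trace = formatTrace({
  cancelled_job_ids: ['j1', 'j2'],
  closed_prs: ['https://example.com/pr/7'],
  reverted_prs: [],
  deleted_branches: ['feat/x'],
  warnings: [],
})
// trace === 'cancelled_by_self; siblings_cancelled=2; closed=https://example.com/pr/7; branches_deleted=feat/x'
```

Because empty sections are omitted, a cascade that found nothing to do collapses to the bare `cancelled_by_self` marker, keeping the error column readable.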

src/flow/effects.ts Normal file

@@ -0,0 +1,192 @@
// PBI-9 + PBI-47: declarative effects produced by pure transitions.
// Executor handles each effect idempotently; failures are logged, not thrown.
export type PauseContext = {
pause_reason: 'MERGE_CONFLICT'
pr_url: string
pr_head_sha: string
conflict_files: string[]
claude_question_id: string
resume_instructions: string
paused_at: string
}
export type FlowEffect =
| { type: 'RELEASE_WORKTREE_LOCKS'; jobId: string }
| { type: 'ENABLE_AUTO_MERGE'; prUrl: string; expectedHeadSha: string }
| { type: 'MARK_PR_READY'; prUrl: string }
| {
type: 'CREATE_CLAUDE_QUESTION'
sprintRunId: string
prUrl: string
files: string[]
}
| { type: 'CLOSE_CLAUDE_QUESTION'; questionId: string }
| {
type: 'SET_SPRINT_RUN_STATUS'
sprintRunId: string
status: 'QUEUED' | 'RUNNING' | 'PAUSED' | 'DONE' | 'FAILED' | 'CANCELLED'
pauseContextDraft?: Omit<PauseContext, 'claude_question_id'>
clearPauseContext?: boolean
}
export type AutoMergeOutcome =
| { effect: 'ENABLE_AUTO_MERGE'; ok: true }
| {
effect: 'ENABLE_AUTO_MERGE'
ok: false
reason: 'CHECKS_FAILED' | 'MERGE_CONFLICT' | 'GH_AUTH_ERROR' | 'AUTO_MERGE_NOT_ALLOWED' | 'UNKNOWN'
stderr: string
}
/**
* Execute a list of effects in order. Returns outcome objects only for
* effects whose result the caller needs to react to (an auto-merge failure
* triggers a MERGE_CONFLICT event in update-job-status). Other failures
* are logged but swallowed.
*
* CREATE_CLAUDE_QUESTION -> SET_SPRINT_RUN_STATUS chaining: the question_id
* created in the first effect is injected into the pause_context of the
* second.
*/
export async function executeEffects(
effects: FlowEffect[],
): Promise<AutoMergeOutcome[]> {
const outcomes: AutoMergeOutcome[] = []
let lastQuestionId: string | undefined
for (const effect of effects) {
try {
if (effect.type === 'CREATE_CLAUDE_QUESTION') {
lastQuestionId = await createOrReuseClaudeQuestion(effect)
continue
}
if (effect.type === 'SET_SPRINT_RUN_STATUS') {
await applySprintRunStatus(effect, lastQuestionId)
continue
}
const outcome = await executeEffect(effect)
if (outcome) outcomes.push(outcome)
} catch (err) {
console.warn(`[effects] effect ${effect.type} failed (idempotent skip):`, err)
}
}
return outcomes
}
async function executeEffect(effect: FlowEffect): Promise<AutoMergeOutcome | undefined> {
switch (effect.type) {
case 'RELEASE_WORKTREE_LOCKS': {
const { releaseLocksOnTerminal } = await import('../git/job-locks.js')
await releaseLocksOnTerminal(effect.jobId)
return undefined
}
case 'ENABLE_AUTO_MERGE': {
const { enableAutoMergeOnPr } = await import('../git/pr.js')
const result = await enableAutoMergeOnPr({
prUrl: effect.prUrl,
expectedHeadSha: effect.expectedHeadSha,
})
if (result.ok) return { effect: 'ENABLE_AUTO_MERGE', ok: true }
return { effect: 'ENABLE_AUTO_MERGE', ok: false, reason: result.reason, stderr: result.stderr }
}
case 'MARK_PR_READY': {
const { markPullRequestReady } = await import('../git/pr.js')
const result = await markPullRequestReady({ prUrl: effect.prUrl })
if ('error' in result) {
console.warn(`[effects] MARK_PR_READY failed for ${effect.prUrl}: ${result.error}`)
}
return undefined
}
case 'CLOSE_CLAUDE_QUESTION': {
const { prisma } = await import('../prisma.js')
await prisma.claudeQuestion.updateMany({
where: { id: effect.questionId, status: 'open' },
data: { status: 'closed' },
})
return undefined
}
// CREATE_CLAUDE_QUESTION + SET_SPRINT_RUN_STATUS handled in executeEffects.
case 'CREATE_CLAUDE_QUESTION':
case 'SET_SPRINT_RUN_STATUS':
return undefined
}
}
async function createOrReuseClaudeQuestion(effect: {
sprintRunId: string
prUrl: string
files: string[]
}): Promise<string> {
const { prisma } = await import('../prisma.js')
// Reuse existing open question for the same SprintRun + PR if present.
const existing = await prisma.claudeQuestion.findFirst({
where: {
status: 'open',
options: { path: ['sprint_run_id'], equals: effect.sprintRunId } as never,
},
orderBy: { created_at: 'desc' },
select: { id: true },
})
if (existing) return existing.id
// Need product_id + asker (user) to create. Resolve via SprintRun.
const sprintRun = await prisma.sprintRun.findUnique({
where: { id: effect.sprintRunId },
select: {
started_by_id: true,
sprint: { select: { product_id: true } },
},
})
if (!sprintRun) {
throw new Error(`SprintRun ${effect.sprintRunId} not found`)
}
const fileList =
effect.files.length === 0
? '(unknown files — check the PR)'
: effect.files.slice(0, 5).join(', ')
+ (effect.files.length > 5 ? ` + ${effect.files.length - 5} more` : '')
const created = await prisma.claudeQuestion.create({
data: {
product_id: sprintRun.sprint.product_id,
asked_by: sprintRun.started_by_id,
question:
`Merge-conflict on ${effect.prUrl}. Conflict files: ${fileList}. `
+ `Resolve on the branch and push, then resume the sprint.`,
options: {
sprint_run_id: effect.sprintRunId,
pr_url: effect.prUrl,
conflict_files: effect.files,
},
status: 'open',
expires_at: new Date(Date.now() + 7 * 24 * 60 * 60 * 1000), // 7 days
},
select: { id: true },
})
return created.id
}
async function applySprintRunStatus(
effect: Extract<FlowEffect, { type: 'SET_SPRINT_RUN_STATUS' }>,
lastQuestionId: string | undefined,
): Promise<void> {
const { prisma, Prisma } = await (async () => {
const mod = await import('../prisma.js')
const prismaPkg = await import('@prisma/client')
return { prisma: mod.prisma, Prisma: prismaPkg.Prisma }
})()
const data: Record<string, unknown> = { status: effect.status }
if (effect.pauseContextDraft && lastQuestionId) {
data.pause_context = {
...effect.pauseContextDraft,
claude_question_id: lastQuestionId,
}
}
if (effect.clearPauseContext) {
data.pause_context = Prisma.JsonNull
}
await prisma.sprintRun.update({ where: { id: effect.sprintRunId }, data })
}
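The question-id handoff between `CREATE_CLAUDE_QUESTION` and the following `SET_SPRINT_RUN_STATUS` can be simulated without a database. This is a hedged sketch: `run`, `createQuestion`, and `updates` are in-memory stand-ins for `executeEffects` and its Prisma calls, not the real implementations.

```typescript
type Effect =
  | { type: 'CREATE_CLAUDE_QUESTION'; sprintRunId: string; prUrl: string; files: string[] }
  | { type: 'SET_SPRINT_RUN_STATUS'; sprintRunId: string; status: string; pauseContextDraft?: Record<string, unknown> }

// Stand-ins for prisma.claudeQuestion.create and prisma.sprintRun.update.
const updates: Record<string, unknown>[] = []
let nextId = 0
const createQuestion = (): string => `q_${++nextId}`

// Mirrors the ordering contract of executeEffects: the id produced by the
// CREATE effect is injected into the pause_context of the next SET effect.
function run(effects: Effect[]): void {
  let lastQuestionId: string | undefined
  for (const effect of effects) {
    if (effect.type === 'CREATE_CLAUDE_QUESTION') {
      lastQuestionId = createQuestion()
      continue
    }
    const data: Record<string, unknown> = { status: effect.status }
    if (effect.pauseContextDraft && lastQuestionId) {
      data.pause_context = { ...effect.pauseContextDraft, claude_question_id: lastQuestionId }
    }
    updates.push(data)
  }
}

run([
  { type: 'CREATE_CLAUDE_QUESTION', sprintRunId: 'r1', prUrl: 'https://example.com/pr/1', files: ['a.ts'] },
  { type: 'SET_SPRINT_RUN_STATUS', sprintRunId: 'r1', status: 'PAUSED', pauseContextDraft: { pause_reason: 'MERGE_CONFLICT' } },
])
```

The ordering dependency is why both effect types are handled in `executeEffects` itself rather than in the per-effect `executeEffect` switch: the loop is the only place that can carry `lastQuestionId` across effects.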

src/flow/pr-flow.ts Normal file

@@ -0,0 +1,110 @@
import type { FlowEffect } from './effects.js'
import type { AutoMergeFailReason } from '../git/pr.js'
export type PrStrategy = 'STORY' | 'SPRINT'
export type PrFlowState =
| { kind: 'none'; strategy: PrStrategy }
| { kind: 'branch_pushed'; strategy: PrStrategy; prUrl?: string }
| { kind: 'pr_opened'; strategy: 'STORY'; prUrl: string }
| { kind: 'draft_opened'; strategy: 'SPRINT'; prUrl: string }
| { kind: 'waiting_for_checks'; strategy: 'STORY'; prUrl: string; headSha: string }
| { kind: 'auto_merge_enabled'; strategy: 'STORY'; prUrl: string; headSha: string }
| { kind: 'ready_for_review'; strategy: 'SPRINT'; prUrl: string }
| { kind: 'merged'; strategy: PrStrategy; prUrl: string }
| { kind: 'checks_failed'; strategy: PrStrategy; prUrl: string }
| { kind: 'merge_conflict_paused'; strategy: PrStrategy; prUrl: string; headSha: string }
export type PrFlowEvent =
| { type: 'PR_CREATED'; prUrl: string }
| { type: 'TASK_DONE'; taskId: string; headSha: string }
| { type: 'STORY_COMPLETED'; storyId: string; headSha: string }
| { type: 'SPRINT_COMPLETED'; sprintRunId: string }
| { type: 'MERGE_RESULT'; reason?: AutoMergeFailReason }
export type TransitionResult = { nextState: PrFlowState; effects: FlowEffect[] }
export function transition(state: PrFlowState, event: PrFlowEvent): TransitionResult {
if (state.strategy === 'STORY') {
switch (state.kind) {
case 'none':
case 'branch_pushed':
if (event.type === 'PR_CREATED') {
return {
nextState: { kind: 'pr_opened', strategy: 'STORY', prUrl: event.prUrl },
effects: [],
}
}
break
case 'pr_opened':
if (event.type === 'STORY_COMPLETED') {
return {
nextState: {
kind: 'waiting_for_checks',
strategy: 'STORY',
prUrl: state.prUrl,
headSha: event.headSha,
},
effects: [
{ type: 'ENABLE_AUTO_MERGE', prUrl: state.prUrl, expectedHeadSha: event.headSha },
],
}
}
break
case 'waiting_for_checks':
if (event.type === 'MERGE_RESULT' && !event.reason) {
return {
nextState: {
kind: 'auto_merge_enabled',
strategy: 'STORY',
prUrl: state.prUrl,
headSha: state.headSha,
},
effects: [],
}
}
if (event.type === 'MERGE_RESULT' && event.reason === 'MERGE_CONFLICT') {
return {
nextState: {
kind: 'merge_conflict_paused',
strategy: 'STORY',
prUrl: state.prUrl,
headSha: state.headSha,
},
effects: [],
}
}
if (event.type === 'MERGE_RESULT' && event.reason === 'CHECKS_FAILED') {
return {
nextState: { kind: 'checks_failed', strategy: 'STORY', prUrl: state.prUrl },
effects: [],
}
}
break
}
}
if (state.strategy === 'SPRINT') {
switch (state.kind) {
case 'none':
case 'branch_pushed':
if (event.type === 'PR_CREATED') {
return {
nextState: { kind: 'draft_opened', strategy: 'SPRINT', prUrl: event.prUrl },
effects: [],
}
}
break
case 'draft_opened':
if (event.type === 'SPRINT_COMPLETED') {
return {
nextState: { kind: 'ready_for_review', strategy: 'SPRINT', prUrl: state.prUrl },
effects: [{ type: 'MARK_PR_READY', prUrl: state.prUrl }],
}
}
break
}
}
return { nextState: state, effects: [] }
}
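A walk through the STORY happy path shows how the states chain together. The `step` function below is a trimmed sketch of the STORY cases of `transition` above with the effects elided (it is not a second implementation to maintain); the PR URL and SHAs are made up.

```typescript
type State =
  | { kind: 'branch_pushed'; prUrl?: string }
  | { kind: 'pr_opened'; prUrl: string }
  | { kind: 'waiting_for_checks'; prUrl: string; headSha: string }
  | { kind: 'auto_merge_enabled'; prUrl: string; headSha: string }

type Event =
  | { type: 'PR_CREATED'; prUrl: string }
  | { type: 'STORY_COMPLETED'; storyId: string; headSha: string }
  | { type: 'MERGE_RESULT'; reason?: 'MERGE_CONFLICT' | 'CHECKS_FAILED' }

// Trimmed copy of the STORY branch of transition; effect emission elided.
function step(state: State, event: Event): State {
  if (state.kind === 'branch_pushed' && event.type === 'PR_CREATED') {
    return { kind: 'pr_opened', prUrl: event.prUrl }
  }
  if (state.kind === 'pr_opened' && event.type === 'STORY_COMPLETED') {
    // The real transition also emits ENABLE_AUTO_MERGE with expectedHeadSha here.
    return { kind: 'waiting_for_checks', prUrl: state.prUrl, headSha: event.headSha }
  }
  if (state.kind === 'waiting_for_checks' && event.type === 'MERGE_RESULT' && !event.reason) {
    return { kind: 'auto_merge_enabled', prUrl: state.prUrl, headSha: state.headSha }
  }
  return state // unknown state/event combinations are no-ops, as in the original
}

let s: State = { kind: 'branch_pushed' }
s = step(s, { type: 'PR_CREATED', prUrl: 'https://example.com/pr/1' })
s = step(s, { type: 'STORY_COMPLETED', storyId: 'st1', headSha: 'abc123' })
s = step(s, { type: 'MERGE_RESULT' })
// s.kind === 'auto_merge_enabled'
```

The no-op fallthrough at the bottom matches the original's final `return { nextState: state, effects: [] }`: unexpected events never crash the flow, they simply leave the state untouched.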

src/flow/sprint-run.ts Normal file

@@ -0,0 +1,136 @@
import type { FlowEffect, PauseContext } from './effects.js'
export type SprintRunStateKind =
| 'queued'
| 'running'
| 'paused_merge_conflict'
| 'done'
| 'failed'
| 'cancelled'
export type SprintRunState = {
kind: SprintRunStateKind
sprintRunId: string
pauseContext?: PauseContext
}
export type SprintRunEvent =
| { type: 'CLAIM_FIRST_JOB' }
| { type: 'TASK_DONE'; taskId: string }
| { type: 'TASK_FAILED'; taskId: string; error: string }
| {
type: 'MERGE_CONFLICT'
prUrl: string
prHeadSha: string
conflictFiles: string[]
resumeInstructions: string
}
| { type: 'USER_RESUMED' }
| { type: 'USER_CANCELLED' }
| { type: 'ALL_DONE' }
export type TransitionResult = { nextState: SprintRunState; effects: FlowEffect[] }
export function transition(state: SprintRunState, event: SprintRunEvent): TransitionResult {
switch (state.kind) {
case 'queued':
if (event.type === 'CLAIM_FIRST_JOB') {
return {
nextState: { ...state, kind: 'running' },
effects: [
{ type: 'SET_SPRINT_RUN_STATUS', sprintRunId: state.sprintRunId, status: 'RUNNING' },
],
}
}
break
case 'running':
if (event.type === 'TASK_DONE') {
return { nextState: state, effects: [] }
}
if (event.type === 'TASK_FAILED') {
return {
nextState: { ...state, kind: 'failed' },
effects: [
{ type: 'SET_SPRINT_RUN_STATUS', sprintRunId: state.sprintRunId, status: 'FAILED' },
],
}
}
if (event.type === 'ALL_DONE') {
return {
nextState: { ...state, kind: 'done' },
effects: [
{ type: 'SET_SPRINT_RUN_STATUS', sprintRunId: state.sprintRunId, status: 'DONE' },
],
}
}
if (event.type === 'MERGE_CONFLICT') {
const pauseContextDraft: Omit<PauseContext, 'claude_question_id'> = {
pause_reason: 'MERGE_CONFLICT',
pr_url: event.prUrl,
pr_head_sha: event.prHeadSha,
conflict_files: event.conflictFiles,
resume_instructions: event.resumeInstructions,
paused_at: new Date().toISOString(),
}
return {
nextState: { ...state, kind: 'paused_merge_conflict' },
effects: [
{
type: 'CREATE_CLAUDE_QUESTION',
sprintRunId: state.sprintRunId,
prUrl: event.prUrl,
files: event.conflictFiles,
},
{
type: 'SET_SPRINT_RUN_STATUS',
sprintRunId: state.sprintRunId,
status: 'PAUSED',
pauseContextDraft,
},
],
}
}
if (event.type === 'USER_CANCELLED') {
return {
nextState: { ...state, kind: 'cancelled' },
effects: [
{ type: 'SET_SPRINT_RUN_STATUS', sprintRunId: state.sprintRunId, status: 'CANCELLED' },
],
}
}
break
case 'paused_merge_conflict':
if (event.type === 'USER_RESUMED') {
const closeQuestionEffects: FlowEffect[] = state.pauseContext
? [{ type: 'CLOSE_CLAUDE_QUESTION', questionId: state.pauseContext.claude_question_id }]
: []
return {
nextState: { ...state, kind: 'running', pauseContext: undefined },
effects: [
...closeQuestionEffects,
{
type: 'SET_SPRINT_RUN_STATUS',
sprintRunId: state.sprintRunId,
status: 'RUNNING',
clearPauseContext: true,
},
],
}
}
if (event.type === 'USER_CANCELLED') {
return {
nextState: { ...state, kind: 'cancelled', pauseContext: undefined },
effects: [
{
type: 'SET_SPRINT_RUN_STATUS',
sprintRunId: state.sprintRunId,
status: 'CANCELLED',
clearPauseContext: true,
},
],
}
}
break
}
return { nextState: state, effects: [] }
}
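For orientation, the pause/resume round-trip of this machine can be replayed with a condensed, self-contained sketch. This is a toy re-implementation of the state table above (event and state names match, effects are omitted), not an import of the real module:

```typescript
// Condensed sketch of the sprint-run lifecycle: queued -> running ->
// paused_merge_conflict -> running -> done. Effects are omitted; unknown
// transitions keep the current state, like the real transition().
type Kind = 'queued' | 'running' | 'paused_merge_conflict' | 'done'
type Ev = 'CLAIM_FIRST_JOB' | 'MERGE_CONFLICT' | 'USER_RESUMED' | 'ALL_DONE'

const table: Record<Kind, Partial<Record<Ev, Kind>>> = {
  queued: { CLAIM_FIRST_JOB: 'running' },
  running: { MERGE_CONFLICT: 'paused_merge_conflict', ALL_DONE: 'done' },
  paused_merge_conflict: { USER_RESUMED: 'running' },
  done: {},
}

function step(kind: Kind, ev: Ev): Kind {
  // No entry in the table means the event is ignored in this state.
  return table[kind][ev] ?? kind
}

let k: Kind = 'queued'
for (const ev of ['CLAIM_FIRST_JOB', 'MERGE_CONFLICT', 'USER_RESUMED', 'ALL_DONE'] as const) {
  k = step(k, ev)
}
console.log(k) // done
```

Note that `USER_CANCELLED` is terminal from both `running` and `paused_merge_conflict` in the real machine; the sketch keeps only the merge-conflict round-trip.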

src/flow/worktree-lease.ts (new file, 103 lines)
import type { FlowEffect } from './effects.js'
export type WorktreeLeaseState =
| { kind: 'idle' }
| { kind: 'acquiring_lock'; jobId: string; productIds: string[] }
| { kind: 'creating_or_reusing'; jobId: string; productIds: string[] }
| { kind: 'syncing'; jobId: string; productIds: string[] }
| { kind: 'ready'; jobId: string; productIds: string[] }
| { kind: 'releasing'; jobId: string }
| { kind: 'released'; jobId: string }
| { kind: 'lock_timeout'; jobId: string; productIds: string[] }
| { kind: 'sync_failed'; jobId: string; productIds: string[]; error: string }
| { kind: 'stale_released'; jobId: string }
export type WorktreeLeaseEvent =
| { type: 'JOB_CLAIMED'; jobId: string; productIds: string[] }
| { type: 'LOCK_ACQUIRED' }
| { type: 'LOCK_TIMEOUT' }
| { type: 'WORKTREE_READY' }
| { type: 'SYNC_DONE' }
| { type: 'SYNC_FAILED'; error: string }
| { type: 'JOB_TERMINAL'; jobId: string }
| { type: 'STALE_RESET'; jobId: string }
export type TransitionResult = {
nextState: WorktreeLeaseState
effects: FlowEffect[]
}
export function transition(
state: WorktreeLeaseState,
event: WorktreeLeaseEvent,
): TransitionResult {
switch (state.kind) {
case 'idle':
if (event.type === 'JOB_CLAIMED') {
return {
nextState: { kind: 'acquiring_lock', jobId: event.jobId, productIds: event.productIds },
effects: [],
}
}
break
case 'acquiring_lock':
if (event.type === 'LOCK_ACQUIRED') {
return {
nextState: { kind: 'creating_or_reusing', jobId: state.jobId, productIds: state.productIds },
effects: [],
}
}
if (event.type === 'LOCK_TIMEOUT') {
return {
nextState: { kind: 'lock_timeout', jobId: state.jobId, productIds: state.productIds },
effects: [],
}
}
break
case 'creating_or_reusing':
if (event.type === 'WORKTREE_READY') {
return {
nextState: { kind: 'syncing', jobId: state.jobId, productIds: state.productIds },
effects: [],
}
}
break
case 'syncing':
if (event.type === 'SYNC_DONE') {
return {
nextState: { kind: 'ready', jobId: state.jobId, productIds: state.productIds },
effects: [],
}
}
if (event.type === 'SYNC_FAILED') {
return {
nextState: {
kind: 'sync_failed',
jobId: state.jobId,
productIds: state.productIds,
error: event.error,
},
effects: [{ type: 'RELEASE_WORKTREE_LOCKS', jobId: state.jobId }],
}
}
break
case 'ready':
if (event.type === 'JOB_TERMINAL') {
return {
nextState: { kind: 'releasing', jobId: state.jobId },
effects: [{ type: 'RELEASE_WORKTREE_LOCKS', jobId: state.jobId }],
}
}
if (event.type === 'STALE_RESET') {
return {
nextState: { kind: 'stale_released', jobId: state.jobId },
effects: [{ type: 'RELEASE_WORKTREE_LOCKS', jobId: state.jobId }],
}
}
break
case 'releasing':
return { nextState: { kind: 'released', jobId: state.jobId }, effects: [] }
}
// Unknown or forbidden transition — keep current state, no effects
return { nextState: state, effects: [] }
}

src/git/file-lock.ts (new file, 38 lines)
import lockfile from 'proper-lockfile'
export async function acquireFileLock(lockPath: string): Promise<() => Promise<void>> {
const release = await lockfile.lock(lockPath, {
realpath: false,
stale: 30_000,
update: 5_000,
retries: { retries: 60, factor: 1, minTimeout: 1_000, maxTimeout: 1_000 },
})
let released = false
return async () => {
if (released) return
released = true
await release()
}
}
export async function acquireFileLocksOrdered(
lockPaths: string[],
): Promise<() => Promise<void>> {
const sorted = [...lockPaths].sort()
const releases: Array<() => Promise<void>> = []
try {
for (const p of sorted) {
releases.push(await acquireFileLock(p))
}
} catch (err) {
for (const r of releases.reverse()) {
await r().catch(() => {})
}
throw err
}
return async () => {
for (const r of releases.reverse()) {
await r().catch(() => {})
}
}
}
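The sorted-acquire / reverse-release discipline is the load-bearing part of `acquireFileLocksOrdered`: every caller takes locks in one global order, so two jobs contending for overlapping lock sets cannot deadlock on each other. A minimal sketch with an in-memory stand-in for proper-lockfile (the log strings and path names are illustrative):

```typescript
type Release = () => Promise<void>

// In-memory stand-in for proper-lockfile's lock(): records the order of
// lock/unlock operations instead of touching the filesystem.
async function fakeLock(p: string, log: string[]): Promise<Release> {
  log.push(`lock:${p}`)
  return async () => { log.push(`unlock:${p}`) }
}

async function acquireOrdered(paths: string[], log: string[]): Promise<Release> {
  const releases: Release[] = []
  // Sort first: all callers take locks in the same global order.
  for (const p of [...paths].sort()) releases.push(await fakeLock(p, log))
  // Release LIFO, mirroring acquireFileLocksOrdered.
  return async () => {
    for (const r of releases.reverse()) await r()
  }
}

async function demo(): Promise<string> {
  const log: string[] = []
  const release = await acquireOrdered(['b', 'a'], log)
  await release()
  return log.join(' ')
}

demo().then(console.log) // lock:a lock:b unlock:b unlock:a
```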

src/git/job-locks.ts (new file, 73 lines)
import * as fs from 'node:fs/promises'
import * as path from 'node:path'
import { acquireFileLocksOrdered } from './file-lock.js'
import {
getProductWorktreeLockPath,
getWorktreeRoot,
} from './worktree-paths.js'
import {
getOrCreateProductWorktree,
syncProductWorktree,
} from './product-worktree.js'
type JobReleases = Map<string, Array<() => Promise<void>>>
const jobReleases: JobReleases = new Map()
export async function setupProductWorktrees(
jobId: string,
productIds: string[],
resolveRepoRoot: (productId: string) => Promise<string | null>,
): Promise<Array<{ productId: string; worktreePath: string }>> {
if (productIds.length === 0) return []
// Ensure parent dir exists so lockfile creation succeeds
await fs.mkdir(path.join(getWorktreeRoot(), '_products'), { recursive: true })
// Lock-first, alphabetically sorted (deadlock prevention for multi-product idea-jobs).
// Locks acquired in sorted order; output preserves caller's input order so that
// worktrees[0] is the primary product (Idea.product_id), regardless of how its
// id sorts alphabetically against secondary products.
const sorted = [...productIds].sort()
const lockPaths = sorted.map(getProductWorktreeLockPath)
const releaseAll = await acquireFileLocksOrdered(lockPaths)
registerJobLockReleases(jobId, [releaseAll])
// After lock-acquire, create/reuse worktrees and sync — iterate input order
// so callers get back [primary, ...secondaries] in their original sequence.
const out: Array<{ productId: string; worktreePath: string }> = []
for (const productId of productIds) {
const repoRoot = await resolveRepoRoot(productId)
if (!repoRoot) continue
const { worktreePath } = await getOrCreateProductWorktree({ repoRoot, productId })
await syncProductWorktree({ worktreePath })
out.push({ productId, worktreePath })
}
return out
}
export function registerJobLockReleases(
jobId: string,
releases: Array<() => Promise<void>>,
): void {
const existing = jobReleases.get(jobId) ?? []
jobReleases.set(jobId, [...existing, ...releases])
}
export async function releaseLocksOnTerminal(jobId: string): Promise<void> {
const releases = jobReleases.get(jobId)
if (!releases) return // idempotent — already released or never locked
jobReleases.delete(jobId)
for (const release of releases) {
try {
await release()
} catch (err) {
console.warn(`[job-locks] release failed for job ${jobId}:`, err)
}
}
}
// For tests
export function _resetJobReleasesForTest(): void {
jobReleases.clear()
}

(modified file)
@@ -1,5 +1,7 @@
import { execFile } from 'node:child_process'
import { promisify } from 'node:util'
import * as path from 'node:path'
import { getWorktreeRoot } from './worktree-paths.js'
const exec = promisify(execFile)
@@ -8,16 +10,25 @@ export async function createPullRequest(opts: {
branchName: string
title: string
body: string
/** Open as a draft PR (a human must mark it ready-for-review later). Default false. */
draft?: boolean
/**
* PBI-47 (P0): default changed to false. Auto-merge is now enabled
* separately via `enableAutoMergeOnPr` only on the **last task** of a
* STORY-mode story, with a head-SHA guard to prevent racing earlier
* task merges. Callers may still pass `true` for one-off PRs that
* are immediately ready to merge; in that case we use the new typed
* helper rather than the previous fire-and-forget gh call.
*/
enableAutoMerge?: boolean
}): Promise<{ url: string } | { error: string }> {
const { worktreePath, branchName, title, body, draft = false, enableAutoMerge = false } = opts
let url: string
try {
const args = ['pr', 'create', '--title', title, '--body', body, '--head', branchName]
if (draft) args.push('--draft')
const { stdout } = await exec('gh', args, { cwd: worktreePath })
// gh prints the PR URL as the last non-empty line
const lines = stdout.trim().split('\n').filter(Boolean)
url = lines[lines.length - 1]?.trim() ?? ''
@@ -36,20 +47,247 @@ export async function createPullRequest(opts: {
return { error: `gh pr create failed: ${msg.slice(0, 300)}` }
}
// Legacy opt-in: enableAutoMerge=true and not draft → fire the new typed
// helper without head-SHA guard (caller didn't supply one). Result is
// logged but not propagated — same shape as before.
if (enableAutoMerge && !draft) {
const result = await enableAutoMergeOnPr({ prUrl: url, cwd: worktreePath })
if (!result.ok) {
console.warn(
`[createPullRequest] auto-merge enable failed for ${url}: ${result.reason} ${result.stderr.slice(0, 200)}`,
)
}
}
return { url }
}
export type AutoMergeFailReason =
| 'CHECKS_FAILED'
| 'MERGE_CONFLICT'
| 'GH_AUTH_ERROR'
| 'AUTO_MERGE_NOT_ALLOWED'
| 'UNKNOWN'
export type EnableAutoMergeResult =
| { ok: true }
| { ok: false; reason: AutoMergeFailReason; stderr: string }
function classifyAutoMergeError(stderr: string): AutoMergeFailReason {
if (/conflict|not in mergeable state|dirty/i.test(stderr)) return 'MERGE_CONFLICT'
if (/checks? failed|status check|required check/i.test(stderr)) return 'CHECKS_FAILED'
if (/authentication|HTTP 401|HTTP 403|permission|gh auth/i.test(stderr)) return 'GH_AUTH_ERROR'
if (/auto-?merge.*not.*allowed|auto-?merge.*disabled/i.test(stderr)) return 'AUTO_MERGE_NOT_ALLOWED'
return 'UNKNOWN'
}
/**
* Enable auto-merge (squash) on a PR with an optional head-SHA guard.
*
* PBI-47 (P0): when `expectedHeadSha` is provided we pass `--match-head-commit`
* so GitHub only activates auto-merge if the remote head still matches the
* SHA the caller observed. This prevents racing late pushes from another
* worker triggering a merge of a different commit set.
*/
export async function enableAutoMergeOnPr(opts: {
prUrl: string
expectedHeadSha?: string
cwd?: string
}): Promise<EnableAutoMergeResult> {
try {
const args = ['pr', 'merge', '--auto', '--squash']
if (opts.expectedHeadSha) args.push('--match-head-commit', opts.expectedHeadSha)
args.push(opts.prUrl)
await exec('gh', args, opts.cwd ? { cwd: opts.cwd } : {})
return { ok: true }
} catch (err) {
const stderr =
(err as { stderr?: string }).stderr ?? (err as Error).message ?? ''
return { ok: false, reason: classifyAutoMergeError(stderr), stderr: stderr.slice(0, 500) }
}
}
// Moves a draft PR to "ready for review". Used in sprint mode when all
// stories in the SprintRun are DONE — a human reviews and merges it.
export async function markPullRequestReady(opts: {
prUrl: string
cwd?: string
}): Promise<{ ok: true } | { error: string }> {
try {
await exec('gh', ['pr', 'ready', opts.prUrl], opts.cwd ? { cwd: opts.cwd } : {})
return { ok: true }
} catch (err) {
const msg = (err as { stderr?: string }).stderr ?? (err as Error).message ?? ''
// The gh CLI error "Pull request is not in draft state" is benign when the
// PR was already ready (e.g. marked ready manually, or a second call).
if (/not in draft state|already in ready/i.test(msg)) return { ok: true }
return { error: `gh pr ready failed: ${msg.slice(0, 300)}` }
}
}
export type PrState = 'OPEN' | 'MERGED' | 'CLOSED'
export type PrInfo = {
state: PrState
mergeCommit: string | null
baseRefName: string
title: string
}
export async function getPullRequestState(opts: {
prUrl: string
cwd?: string
}): Promise<PrInfo | { error: string }> {
const { prUrl } = opts
try {
const { stdout } = await exec(
'gh',
['pr', 'view', prUrl, '--json', 'state,mergeCommit,baseRefName,title'],
opts.cwd ? { cwd: opts.cwd } : {},
)
const parsed = JSON.parse(stdout) as {
state: string
mergeCommit: { oid: string } | null
baseRefName: string
title: string
}
const state = parsed.state.toUpperCase() as PrState
if (state !== 'OPEN' && state !== 'MERGED' && state !== 'CLOSED') {
return { error: `unexpected PR state: ${parsed.state}` }
}
return {
state,
mergeCommit: parsed.mergeCommit?.oid ?? null,
baseRefName: parsed.baseRefName,
title: parsed.title,
}
} catch (err) {
const msg = (err as { stderr?: string }).stderr ?? (err as Error).message ?? ''
return { error: `gh pr view failed: ${msg.slice(0, 300)}` }
}
}
export async function closePullRequest(opts: {
prUrl: string
comment: string
cwd?: string
}): Promise<{ ok: true } | { error: string }> {
try {
await exec(
'gh',
['pr', 'close', opts.prUrl, '--delete-branch', '--comment', opts.comment],
opts.cwd ? { cwd: opts.cwd } : {},
)
return { ok: true }
} catch (err) {
const msg = (err as { stderr?: string }).stderr ?? (err as Error).message ?? ''
return { error: `gh pr close failed: ${msg.slice(0, 300)}` }
}
}
// Creates a revert-PR for a merged PR. Uses an isolated worktree so it
// never touches the user's main checkout. Returns the new PR URL or an
// error string. The revert PR is opened WITHOUT auto-merge — the user
// must review + merge it manually so an unintended cascade can be undone.
export async function createRevertPullRequest(opts: {
repoRoot: string
mergeSha: string
baseRef: string
originalTitle: string
originalBranch: string
jobId: string
pbiCode: string | null
}): Promise<{ url: string } | { error: string }> {
const {
repoRoot,
mergeSha,
baseRef,
originalTitle,
originalBranch,
jobId,
pbiCode,
} = opts
const worktreeDir = getWorktreeRoot()
const wtPath = path.join(worktreeDir, `revert-${jobId}`)
const revertBranch = `revert/${originalBranch}-${jobId.slice(-8)}`
const run = async (cmd: string, args: string[], cwd: string) => {
await exec(cmd, args, { cwd })
}
// Cleanup helper, best-effort
const cleanup = async () => {
try {
await exec('git', ['worktree', 'remove', '--force', wtPath], { cwd: repoRoot })
} catch {
// ignore — worktree may not exist if creation failed
}
}
try {
await run('git', ['fetch', 'origin', baseRef, mergeSha], repoRoot)
await run('git', ['worktree', 'add', '-b', revertBranch, wtPath, `origin/${baseRef}`], repoRoot)
try {
await run('git', ['revert', '-m', '1', mergeSha, '--no-edit'], wtPath)
} catch (err) {
await cleanup()
const msg = (err as { stderr?: string }).stderr ?? (err as Error).message ?? ''
if (/conflict/i.test(msg)) {
return { error: `git revert conflicts on ${mergeSha}: ${msg.slice(0, 200)}` }
}
return { error: `git revert failed: ${msg.slice(0, 200)}` }
}
await run('git', ['push', '-u', 'origin', revertBranch], wtPath)
const pbiTag = pbiCode ? `PBI ${pbiCode}` : 'PBI'
const title = `Revert: ${originalTitle}`
const body = [
`Auto-revert by Scrum4Me agent.`,
``,
`Reason: ${pbiTag} failed (cascade from job \`${jobId}\`).`,
`Reverts merge commit \`${mergeSha}\`.`,
``,
`**Review carefully before merging** — auto-merge is intentionally NOT enabled on revert PRs.`,
].join('\n')
let prUrl: string
try {
const { stdout } = await exec(
'gh',
[
'pr',
'create',
'--base',
baseRef,
'--head',
revertBranch,
'--title',
title,
'--body',
body,
],
{ cwd: wtPath },
)
const lines = stdout.trim().split('\n').filter(Boolean)
prUrl = lines[lines.length - 1]?.trim() ?? ''
if (!prUrl.startsWith('http')) {
await cleanup()
return { error: `gh pr create produced unexpected output: ${stdout.slice(0, 200)}` }
}
} catch (err) {
await cleanup()
const msg = (err as { stderr?: string }).stderr ?? (err as Error).message ?? ''
return { error: `gh pr create (revert) failed: ${msg.slice(0, 300)}` }
}
await cleanup()
return { url: prUrl }
} catch (err) {
await cleanup()
const msg = (err as { stderr?: string }).stderr ?? (err as Error).message ?? ''
return { error: `revert worktree setup failed: ${msg.slice(0, 300)}` }
}
}
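`classifyAutoMergeError` is order-sensitive: the conflict pattern is tested before the checks pattern, so stderr mentioning both classifies as `MERGE_CONFLICT`. A standalone copy of the classifier with illustrative inputs (gh's real messages vary across versions, so treat these strings as assumptions):

```typescript
type Reason =
  | 'CHECKS_FAILED'
  | 'MERGE_CONFLICT'
  | 'GH_AUTH_ERROR'
  | 'AUTO_MERGE_NOT_ALLOWED'
  | 'UNKNOWN'

// Same regexes as classifyAutoMergeError above; first match wins.
function classify(stderr: string): Reason {
  if (/conflict|not in mergeable state|dirty/i.test(stderr)) return 'MERGE_CONFLICT'
  if (/checks? failed|status check|required check/i.test(stderr)) return 'CHECKS_FAILED'
  if (/authentication|HTTP 401|HTTP 403|permission|gh auth/i.test(stderr)) return 'GH_AUTH_ERROR'
  if (/auto-?merge.*not.*allowed|auto-?merge.*disabled/i.test(stderr)) return 'AUTO_MERGE_NOT_ALLOWED'
  return 'UNKNOWN'
}

console.log(classify('Pull request is not in mergeable state')) // MERGE_CONFLICT
console.log(classify('HTTP 403: Resource not accessible'))      // GH_AUTH_ERROR
console.log(classify('something else entirely'))                // UNKNOWN
```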

(new file, 66 lines)
import { execFile } from 'node:child_process'
import { promisify } from 'node:util'
import * as fs from 'node:fs/promises'
import * as path from 'node:path'
import { getProductWorktreePath } from './worktree-paths.js'
const exec = promisify(execFile)
export async function getOrCreateProductWorktree(opts: {
repoRoot: string
productId: string
}): Promise<{ worktreePath: string; created: boolean }> {
const worktreePath = getProductWorktreePath(opts.productId)
await fs.mkdir(path.dirname(worktreePath), { recursive: true })
try {
await fs.access(worktreePath)
return { worktreePath, created: false }
} catch {
// Path does not exist; create it
}
await exec('git', ['fetch', 'origin', '--prune'], { cwd: opts.repoRoot })
await exec('git', ['worktree', 'add', '--detach', worktreePath, 'origin/main'], {
cwd: opts.repoRoot,
})
// Resolve the REAL exclude path (a linked worktree has .git as a file, not a directory)
const { stdout } = await exec('git', ['rev-parse', '--git-path', 'info/exclude'], {
cwd: worktreePath,
})
const excludePath = path.resolve(worktreePath, stdout.trim())
const existing = await fs.readFile(excludePath, 'utf8').catch(() => '')
if (!existing.split('\n').includes('.scratch/')) {
const sep = existing === '' || existing.endsWith('\n') ? '' : '\n'
await fs.appendFile(excludePath, `${sep}.scratch/\n`)
}
return { worktreePath, created: true }
}
export async function syncProductWorktree(opts: { worktreePath: string }): Promise<void> {
const { worktreePath } = opts
await exec('git', ['fetch', 'origin', '--prune'], { cwd: worktreePath })
await exec('git', ['reset', '--hard', 'origin/main'], { cwd: worktreePath })
await exec('git', ['clean', '-fd', '-e', '.scratch/'], { cwd: worktreePath })
// Clear the .scratch/ contents but keep the directory
const scratch = path.join(worktreePath, '.scratch')
await fs.rm(scratch, { recursive: true, force: true })
await fs.mkdir(scratch, { recursive: true })
}
export async function removeProductWorktree(opts: {
repoRoot: string
productId: string
}): Promise<{ removed: boolean }> {
const worktreePath = getProductWorktreePath(opts.productId)
try {
await exec('git', ['worktree', 'remove', '--force', worktreePath], {
cwd: opts.repoRoot,
})
return { removed: true }
} catch {
return { removed: false }
}
}
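The `.scratch/` exclude handling in `getOrCreateProductWorktree` is idempotent by construction: the entry is only appended when no existing line equals it, and a separating newline is added only when the file does not already end in one. The string logic in isolation (extracted for illustration, not imported):

```typescript
// Mirrors the exclude-append logic above, operating on the file content
// as a string instead of via fs.
function appendScratchExclude(existing: string): string {
  if (existing.split('\n').includes('.scratch/')) return existing
  const sep = existing === '' || existing.endsWith('\n') ? '' : '\n'
  return `${existing}${sep}.scratch/\n`
}

console.log(JSON.stringify(appendScratchExclude('')))              // ".scratch/\n"
console.log(JSON.stringify(appendScratchExclude('node_modules')))  // "node_modules\n.scratch/\n"
console.log(appendScratchExclude('.scratch/\n') === '.scratch/\n') // true
```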

(modified file)
@@ -51,3 +51,27 @@ export async function pushBranchForJob(opts: {
return { pushed: false, reason: 'unknown', stderr }
}
}
export type DeleteRemoteResult =
| { deleted: true }
| { deleted: false; reason: 'not-found' | 'no-credentials' | 'unknown'; stderr: string }
export async function deleteRemoteBranch(opts: {
repoRoot: string
branch: string
}): Promise<DeleteRemoteResult> {
const { repoRoot, branch } = opts
try {
await exec('git', ['push', 'origin', '--delete', branch], { cwd: repoRoot })
return { deleted: true }
} catch (err) {
const stderr = (err as { stderr?: string }).stderr ?? (err as Error).message ?? ''
if (/remote ref does not exist|unable to delete .* remote ref does not exist/i.test(stderr)) {
return { deleted: false, reason: 'not-found', stderr }
}
if (/Authentication failed|could not read Username/i.test(stderr)) {
return { deleted: false, reason: 'no-credentials', stderr }
}
return { deleted: false, reason: 'unknown', stderr }
}
}

src/git/worktree-paths.ts (new file, 19 lines)
import * as os from 'node:os'
import * as path from 'node:path'
export const SYSTEM_WORKTREE_DIRS = new Set(['_products'])
export function getWorktreeRoot(): string {
return (
process.env.SCRUM4ME_AGENT_WORKTREE_DIR
?? path.join(os.homedir(), '.scrum4me-agent-worktrees')
)
}
export function getProductWorktreePath(productId: string): string {
return path.join(getWorktreeRoot(), '_products', productId)
}
export function getProductWorktreeLockPath(productId: string): string {
return path.join(getWorktreeRoot(), '_products', `${productId}.lock`)
}
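Everything under the worktree root follows one layout: per-job worktrees at the top level, and per-product worktrees plus their lockfiles as siblings under `_products/`. A sketch of the resolution, assuming the env override is injected explicitly instead of read from `process.env` (the product id and home path are hypothetical):

```typescript
import * as path from 'node:path'

// Same shape as getWorktreeRoot / getProductWorktreePath /
// getProductWorktreeLockPath, with the env value passed in so the
// sketch is deterministic.
function root(envOverride: string | undefined, home: string): string {
  return envOverride ?? path.join(home, '.scrum4me-agent-worktrees')
}

const r = root(undefined, '/home/agent')
const worktree = path.join(r, '_products', 'prod-123')
const lock = path.join(r, '_products', 'prod-123.lock')

console.log(worktree)
// Worktree and lockfile are siblings, so one mkdir of _products/ covers both:
console.log(path.dirname(worktree) === path.dirname(lock)) // true
```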

(modified file)
@@ -1,8 +1,8 @@
import { execFile } from 'node:child_process'
import { promisify } from 'node:util'
import * as path from 'node:path'
import * as fs from 'node:fs/promises'
import { getWorktreeRoot } from './worktree-paths.js'
const exec = promisify(execFile)
@@ -15,6 +15,19 @@ async function branchExists(repoRoot: string, name: string): Promise<boolean> {
}
}
async function remoteBranchExists(repoRoot: string, name: string): Promise<boolean> {
try {
await exec(
'git',
['show-ref', '--verify', '--quiet', `refs/remotes/origin/${name}`],
{ cwd: repoRoot },
)
return true
} catch {
return false
}
}
async function findWorktreeForBranch(
repoRoot: string,
branchName: string,
@@ -50,9 +63,7 @@ export async function createWorktreeForJob(opts: {
const { repoRoot, jobId, baseRef = 'origin/main', reuseBranch = false } = opts
let { branchName } = opts
const parent = getWorktreeRoot()
await fs.mkdir(parent, { recursive: true })
@@ -77,7 +88,27 @@ export async function createWorktreeForJob(opts: {
if (occupant) {
await exec('git', ['worktree', 'remove', '--force', occupant], { cwd: repoRoot })
}
// reuseBranch is decided sprint-wide, but git branches are per-repo. For a
// cross-repo sprint the first job targeting THIS repo gets reuseBranch=true
// even though the branch was never created here; a container recreate also
// wipes the local clone. Fall back gracefully instead of failing with
// "invalid reference":
// - local branch exists → reuse it
// - exists on origin only → recreate the local branch tracking origin
// - nowhere → create it fresh from baseRef
if (await branchExists(repoRoot, branchName)) {
await exec('git', ['worktree', 'add', worktreePath, branchName], { cwd: repoRoot })
} else if (await remoteBranchExists(repoRoot, branchName)) {
await exec(
'git',
['worktree', 'add', '-b', branchName, worktreePath, `origin/${branchName}`],
{ cwd: repoRoot },
)
} else {
await exec('git', ['worktree', 'add', '-b', branchName, worktreePath, baseRef], {
cwd: repoRoot,
})
}
return { worktreePath, branchName }
}
@@ -121,9 +152,7 @@ export async function removeWorktreeForJob(opts: {
}): Promise<{ removed: boolean }> {
const { repoRoot, jobId, keepBranch = false } = opts
const parent = getWorktreeRoot()
const worktreePath = path.join(parent, jobId)

(modified file)
@@ -9,10 +9,11 @@ import { registerUpdateTaskPlanTool } from './tools/update-task-plan.js'
import { registerLogImplementationTool } from './tools/log-implementation.js'
import { registerLogTestResultTool } from './tools/log-test-result.js'
import { registerLogCommitTool } from './tools/log-commit.js'
import { registerCreatePbiTool } from './tools/create-pbi.js'
import { registerCreateStoryTool } from './tools/create-story.js'
import { registerCreateTaskTool } from './tools/create-task.js'
import { registerCreateSprintTool } from './tools/create-sprint.js'
import { registerUpdateSprintTool } from './tools/update-sprint.js'
import { registerAskUserQuestionTool } from './tools/ask-user-question.js'
import { registerGetQuestionAnswerTool } from './tools/get-question-answer.js'
import { registerListOpenQuestionsTool } from './tools/list-open-questions.js'
@@ -27,9 +28,14 @@ import { registerMarkPbiPrMergedTool } from './tools/mark-pbi-pr-merged.js'
import { registerGetIdeaContextTool } from './tools/get-idea-context.js'
import { registerUpdateIdeaGrillMdTool } from './tools/update-idea-grill-md.js'
import { registerUpdateIdeaPlanMdTool } from './tools/update-idea-plan-md.js'
import { registerUpdateIdeaPlanReviewedTool } from './tools/update-idea-plan-reviewed.js'
import { registerLogIdeaDecisionTool } from './tools/log-idea-decision.js'
import { registerGetWorkerSettingsTool } from './tools/get-worker-settings.js'
import { registerWorkerHeartbeatTool } from './tools/worker-heartbeat.js'
// PBI-50: SPRINT_IMPLEMENTATION tools
import { registerVerifySprintTaskTool } from './tools/verify-sprint-task.js'
import { registerUpdateTaskExecutionTool } from './tools/update-task-execution.js'
import { registerJobHeartbeatTool } from './tools/job-heartbeat.js'
import { registerImplementNextStoryPrompt } from './prompts/implement-next-story.js'
import { getAuth } from './auth.js'
import { registerWorker } from './presence/worker.js'
@@ -71,10 +77,12 @@ async function main() {
registerLogImplementationTool(server)
registerLogTestResultTool(server)
registerLogCommitTool(server)
registerCreatePbiTool(server)
registerCreateStoryTool(server)
registerCreateTaskTool(server)
// PBI-12: sprint lifecycle tools
registerCreateSprintTool(server)
registerUpdateSprintTool(server)
registerAskUserQuestionTool(server)
registerGetQuestionAnswerTool(server)
registerListOpenQuestionsTool(server)
@@ -90,10 +98,15 @@ async function main() {
registerGetIdeaContextTool(server)
registerUpdateIdeaGrillMdTool(server)
registerUpdateIdeaPlanMdTool(server)
registerUpdateIdeaPlanReviewedTool(server)
registerLogIdeaDecisionTool(server)
// M13: worker quota-gate tools
registerGetWorkerSettingsTool(server)
registerWorkerHeartbeatTool(server)
// PBI-50: SPRINT_IMPLEMENTATION tools
registerVerifySprintTaskTool(server)
registerUpdateTaskExecutionTool(server)
registerJobHeartbeatTool(server)
registerImplementNextStoryPrompt(server)
// Presence bootstrap MUST run before server.connect — the stdio transport

(deleted file)
@@ -1,32 +0,0 @@
// Loader for embedded idea prompts (M12).
// The .md files in src/prompts/idea/ are a copy of
// scrum4me/lib/idea-prompts/* — deliberately duplicated for reproducibility
// on every worker (no external anthropic-skills-plugin dependency).
import { readFileSync } from 'node:fs'
import { dirname, join } from 'node:path'
import { fileURLToPath } from 'node:url'
import type { ClaudeJobKind } from '@prisma/client'
let cached: { grill?: string; makePlan?: string } = {}
function loadPrompt(file: 'grill.md' | 'make-plan.md'): string {
const here = dirname(fileURLToPath(import.meta.url))
// src/lib/idea-prompts.ts → src/lib → src → src/prompts/idea/{file}
const path = join(here, '..', 'prompts', 'idea', file)
return readFileSync(path, 'utf8')
}
export function getIdeaPromptText(kind: ClaudeJobKind): string {
if (kind === 'IDEA_GRILL') {
if (!cached.grill) cached.grill = loadPrompt('grill.md')
return cached.grill
}
if (kind === 'IDEA_MAKE_PLAN') {
if (!cached.makePlan) cached.makePlan = loadPrompt('make-plan.md')
return cached.makePlan
}
// TASK_IMPLEMENTATION and future kinds: no embedded prompt needed.
return ''
}

src/lib/job-config.ts (new file, 207 lines)

@@ -0,0 +1,207 @@
// PBI-67: model + mode selection per ClaudeJob kind.
//
// Keep in sync with Scrum4Me/lib/job-config.ts — if you change a field here,
// do the same on the webapp side. Deliberately duplicated (no shared
// package) to keep the MCP server self-contained.
//
// Override cascade (first match wins):
//   1. task.requires_opus === true → force Opus
//   2. job.requested_* (snapshotted at enqueue)
//   3. product.preferred_*
//   4. KIND_DEFAULTS below
//
// CLI flag mapping (Claude CLI 2.1.x):
// - thinking_budget (number) → mapBudgetToEffort() → --effort {low,medium,high,xhigh,max}
//   (the CLI has no --thinking-budget flag — only --effort)
// - max_turns remains audit-only: the CLI has no --max-turns flag.
//   The value is snapshotted for cost attribution but not passed through.
// - allowed_tools → --allowedTools (comma-separated list)
export type ClaudeModel =
| 'claude-opus-4-7'
| 'claude-sonnet-4-6'
| 'claude-haiku-4-5-20251001'
export type PermissionMode = 'plan' | 'default' | 'acceptEdits' | 'bypassPermissions'
export type JobConfig = {
model: ClaudeModel
thinking_budget: number // 0 = off
permission_mode: PermissionMode
max_turns: number | null // null = unbounded
allowed_tools: string[] | null // null = all tools
}
export type JobInput = {
kind: string
requested_model?: string | null
requested_thinking_budget?: number | null
requested_permission_mode?: string | null
}
export type ProductInput = {
preferred_model?: string | null
thinking_budget_default?: number | null
preferred_permission_mode?: string | null
}
export type TaskInput = {
requires_opus?: boolean | null
}
// Tool allowlists per kind. Deliberately no `wait_for_job`, `check_queue_empty`
// or `get_idea_context` — the runner (scrum4me-docker/bin/run-one-job.ts)
// claims on Claude's behalf. Guardrail against recursive claims within a single invocation.
const TASK_TOOLS = [
'Read', 'Edit', 'Write', 'Bash', 'Grep', 'Glob',
'mcp__scrum4me__get_claude_context',
'mcp__scrum4me__update_task_status',
'mcp__scrum4me__update_task_plan',
'mcp__scrum4me__log_implementation',
'mcp__scrum4me__log_test_result',
'mcp__scrum4me__log_commit',
'mcp__scrum4me__verify_task_against_plan',
'mcp__scrum4me__update_job_status',
'mcp__scrum4me__ask_user_question',
'mcp__scrum4me__get_question_answer',
'mcp__scrum4me__list_open_questions',
'mcp__scrum4me__cancel_question',
'mcp__scrum4me__worker_heartbeat',
]
const KIND_DEFAULTS: Record<string, JobConfig> = {
// Idea kinds and PLAN_CHAT run in `acceptEdits` (not `plan`):
// `plan` mode waits for human approval after every planning phase, which in an
// autonomous runner context means Claude never calls `update_job_status`
// and the job FAILs once its lease expires. The `allowed_tools` list
// does the real sandboxing (no Bash, no Edit, only Read/Grep/etc).
IDEA_GRILL: {
model: 'claude-sonnet-4-6',
thinking_budget: 12000,
permission_mode: 'acceptEdits',
max_turns: 15,
allowed_tools: [
'Read', 'Grep', 'Glob', 'WebSearch', 'AskUserQuestion',
'mcp__scrum4me__update_idea_grill_md',
'mcp__scrum4me__log_idea_decision',
'mcp__scrum4me__update_job_status',
'mcp__scrum4me__ask_user_question',
'mcp__scrum4me__get_question_answer',
],
},
IDEA_MAKE_PLAN: {
model: 'claude-opus-4-7',
thinking_budget: 24000,
permission_mode: 'acceptEdits',
max_turns: 20,
allowed_tools: [
'Read', 'Grep', 'Glob', 'WebSearch', 'AskUserQuestion', 'Write',
'mcp__scrum4me__update_idea_plan_md',
'mcp__scrum4me__log_idea_decision',
'mcp__scrum4me__update_job_status',
],
},
IDEA_REVIEW_PLAN: {
model: 'claude-opus-4-7',
thinking_budget: 6000,
permission_mode: 'acceptEdits',
max_turns: 1,
allowed_tools: [
'Read', 'Write', 'Grep', 'Glob',
'mcp__scrum4me__update_idea_plan_reviewed',
'mcp__scrum4me__log_idea_decision',
'mcp__scrum4me__update_job_status',
'mcp__scrum4me__ask_user_question',
],
},
PLAN_CHAT: {
model: 'claude-sonnet-4-6',
thinking_budget: 6000,
permission_mode: 'acceptEdits',
max_turns: 5,
allowed_tools: [
'Read', 'Grep', 'AskUserQuestion',
'mcp__scrum4me__update_job_status',
],
},
TASK_IMPLEMENTATION: {
model: 'claude-sonnet-4-6',
thinking_budget: 6000,
permission_mode: 'bypassPermissions',
max_turns: 50,
allowed_tools: TASK_TOOLS,
},
SPRINT_IMPLEMENTATION: {
model: 'claude-sonnet-4-6',
thinking_budget: 6000,
permission_mode: 'bypassPermissions',
max_turns: null,
// No `mcp__scrum4me__job_heartbeat` — the runner extends the lease
// automatically via setInterval (see scrum4me-docker/bin/run-one-job.ts).
allowed_tools: [
...TASK_TOOLS,
'mcp__scrum4me__update_task_execution',
'mcp__scrum4me__verify_sprint_task',
],
},
}
const FALLBACK: JobConfig = {
model: 'claude-sonnet-4-6',
thinking_budget: 6000,
permission_mode: 'default',
max_turns: 50,
allowed_tools: null,
}
export function getKindDefault(kind: string): JobConfig {
return KIND_DEFAULTS[kind] ?? FALLBACK
}
// max_turns and allowed_tools stay at the kind default (no product/task override
// in V1 — if the need arises, add analogous fields to Product/Task).
export function resolveJobConfig(
job: JobInput,
product: ProductInput,
task?: TaskInput,
): JobConfig {
const base = getKindDefault(job.kind)
const model = (
task?.requires_opus
? 'claude-opus-4-7'
: job.requested_model ?? product.preferred_model ?? base.model
) as ClaudeModel
const thinking_budget =
job.requested_thinking_budget ?? product.thinking_budget_default ?? base.thinking_budget
const permission_mode = (job.requested_permission_mode ??
product.preferred_permission_mode ??
base.permission_mode) as PermissionMode
return {
model,
thinking_budget,
permission_mode,
max_turns: base.max_turns,
allowed_tools: base.allowed_tools,
}
}
// Map a numeric thinking_budget to the Claude CLI 2.1.x --effort flag.
// Returns null when the flag should not be passed at all (budget = 0).
//
// Mapping (in sync with Scrum4Me/lib/job-config.ts):
//   0            → null (no --effort flag)
//   1..6000      → "medium"
//   6001..12000  → "high"
//   12001..24000 → "xhigh"
//   >24000       → "max"
export function mapBudgetToEffort(budget: number): string | null {
if (budget <= 0) return null
if (budget <= 6000) return 'medium'
if (budget <= 12000) return 'high'
if (budget <= 24000) return 'xhigh'
return 'max'
}
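The override cascade and the effort mapping can be exercised in isolation. A minimal sketch: `resolve` mirrors only the model/budget part of `resolveJobConfig` above, and the job/product/task inputs are hypothetical examples, not real records.

```typescript
// Reduced config shape for the sketch; the real file carries more fields.
type Cfg = { model: string; thinking_budget: number }

function resolve(
  job: { requested_model?: string | null; requested_thinking_budget?: number | null },
  product: { preferred_model?: string | null; thinking_budget_default?: number | null },
  task: { requires_opus?: boolean | null } | undefined,
  base: Cfg,
): Cfg {
  return {
    // 1. requires_opus wins, 2. job snapshot, 3. product preference, 4. kind default
    model: task?.requires_opus
      ? 'claude-opus-4-7'
      : job.requested_model ?? product.preferred_model ?? base.model,
    thinking_budget:
      job.requested_thinking_budget ?? product.thinking_budget_default ?? base.thinking_budget,
  }
}

function mapBudgetToEffort(budget: number): string | null {
  if (budget <= 0) return null
  if (budget <= 6000) return 'medium'
  if (budget <= 12000) return 'high'
  if (budget <= 24000) return 'xhigh'
  return 'max'
}

const base: Cfg = { model: 'claude-sonnet-4-6', thinking_budget: 6000 }
// requires_opus beats an explicit requested_model:
const a = resolve({ requested_model: 'claude-haiku-4-5-20251001' }, {}, { requires_opus: true }, base)
console.log(a.model)                 // claude-opus-4-7
// boundary values of the effort mapping:
console.log(mapBudgetToEffort(6000)) // medium
console.log(mapBudgetToEffort(6001)) // high
console.log(mapBudgetToEffort(0))    // null
```

Note that `??` (rather than `||`) lets an explicit `requested_thinking_budget: 0` survive the cascade, which matters for the "0 = off" convention.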

src/lib/kind-prompts.ts (new file, 49 lines)

@@ -0,0 +1,49 @@
// Loader for embedded prompts per ClaudeJob kind.
//
// The .md files in src/prompts/<kind>/ are deliberately baked in so that
// every runner can read them without an external plugin dependency. The
// runner (scrum4me-docker/bin/run-one-job.ts) reads the right prompt via
// getKindPromptText() and passes it on as the `claude -p` prompt.
//
// Variable substitution is done by the runner itself (e.g. $PAYLOAD_PATH).
import { readFileSync } from 'node:fs'
import { dirname, join } from 'node:path'
import { fileURLToPath } from 'node:url'
import type { ClaudeJobKind } from '@prisma/client'
const cache: Partial<Record<ClaudeJobKind, string>> = {}
function loadPrompt(rel: string): string {
const here = dirname(fileURLToPath(import.meta.url))
// src/lib/kind-prompts.ts → src/lib → src → src/prompts/<rel>
const path = join(here, '..', 'prompts', rel)
return readFileSync(path, 'utf8')
}
const KIND_TO_PROMPT_PATH: Partial<Record<ClaudeJobKind, string>> = {
IDEA_GRILL: 'idea/grill.md',
IDEA_MAKE_PLAN: 'idea/make-plan.md',
IDEA_REVIEW_PLAN: 'idea/review-plan.md',
TASK_IMPLEMENTATION: 'task/implementation.md',
SPRINT_IMPLEMENTATION: 'sprint/implementation.md',
PLAN_CHAT: 'plan-chat/chat.md',
}
export function getKindPromptText(kind: ClaudeJobKind): string {
if (cache[kind]) return cache[kind]!
const rel = KIND_TO_PROMPT_PATH[kind]
if (!rel) return ''
const text = loadPrompt(rel)
cache[kind] = text
return text
}
// Back-compat re-export. wait-for-job.ts calls getIdeaPromptText for the
// three idea kinds; kept so the existing call site does not have to change
// until a separate cleanup pass.
export function getIdeaPromptText(kind: ClaudeJobKind): string {
if (kind !== 'IDEA_GRILL' && kind !== 'IDEA_MAKE_PLAN' && kind !== 'IDEA_REVIEW_PLAN') return ''
return getKindPromptText(kind)
}
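The read-once caching above can be seen in miniature with the filesystem read stubbed out. `fakeRead` and the reduced `Kind` union are stand-ins for illustration, not part of the real module.

```typescript
type Kind = 'IDEA_GRILL' | 'IDEA_MAKE_PLAN' | 'TASK_IMPLEMENTATION'

const PATHS: Partial<Record<Kind, string>> = { IDEA_GRILL: 'idea/grill.md' }
const cache: Partial<Record<Kind, string>> = {}
let reads = 0

function fakeRead(rel: string): string {
  reads++ // stands in for readFileSync in the real loader
  return `contents of ${rel}`
}

function getPromptText(kind: Kind): string {
  if (cache[kind]) return cache[kind]!
  const rel = PATHS[kind]
  if (!rel) return '' // unknown kinds get no embedded prompt, and no read
  const text = fakeRead(rel)
  cache[kind] = text
  return text
}

getPromptText('IDEA_GRILL')
getPromptText('IDEA_GRILL')
console.log(reads) // 1 (second call is served from the cache)
console.log(getPromptText('TASK_IMPLEMENTATION') === '') // true
```

One subtlety of the `if (cache[kind])` guard: an empty-string value would never be treated as cached; that is harmless here because real prompt files are non-empty.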

src/lib/push-trigger.ts (new file, 22 lines)

@@ -0,0 +1,22 @@
export type PushPayload = { title: string; body: string; url: string; tag?: string };
export async function triggerPush(userId: string, payload: PushPayload): Promise<void> {
const url = process.env.INTERNAL_PUSH_URL;
const secret = process.env.INTERNAL_PUSH_SECRET;
if (!url || !secret) return; // feature-gated
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 5000);
try {
const res = await fetch(url, {
method: 'POST',
headers: { 'content-type': 'application/json', authorization: `Bearer ${secret}` },
body: JSON.stringify({ userId, payload }),
signal: controller.signal,
});
if (!res.ok) console.warn('[push-trigger] non-2xx', res.status);
} catch (err) {
console.error('[push-trigger]', err);
} finally {
clearTimeout(timeout);
}
}
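The timeout/abort wiring in `triggerPush` is a reusable pattern: start a timer, abort the in-flight work when it fires, and always clear the timer in `finally`. A standalone sketch of that pattern, with a fake slow request instead of a real `fetch`:

```typescript
// Abort a signal-aware async operation after a deadline; always clear the timer.
async function withTimeout<T>(
  work: (signal: AbortSignal) => Promise<T>,
  ms: number,
): Promise<T> {
  const controller = new AbortController()
  const timer = setTimeout(() => controller.abort(), ms)
  try {
    return await work(controller.signal)
  } finally {
    clearTimeout(timer) // mirrors the finally block in triggerPush
  }
}

// A slow fake "request" that resolves late unless aborted first.
function slowRequest(signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const t = setTimeout(() => resolve('ok'), 50)
    signal.addEventListener('abort', () => {
      clearTimeout(t)
      reject(new Error('aborted'))
    })
  })
}

withTimeout(slowRequest, 10)
  .then(() => console.log('resolved'))
  .catch((e) => console.log((e as Error).message)) // aborted
withTimeout(slowRequest, 200).then((r) => console.log(r)) // ok
```

Passing the same `signal` to `fetch`, as the file above does, gives the real request identical semantics: a 5-second ceiling with no dangling timer.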


@@ -1,9 +1,11 @@
// **KEEP IN SYNC** with Scrum4Me/lib/tasks-status-update.ts.
// Both repos share the same DB; this helper must produce bit-for-bit
// identical status transitions to the Scrum4Me version. When changing
// something here, also update the Scrum4Me repo, and vice versa.
import type { Prisma, TaskStatus, StoryStatus, PbiStatus, SprintStatus } from '@prisma/client'
import { prisma } from '../prisma.js'
export interface PropagationResult {
task: {
id: string
title: string
@@ -11,21 +13,38 @@ export interface UpdateTaskStatusResult {
story_id: string
implementation_plan: string | null
}
storyId: string
storyChanged: boolean
pbiChanged: boolean
sprintChanged: boolean
sprintRunChanged: boolean
}
// Real-time status propagation: on every task status change the chain
// Task → Story → PBI → Sprint → SprintRun is re-evaluated within a single
// transaction.
//
// Rules:
// Story:  ANY task FAILED → FAILED, ELSE ALL DONE → DONE,
//         ELSE IN_SPRINT (provided story.sprint_id != null), otherwise OPEN
// PBI:    ANY story FAILED → FAILED, ELSE ALL DONE → DONE, ELSE READY
//         (BLOCKED is manual and is not overwritten by this helper)
// Sprint: ANY PBI of a story in the sprint FAILED → FAILED,
//         ELSE ALL PBIs of those stories DONE → CLOSED,
//         ELSE OPEN
// SprintRun: Sprint→FAILED → SprintRun=FAILED + cancel outstanding work +
//         set failed_task_id; Sprint→CLOSED → SprintRun=DONE; otherwise
//         the SprintRun is left unchanged.
export async function propagateStatusUpwards(
taskId: string,
newStatus: TaskStatus,
client?: Prisma.TransactionClient,
// PBI-50: optional explicit sprint_run_id for SPRINT_IMPLEMENTATION
// (where no ClaudeJob.task_id link exists). When absent, the helper
// falls back to the lookup via ClaudeJob.task_id, with
// Story → Sprint → SprintRun.findFirst({ status: active }) as the last resort.
sprintRunId?: string,
): Promise<PropagationResult> {
const run = async (tx: Prisma.TransactionClient): Promise<PropagationResult> => {
const task = await tx.task.update({
where: { id: taskId },
data: { status: newStatus },
@@ -38,35 +57,232 @@ export async function updateTaskStatusWithStoryPromotion(
},
})
// Re-evaluate the story
const siblings = await tx.task.findMany({
where: { story_id: task.story_id },
select: { status: true },
})
const anyTaskFailed = siblings.some((s) => s.status === 'FAILED')
const allTasksDone =
siblings.length > 0 && siblings.every((s) => s.status === 'DONE')
const story = await tx.story.findUniqueOrThrow({
where: { id: task.story_id },
select: { id: true, status: true, pbi_id: true, sprint_id: true },
})
const defaultActive: StoryStatus = story.sprint_id ? 'IN_SPRINT' : 'OPEN'
let nextStoryStatus: StoryStatus
if (anyTaskFailed) nextStoryStatus = 'FAILED'
else if (allTasksDone) nextStoryStatus = 'DONE'
else nextStoryStatus = defaultActive
let storyChanged = false
if (nextStoryStatus !== story.status) {
await tx.story.update({
where: { id: story.id },
data: { status: nextStoryStatus },
})
storyChanged = true
}
// Re-evaluate the PBI — leave BLOCKED alone
const pbi = await tx.pbi.findUniqueOrThrow({
where: { id: story.pbi_id },
select: { id: true, status: true },
})
let pbiChanged = false
if (pbi.status !== 'BLOCKED') {
const pbiStories = await tx.story.findMany({
where: { pbi_id: pbi.id },
select: { status: true },
})
const anyStoryFailed = pbiStories.some((s) => s.status === 'FAILED')
const allStoriesDone =
pbiStories.length > 0 && pbiStories.every((s) => s.status === 'DONE')
let nextPbiStatus: PbiStatus
if (anyStoryFailed) nextPbiStatus = 'FAILED'
else if (allStoriesDone) nextPbiStatus = 'DONE'
else nextPbiStatus = 'READY'
if (nextPbiStatus !== pbi.status) {
await tx.pbi.update({
where: { id: pbi.id },
data: { status: nextPbiStatus },
})
pbiChanged = true
}
}
// Re-evaluate the sprint — only if this story is attached to a sprint
let sprintChanged = false
let nextSprintStatus: SprintStatus | null = null
if (story.sprint_id) {
const sprint = await tx.sprint.findUniqueOrThrow({
where: { id: story.sprint_id },
select: { id: true, status: true },
})
const sprintPbiRows = await tx.story.findMany({
where: { sprint_id: sprint.id },
select: { pbi_id: true },
distinct: ['pbi_id'],
})
const sprintPbis = await tx.pbi.findMany({
where: { id: { in: sprintPbiRows.map((s) => s.pbi_id) } },
select: { status: true },
})
const anyPbiFailed = sprintPbis.some((p) => p.status === 'FAILED')
const allPbisDone =
sprintPbis.length > 0 && sprintPbis.every((p) => p.status === 'DONE')
let nextStatus: SprintStatus
if (anyPbiFailed) nextStatus = 'FAILED'
else if (allPbisDone) nextStatus = 'CLOSED'
else nextStatus = 'OPEN'
if (nextStatus !== sprint.status) {
await tx.sprint.update({
where: { id: sprint.id },
data: {
status: nextStatus,
...(nextStatus === 'CLOSED' ? { completed_at: new Date() } : {}),
},
})
sprintChanged = true
nextSprintStatus = nextStatus
}
}
// Re-evaluate the SprintRun. Resolve sprint_run_id in this order:
// 1. Explicit sprintRunId arg (PBI-50: SPRINT_IMPLEMENTATION path).
// 2. ClaudeJob.task_id lookup (PER_TASK flow).
// 3. Story → Sprint → SprintRun.findFirst({ status: active }) (no
//    task job, e.g. a manual task status change via the UI).
let sprintRunChanged = false
if (nextSprintStatus === 'FAILED' || nextSprintStatus === 'CLOSED') {
let resolvedRunId: string | null = sprintRunId ?? null
let cancelExceptJobId: string | null = null
if (!resolvedRunId) {
const job = await tx.claudeJob.findFirst({
where: { task_id: taskId, sprint_run_id: { not: null } },
orderBy: { created_at: 'desc' },
select: { id: true, sprint_run_id: true },
})
if (job?.sprint_run_id) {
resolvedRunId = job.sprint_run_id
cancelExceptJobId = job.id
}
}
if (!resolvedRunId && story.sprint_id) {
const activeRun = await tx.sprintRun.findFirst({
where: {
sprint_id: story.sprint_id,
status: { in: ['QUEUED', 'RUNNING', 'PAUSED'] },
},
orderBy: { created_at: 'desc' },
select: { id: true },
})
if (activeRun) resolvedRunId = activeRun.id
}
if (resolvedRunId) {
const sprintRun = await tx.sprintRun.findUnique({
where: { id: resolvedRunId },
select: { id: true, status: true },
})
if (
sprintRun &&
(sprintRun.status === 'QUEUED' ||
sprintRun.status === 'RUNNING' ||
sprintRun.status === 'PAUSED')
) {
if (nextSprintStatus === 'FAILED') {
await tx.sprintRun.update({
where: { id: sprintRun.id },
data: {
status: 'FAILED',
finished_at: new Date(),
failed_task_id: taskId,
},
})
// Cancel sibling jobs within the same SprintRun except the current
// task job (if any). For SPRINT_IMPLEMENTATION cancelExceptJobId is
// null and there are no siblings to cancel — the SPRINT job itself
// stays active and the worker detects this via job_heartbeat.
await tx.claudeJob.updateMany({
where: {
sprint_run_id: sprintRun.id,
status: { in: ['QUEUED', 'CLAIMED', 'RUNNING'] },
...(cancelExceptJobId ? { id: { not: cancelExceptJobId } } : {}),
},
data: {
status: 'CANCELLED',
finished_at: new Date(),
error: `Cancelled: task ${taskId} failed in same sprint run`,
},
})
sprintRunChanged = true
} else {
// COMPLETED
await tx.sprintRun.update({
where: { id: sprintRun.id },
data: { status: 'DONE', finished_at: new Date() },
})
sprintRunChanged = true
}
}
}
}
return {
task,
storyId: task.story_id,
storyChanged,
pbiChanged,
sprintChanged,
sprintRunChanged,
}
}
if (client) return run(client)
return prisma.$transaction(run)
}
// ─── Backwards-compat wrapper ────────────────────────────────────────────────
// Existing tools (update-task-status, log-implementation, etc.) expect the
// old { task, storyStatusChange, storyId } shape. We map storyChanged to
// promoted/demoted via a simple heuristic on the new TaskStatus.
export type StoryStatusChange = 'promoted' | 'demoted' | null
export interface UpdateTaskStatusResult {
task: PropagationResult['task']
storyStatusChange: StoryStatusChange
storyId: string
sprintRunChanged: boolean
}
export async function updateTaskStatusWithStoryPromotion(
taskId: string,
newStatus: TaskStatus,
client?: Prisma.TransactionClient,
sprintRunId?: string,
): Promise<UpdateTaskStatusResult> {
const result = await propagateStatusUpwards(taskId, newStatus, client, sprintRunId)
let storyStatusChange: StoryStatusChange = null
if (result.storyChanged) {
storyStatusChange = newStatus === 'DONE' ? 'promoted' : 'demoted'
}
return {
task: result.task,
storyStatusChange,
storyId: result.storyId,
sprintRunChanged: result.sprintRunChanged,
}
}
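The story rule in the comment block above (FAILED beats DONE, which beats the sprint-dependent default) is easy to isolate as a pure function. A sketch, with the status unions reduced to string literals rather than the Prisma enums (the `IN_PROGRESS` task state is an assumed placeholder for any non-terminal status):

```typescript
type TaskStatus = 'OPEN' | 'IN_PROGRESS' | 'DONE' | 'FAILED'
type StoryStatus = 'OPEN' | 'IN_SPRINT' | 'DONE' | 'FAILED'

// ANY task FAILED → FAILED; ELSE ALL DONE → DONE;
// ELSE IN_SPRINT when the story is in a sprint, otherwise OPEN.
function nextStoryStatus(tasks: TaskStatus[], inSprint: boolean): StoryStatus {
  if (tasks.some((t) => t === 'FAILED')) return 'FAILED'
  if (tasks.length > 0 && tasks.every((t) => t === 'DONE')) return 'DONE'
  return inSprint ? 'IN_SPRINT' : 'OPEN'
}

console.log(nextStoryStatus(['DONE', 'FAILED'], true)) // FAILED (failure wins)
console.log(nextStoryStatus(['DONE', 'DONE'], true))   // DONE
console.log(nextStoryStatus(['DONE', 'OPEN'], true))   // IN_SPRINT
console.log(nextStoryStatus([], false))                // OPEN
```

The `length > 0` guard matters: `[].every(...)` is vacuously true in JavaScript, so without it a story with no tasks would be promoted to DONE, which is exactly what the `allTasksDone` check in the helper defends against.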


@@ -26,7 +26,7 @@ export function startHeartbeat(opts: {
} catch {
// non-fatal — next tick retries
}
}, opts.intervalMs ?? 10_000)
return { stop: () => clearInterval(timer) }
}
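The full `startHeartbeat` is not shown in this hunk; a minimal sketch of the interval-with-stop-handle shape the visible lines suggest, where `beat` is a stand-in for the real lease-extension call:

```typescript
function startHeartbeat(opts: { beat: () => Promise<void>; intervalMs?: number }) {
  const timer = setInterval(async () => {
    try {
      await opts.beat()
    } catch {
      // non-fatal; next tick retries
    }
  }, opts.intervalMs ?? 10_000)
  return { stop: () => clearInterval(timer) }
}

let beats = 0
const hb = startHeartbeat({ beat: async () => { beats++ }, intervalMs: 5 })
setTimeout(() => {
  hb.stop()
  console.log(beats > 0) // true: at least one beat fired before stop
}, 30)
```

Swallowing the error per tick is deliberate: a single failed heartbeat should not kill the interval, since the next tick gets a fresh chance before the lease expires.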


@ -1,21 +1,28 @@
# Grill prompt for IDEA_GRILL jobs

> This prompt is passed by `scrum4me-docker/bin/run-one-job.ts` as the
> `claude -p` input for a single claimed `IDEA_GRILL` job. This file is
> deliberately **not** replaced by the external
> `anthropic-skills:grill-me` skill (see M12 grill decision 5: embedded prompts) —
> Scrum4Me maintains its own version so the flow is reproducible on
> every worker.

---

You are a **grill agent** for a Scrum4Me idea. The runner has already
claimed the job for you; your first action is always:

```
Read $PAYLOAD_PATH
```

That JSON file contains all the context you need:

- `idea`: the full idea record incl. `id`, `code`, `title`, `description`,
  `product_id`, and any existing `grill_md`
- `job_id`: needed for `update_job_status` at the end
- `product`: the linked product (incl. `repo_url` and `definition_of_done`)
- `primary_worktree_path`: local repo to read (your `cwd` is already there)

## Goal

@@ -25,11 +32,11 @@ can make a PBI from. The end result is a markdown document that you

## Approach (loop, one question per cycle)

1. **Read `$PAYLOAD_PATH`** with the `Read` tool. Keep `idea.id`, `idea.code`,
   `idea.title`, `idea.grill_md` (may be null), `product.id`, and `job_id` —
   you need those in all the MCP tool calls below.
2. Explore the repo (`primary_worktree_path` is your `cwd`) for context:
   `README`, `docs/`, `package.json`, relevant source. `Read`/`Grep`/`Glob`.
3. Ask **one sharp question at a time** via
   `mcp__scrum4me__ask_user_question({ idea_id, question, options? })`. Wait
   for the answer (`mcp__scrum4me__get_question_answer` or `wait_seconds`).
@@ -39,7 +46,8 @@ can make a PBI from. The end result is a markdown document that you
5. Repeat until you have enough for a PBI (see stop condition).
6. Write the end result via
   `mcp__scrum4me__update_idea_grill_md({ idea_id, markdown })`.
7. Call `mcp__scrum4me__update_job_status({ job_id, status: 'done', summary })`
   — this closes the job. **Mandatory**, even if the user aborts.

## Stop condition

@@ -55,7 +63,7 @@ Also stop if the user explicitly says "done" / "enough" / "go on".

## Output format (strict)

```markdown
# Idea — <short title>

## Scope


@ -1,21 +1,29 @@
# Make-Plan prompt for IDEA_MAKE_PLAN jobs

> This prompt is passed by `scrum4me-docker/bin/run-one-job.ts` as the
> `claude -p` input for a single claimed `IDEA_MAKE_PLAN` job.
> Single-pass, **ask no questions** (see M12 grill decision 8). Doubts →
> back to the grill via the UI.

---

You are a **planning agent** for a Scrum4Me idea. The runner has already
claimed the job for you; your first action is always:

```
Read $PAYLOAD_PATH
```

That JSON file contains all the context you need:

- `job_id`: needed for `update_job_status` at the end
- `idea.id`, `idea.code`, `idea.title`, `idea.description`
- `idea.grill_md`: the result of the preceding grill session — this is your
  primary input.
- `idea.plan_md`: on a re-plan this holds the previous plan; use it as a reference.
- `product`: linked product with `repo_url`, `definition_of_done`,
  existing architecture in the repo.
- `primary_worktree_path`: local repo (your `cwd` is already there).

## Goal

@@ -26,11 +34,53 @@ PBI + stories + tasks via `materializeIdeaPlanAction`.

## Approach (single-pass)

1. **Read `$PAYLOAD_PATH`** with the `Read` tool. Keep `idea.id`, `idea.code`,
   `idea.grill_md`, `idea.plan_md` (may be null), `product.id`, and `job_id` —
   you need those in all the MCP tool calls below.
2. Read `idea.grill_md` in full.
3. Explore the repo (`primary_worktree_path` is your `cwd`) for patterns,
   existing modules, and the `docs/` structure.
4. **On removal/refactor: run a dependency-cascade grep** (see the next
   section). Add a task per affected file before the schema/code edit itself.
5. Build the plan in the **strict format** below.
6. Call `mcp__scrum4me__update_idea_plan_md({ idea_id, markdown })`.
7. Call `mcp__scrum4me__update_job_status({ job_id, status: 'done', summary })`
   — this closes the job. **Mandatory**, even on a parse failure.

## Dependency-cascade grep (mandatory on removal/refactor)

When the idea **removes or renames an existing symbol, model, route or
component**, you MUST map the consumers first, before finalizing the plan.
Otherwise `next build` breaks on type errors that `lint` and `vitest run`
do not catch (see below for why).

**Concretely:**

- Removing a Prisma model `Foo`?

  ```bash
  grep -rn "prisma\.foo\b\|prisma\.foos\b" actions/ app/ components/ lib/ \
    --include="*.ts" --include="*.tsx"
  ```

  Add one or more tasks per affected file ("clean up `actions/foos.ts`",
  "remove the `app/(app)/foos/` route", "drop the Foo tile from the
  `app/page.tsx` feature grid", etc.) **before** the schema-edit task.
- Removing a component / utility / type? Same thing: grep for the file
  paths and exports and plan a task per consumer.
- Renaming a model/route/component? Plan an edit task per affected file.
- Changing a `prisma.x.create` field (required ↔ optional)? Grep for
  `prisma.x.create` and `prisma.x.update` for type mismatches.
- Also add a **final task**: `npm run typecheck` (= `tsc --noEmit`) as a
  sanity check, separate from `lint && test && build`. Type errors surface
  there first and it is 10× faster than a full `next build`.

**Why so strict?** `eslint` does no deep type-check. `vitest` with esbuild
transpilation skips type errors. `next build` is the first step that
type-checks everything — and it sits at the end of the pipe. A missed
consumer file only shows up at verify, not during implementation.

## ASK NO QUESTIONS


src/prompts/idea/review-plan.md (new file, 210 lines)
@@ -0,0 +1,210 @@
# Review-Plan prompt for IDEA_REVIEW_PLAN jobs

> This prompt is sent by `wait_for_job` in the payload of an
> `IDEA_REVIEW_PLAN` job. This is an **iterative review with active plan revision**
> and convergence detection. You coordinate three review rounds, rewrite the plan
> after each round, and store the review log via `update_idea_plan_reviewed`.

---

You are a **plan-review orchestrator** for Scrum4Me idea `{idea_code}`.

Your context (provided in the `wait_for_job` payload):

- `idea.plan_md`: the plan document under review (YAML frontmatter + body)
- `idea.grill_md`: context from the grill phase (scope, acceptance, risks)
- `product`: linked product with `definition_of_done` and repo context
- `repo_url`: local repo for consulting existing patterns/code

## Goal

Run three iterative review rounds, each focused on different aspects. After
each round you actively rewrite the plan and store the revised version in the
database. The reviews work towards convergence: once the plan is stable
(< 5% changes two rounds in a row), you ask for approval.

**Important:** the plan is genuinely improved and persisted in every round
via `update_idea_plan_md`. This is not a passive review — you coordinate an
active improvement process.

## Approach

### Setup (before round 1)

1. Read `idea.plan_md` in full — this is the starting version of the plan.
2. Read `idea.grill_md` for scope and acceptance-criteria context.
3. **Load the codex** (mandatory, not optional):
   - Glob + Read all `docs/patterns/**/*.md` → architecture patterns
   - Glob + Read all `docs/architecture/**/*.md` → system design
   - Read `CLAUDE.md` → hard-stop rules (never violate)
   - Use these as guidance in every review round
4. Initialize `review_log`:
   ```json
   { "plan_file": "{idea_code}", "created_at": "<now>",
     "rounds": [], "approval": { "status": "pending" } }
   ```
### Per review round

**Round 1 — Structure & Syntax (Haiku perspective: fast and sharp)**

- Role: structure reviewer — focus on correctness, not content
- Check: YAML parseable, all required fields present, no empty strings,
  priority values valid (1–4), markdown structure intact
- Rewrite plan_md: correct structural errors and formatting
- *Note on multi-model:* a direct Haiku API call is currently not available
  via job-config; perform this role yourself with a compact, syntax-focused eye

**Round 2 — Logic & Patterns (Sonnet perspective: deep and pattern-aware)**

- Role: architecture reviewer — focus on logic, completeness and pattern conformance
- Check: stories follow from the grill criteria, tasks are concrete
  (file names, commands), patterns from `docs/patterns/` are followed,
  `verify_required` coherent, dependency cascades addressed
- Rewrite plan_md: fill gaps, make tasks more specific, add missing steps

**Round 3 — Risk & Edge Cases (Opus perspective: critical and broad)**

- Role: risk reviewer — focus on what can go wrong
- Check: large tasks split, refactors have an undo strategy,
  schema changes have migration tasks, type checking explicit, concurrency
  addressed, error handling per action, feature flags for large changes
- Rewrite plan_md: add risk mitigation, split oversized tasks

### Plan revision (after every round — mandatory)

After applying the review criteria:

1. Store the current version as `plan_before` in `review_log.rounds[N]`.
2. Rewrite `plan_md` — integrate the improvements you found.
3. Compute `diff_pct = changed_lines / total_lines * 100`.
4. Store the revised version as `plan_after` in `review_log.rounds[N]`.
5. **Persist the revised version** via:
   ```
   update_idea_plan_md({ idea_id: <id>, plan_md: <revised text> })
   ```
   This stores the improved plan in the database so the user can see the
   progress. Do not skip this step — even when there are few changes.
### Convergence Detection
After every round (except round 0):
```
diff_pct_this_round = changed_lines / total_lines * 100
if diff_pct_this_round < 5 AND prev_round_diff_pct < 5:
→ CONVERGED
```
If converged (or after round 2 when the maximum is reached):
- Store: `review_log.convergence = { stable_at_round: N, final_diff_pct, convergence_metric: "plan_stability" }`
- Request approval via `ask_user_question`
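The revision and convergence arithmetic above is small enough to capture in pure helpers. The following TypeScript sketch (the names `diffPct` and `isConverged` are illustrative, not part of the codebase) shows the two-consecutive-rounds stability rule:

```typescript
// Convergence sketch: a plan counts as CONVERGED once two consecutive
// rounds each changed less than 5% of the plan's lines.
export interface RoundDiff {
  changedLines: number
  totalLines: number
}

export function diffPct(round: RoundDiff): number {
  if (round.totalLines === 0) return 0
  return (round.changedLines / round.totalLines) * 100
}

export function isConverged(rounds: RoundDiff[]): boolean {
  // Round 0 never converges on its own; two consecutive data points are needed.
  if (rounds.length < 2) return false
  const last = diffPct(rounds[rounds.length - 1])
  const prev = diffPct(rounds[rounds.length - 2])
  return last < 5 && prev < 5
}
```

Keeping this logic pure makes the 5% threshold and the round-0 exception trivially unit-testable, independent of the review loop itself.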
## Review Criteria per Round
### Round 1 — Structure & Syntax
- [ ] Frontmatter YAML parseable
- [ ] All required fields present (`pbi.title`, `stories`, `tasks`)
- [ ] Priority values valid (1-4)
- [ ] No empty strings in required fields
- [ ] Markdown structure correct (headers, code blocks)
### Round 2 — Logic & Patterns
- [ ] Stories follow logically from the grill acceptance criteria
- [ ] Tasks are concrete (file names, commands, not abstract)
- [ ] Dependency-cascade checks performed (for removals/refactors)
- [ ] Patterns from `docs/patterns/` are followed
- [ ] Implementation plan per task is actionable
- [ ] `verify_required` values coherent with task scope
### Round 3 — Risk & Edge Cases
- [ ] Large tasks (> 4h) are split into subtasks
- [ ] Refactors have an undo/rollback strategy
- [ ] Schema changes have migration tasks
- [ ] Type checking is explicitly verified (end of task)
- [ ] Concurrency issues / race conditions addressed
- [ ] Error handling per action is clear
- [ ] Feature flags built in for large or risky changes
## Steps (extended algorithm)
1. **Init**
- Read plan_md + grill_md.
- Load the codex (docs/patterns, docs/architecture, CLAUDE.md).
- Initialize `review_log`.
2. **Loop: for round in [0, 1, 2]**
- Run the review (focus per round: structure / logic / risk).
- Store `plan_before`.
- Rewrite plan_md based on the findings.
- Call `update_idea_plan_md` with the revised text.
- Store `plan_after` + `issues` + `score` + `diff_pct` in review_log.
- Check convergence (from round 1 onward).
- Break if converged.
3. **Approval Gate**
- Ask via `ask_user_question`:
"Plan reviewed ({N} rounds, {X}% final change). Approve?"
- Options: `["Yes, accept", "No, changes wanted", "Review again"]`
- "Yes": `approval.status = 'approved'` → continue to Save & Close.
- "No": `approval.status = 'rejected'` → finish (the user can edit manually).
- "Again": at most 2 extra rounds (rounds 3-4), then force the approval question.
4. **Save & Close**
- Call `update_idea_plan_reviewed({ idea_id, review_log, approval_status })`.
- Call `update_job_status({ job_id, status: 'done', summary: review_log.summary })`.
## Output format for review_log (strict JSON)
```json
{
"plan_file": "IDEA-016",
"created_at": "ISO8601",
"rounds": [
{
"round": 0,
"model": "claude-opus-4-7",
"role": "Structure Review",
"focus": "YAML parsing, format, syntax",
"plan_before": "<original plan_md>",
"plan_after": "<revised plan_md after the round>",
"issues": [
{
"category": "structure|logic|risk|pattern",
"severity": "error|warning|info",
"suggestion": "what to fix"
}
],
"score": 75,
"plan_diff_lines": 12,
"converged": false,
"timestamp": "ISO8601"
}
],
"convergence": {
"stable_at_round": 2,
"final_diff_pct": 2.1,
"convergence_metric": "plan_stability"
},
"approval": {
"status": "pending|approved|rejected",
"timestamp": "ISO8601"
},
"summary": "1-2 sentence summary: X rounds, Y% change, status"
}
```
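For reference, the strict JSON layout above maps onto TypeScript types like these (an illustrative sketch; the server-side tool deliberately accepts the log as an opaque object):

```typescript
// Illustrative types mirroring the review_log JSON layout above.
export type IssueCategory = 'structure' | 'logic' | 'risk' | 'pattern'
export type IssueSeverity = 'error' | 'warning' | 'info'
export type ApprovalStatus = 'pending' | 'approved' | 'rejected'

export interface ReviewIssue {
  category: IssueCategory
  severity: IssueSeverity
  suggestion: string
}

export interface ReviewRound {
  round: number
  model: string
  role: string
  focus: string
  plan_before: string
  plan_after: string
  issues: ReviewIssue[]
  score: number
  plan_diff_lines: number
  converged: boolean
  timestamp: string
}

export interface ReviewLog {
  plan_file: string
  created_at: string
  rounds: ReviewRound[]
  convergence?: {
    stable_at_round: number
    final_diff_pct: number
    convergence_metric: 'plan_stability'
  }
  approval: { status: ApprovalStatus; timestamp?: string }
  summary: string
}

// Runtime guard for the one field the status transition depends on.
export function isApprovalStatus(v: unknown): v is ApprovalStatus {
  return v === 'pending' || v === 'approved' || v === 'rejected'
}
```

Only `approval.status` drives a state transition downstream, so the guard covers exactly that field.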
## Failure Cases
- **Plan parse error**: `update_job_status('failed', error: 'plan_parse_failed')` — stop.
- **update_idea_plan_md fails**: log the error in review_log and continue the review — not fatal.
- **User cancels**: finish cleanly; the server sets the job to CANCELLED.
- **Question expires**: store the partial review log via `update_idea_plan_reviewed`, mark as `rejected`.
## Assumptions & Limits
- **Multi-model:** direct Haiku/Sonnet API calls are not available through the current
job-config architecture. All rounds run on the configured Opus model.
The roles (structure / logic / risk) are still kept strictly separated.
Future: direct model switching via the Anthropic API.
- The plan contains no encrypted data (the review log is stored as JSON in the DB).
- The repo is readable; no network failures expected.
- At most 2 extra review rounds beyond the initial 3 (max 5 rounds total).
- Per round: at most 10 issues logged (the rest are folded into `summary`).
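The 10-issue cap per round can be sketched as a small helper (the name `capIssues` is hypothetical):

```typescript
// Keep at most `max` issues per round; fold the rest into a one-line note
// that can be appended to the round's summary.
export function capIssues<T>(
  issues: T[],
  max = 10,
): { kept: T[]; overflowNote: string | null } {
  if (issues.length <= max) return { kept: issues, overflowNote: null }
  return {
    kept: issues.slice(0, max),
    overflowNote: `${issues.length - max} additional issue(s) omitted; see summary`,
  }
}
```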


@@ -0,0 +1,16 @@
# PLAN_CHAT prompt (placeholder)
> This prompt is a placeholder. PLAN_CHAT is included in the KIND_DEFAULTS matrix
> but is not yet actively used by the queue. When this kind goes into production,
> replace this text with the final instruction.
---
You have been started for a `PLAN_CHAT` job. The payload is at:
```
$PAYLOAD_PATH
```
Read the payload and do what it says. Finish with
`mcp__scrum4me__update_job_status({ job_id, status: 'done' })`.


@@ -0,0 +1,77 @@
# SPRINT_IMPLEMENTATION prompt
> This prompt is passed by `scrum4me-docker/bin/run-one-job.ts` as `claude -p` input
> for one claimed `SPRINT_IMPLEMENTATION` job. One job = handling the entire
> sprint run sequentially.
---
You have been started for one claimed `SPRINT_IMPLEMENTATION` job. The payload contains
a **frozen scope snapshot** with all tasks to process:
```
$PAYLOAD_PATH
```
Read that payload first. Important fields:
- `worktree_path`: the isolated worktree where all your work lands.
- `branch_name`: the feature branch (e.g. `feat/sprint-<id>`); with the SPRINT
PR strategy all work goes into one branch.
- `task_executions[]`: ordered list of `SprintTaskExecution` rows. Process them in
`order` sequence. Each entry has `task_id`, `plan_snapshot`, `verify_required`,
`verify_only`, and `base_sha` (only for the entry with order=0).
- `pbis[]`, `stories[]`: context for understanding; do not modify them.
- `sprint_run.id`: needed for the `update_task_status` cascade propagation. Always pass
`sprint_run_id` to `update_task_status`.
## Hard rules
- Do **NOT** call `mcp__scrum4me__wait_for_job`. The runner has already claimed.
- Do **NOT** call `mcp__scrum4me__job_heartbeat`. The runner extends the
lease automatically every 60 seconds via setInterval — you don't need to do
anything for this, not even during long Bash calls.
- Work exclusively in `worktree_path` on `branch_name`. One branch for the whole
sprint run (with the STORY strategy: one per story, see `sprint_run.pr_strategy`).
- Process tasks in the exact `order` sequence from `task_executions[]`.
## Workflow per task_execution
For each entry in `task_executions[]` (in order):
1. **Start**: `update_task_execution({ execution_id, status: 'RUNNING' })` and
`update_task_status({ task_id, status: 'in_progress', sprint_run_id })`.
2. **Read** the `plan_snapshot` from the execution plus the wider context from
`task`/`story`/`pbi` in the payload.
3. **Implement** the task in `worktree_path`. Commit per logical layer with
`git add -A && git commit`.
4. **Log per layer**:
- `mcp__scrum4me__log_implementation`
- `mcp__scrum4me__log_commit`
- `mcp__scrum4me__log_test_result` (PASSED/FAILED)
5. **Verify gate** (if `verify_required === true`):
`mcp__scrum4me__verify_sprint_task({ execution_id })`. On DIVERGENT: stop the
sprint and finish with `update_job_status('failed')`.
6. **Finish the task**:
- On ALIGNED/PARTIAL: `update_task_status({ task_id, status: 'done', sprint_run_id })`
and `update_task_execution({ execution_id, status: 'DONE' })`.
- On EMPTY (no-op): `update_task_execution({ execution_id, status: 'SKIPPED' })`
and `update_task_status({ task_id, status: 'done', sprint_run_id })`.
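Assuming thin wrappers around the MCP tools (the `callTool` parameter here is hypothetical, as is `runExecutions`), the per-execution workflow above can be sketched as:

```typescript
// Sketch of the per-execution loop. `callTool` stands in for the MCP client;
// `implementAndVerify` covers steps 2-5 and yields the verify verdict.
type Verdict = 'ALIGNED' | 'PARTIAL' | 'EMPTY' | 'DIVERGENT'

interface TaskExecution {
  execution_id: string
  task_id: string
  verify_required: boolean
  order: number
}

export async function runExecutions(
  executions: TaskExecution[],
  sprintRunId: string,
  callTool: (name: string, args: Record<string, unknown>) => Promise<unknown>,
  implementAndVerify: (ex: TaskExecution) => Promise<Verdict>,
): Promise<'done' | 'failed'> {
  // Process strictly in payload order.
  for (const ex of [...executions].sort((a, b) => a.order - b.order)) {
    await callTool('update_task_execution', { execution_id: ex.execution_id, status: 'RUNNING' })
    await callTool('update_task_status', { task_id: ex.task_id, status: 'in_progress', sprint_run_id: sprintRunId })

    const verdict = await implementAndVerify(ex)
    if (verdict === 'DIVERGENT') return 'failed' // stop the sprint

    // EMPTY (no-op) skips the execution but still marks the task done.
    const executionStatus = verdict === 'EMPTY' ? 'SKIPPED' : 'DONE'
    await callTool('update_task_execution', { execution_id: ex.execution_id, status: executionStatus })
    await callTool('update_task_status', { task_id: ex.task_id, status: 'done', sprint_run_id: sprintRunId })
  }
  return 'done'
}
```

This is a sketch under the stated assumptions, not the runner's actual implementation; the real session also performs the commit and logging calls between the status updates.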
## Finishing the sprint
After the last `task_execution`:
- **Verify-gate run**: optionally a general `npm run verify` on the whole worktree.
- **Close the job**: `mcp__scrum4me__update_job_status({ job_id, status: 'done', summary })`
with a summary of what was completed. The `update_job_status` tool automatically
detects that this is a SPRINT_IMPLEMENTATION job and performs the PR promotion
according to `Product.auto_pr` and `sprint_run.pr_strategy`.
On a blocking error halfway through: `update_job_status({ job_id, status: 'failed', error })`
and stop. The runner takes care of lease cleanup.
## Questions to the user
For blocking choices: `mcp__scrum4me__ask_user_question` + wait for the answer
with `mcp__scrum4me__get_question_answer`. Try to avoid this during a sprint
run — rely on the frozen plan snapshot.


@@ -0,0 +1,58 @@
# TASK_IMPLEMENTATION prompt
> This prompt is passed by `scrum4me-docker/bin/run-one-job.ts` as `claude -p` input
> for one claimed `TASK_IMPLEMENTATION` job. The runner has already claimed the job
> for you; your task is only the execution.
---
You have been started for one claimed `TASK_IMPLEMENTATION` job from the Scrum4Me queue.
The full job payload (including task, story, pbi, sprint, product, config and
worktree_path) is at:
```
$PAYLOAD_PATH
```
Read that payload first with `Read $PAYLOAD_PATH`. Work **exclusively** in the
`worktree_path` it contains — all git operations, file edits and
verifies must land there.
## Hard rules
- Do **NOT** call `mcp__scrum4me__wait_for_job`. The runner has already claimed
for you. One Claude invocation = one job.
- Do **NOT** call `mcp__scrum4me__check_queue_empty`. You finish after this single job.
- Work in the assigned worktree path; no edits in other directories.
- Follow `task.implementation_plan` from the payload if it is non-empty — that is
the recipe recorded by the human or by an earlier planning session.
## Workflow
1. **Set status to in_progress**: `mcp__scrum4me__update_task_status({ task_id, status: 'in_progress' })`.
2. **Read the plan**: read `task.implementation_plan` from the payload plus relevant
project docs (`docs/specs/functional.md`, possibly `docs/patterns/*.md`).
3. **Implement** the task: read → change → test → commit per logical layer.
Use `git add -A && git commit` per layer, **no** `git push`.
4. **Logging per layer**:
- `mcp__scrum4me__log_implementation` with a short description of what you
changed and why.
- `mcp__scrum4me__log_commit` with `commit_hash` and `commit_message` after every
commit (get the hash from `git rev-parse HEAD`).
- `mcp__scrum4me__log_test_result` with PASSED/FAILED and an explanation after every
`npm test` or build run.
5. **Verify gate**: call `mcp__scrum4me__verify_task_against_plan({ task_id })`
to check the changes against the plan.
6. **Finish**:
- On success: `update_task_status({ task_id, status: 'done' })` and
`update_job_status({ job_id, status: 'done', summary })`.
- On failure (the task cannot be completed): `update_task_status({ task_id, status: 'failed' })`
and `update_job_status({ job_id, status: 'failed', error })`.
- When no work is needed (no-op): `update_job_status({ job_id, status: 'skipped', summary })`.
## Questions to the user
If you hit a blocking choice that needs your user's input, use
`mcp__scrum4me__ask_user_question` and wait for the answer with
`mcp__scrum4me__get_question_answer`. Do **not** ask about things you can
derive from the plan yourself.


@@ -5,6 +5,8 @@ const TASK_DB_TO_API = {
   IN_PROGRESS: 'in_progress',
   REVIEW: 'review',
   DONE: 'done',
+  FAILED: 'failed',
+  EXCLUDED: 'excluded',
 } as const satisfies Record<TaskStatus, string>

 const TASK_API_TO_DB: Record<string, TaskStatus> = {
@@ -12,18 +14,22 @@ const TASK_API_TO_DB: Record<string, TaskStatus> = {
   in_progress: 'IN_PROGRESS',
   review: 'REVIEW',
   done: 'DONE',
+  failed: 'FAILED',
+  excluded: 'EXCLUDED',
 }

 const STORY_DB_TO_API = {
   OPEN: 'open',
   IN_SPRINT: 'in_sprint',
   DONE: 'done',
+  FAILED: 'failed',
 } as const satisfies Record<StoryStatus, string>

 const STORY_API_TO_DB: Record<string, StoryStatus> = {
   open: 'OPEN',
   in_sprint: 'IN_SPRINT',
   done: 'DONE',
+  failed: 'FAILED',
 }

 export type TaskStatusApi = (typeof TASK_DB_TO_API)[TaskStatus]


@@ -10,6 +10,7 @@ import { prisma } from '../prisma.js'
 import { requireWriteAccess } from '../auth.js'
 import { userCanAccessStory, userOwnsIdea } from '../access.js'
 import { toolError, toolJson, withToolErrors } from '../errors.js'
+import { triggerPush } from '../lib/push-trigger.js'

 const PENDING_TTL_HOURS = 24
 const POLL_INTERVAL_MS = 2_000
@@ -127,6 +128,13 @@ export function registerAskUserQuestionTool(server: McpServer) {
     },
   })

+  void triggerPush(auth.userId, {
+    title: 'Claude heeft een vraag',
+    body: question.slice(0, 120),
+    url: '/notifications',
+    tag: `claude-q-${created.id}`,
+  })

   // Async-mode (default): return direct.
   if (!wait_seconds || wait_seconds === 0) {
     return toolJson(summarize(created))


@@ -1,12 +1,11 @@
 import { z } from 'zod'
 import * as fs from 'node:fs/promises'
-import * as path from 'node:path'
-import * as os from 'node:os'
 import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
 import { prisma } from '../prisma.js'
 import { requireWriteAccess } from '../auth.js'
 import { toolJson, withToolErrors } from '../errors.js'
 import { removeWorktreeForJob } from '../git/worktree.js'
+import { getWorktreeRoot, SYSTEM_WORKTREE_DIRS } from '../git/worktree-paths.js'
 import { resolveRepoRoot } from './wait-for-job.js'

 const TERMINAL_STATUSES = new Set(['DONE', 'FAILED', 'CANCELLED'])
@@ -15,16 +14,20 @@ const ACTIVE_STATUSES = new Set(['QUEUED', 'CLAIMED', 'RUNNING'])
 const inputSchema = z.object({})

 export async function getWorktreeParent(): Promise<string> {
-  return (
-    process.env.SCRUM4ME_AGENT_WORKTREE_DIR ??
-    path.join(os.homedir(), '.scrum4me-agent-worktrees')
-  )
+  return getWorktreeRoot()
 }

 export async function listWorktreeJobIds(worktreeParent: string): Promise<string[]> {
   try {
     const entries = await fs.readdir(worktreeParent, { withFileTypes: true })
-    return entries.filter((e) => e.isDirectory()).map((e) => e.name)
+    return entries
+      .filter(
+        (e) =>
+          e.isDirectory()
+          && !SYSTEM_WORKTREE_DIRS.has(e.name)
+          && !e.name.endsWith('.lock'),
+      )
+      .map((e) => e.name)
   } catch {
     return []
   }

src/tools/create-sprint.ts

@@ -0,0 +1,113 @@
// MCP authoring tool: create a Sprint within a product.
//
// Status always starts at OPEN; no reuse check against existing OPEN sprints
// (per plan-to-pbi-flow.md: "always a new sprint"). The code is auto-generated
// as S-{YYYY-MM-DD}-{N} per product per date, with a retry on a race condition
// against the unique constraint (@@unique([product_id, code])).
import { z } from 'zod'
import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
import { Prisma } from '@prisma/client'
import { prisma } from '../prisma.js'
import { requireWriteAccess } from '../auth.js'
import { userCanAccessProduct } from '../access.js'
import { toolError, toolJson, withToolErrors } from '../errors.js'
const SPRINT_AUTO_RE = /^S-(\d{4}-\d{2}-\d{2})-(\d+)$/
const MAX_CODE_ATTEMPTS = 3
function todayIsoDate(): string {
return new Date().toISOString().slice(0, 10)
}
async function generateNextSprintCode(productId: string): Promise<string> {
const today = todayIsoDate()
const sprints = await prisma.sprint.findMany({
where: { product_id: productId, code: { startsWith: `S-${today}-` } },
select: { code: true },
})
let max = 0
for (const s of sprints) {
const m = s.code?.match(SPRINT_AUTO_RE)
// Double-check the date: defensive against filter changes
// or mock data that did not pass through the DB where-clause.
if (m && m[1] === today) {
const n = Number.parseInt(m[2], 10)
if (!Number.isNaN(n) && n > max) max = n
}
}
return `S-${today}-${max + 1}`
}
function isCodeUniqueConflict(error: unknown): boolean {
if (!(error instanceof Prisma.PrismaClientKnownRequestError)) return false
if (error.code !== 'P2002') return false
const target = (error.meta as { target?: string[] | string } | undefined)?.target
if (!target) return false
return Array.isArray(target) ? target.includes('code') : target.includes('code')
}
export const inputSchema = z.object({
product_id: z.string().min(1),
code: z.string().min(1).max(30).optional(),
sprint_goal: z.string().min(1).max(500),
start_date: z.string().date().optional(),
})
export async function handleCreateSprint(
{ product_id, code, sprint_goal, start_date }: z.infer<typeof inputSchema>,
) {
return withToolErrors(async () => {
const auth = await requireWriteAccess()
if (!(await userCanAccessProduct(product_id, auth.userId))) {
return toolError(`Product ${product_id} not found or not accessible`)
}
const resolvedStartDate = start_date ? new Date(start_date) : new Date()
const baseSelect = {
id: true,
code: true,
sprint_goal: true,
status: true,
start_date: true,
created_at: true,
} as const
if (code) {
const sprint = await prisma.sprint.create({
data: { product_id, code, sprint_goal, status: 'OPEN', start_date: resolvedStartDate },
select: baseSelect,
})
return toolJson(sprint)
}
let lastError: unknown
for (let attempt = 0; attempt < MAX_CODE_ATTEMPTS; attempt++) {
const generated = await generateNextSprintCode(product_id)
try {
const sprint = await prisma.sprint.create({
data: { product_id, code: generated, sprint_goal, status: 'OPEN', start_date: resolvedStartDate },
select: baseSelect,
})
return toolJson(sprint)
} catch (e) {
if (isCodeUniqueConflict(e)) { lastError = e; continue }
throw e
}
}
throw lastError ?? new Error('Kon geen unieke sprint-code genereren')
})
}
export function registerCreateSprintTool(server: McpServer) {
server.registerTool(
'create_sprint',
{
title: 'Create Sprint',
description:
'Create a new sprint for a product with status=OPEN. Code auto-generated as S-{YYYY-MM-DD}-{N} per product per date if not provided. Forbidden for demo accounts.',
inputSchema,
},
handleCreateSprint,
)
}
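The generation rule is easy to test in isolation; a pure counterpart of `generateNextSprintCode` (the name `nextSprintCode` is hypothetical, taking the existing codes as input instead of querying Prisma) looks like:

```typescript
// Pure variant of the sprint-code generator: given today's ISO date and the
// product's existing codes, return the next S-{date}-{N}.
const SPRINT_AUTO_RE = /^S-(\d{4}-\d{2}-\d{2})-(\d+)$/

export function nextSprintCode(today: string, existingCodes: string[]): string {
  let max = 0
  for (const code of existingCodes) {
    const m = code.match(SPRINT_AUTO_RE)
    // Ignore manually chosen codes and codes from other dates.
    if (m && m[1] === today) {
      const n = Number.parseInt(m[2], 10)
      if (!Number.isNaN(n) && n > max) max = n
    }
  }
  return `S-${today}-${max + 1}`
}
```

The DB-backed version wraps this logic in the P2002 retry loop shown above, because two concurrent creators can both compute the same next number.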


@@ -1,8 +1,9 @@
 // MCP authoring tool: create a Story under an existing PBI.
 //
 // product_id is derived from the PBI (denormalized FK per the CLAUDE.md
-// convention — never trust client input). status='OPEN' by default;
-// lands in the Product Backlog, not automatically in a sprint.
+// convention — never trust client input). Without sprint_id the story gets
+// status='OPEN' and lands in the Product Backlog; with sprint_id it is
+// linked to that sprint directly (status='IN_SPRINT').

 import { z } from 'zod'
 import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
@@ -46,75 +47,108 @@ const inputSchema = z.object({
   acceptance_criteria: z.string().max(4000).optional(),
   priority: z.number().int().min(1).max(4),
   sort_order: z.number().optional(),
+  // Optional sprint link: attach the story to a sprint at creation time
+  // (status=IN_SPRINT). The sprint must belong to the same product.
+  sprint_id: z.string().min(1).optional(),
 })

+export async function handleCreateStory(
+  {
+    pbi_id,
+    title,
+    description,
+    acceptance_criteria,
+    priority,
+    sort_order,
+    sprint_id,
+  }: z.infer<typeof inputSchema>,
+) {
+  return withToolErrors(async () => {
+    const auth = await requireWriteAccess()
+    const pbi = await prisma.pbi.findUnique({
+      where: { id: pbi_id },
+      select: { product_id: true },
+    })
+    if (!pbi) return toolError(`PBI ${pbi_id} not found`)
+    if (!(await userCanAccessProduct(pbi.product_id, auth.userId))) {
+      return toolError(`PBI ${pbi_id} not accessible`)
+    }
+    // Optional sprint link: validate that the sprint exists and belongs to
+    // the same product — prevents a cross-product link.
+    if (sprint_id !== undefined) {
+      const sprint = await prisma.sprint.findUnique({
+        where: { id: sprint_id },
+        select: { product_id: true },
+      })
+      if (!sprint) return toolError(`Sprint ${sprint_id} not found`)
+      if (sprint.product_id !== pbi.product_id) {
+        return toolError(
+          `Sprint ${sprint_id} belongs to a different product than PBI ${pbi_id}`,
+        )
+      }
+    }
+    let resolvedSortOrder = sort_order
+    if (resolvedSortOrder === undefined) {
+      const last = await prisma.story.findFirst({
+        where: { pbi_id, priority },
+        orderBy: { sort_order: 'desc' },
+        select: { sort_order: true },
+      })
+      resolvedSortOrder = (last?.sort_order ?? 0) + 1.0
+    }
+    let lastError: unknown
+    for (let attempt = 0; attempt < MAX_CODE_ATTEMPTS; attempt++) {
+      const code = await generateNextStoryCode(pbi.product_id)
+      try {
+        const story = await prisma.story.create({
+          data: {
+            pbi_id,
+            product_id: pbi.product_id, // denormalized from the DB parent, not from input
+            sprint_id: sprint_id ?? null,
+            code,
+            title,
+            description: description ?? null,
+            acceptance_criteria: acceptance_criteria ?? null,
+            priority,
+            sort_order: resolvedSortOrder,
+            status: sprint_id ? 'IN_SPRINT' : 'OPEN',
+          },
+          select: {
+            id: true,
+            code: true,
+            title: true,
+            description: true,
+            acceptance_criteria: true,
+            priority: true,
+            sort_order: true,
+            status: true,
+            sprint_id: true,
+            created_at: true,
+          },
+        })
+        return toolJson(story)
+      } catch (e) {
+        if (isCodeUniqueConflict(e)) { lastError = e; continue }
+        throw e
+      }
+    }
+    throw lastError ?? new Error('Kon geen unieke Story-code genereren')
+  })
+}

 export function registerCreateStoryTool(server: McpServer) {
   server.registerTool(
     'create_story',
     {
       title: 'Create story',
       description:
-        'Add a story under an existing PBI. Status defaults to OPEN (lands in product backlog, not in a sprint). Sort_order auto-set to last+1 within the PBI/priority group if not provided. Forbidden for demo accounts.',
+        'Add a story under an existing PBI. Optionally link it to a sprint via sprint_id — when given, the story is created with status=IN_SPRINT and the sprint must belong to the same product as the PBI; otherwise status=OPEN and the story lands in the product backlog. Sort_order auto-set to last+1 within the PBI/priority group if not provided. Forbidden for demo accounts.',
       inputSchema,
     },
-    async ({ pbi_id, title, description, acceptance_criteria, priority, sort_order }) =>
-      withToolErrors(async () => {
-        const auth = await requireWriteAccess()
-        const pbi = await prisma.pbi.findUnique({
-          where: { id: pbi_id },
-          select: { product_id: true },
-        })
-        if (!pbi) return toolError(`PBI ${pbi_id} not found`)
-        if (!(await userCanAccessProduct(pbi.product_id, auth.userId))) {
-          return toolError(`PBI ${pbi_id} not accessible`)
-        }
-        let resolvedSortOrder = sort_order
-        if (resolvedSortOrder === undefined) {
-          const last = await prisma.story.findFirst({
-            where: { pbi_id, priority },
-            orderBy: { sort_order: 'desc' },
-            select: { sort_order: true },
-          })
-          resolvedSortOrder = (last?.sort_order ?? 0) + 1.0
-        }
-        let lastError: unknown
-        for (let attempt = 0; attempt < MAX_CODE_ATTEMPTS; attempt++) {
-          const code = await generateNextStoryCode(pbi.product_id)
-          try {
-            const story = await prisma.story.create({
-              data: {
-                pbi_id,
-                product_id: pbi.product_id, // denormalized from the DB parent, not from input
-                code,
-                title,
-                description: description ?? null,
-                acceptance_criteria: acceptance_criteria ?? null,
-                priority,
-                sort_order: resolvedSortOrder,
-                status: 'OPEN',
-              },
-              select: {
-                id: true,
-                code: true,
-                title: true,
-                description: true,
-                acceptance_criteria: true,
-                priority: true,
-                sort_order: true,
-                status: true,
-                created_at: true,
-              },
-            })
-            return toolJson(story)
-          } catch (e) {
-            if (isCodeUniqueConflict(e)) { lastError = e; continue }
-            throw e
-          }
-        }
-        throw lastError ?? new Error('Kon geen unieke Story-code genereren')
-      }),
+    handleCreateStory,
   )
 }


@@ -1,42 +0,0 @@
import { z } from 'zod'
import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
import { prisma } from '../prisma.js'
import { requireWriteAccess } from '../auth.js'
import { userCanAccessProduct } from '../access.js'
import { toolError, toolJson, withToolErrors } from '../errors.js'
const inputSchema = z.object({
title: z.string().min(1),
description: z.string().max(2000).optional(),
product_id: z.string().min(1).optional(),
})
export function registerCreateTodoTool(server: McpServer) {
server.registerTool(
'create_todo',
{
title: 'Create todo',
description:
'Add a todo for the authenticated user, optionally scoped to a product. ' +
'Forbidden for demo accounts.',
inputSchema,
},
async ({ title, description, product_id }) =>
withToolErrors(async () => {
const auth = await requireWriteAccess()
if (product_id && !(await userCanAccessProduct(product_id, auth.userId))) {
return toolError(`Product ${product_id} not found or not accessible`)
}
const todo = await prisma.todo.create({
data: {
user_id: auth.userId,
product_id: product_id ?? null,
title,
description: description ?? null,
},
select: { id: true, title: true, description: true, created_at: true },
})
return toolJson(todo)
}),
)
}


@@ -47,7 +47,7 @@ export function registerGetClaudeContextTool(server: McpServer) {
   }

   const activeSprint = await prisma.sprint.findFirst({
-    where: { product_id, status: 'ACTIVE' },
+    where: { product_id, status: 'OPEN' },
     orderBy: { created_at: 'desc' },
     select: { id: true, sprint_goal: true, status: true },
   })
@@ -99,19 +99,21 @@ export function registerGetClaudeContextTool(server: McpServer) {
     }
   }

-  const openTodos = await prisma.todo.findMany({
+  const openIdeas = await prisma.idea.findMany({
     where: {
       user_id: auth.userId,
-      done: false,
       archived: false,
+      status: { not: 'PLANNED' },
       OR: [{ product_id: product_id }, { product_id: null }],
     },
     orderBy: { created_at: 'asc' },
     take: 50,
     select: {
       id: true,
+      code: true,
       title: true,
       description: true,
+      status: true,
       created_at: true,
     },
   })
@@ -120,7 +122,7 @@ export function registerGetClaudeContextTool(server: McpServer) {
     product,
     active_sprint: activeSprint,
     next_story: nextStory,
-    open_todos: openTodos,
+    open_ideas: openIdeas,
   })
 }),


@@ -0,0 +1,81 @@
// PBI-50 F3-T3: job_heartbeat
//
// Extends ClaudeJob.lease_until by 5 min so resetStaleClaimedJobs does not
// wrongly mark a long-running job (e.g. a SPRINT_IMPLEMENTATION taking 30+ min)
// as stale. The worker runs a background loop every 60s.
//
// For SPRINT jobs: the response includes sprint_run_status so the worker can
// break its loop when it is ≠ RUNNING (e.g. a UI-side cancel or a MERGE_CONFLICT pause).
import { z } from 'zod'
import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
import { prisma } from '../prisma.js'
import { requireWriteAccess } from '../auth.js'
import { toolError, toolJson, withToolErrors } from '../errors.js'
const inputSchema = z.object({
job_id: z.string().min(1),
})
export function registerJobHeartbeatTool(server: McpServer) {
server.registerTool(
'job_heartbeat',
{
title: 'Job heartbeat',
description:
'Extend the lease on a CLAIMED/RUNNING job by 5 minutes. Token must own the job. ' +
'For SPRINT_IMPLEMENTATION jobs: response includes sprint_run_status so the worker ' +
'can break its task-loop on UI-side cancel/pause without an extra query. ' +
'Worker should call this every ~60s during long-running batches. ' +
'Forbidden for demo accounts.',
inputSchema,
},
async ({ job_id }) =>
withToolErrors(async () => {
const auth = await requireWriteAccess()
// Atomic conditional UPDATE so a non-owner / non-active job is rejected
// without a separate read.
const updated = await prisma.$queryRaw<
Array<{ id: string; lease_until: Date; kind: string; sprint_run_id: string | null }>
>`
UPDATE claude_jobs
SET lease_until = NOW() + INTERVAL '5 minutes'
WHERE id = ${job_id}
AND claimed_by_token_id = ${auth.tokenId}
AND status IN ('CLAIMED', 'RUNNING')
RETURNING id, lease_until, kind::text AS kind, sprint_run_id
`
if (updated.length === 0) {
return toolError(
`Job ${job_id} not found, not claimed by your token, or in terminal state`,
)
}
const row = updated[0]
let sprint_run_status: string | null = null
let sprint_run_pause_reason: string | null = null
if (row.kind === 'SPRINT_IMPLEMENTATION' && row.sprint_run_id) {
const sprintRun = await prisma.sprintRun.findUnique({
where: { id: row.sprint_run_id },
select: { status: true, pause_context: true },
})
sprint_run_status = sprintRun?.status ?? null
// Extract pause_reason from pause_context Json (best-effort)
const ctx = sprintRun?.pause_context as
| { pause_reason?: string }
| null
| undefined
sprint_run_pause_reason = ctx?.pause_reason ?? null
}
return toolJson({
ok: true,
job_id: row.id,
lease_until: row.lease_until.toISOString(),
sprint_run_status,
sprint_run_pause_reason,
})
}),
)
}
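On the worker side, the ~60s loop this description refers to could be structured around a testable tick function (a sketch; `callTool`, `heartbeatTick`, and `startHeartbeat` are hypothetical names, not part of this repo):

```typescript
// One heartbeat tick: extend the lease via job_heartbeat and decide whether
// the surrounding loop should stop. `callTool` stands in for the MCP client.
interface HeartbeatResponse {
  ok: boolean
  sprint_run_status: string | null
  sprint_run_pause_reason: string | null
}

export async function heartbeatTick(
  jobId: string,
  callTool: (name: string, args: Record<string, unknown>) => Promise<HeartbeatResponse>,
): Promise<'continue' | 'stop'> {
  const res = await callTool('job_heartbeat', { job_id: jobId })
  // A non-RUNNING sprint run (cancel, MERGE_CONFLICT pause) breaks the loop;
  // plain task jobs report sprint_run_status === null and simply continue.
  if (res.sprint_run_status !== null && res.sprint_run_status !== 'RUNNING') {
    return 'stop'
  }
  return 'continue'
}

// The worker wires the tick into a ~60s interval and clears it on stop.
export function startHeartbeat(
  jobId: string,
  callTool: (name: string, args: Record<string, unknown>) => Promise<HeartbeatResponse>,
  onStop: () => void,
  intervalMs = 60_000,
): () => void {
  const timer = setInterval(() => {
    void heartbeatTick(jobId, callTool).then((verdict) => {
      if (verdict === 'stop') {
        clearInterval(timer)
        onStop()
      }
    })
  }, intervalMs)
  return () => clearInterval(timer)
}
```

Splitting the decision out of the timer keeps the stop condition unit-testable without real timers.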


@@ -0,0 +1,126 @@
// MCP-tool: writes the review-log result after an IDEA_REVIEW_PLAN job and
// transitions idea.status. Only an explicit approval_status='approved' moves
// the idea to PLAN_REVIEWED; anything else (rejected, pending, or omitted)
// goes to PLAN_REVIEW_FAILED — a human must then decide. The tool never
// silently approves.
//
// Called by the worker as the final step of a review-plan session.
import { z } from 'zod'
import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
import { prisma } from '../prisma.js'
import { requireWriteAccess } from '../auth.js'
import { userOwnsIdea } from '../access.js'
import { toolError, toolJson, withToolErrors } from '../errors.js'
export const inputSchema = z.object({
idea_id: z.string().min(1),
review_log: z.object({}).passthrough(), // Full ReviewLog from orchestrator (JSON object)
approval_status: z
.enum(['pending', 'approved', 'rejected'] as const)
.optional(),
})
export async function handleUpdateIdeaPlanReviewed(
{ idea_id, review_log, approval_status }: z.infer<typeof inputSchema>,
) {
return withToolErrors(async () => {
const auth = await requireWriteAccess()
if (!(await userOwnsIdea(idea_id, auth.userId))) {
return toolError('Idea not found')
}
// Only an explicit 'approved' moves the idea to PLAN_REVIEWED.
// 'rejected', 'pending', and an omitted approval_status all mean
// "not auto-approved: a human must decide" and go to
// PLAN_REVIEW_FAILED. Never approve silently (the previous
// `: 'PLAN_REVIEWED'` default did exactly that for pending/undefined).
const nextStatus =
approval_status === 'approved' ? 'PLAN_REVIEWED' : 'PLAN_REVIEW_FAILED'
// Derive summary metrics from review_log for the idea-log entry
const logSummary = buildReviewLogSummary(review_log)
const result = await prisma.$transaction([
prisma.idea.update({
where: { id: idea_id },
data: {
plan_review_log: review_log as any,
reviewed_at: new Date(),
status: nextStatus,
},
select: { id: true, status: true, code: true },
}),
prisma.ideaLog.create({
data: {
idea_id,
type: 'PLAN_REVIEW_RESULT',
content: logSummary.summary,
metadata: {
approval_status,
convergence_status: logSummary.convergence_status,
final_score: logSummary.final_score,
rounds_completed: logSummary.rounds_completed,
},
},
}),
])
return toolJson({
ok: true,
idea: result[0],
review_log_summary: logSummary,
})
})
}
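The fail-closed mapping above is a one-line ternary, so it can be exercised in isolation. In this standalone sketch, `nextIdeaStatus` is a hypothetical helper name used purely for illustration, not part of the module:

```typescript
// Standalone sketch of the status mapping in handleUpdateIdeaPlanReviewed.
// `nextIdeaStatus` is a hypothetical name for illustration only.
type Approval = 'pending' | 'approved' | 'rejected' | undefined

function nextIdeaStatus(
  approval_status: Approval,
): 'PLAN_REVIEWED' | 'PLAN_REVIEW_FAILED' {
  // Only an explicit 'approved' approves; everything else fails closed.
  return approval_status === 'approved' ? 'PLAN_REVIEWED' : 'PLAN_REVIEW_FAILED'
}

for (const a of ['approved', 'rejected', 'pending', undefined] as Approval[]) {
  console.log(a ?? '(omitted)', '->', nextIdeaStatus(a))
}
// → only 'approved' maps to PLAN_REVIEWED; the other three map to PLAN_REVIEW_FAILED
```

This is exactly the truth table the six tests in the new test file cover: one approving input, three fail-closed inputs.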
export function registerUpdateIdeaPlanReviewedTool(server: McpServer) {
server.registerTool(
'update_idea_plan_reviewed',
{
title: 'Mark plan as reviewed',
description:
'Save review-log after a plan review cycle and transition idea.status. ' +
'Only approval_status="approved" → PLAN_REVIEWED; "rejected", "pending", ' +
'or an omitted approval_status → PLAN_REVIEW_FAILED (needs manual ' +
'approval — never silently approved). Forbidden for demo accounts.',
inputSchema,
},
handleUpdateIdeaPlanReviewed,
)
}
function buildReviewLogSummary(
reviewLog: Record<string, any>,
): {
summary: string
convergence_status: string
final_score: number
rounds_completed: number
} {
const rounds = Array.isArray(reviewLog.rounds) ? reviewLog.rounds : []
const convergence = reviewLog.convergence || {}
const finalScore =
rounds.length > 0 ? rounds[rounds.length - 1].score ?? 0 : 0
const convergenceStatus =
convergence.stable_at_round !== undefined
? `stable at round ${convergence.stable_at_round}`
: convergence.final_diff_pct !== undefined
? `${convergence.final_diff_pct}% diff`
: 'pending'
const summary =
`Plan reviewed in ${rounds.length} rounds. ` +
`Convergence: ${convergenceStatus}. ` +
`Final score: ${finalScore}/100. ` +
`Status: ${reviewLog.approval?.status || 'pending'}.`
return {
summary,
convergence_status: convergenceStatus,
final_score: finalScore,
rounds_completed: rounds.length,
}
}
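Since `buildReviewLogSummary` is pure, its output can be checked directly. The function is repeated verbatim so the example runs standalone; the `review_log` input shape below is an illustrative assumption (the real orchestrator payload may carry more fields):

```typescript
// buildReviewLogSummary, copied from above so the example is self-contained.
function buildReviewLogSummary(reviewLog: Record<string, any>) {
  const rounds = Array.isArray(reviewLog.rounds) ? reviewLog.rounds : []
  const convergence = reviewLog.convergence || {}
  const finalScore =
    rounds.length > 0 ? rounds[rounds.length - 1].score ?? 0 : 0
  const convergenceStatus =
    convergence.stable_at_round !== undefined
      ? `stable at round ${convergence.stable_at_round}`
      : convergence.final_diff_pct !== undefined
        ? `${convergence.final_diff_pct}% diff`
        : 'pending'
  const summary =
    `Plan reviewed in ${rounds.length} rounds. ` +
    `Convergence: ${convergenceStatus}. ` +
    `Final score: ${finalScore}/100. ` +
    `Status: ${reviewLog.approval?.status || 'pending'}.`
  return {
    summary,
    convergence_status: convergenceStatus,
    final_score: finalScore,
    rounds_completed: rounds.length,
  }
}

// Illustrative review_log (field values assumed, not from the orchestrator spec)
const example = buildReviewLogSummary({
  rounds: [{ score: 72 }, { score: 88 }],
  convergence: { stable_at_round: 2 },
  approval: { status: 'approved' },
})
console.log(example.summary)
// → Plan reviewed in 2 rounds. Convergence: stable at round 2. Final score: 88/100. Status: approved.
```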

@@ -1,6 +1,12 @@
// update_job_status: the agent reports progress as running | done | failed | skipped.
// Auth: the Bearer token must match the job's claimed_by_token_id.
// Automatically triggers an SSE event to the UI via pg_notify.
//
// 'skipped' is the no-op exit for TASK_IMPLEMENTATION jobs where verify_task_against_plan
// returns EMPTY because the changes are already in origin/main (parallel work, an earlier
// PR, a race between siblings). No verify-gate, no PR, no cascade. The worker must
// separately set the corresponding task to DONE via update_task_status when its
// substantive requirements are already met.
import { z } from 'zod'
import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
@@ -11,13 +17,35 @@ import { prisma } from '../prisma.js'
import { requireWriteAccess } from '../auth.js'
import { toolJson, toolError, withToolErrors } from '../errors.js'
import { removeWorktreeForJob } from '../git/worktree.js'
import { getWorktreeRoot } from '../git/worktree-paths.js'
import { releaseLocksOnTerminal } from '../git/job-locks.js'
import { resolveRepoRoot } from './wait-for-job.js'
import { pushBranchForJob } from '../git/push.js'
import { createPullRequest, markPullRequestReady } from '../git/pr.js'
import { cancelPbiOnFailure } from '../cancel/pbi-cascade.js'
import { propagateStatusUpwards } from '../lib/tasks-status-update.js'
import { triggerPush } from '../lib/push-trigger.js'
import { transition as prFlowTransition } from '../flow/pr-flow.js'
import { transition as sprintRunTransition } from '../flow/sprint-run.js'
import { executeEffects } from '../flow/effects.js'
import { execFile as execFileCb } from 'node:child_process'
import { promisify } from 'node:util'
const execGh = promisify(execFileCb)
// Best-effort: list a PR's files via the gh CLI; returns [] on any failure.
async function fetchConflictFiles(prUrl: string): Promise<string[]> {
try {
const { stdout } = await execGh('gh', ['pr', 'view', prUrl, '--json', 'files'])
const parsed = JSON.parse(stdout) as { files?: Array<{ path: string }> }
return parsed.files?.map((f) => f.path) ?? []
} catch {
return []
}
}
const inputSchema = z.object({
job_id: z.string().min(1),
status: z.enum(['running', 'done', 'failed', 'skipped']),
branch: z.string().min(1).optional(),
summary: z.string().max(1_000).optional(),
error: z.string().max(2_000).optional(),
@@ -26,12 +54,13 @@ const inputSchema = z.object({
output_tokens: z.number().int().nonnegative().optional(),
cache_read_tokens: z.number().int().nonnegative().optional(),
cache_write_tokens: z.number().int().nonnegative().optional(),
actual_thinking_tokens: z.number().int().nonnegative().optional(),
})
export async function cleanupWorktreeForTerminalStatus(
productId: string,
jobId: string,
status: 'done' | 'failed' | 'skipped',
branch: string | undefined,
): Promise<void> {
const repoRoot = await resolveRepoRoot(productId)
@@ -42,31 +71,57 @@ export async function cleanupWorktreeForTerminalStatus(
return
}
// Branch-shared check: determine which siblings reuse the same branch.
// - SPRINT pr_strategy → all TASK_IMPLEMENTATION jobs in the same
//   sprint_run share feat/sprint-<id>.
// - STORY pr_strategy / legacy → all TASK_IMPLEMENTATION jobs in the
//   same story share feat/story-<id>.
// With active siblings: defer cleanup (and in any case keepBranch=true)
// so the next claim can reuse the branch.
const job = await prisma.claudeJob.findUnique({
where: { id: jobId },
select: {
task: { select: { story_id: true } },
sprint_run_id: true,
sprint_run: { select: { pr_strategy: true } },
},
})
let activeSiblings = 0
let scope = ''
if (job?.sprint_run && job.sprint_run.pr_strategy === 'SPRINT') {
activeSiblings = await prisma.claudeJob.count({
where: {
sprint_run_id: job.sprint_run_id,
status: { in: ['QUEUED', 'CLAIMED', 'RUNNING'] },
id: { not: jobId },
},
})
scope = `sprint_run ${job.sprint_run_id}`
} else if (job?.task) {
activeSiblings = await prisma.claudeJob.count({
where: {
task: { story_id: job.task.story_id },
status: { in: ['QUEUED', 'CLAIMED', 'RUNNING'] },
id: { not: jobId },
},
})
scope = `story ${job.task.story_id}`
}
if (activeSiblings > 0) {
console.log(
`[update_job_status] cleanup deferred for job=${jobId}: ${activeSiblings} sibling(s) still active in ${scope}`,
)
return
}
// Keep branch when:
// - the job is done and the agent reported a push (branch !== undefined), or
// - a SPRINT pr_strategy job is skipped: other stories share the branch.
const keepBranch =
(status === 'done' && branch !== undefined) ||
(status === 'skipped' && job?.sprint_run?.pr_strategy === 'SPRINT')
try {
await removeWorktreeForJob({ repoRoot, jobId, keepBranch })
} catch (err) {
@@ -83,26 +138,53 @@ export type DoneUpdatePlan = {
branchOverride: string | undefined
errorOverride: string | undefined
skipWorktreeCleanup: boolean
headSha: string | undefined
}
export async function prepareDoneUpdate(
jobId: string,
branch: string | undefined,
): Promise<DoneUpdatePlan> {
// Resolve the branch in this order:
// 1. The explicit `branch` arg from Claude (usually not provided).
// 2. ClaudeJob.branch from the DB, set by attachWorktreeToJob with the
//    proper pr_strategy: feat/sprint-<id> for SPRINT, feat/story-<id>
//    for STORY with sibling reuse.
// 3. Legacy fallback feat/job-<8>: only for jobs without a DB branch
//    (should not occur after PBI-50).
let resolvedBranch = branch
if (!resolvedBranch) {
const dbJob = await prisma.claudeJob.findUnique({
where: { id: jobId },
select: { branch: true },
})
resolvedBranch = dbJob?.branch ?? undefined
}
const branchName = resolvedBranch ?? `feat/job-${jobId.slice(-8)}`
const worktreeDir = getWorktreeRoot()
const worktreePath = path.join(worktreeDir, jobId)
const pushResult = await pushBranchForJob({ worktreePath, branchName })
if (pushResult.pushed) {
let headSha: string | undefined
try {
const { execFile } = await import('node:child_process')
const { promisify } = await import('node:util')
const exec = promisify(execFile)
const { stdout } = await exec('git', ['rev-parse', 'HEAD'], { cwd: worktreePath })
headSha = stdout.trim()
} catch (err) {
console.warn(`[prepareDoneUpdate] failed to resolve HEAD sha for job ${jobId}:`, err)
}
return {
dbStatus: 'DONE',
pushedAt: new Date(),
branchOverride: branchName,
errorOverride: undefined,
skipWorktreeCleanup: false,
headSha,
}
}
@@ -113,6 +195,7 @@ export async function prepareDoneUpdate(
branchOverride: undefined,
errorOverride: undefined,
skipWorktreeCleanup: false,
headSha: undefined,
}
}
@@ -124,6 +207,7 @@ export async function prepareDoneUpdate(
branchOverride: undefined,
errorOverride: `push failed (${pushResult.reason}): ${snippet}`,
skipWorktreeCleanup: true,
headSha: undefined,
}
}
@@ -191,20 +275,147 @@ export function checkVerifyGate(
return { allowed: true }
}
// PBI-50 F4-T1: aggregate verify-gate for SPRINT_IMPLEMENTATION DONE.
// Source: only the SprintTaskExecution rows for this job. Per row:
//   DONE → checkVerifyGate with the snapshot fields (gate per row)
//   SKIPPED → only allowed when verify_required_snapshot === 'ANY'
//   FAILED/PENDING/RUNNING → blocker (the sprint may not go DONE with open work)
// On an overall pass → { allowed: true }; otherwise an error listing the blockers.
export async function checkSprintVerifyGate(
sprintJobId: string,
): Promise<{ allowed: true } | { allowed: false; error: string }> {
const executions = await prisma.sprintTaskExecution.findMany({
where: { sprint_job_id: sprintJobId },
orderBy: { order: 'asc' },
select: {
id: true,
task_id: true,
order: true,
status: true,
verify_result: true,
verify_summary: true,
verify_required_snapshot: true,
verify_only_snapshot: true,
task: { select: { code: true, title: true } },
},
})
if (executions.length === 0) {
return {
allowed: false,
error:
'Sprint job has no SprintTaskExecution rows. ' +
'This points to a claim bug; reclaim the sprint.',
}
}
const blockers: string[] = []
for (const exec of executions) {
const taskLabel = `${exec.task.code}: ${exec.task.title}`
if (exec.status === 'PENDING' || exec.status === 'RUNNING') {
blockers.push(`[${exec.status}] ${taskLabel}: unfinished work`)
continue
}
if (exec.status === 'FAILED') {
blockers.push(`[FAILED] ${taskLabel}`)
continue
}
if (exec.status === 'SKIPPED') {
if (exec.verify_required_snapshot !== 'ANY') {
blockers.push(
`[SKIPPED] ${taskLabel}: only allowed with verify_required=ANY`,
)
}
continue
}
// DONE: per-row gate
const gate = checkVerifyGate(
exec.verify_result,
exec.verify_only_snapshot,
exec.verify_required_snapshot,
exec.verify_summary ?? undefined,
)
if (!gate.allowed) {
blockers.push(`[DONE-gate] ${taskLabel}: ${gate.error}`)
}
}
if (blockers.length === 0) return { allowed: true }
return {
allowed: false,
error:
`Sprint cannot go DONE: ${blockers.length} task(s) are blocking:\n` +
blockers.map((b) => ` - ${b}`).join('\n'),
}
}
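The per-row rules above reduce to a pure classification step. The sketch below stubs out the Prisma lookup and the per-row checkVerifyGate call; the row shape and the `classify` name are assumptions for illustration only:

```typescript
// Pure sketch of checkSprintVerifyGate's blocker classification.
// Row shape and `classify` are illustrative assumptions; DONE rows would
// additionally run through the real per-row checkVerifyGate.
type ExecRow = {
  status: 'PENDING' | 'RUNNING' | 'DONE' | 'FAILED' | 'SKIPPED'
  verify_required_snapshot: 'ANY' | 'ALIGNED' | 'ALIGNED_OR_PARTIAL'
  label: string
}

function classify(rows: ExecRow[]): string[] {
  const blockers: string[] = []
  for (const r of rows) {
    if (r.status === 'PENDING' || r.status === 'RUNNING') {
      blockers.push(`[${r.status}] ${r.label}: unfinished work`)
    } else if (r.status === 'FAILED') {
      blockers.push(`[FAILED] ${r.label}`)
    } else if (r.status === 'SKIPPED' && r.verify_required_snapshot !== 'ANY') {
      blockers.push(`[SKIPPED] ${r.label}: only allowed with verify_required=ANY`)
    }
    // DONE: per-row verify gate (omitted in this sketch)
  }
  return blockers
}

const blockers = classify([
  { status: 'DONE', verify_required_snapshot: 'ANY', label: 'T1' },
  { status: 'SKIPPED', verify_required_snapshot: 'ANY', label: 'T2' },
  { status: 'SKIPPED', verify_required_snapshot: 'ALIGNED', label: 'T3' },
  { status: 'RUNNING', verify_required_snapshot: 'ANY', label: 'T4' },
])
console.log(blockers)
// → blockers for T3 (SKIPPED without verify_required=ANY) and T4 (still RUNNING)
```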
// PBI-50 F4-T2: idempotent SprintRun finalization.
// Invariant: only call this once every story in the sprint has status
// DONE/FAILED/CANCELLED. Effect: SprintRun.status → DONE +
// finished_at = NOW(). Idempotent: when already DONE it is a no-op.
export async function finalizeSprintRunOnDone(sprintRunId: string): Promise<void> {
const sprintRun = await prisma.sprintRun.findUnique({
where: { id: sprintRunId },
select: { id: true, status: true, sprint_id: true },
})
if (!sprintRun) return
if (sprintRun.status === 'DONE') return // idempotent
// Check that every story in this sprint is finished
const openStories = await prisma.story.count({
where: {
sprint_id: sprintRun.sprint_id,
status: { notIn: ['DONE', 'FAILED'] },
},
})
if (openStories > 0) return // work remains; do not finalize yet
await prisma.sprintRun.update({
where: { id: sprintRunId },
data: { status: 'DONE', finished_at: new Date() },
})
}
const DB_STATUS_MAP = {
running: 'RUNNING',
done: 'DONE',
failed: 'FAILED',
skipped: 'SKIPPED',
} as const
export function resolveNextAction(
queueCount: number,
status: 'running' | 'done' | 'failed' | 'skipped',
): 'wait_for_job_again' | 'queue_empty' | 'idle' {
if (status === 'running') return 'idle'
return queueCount > 0 ? 'wait_for_job_again' : 'queue_empty'
}
export type JobTimestampUpdate = {
claimed_at?: Date
started_at?: Date
finished_at?: Date
}
// Decides which lifecycle timestamps update_job_status writes on a status
// transition. Set-once semantics (backfill only when currently null) keep the
// invariant claimed_at ≤ started_at ≤ finished_at: a job that goes CLAIMED → done
// without ever reporting `running` still gets a started_at, and claimed_at
// (normally set by wait_for_job at claim time) is never overwritten.
export function resolveJobTimestamps(
status: 'running' | 'done' | 'failed' | 'skipped',
current: { claimed_at: Date | null; started_at: Date | null },
now: Date = new Date(),
): JobTimestampUpdate {
const isTerminal = status === 'done' || status === 'failed' || status === 'skipped'
const update: JobTimestampUpdate = {}
if (current.claimed_at == null) update.claimed_at = now
if (current.started_at == null && (status === 'running' || isTerminal)) {
update.started_at = now
}
if (isTerminal) update.finished_at = now
return update
}
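Since resolveJobTimestamps is pure, the backfill behavior is easy to check. The function is repeated verbatim here so the example runs standalone:

```typescript
// resolveJobTimestamps, repeated verbatim so the example is self-contained.
type JobTimestampUpdate = {
  claimed_at?: Date
  started_at?: Date
  finished_at?: Date
}

function resolveJobTimestamps(
  status: 'running' | 'done' | 'failed' | 'skipped',
  current: { claimed_at: Date | null; started_at: Date | null },
  now: Date = new Date(),
): JobTimestampUpdate {
  const isTerminal = status === 'done' || status === 'failed' || status === 'skipped'
  const update: JobTimestampUpdate = {}
  if (current.claimed_at == null) update.claimed_at = now
  if (current.started_at == null && (status === 'running' || isTerminal)) {
    update.started_at = now
  }
  if (isTerminal) update.finished_at = now
  return update
}

const now = new Date('2026-05-14T21:00:00Z')
// CLAIMED → done without ever reporting running: started_at is backfilled.
console.log(resolveJobTimestamps('done', { claimed_at: now, started_at: null }, now))
// `running` reported again after started_at is set: nothing is overwritten.
console.log(resolveJobTimestamps('running', { claimed_at: now, started_at: now }, now))
// → the second call returns an empty update (set-once)
```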
export async function maybeCreateAutoPr(opts: {
jobId: string
productId: string
@@ -221,29 +432,86 @@ export async function maybeCreateAutoPr(opts: {
})
if (!product?.auto_pr) return null
const job = await prisma.claudeJob.findUnique({
where: { id: jobId },
select: {
sprint_run_id: true,
sprint_run: {
select: { id: true, pr_strategy: true, sprint: { select: { sprint_goal: true } } },
},
},
})
const task = await prisma.task.findUnique({
where: { id: taskId },
select: {
title: true,
repo_url: true,
story: { select: { id: true, code: true, title: true } },
},
})
if (!task) return null
// Cross-repo sprints: a sprint can contain tasks that target another repo
// via task.repo_url. PRs and branches are per-repo, so a sibling PR may only
// be reused when that sibling targeted the same repo. A null/empty repo_url
// means the product repo; two tasks share a repo bucket when their
// (repo_url ?? null) values are equal.
const thisRepoKey = task.repo_url ?? null
// PBI-46 SPRINT mode: reuse one draft PR for the whole SprintRun (per repo).
// A human marks it ready-for-review once the SprintRun is DONE.
if (job?.sprint_run && job.sprint_run.pr_strategy === 'SPRINT') {
const sprintSiblings = await prisma.claudeJob.findMany({
where: {
sprint_run_id: job.sprint_run_id,
pr_url: { not: null },
id: { not: jobId },
},
select: { pr_url: true, task: { select: { repo_url: true } } },
orderBy: { created_at: 'asc' },
})
const sameRepoSibling = sprintSiblings.find(
(s) => (s.task?.repo_url ?? null) === thisRepoKey,
)
if (sameRepoSibling?.pr_url) return sameRepoSibling.pr_url
// First DONE in this SprintRun → create a draft PR, no auto-merge.
const goal = job.sprint_run.sprint.sprint_goal
const sprintTitle = `Sprint: ${goal}`.slice(0, 200)
const body = summary
? `${summary}\n\n---\n\n*Draft PR for sprint-run \`${job.sprint_run.id}\`. Becomes ready-for-review once all stories are DONE (auto-merge intentionally off for sprint mode).*`
: `*Draft PR for sprint-run \`${job.sprint_run.id}\`. Becomes ready-for-review once all stories are DONE (auto-merge intentionally off for sprint mode).*`
const result = await createPullRequest({
worktreePath,
branchName,
title: sprintTitle,
body,
draft: true,
enableAutoMerge: false,
})
if ('url' in result) return result.url
console.warn(`[update_job_status] sprint draft-PR skipped for job ${jobId}:`, result.error)
return null
}
// STORY mode (default or legacy): branch-per-story, sibling tasks share the PR,
// but only siblings that target the same repo (see thisRepoKey).
const storySiblings = await prisma.claudeJob.findMany({
where: {
task: { story_id: task.story.id },
pr_url: { not: null },
id: { not: jobId },
},
select: { pr_url: true, task: { select: { repo_url: true } } },
orderBy: { created_at: 'asc' },
})
const sameRepoStorySibling = storySiblings.find(
(s) => (s.task?.repo_url ?? null) === thisRepoKey,
)
if (sameRepoStorySibling?.pr_url) return sameRepoStorySibling.pr_url
// First DONE task in the story → create a story-scoped PR
const storyTitle = task.story.code ? `${task.story.code}: ${task.story.title}` : task.story.title
const body = summary
? `${summary}\n\n---\n\n*Auto-generated by Scrum4Me agent (first task in story; PR-body will accumulate as sibling tasks complete).*`
@@ -256,6 +524,68 @@ export async function maybeCreateAutoPr(opts: {
return null
}
// PBI-50 F4-T2: SPRINT_BATCH PR flow. One draft PR for the whole sprint,
// title = sprint.sprint_goal. A human reviews and merges it; no auto-merge.
// Similar to the SPRINT mode of maybeCreateAutoPr but without task context.
export async function maybeCreateSprintBatchPr(opts: {
jobId: string
productId: string
worktreePath: string
branchName: string
summary: string | undefined
}): Promise<string | null> {
const { jobId, productId, worktreePath, branchName, summary } = opts
const product = await prisma.product.findUnique({
where: { id: productId },
select: { auto_pr: true },
})
if (!product?.auto_pr) return null
const job = await prisma.claudeJob.findUnique({
where: { id: jobId },
select: {
sprint_run_id: true,
sprint_run: {
select: { id: true, sprint: { select: { sprint_goal: true } } },
},
},
})
if (!job?.sprint_run) return null
// Resume path: an older SprintRun may already have a PR from a previous run job.
// Looked up via SprintRunChain (previous_run_id) or via a sibling SPRINT job.
const previousRun = await prisma.sprintRun.findUnique({
where: { id: job.sprint_run.id },
select: { previous_run_id: true },
})
if (previousRun?.previous_run_id) {
const prevPr = await prisma.claudeJob.findFirst({
where: { sprint_run_id: previousRun.previous_run_id, pr_url: { not: null } },
select: { pr_url: true },
})
if (prevPr?.pr_url) return prevPr.pr_url
}
const goal = job.sprint_run.sprint.sprint_goal
const sprintTitle = `Sprint: ${goal}`.slice(0, 200)
const body = summary
? `${summary}\n\n---\n\n*Draft PR for sprint-batch \`${job.sprint_run.id}\` (single-session). Becomes ready-for-review once all tasks are DONE.*`
: `*Draft PR for sprint-batch \`${job.sprint_run.id}\` (single-session). Becomes ready-for-review once all tasks are DONE.*`
const result = await createPullRequest({
worktreePath,
branchName,
title: sprintTitle,
body,
draft: true,
enableAutoMerge: false,
})
if ('url' in result) return result.url
console.warn(`[update_job_status] sprint-batch draft-PR skipped for job ${jobId}:`, result.error)
return null
}
export function registerUpdateJobStatusTool(server: McpServer) {
server.registerTool(
'update_job_status',
@@ -263,13 +593,20 @@ export function registerUpdateJobStatusTool(server: McpServer) {
title: 'Update job status',
description:
'Report progress on a claimed ClaudeJob. Allowed transitions from CLAIMED/RUNNING: ' +
'running (start), done (finished), failed (error), skipped (no-op exit). ' +
'The Bearer token must match the token that claimed the job. ' +
'Stamps started_at on running and finished_at on done/failed/skipped, and backfills ' +
'claimed_at/started_at when missing so claimed_at ≤ started_at ≤ finished_at always holds. ' +
'Before marking done: call verify_task_against_plan first — done is rejected when ' +
'verify_result is null, EMPTY (unless task.verify_only is true), or when the verify level ' +
"doesn't meet task.verify_required: ALIGNED-only is strict; ALIGNED_OR_PARTIAL accepts " +
'PARTIAL/DIVERGENT but requires a non-empty summary (≥20 chars) explaining the drift; ANY ' +
'accepts everything. ' +
"Use 'skipped' for TASK_IMPLEMENTATION when verify_task_against_plan returns EMPTY because " +
'the requested changes are already present in origin/main (parallel work, earlier PR, race ' +
"between siblings). 'skipped' requires a non-empty error (≥10 chars) describing the reason " +
"(e.g. 'no_op_changes_already_in_main') and skips the verify-gate, auto-PR and PBI fail-cascade. " +
'Mark the underlying task DONE separately via update_task_status if its requirements are met. ' +
'Automatically emits an SSE event so the Scrum4Me UI updates in real time. ' +
'Optionally accepts token-usage fields (model_id + input/output/cache_read/cache_write tokens) ' +
'for cost tracking — typically populated by a PostToolUse hook from the local Claude Code transcript, ' +
@@ -288,6 +625,7 @@ export function registerUpdateJobStatusTool(server: McpServer) {
output_tokens,
cache_read_tokens,
cache_write_tokens,
actual_thinking_tokens,
}) =>
withToolErrors(async () => {
const auth = await requireWriteAccess()
@@ -298,11 +636,14 @@ export function registerUpdateJobStatusTool(server: McpServer) {
select: {
id: true,
status: true,
claimed_at: true,
started_at: true,
claimed_by_token_id: true,
user_id: true,
product_id: true,
task_id: true,
idea_id: true,
sprint_run_id: true,
kind: true,
verify_result: true,
task: { select: { verify_only: true, verify_required: true } },
@@ -313,16 +654,43 @@ export function registerUpdateJobStatusTool(server: McpServer) {
if (job.claimed_by_token_id !== tokenId) {
return toolError('PERMISSION_DENIED: This job was not claimed by your token')
}
if (job.status === 'CANCELLED') {
// PBI fail-cascade got here first. The agent must abandon any
// local work and call wait_for_job again instead of forcing this
// job into DONE/FAILED.
return toolError(
'JOB_CANCELLED: This job was cancelled by the PBI fail-cascade. ' +
'Discard your local changes and call wait_for_job for the next item.',
)
}
if (!['CLAIMED', 'RUNNING'].includes(job.status)) {
return toolError(`Job is already in terminal state: ${job.status.toLowerCase()}`)
}
// 'skipped' = no-op exit. Only valid for TASK_IMPLEMENTATION (the verify=EMPTY
// pattern) and requires a non-empty error of ≥10 chars explaining why, e.g.
// 'no_op_changes_already_in_main'. No verify-gate, no PR, no
// PBI fail-cascade, no propagation to task/story/PBI.
if (status === 'skipped') {
if (job.kind !== 'TASK_IMPLEMENTATION') {
return toolError(
`'skipped' is only allowed for TASK_IMPLEMENTATION (kind=${job.kind})`,
)
}
if (!error || error.trim().length < 10) {
return toolError(
"'skipped' requires a non-empty error with a reason (≥10 chars), e.g. 'no_op_changes_already_in_main'",
)
}
}
// For DONE: push first, adjust DB status based on result
let actualStatus = status
let pushedAt: Date | undefined
let branchToWrite = branch
let errorToWrite = error
let skipWorktreeCleanup = false
let headShaToWrite: string | undefined
if (status === 'done') {
// M12: idea jobs have no task/plan_snapshot/branch; skip the
@@ -333,6 +701,19 @@ export function registerUpdateJobStatusTool(server: McpServer) {
actualStatus = 'done'
// pushedAt stays undefined, and so do the branch/error overrides
skipWorktreeCleanup = true
} else if (job.kind === 'SPRINT_IMPLEMENTATION') {
// PBI-50 F4-T2: aggregate verify-gate via the SprintTaskExecution rows.
// There is no single-task verify_result on the SPRINT job itself.
const gate = await checkSprintVerifyGate(job_id)
if (!gate.allowed) return toolError(gate.error)
const plan = await prepareDoneUpdate(job_id, branch)
actualStatus = plan.dbStatus === 'DONE' ? 'done' : 'failed'
pushedAt = plan.pushedAt
if (plan.branchOverride !== undefined) branchToWrite = plan.branchOverride
if (plan.errorOverride !== undefined) errorToWrite = plan.errorOverride
skipWorktreeCleanup = plan.skipWorktreeCleanup
headShaToWrite = plan.headSha
} else {
const gate = checkVerifyGate(
job.verify_result ?? null,
@@ -348,11 +729,13 @@ export function registerUpdateJobStatusTool(server: McpServer) {
if (plan.branchOverride !== undefined) branchToWrite = plan.branchOverride
if (plan.errorOverride !== undefined) errorToWrite = plan.errorOverride
skipWorktreeCleanup = plan.skipWorktreeCleanup
headShaToWrite = plan.headSha
}
}
// Auto-PR: best-effort, only when the push actually happened.
// M12: idea jobs have no task_id and no branch; skip auto-PR.
// PBI-50: SPRINT_IMPLEMENTATION gets its own PR flow (sprint goal as the title).
let prUrl: string | null = null
if (
actualStatus === 'done' &&
@@ -361,9 +744,7 @@ export function registerUpdateJobStatusTool(server: McpServer) {
job.kind === 'TASK_IMPLEMENTATION' &&
job.task_id
) {
const worktreeDir = getWorktreeRoot()
prUrl = await maybeCreateAutoPr({
jobId: job_id,
productId: job.product_id,
@@ -375,6 +756,23 @@ export function registerUpdateJobStatusTool(server: McpServer) {
console.warn(`[update_job_status] auto-PR error for job ${job_id}:`, err)
return null
})
} else if (
actualStatus === 'done' &&
pushedAt &&
branchToWrite &&
job.kind === 'SPRINT_IMPLEMENTATION'
) {
const worktreeDir = getWorktreeRoot()
prUrl = await maybeCreateSprintBatchPr({
jobId: job_id,
productId: job.product_id,
worktreePath: path.join(worktreeDir, job_id),
branchName: branchToWrite,
summary,
}).catch((err) => {
console.warn(`[update_job_status] sprint-batch PR error for job ${job_id}:`, err)
return null
})
}
const dbStatus = DB_STATUS_MAP[actualStatus as keyof typeof DB_STATUS_MAP]
@@ -383,18 +781,23 @@ export function registerUpdateJobStatusTool(server: McpServer) {
where: { id: job_id },
data: {
status: dbStatus,
- ...(actualStatus === 'running' ? { started_at: now } : {}),
- ...(actualStatus === 'done' || actualStatus === 'failed' ? { finished_at: now } : {}),
...resolveJobTimestamps(
actualStatus,
{ claimed_at: job.claimed_at, started_at: job.started_at },
now,
),
...(branchToWrite !== undefined ? { branch: branchToWrite } : {}),
...(pushedAt !== undefined ? { pushed_at: pushedAt } : {}),
...(summary !== undefined ? { summary } : {}),
...(errorToWrite !== undefined ? { error: errorToWrite } : {}),
...(prUrl !== null ? { pr_url: prUrl } : {}),
...(headShaToWrite !== undefined ? { head_sha: headShaToWrite } : {}),
...(model_id !== undefined ? { model_id } : {}),
...(input_tokens !== undefined ? { input_tokens } : {}),
...(output_tokens !== undefined ? { output_tokens } : {}),
...(cache_read_tokens !== undefined ? { cache_read_tokens } : {}),
...(cache_write_tokens !== undefined ? { cache_write_tokens } : {}),
...(actual_thinking_tokens !== undefined ? { actual_thinking_tokens } : {}),
},
select: {
id: true,
@@ -407,9 +810,143 @@ export function registerUpdateJobStatusTool(server: McpServer) {
error: true,
started_at: true,
finished_at: true,
head_sha: true,
},
})
// PBI-46 sprint flow: propagate Task → Story → PBI → Sprint → SprintRun
// on every task status transition (DONE or FAILED). The helper also handles
// sibling-cancel within the same SprintRun on FAILED.
// Idea jobs have no task_id and are skipped here.
let sprintRunBecameDone = false
let storyBecameDone = false
if (
(actualStatus === 'done' || actualStatus === 'failed') &&
job.kind === 'TASK_IMPLEMENTATION' &&
job.task_id
) {
try {
const propagation = await propagateStatusUpwards(
job.task_id,
actualStatus === 'done' ? 'DONE' : 'FAILED',
)
sprintRunBecameDone = actualStatus === 'done' && propagation.sprintRunChanged
storyBecameDone = actualStatus === 'done' && propagation.storyChanged
} catch (err) {
console.warn(
`[update_job_status] propagateStatusUpwards error for task ${job.task_id}:`,
err,
)
}
}
// PBI-47 (P0): STORY-mode auto-merge timing fix.
// Only enable auto-merge when this DONE was the *last* task of a STORY
// (story.status flipped to DONE) and pr_strategy === STORY. The
// pr-flow transition emits ENABLE_AUTO_MERGE with the head_sha guard.
if (
storyBecameDone &&
updated.pr_url &&
headShaToWrite &&
job.kind === 'TASK_IMPLEMENTATION'
) {
const storyCtx = await prisma.claudeJob.findUnique({
where: { id: job_id },
select: {
task: { select: { story: { select: { status: true } } } },
sprint_run: { select: { pr_strategy: true } },
},
})
if (
storyCtx?.sprint_run?.pr_strategy === 'STORY'
&& storyCtx.task?.story.status === 'DONE'
) {
const result = prFlowTransition(
{ kind: 'pr_opened', strategy: 'STORY', prUrl: updated.pr_url },
{
type: 'STORY_COMPLETED',
storyId: '',
headSha: headShaToWrite,
},
)
const outcomes = await executeEffects(result.effects)
// PBI-47 (C2): route MERGE_CONFLICT to sprint-run flow → PAUSED.
// Other reasons (CHECKS_FAILED, GH_AUTH_ERROR, AUTO_MERGE_NOT_ALLOWED, UNKNOWN)
// remain warnings; CHECKS_FAILED is already covered by the task-FAIL cascade.
for (const o of outcomes) {
if (o.effect === 'ENABLE_AUTO_MERGE' && !o.ok) {
console.warn(
`[update_job_status] auto-merge fail for ${updated.pr_url}: ${o.reason} ${o.stderr.slice(0, 200)}`,
)
if (o.reason === 'MERGE_CONFLICT') {
const sprintRunId = await prisma.claudeJob
.findUnique({
where: { id: job_id },
select: { sprint_run_id: true },
})
.then((j) => j?.sprint_run_id)
if (sprintRunId) {
const conflictFiles = await fetchConflictFiles(updated.pr_url)
const conflictResult = sprintRunTransition(
{ kind: 'running', sprintRunId },
{
type: 'MERGE_CONFLICT',
prUrl: updated.pr_url,
prHeadSha: headShaToWrite ?? '',
conflictFiles,
resumeInstructions:
'Resolve the conflict on this branch, push, then resume the sprint via the UI.',
},
)
await executeEffects(conflictResult.effects)
}
}
}
}
}
}
// SPRINT mode: when the sprint goes DONE, flip the draft PR to ready-for-review.
// A human reviews + merges — no auto-merge in this mode.
// PBI-49 P2: don't rely on updated.pr_url alone — if the last task is
// verify-only or pushes no changes, it gets no pr_url.
// Fall back to the first-created PR within the SprintRun.
if (sprintRunBecameDone) {
const ctx = await prisma.claudeJob
.findUnique({
where: { id: job_id },
select: {
sprint_run_id: true,
sprint_run: { select: { pr_strategy: true, status: true } },
},
})
if (
ctx?.sprint_run?.pr_strategy === 'SPRINT'
&& ctx.sprint_run.status === 'DONE'
&& ctx.sprint_run_id
) {
const sprintPrUrl = updated.pr_url
?? (await prisma.claudeJob.findFirst({
where: { sprint_run_id: ctx.sprint_run_id, pr_url: { not: null } },
orderBy: { created_at: 'asc' },
select: { pr_url: true },
}))?.pr_url
?? null
if (sprintPrUrl) {
try {
const ready = await markPullRequestReady({ prUrl: sprintPrUrl })
if ('error' in ready) {
console.warn(
`[update_job_status] markPullRequestReady failed for ${sprintPrUrl}: ${ready.error}`,
)
}
} catch (err) {
console.warn(`[update_job_status] markPullRequestReady error:`, err)
}
}
}
}
// M12: on failed for IDEA_* jobs: set idea.status to
// GRILL_FAILED / PLAN_FAILED + log JOB_EVENT. On done we leave the
// idea status alone — it is set by update_idea_*_md.
@@ -442,35 +979,140 @@ export function registerUpdateJobStatusTool(server: McpServer) {
try {
const pg = new Client({ connectionString: process.env.DATABASE_URL })
await pg.connect()
- await pg.query(
-   `SELECT pg_notify('scrum4me_changes', $1)`,
-   [
-     JSON.stringify({
-       type: 'claude_job_status',
-       job_id: updated.id,
-       task_id: job.task_id,
-       user_id: job.user_id,
-       product_id: job.product_id,
-       status: actualStatus,
-       branch: updated.branch ?? undefined,
-       pushed_at: updated.pushed_at?.toISOString() ?? undefined,
-       pr_url: updated.pr_url ?? undefined,
-       verify_result: updated.verify_result?.toLowerCase() ?? undefined,
-       summary: updated.summary ?? undefined,
-       error: updated.error ?? undefined,
-     }),
-   ],
- )
const notifyPayload: Record<string, unknown> = {
type: 'claude_job_status',
job_id: updated.id,
user_id: job.user_id,
product_id: job.product_id,
status: actualStatus,
branch: updated.branch ?? undefined,
pushed_at: updated.pushed_at?.toISOString() ?? undefined,
pr_url: updated.pr_url ?? undefined,
verify_result: updated.verify_result?.toLowerCase() ?? undefined,
summary: updated.summary ?? undefined,
error: updated.error ?? undefined,
}
if (job.task_id) notifyPayload.task_id = job.task_id
if (job.idea_id) {
notifyPayload.idea_id = job.idea_id
notifyPayload.kind = job.kind
}
await pg.query(`SELECT pg_notify('scrum4me_changes', $1)`, [JSON.stringify(notifyPayload)])
await pg.end()
} catch {
// non-fatal — status is already persisted
}
if (actualStatus === 'failed' || actualStatus === 'done') {
const isFailed = actualStatus === 'failed'
void triggerPush(job.user_id, {
title: isFailed ? 'Job gefaald' : 'Job klaar',
body: (updated.summary ?? updated.error ?? `Job ${updated.id}`).slice(0, 120),
url: updated.pr_url ?? '/dashboard',
tag: `job-${updated.id}`,
})
}
// Best-effort worktree cleanup on terminal transitions (skip if push failed — worktree preserved)
- if ((actualStatus === 'done' || actualStatus === 'failed') && !skipWorktreeCleanup) {
if (
(actualStatus === 'done' || actualStatus === 'failed' || actualStatus === 'skipped') &&
!skipWorktreeCleanup
) {
await cleanupWorktreeForTerminalStatus(job.product_id, job_id, actualStatus, branchToWrite)
}
// PBI fail-cascade: when a TASK_IMPLEMENTATION job ends in FAILED,
// cancel all queued/claimed/running siblings under the same PBI and
// undo any pushed commits (close open PRs / open revert-PRs for
// already-merged ones). Idempotent + non-blocking — never throws.
// PBI-50: SPRINT_IMPLEMENTATION SKIPS this — the cascade to tasks/stories/
// PBIs has already happened via the worker's per-task
// update_task_status('failed') calls. A sprint job has no task_id; the cancelPbi flow does not apply.
if (actualStatus === 'failed' && job.kind === 'TASK_IMPLEMENTATION' && job.task_id) {
await cancelPbiOnFailure(job_id)
}
// PBI-50 F4-T2: SPRINT_IMPLEMENTATION DONE → finalize SprintRun.
if (
actualStatus === 'done' &&
job.kind === 'SPRINT_IMPLEMENTATION' &&
job.sprint_run_id
) {
try {
await finalizeSprintRunOnDone(job.sprint_run_id)
// Mark the draft PR ready-for-review if the SprintRun is now DONE
const finalRun = await prisma.sprintRun.findUnique({
where: { id: job.sprint_run_id },
select: { status: true },
})
if (finalRun?.status === 'DONE' && updated.pr_url) {
try {
const ready = await markPullRequestReady({ prUrl: updated.pr_url })
if ('error' in ready) {
console.warn(
`[update_job_status] sprint-batch markPullRequestReady failed for ${updated.pr_url}: ${ready.error}`,
)
}
} catch (err) {
console.warn(`[update_job_status] sprint-batch markPullRequestReady error:`, err)
}
}
} catch (err) {
console.warn(`[update_job_status] finalizeSprintRunOnDone error:`, err)
}
}
// PBI-50 F4-T3: SPRINT_IMPLEMENTATION FAILED →
// - Detect QUOTA_PAUSE: error prefix → PAUSED with pause_context.
// - Otherwise: fill SprintRun.failure_reason + failed_task_id (from error).
if (actualStatus === 'failed' && job.kind === 'SPRINT_IMPLEMENTATION' && job.sprint_run_id) {
const isQuotaPause = (errorToWrite ?? '').startsWith('QUOTA_PAUSE:')
if (isQuotaPause) {
// Find the last DONE execution for the pause context
const lastDone = await prisma.sprintTaskExecution.findFirst({
where: { sprint_job_id: job_id, status: 'DONE' },
orderBy: { order: 'desc' },
select: { id: true, order: true, task_id: true },
})
await prisma.sprintRun.update({
where: { id: job.sprint_run_id },
data: {
status: 'PAUSED',
pause_context: {
pause_reason: 'QUOTA_DEPLETED',
paused_at: new Date().toISOString(),
resume_instructions:
'Wait until the quota has been reset, then resume the SprintRun via the UI. A new SprintRun is created with previous_run_id and branch reuse.',
last_completed_execution_id: lastDone?.id ?? null,
last_completed_order: lastDone?.order ?? null,
last_completed_task_id: lastDone?.task_id ?? null,
pr_url: updated.pr_url ?? null,
pr_head_sha: updated.head_sha ?? null,
conflict_files: [],
claude_question_id: '',
} as any,
},
})
} else {
const failedTaskId = (errorToWrite ?? '').match(/task[:\s]+([a-z0-9]+)/i)?.[1] ?? null
await prisma.sprintRun.update({
where: { id: job.sprint_run_id },
data: {
status: 'FAILED',
failure_reason: errorToWrite?.slice(0, 500) ?? null,
failed_task_id: failedTaskId,
finished_at: new Date(),
},
})
}
}
// PBI-9: release product-worktree locks on terminal transitions.
// No-op for jobs without registered locks (i.e. TASK_IMPLEMENTATION).
if (actualStatus === 'done' || actualStatus === 'failed' || actualStatus === 'skipped') {
await releaseLocksOnTerminal(job_id)
}
const queueCount = await prisma.claudeJob.count({ const queueCount = await prisma.claudeJob.count({
where: { user_id: userId, status: 'QUEUED' }, where: { user_id: userId, status: 'QUEUED' },
}) })
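The resolveJobTimestamps helper spread into the Prisma `data` above is defined outside this hunk. A minimal sketch of the set-once semantics the commit describes, assuming the call-site shape (status string, `{ claimed_at, started_at }`, `now`); the body here is an assumption, not the repo's implementation:

```typescript
// Hypothetical sketch of resolveJobTimestamps: the signature and field names
// come from the call site above, the set-once/backfill behaviour from the
// commit message. The real helper lives elsewhere in the repo.
type Lifecycle = { claimed_at: Date | null; started_at: Date | null }
type Stamps = Partial<{ claimed_at: Date; started_at: Date; finished_at: Date }>

function resolveJobTimestamps(status: string, current: Lifecycle, now: Date): Stamps {
  const out: Stamps = {}
  // Defensive: never leave claimed_at NULL once the job progresses.
  if (!current.claimed_at) out.claimed_at = now
  if (status === 'running') {
    // Set-once: repeated 'running' reports must not overwrite started_at.
    if (!current.started_at) out.started_at = now
  } else if (status === 'done' || status === 'failed' || status === 'skipped') {
    // Backfill started_at on a terminal transition that skipped 'running',
    // preserving claimed_at <= started_at <= finished_at.
    if (!current.started_at) out.started_at = now
    out.finished_at = now
  }
  return out
}
```

Under this reading, a job going CLAIMED → done without ever reporting `running` gets started_at backfilled instead of staying NULL.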

src/tools/update-sprint.ts (new file)

@@ -0,0 +1,102 @@
// MCP tool: update a Sprint.
//
// Generic update — changes any combination of status, sprint_goal,
// start_date and end_date. No state-machine validation (see
// docs/plans/sprint-mcp-tools.md): last-write-wins; the resubmit/reopen path
// lives elsewhere. On status → CLOSED/FAILED/ARCHIVED without an explicit
// end_date, end_date is automatically set to today. On status → CLOSED,
// `completed_at` is additionally set to now() (parity with
// src/lib/tasks-status-update.ts, which does the same on auto-close via the
// task-status cascade; this keeps reporting and the UI on a single source of
// truth for completion time).
import { z } from 'zod'
import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
import type { SprintStatus } from '@prisma/client'
import { prisma } from '../prisma.js'
import { requireWriteAccess } from '../auth.js'
import { userCanAccessProduct } from '../access.js'
import { toolError, toolJson, withToolErrors } from '../errors.js'
const TERMINAL_STATUSES = new Set<SprintStatus>(['CLOSED', 'FAILED', 'ARCHIVED'])
export const inputSchema = z.object({
sprint_id: z.string().min(1),
status: z.enum(['OPEN', 'CLOSED', 'ARCHIVED', 'FAILED']).optional(),
sprint_goal: z.string().min(1).max(500).optional(),
end_date: z.string().date().optional(),
start_date: z.string().date().optional(),
})
export async function handleUpdateSprint(
{ sprint_id, status, sprint_goal, end_date, start_date }: z.infer<typeof inputSchema>,
) {
return withToolErrors(async () => {
if (
status === undefined &&
sprint_goal === undefined &&
end_date === undefined &&
start_date === undefined
) {
return toolError('At least one field is required to update')
}
const auth = await requireWriteAccess()
const sprint = await prisma.sprint.findUnique({
where: { id: sprint_id },
select: { id: true, product_id: true },
})
if (!sprint) {
return toolError(`Sprint ${sprint_id} not found`)
}
if (!(await userCanAccessProduct(sprint.product_id, auth.userId))) {
return toolError(`Sprint ${sprint_id} not accessible`)
}
const data: {
status?: SprintStatus
sprint_goal?: string
start_date?: Date
end_date?: Date
completed_at?: Date
} = {}
if (status !== undefined) data.status = status
if (sprint_goal !== undefined) data.sprint_goal = sprint_goal
if (start_date !== undefined) data.start_date = new Date(start_date)
if (end_date !== undefined) {
data.end_date = new Date(end_date)
} else if (status !== undefined && TERMINAL_STATUSES.has(status)) {
data.end_date = new Date()
}
if (status === 'CLOSED') data.completed_at = new Date()
const updated = await prisma.sprint.update({
where: { id: sprint_id },
data,
select: {
id: true,
code: true,
sprint_goal: true,
status: true,
start_date: true,
end_date: true,
completed_at: true,
},
})
return toolJson(updated)
})
}
export function registerUpdateSprintTool(server: McpServer) {
server.registerTool(
'update_sprint',
{
title: 'Update Sprint',
description:
'Update a sprint: status, sprint_goal, start_date and/or end_date. At least one field required. No state-machine validation — last-write-wins. When status goes to CLOSED/FAILED/ARCHIVED and end_date is not provided, end_date is set to today. When status goes to CLOSED, completed_at is set to now (parity with auto-close via task-cascade). Forbidden for demo accounts.',
inputSchema,
},
handleUpdateSprint,
)
}
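The date-defaulting rule in handleUpdateSprint can be read as a pure function. `sprintDateDefaults` is a hypothetical name for illustration; the branches mirror the tool's Prisma payload construction above:

```typescript
// Illustrative extraction of update_sprint's date defaulting: terminal
// statuses get end_date = today when none is supplied, and CLOSED also
// stamps completed_at.
type SprintStatus = 'OPEN' | 'CLOSED' | 'ARCHIVED' | 'FAILED'
const TERMINAL = new Set<SprintStatus>(['CLOSED', 'FAILED', 'ARCHIVED'])

function sprintDateDefaults(
  status: SprintStatus | undefined,
  endDate: string | undefined,
  now: Date,
): { end_date?: Date; completed_at?: Date } {
  const data: { end_date?: Date; completed_at?: Date } = {}
  if (endDate !== undefined) data.end_date = new Date(endDate)
  else if (status !== undefined && TERMINAL.has(status)) data.end_date = now
  if (status === 'CLOSED') data.completed_at = now
  return data
}
```

So `update_sprint({ status: 'CLOSED' })` with no explicit end_date closes the sprint with both dates stamped, while a pure goal edit leaves the dates untouched.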


@@ -0,0 +1,110 @@
// PBI-50 F3-T2: update_task_execution
//
// SPRINT_IMPLEMENTATION flow lifecycle tool. The worker calls this for every
// task in the batch to mutate the SprintTaskExecution row:
// PENDING → RUNNING → DONE/FAILED/SKIPPED
// Idempotent: the same call can safely be repeated.
import { z } from 'zod'
import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
import { prisma } from '../prisma.js'
import { requireWriteAccess } from '../auth.js'
import { toolError, toolJson, withToolErrors } from '../errors.js'
const inputSchema = z.object({
execution_id: z.string().min(1),
status: z.enum(['PENDING', 'RUNNING', 'DONE', 'FAILED', 'SKIPPED']),
base_sha: z.string().optional(),
head_sha: z.string().optional(),
skip_reason: z.string().max(2000).optional(),
})
export function registerUpdateTaskExecutionTool(server: McpServer) {
server.registerTool(
'update_task_execution',
{
title: 'Update SprintTaskExecution status',
description:
'Mutate a SprintTaskExecution row in a SPRINT_IMPLEMENTATION batch. ' +
'Status: PENDING|RUNNING|DONE|FAILED|SKIPPED. Worker calls this for each ' +
'task transition. Token must own the parent SPRINT_IMPLEMENTATION ClaudeJob. ' +
'Idempotent — safe to retry. Writes started_at (RUNNING) and finished_at ' +
'(DONE/FAILED/SKIPPED). Forbidden for demo accounts.',
inputSchema,
},
async ({ execution_id, status, base_sha, head_sha, skip_reason }) =>
withToolErrors(async () => {
const auth = await requireWriteAccess()
const execution = await prisma.sprintTaskExecution.findUnique({
where: { id: execution_id },
select: {
id: true,
sprint_job_id: true,
sprint_job: {
select: { claimed_by_token_id: true, status: true, kind: true },
},
},
})
if (!execution) {
return toolError(`SprintTaskExecution ${execution_id} not found`)
}
if (execution.sprint_job.kind !== 'SPRINT_IMPLEMENTATION') {
return toolError(
`Execution ${execution_id} belongs to job kind ${execution.sprint_job.kind}, expected SPRINT_IMPLEMENTATION`,
)
}
if (execution.sprint_job.claimed_by_token_id !== auth.tokenId) {
return toolError(
`Forbidden: token does not own SPRINT_IMPLEMENTATION job for execution ${execution_id}`,
)
}
if (
execution.sprint_job.status !== 'CLAIMED' &&
execution.sprint_job.status !== 'RUNNING'
) {
return toolError(
`Sprint job is in terminal state ${execution.sprint_job.status}`,
)
}
const now = new Date()
const updated = await prisma.sprintTaskExecution.update({
where: { id: execution_id },
data: {
status,
...(base_sha !== undefined ? { base_sha } : {}),
...(head_sha !== undefined ? { head_sha } : {}),
...(skip_reason !== undefined ? { skip_reason } : {}),
...(status === 'RUNNING' ? { started_at: now } : {}),
...(status === 'DONE' || status === 'FAILED' || status === 'SKIPPED'
? { finished_at: now }
: {}),
},
select: {
id: true,
status: true,
base_sha: true,
head_sha: true,
verify_result: true,
verify_summary: true,
skip_reason: true,
started_at: true,
finished_at: true,
},
})
return toolJson({
execution_id: updated.id,
status: updated.status,
base_sha: updated.base_sha,
head_sha: updated.head_sha,
verify_result: updated.verify_result,
verify_summary: updated.verify_summary,
skip_reason: updated.skip_reason,
started_at: updated.started_at?.toISOString() ?? null,
finished_at: updated.finished_at?.toISOString() ?? null,
})
}),
)
}
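The timestamp policy in the update payload above reads as a pure mapping. `executionTimestamps` is an illustrative extraction, not a repo function; note that a retried terminal call simply re-stamps finished_at with the newer clock value:

```typescript
// Illustrative extraction of update_task_execution's timestamp policy:
// RUNNING stamps started_at, any terminal status stamps finished_at,
// PENDING stamps nothing.
type ExecStatus = 'PENDING' | 'RUNNING' | 'DONE' | 'FAILED' | 'SKIPPED'

function executionTimestamps(
  status: ExecStatus,
  now: Date,
): { started_at?: Date; finished_at?: Date } {
  return {
    ...(status === 'RUNNING' ? { started_at: now } : {}),
    ...(status === 'DONE' || status === 'FAILED' || status === 'SKIPPED'
      ? { finished_at: now }
      : {}),
  }
}
```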


@@ -1,5 +1,6 @@
import { z } from 'zod'
import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
import { prisma } from '../prisma.js'
import { requireWriteAccess } from '../auth.js'
import { userCanAccessTask } from '../access.js'
import { toolError, toolJson, withToolErrors } from '../errors.js'
@@ -9,6 +10,10 @@ import { updateTaskStatusWithStoryPromotion } from '../lib/tasks-status-update.j
const inputSchema = z.object({
task_id: z.string().min(1),
status: z.enum(TASK_STATUS_API_VALUES as [string, ...string[]]),
// PBI-50: optional sprint_run_id for the SPRINT_IMPLEMENTATION flow.
// When present: the server validates that the task is in this sprint, the
// run is active, and the current token has claimed a ClaudeJob in this run.
sprint_run_id: z.string().min(1).optional(),
})
export function registerUpdateTaskStatusTool(server: McpServer) {
@@ -17,11 +22,14 @@ export function registerUpdateTaskStatusTool(server: McpServer) {
{
title: 'Update task status',
description:
- 'Set the status of a task. Allowed values: todo, in_progress, review, done. ' +
'Set the status of a task. Allowed values: todo, in_progress, review, done, failed. ' +
'Optional sprint_run_id binds the update to a SPRINT_IMPLEMENTATION run for ' +
'cascade-propagation; the server validates that the task belongs to the sprint ' +
'and that the calling token has claimed a job in that run. ' +
'Forbidden for demo accounts.',
inputSchema,
},
- async ({ task_id, status }) =>
async ({ task_id, status, sprint_run_id }) =>
withToolErrors(async () => {
const auth = await requireWriteAccess()
const dbStatus = taskStatusFromApi(status)
@@ -31,15 +39,74 @@ export function registerUpdateTaskStatusTool(server: McpServer) {
if (!(await userCanAccessTask(task_id, auth.userId))) {
return toolError(`Task ${task_id} not found or not accessible`)
}
- const { task, storyStatusChange } = await updateTaskStatusWithStoryPromotion(
-   task_id,
-   dbStatus,
- )
// PBI-50: validate explicit sprint_run_id binding.
if (sprint_run_id) {
const sprintRun = await prisma.sprintRun.findUnique({
where: { id: sprint_run_id },
select: { id: true, status: true, sprint_id: true },
})
if (!sprintRun) {
return toolError(`SprintRun ${sprint_run_id} not found`)
}
if (
sprintRun.status !== 'QUEUED' &&
sprintRun.status !== 'RUNNING' &&
sprintRun.status !== 'PAUSED'
) {
return toolError(
`SprintRun ${sprint_run_id} is in terminal state ${sprintRun.status}; cannot update task status against it`,
)
}
// The task must be in this sprint
const task = await prisma.task.findUnique({
where: { id: task_id },
select: { story: { select: { sprint_id: true } } },
})
if (!task || task.story.sprint_id !== sprintRun.sprint_id) {
return toolError(
`Task ${task_id} is not in sprint ${sprintRun.sprint_id} (sprint_run ${sprint_run_id})`,
)
}
// Token coupling: the current token must have claimed an active ClaudeJob
// in this SprintRun (typically the SPRINT_IMPLEMENTATION job).
const tokenJob = await prisma.claudeJob.findFirst({
where: {
sprint_run_id,
claimed_by_token_id: auth.tokenId,
status: { in: ['CLAIMED', 'RUNNING'] },
},
select: { id: true },
})
if (!tokenJob) {
return toolError(
`Forbidden: current token has no active claim in sprint_run ${sprint_run_id}`,
)
}
}
const { task, storyStatusChange, sprintRunChanged } =
await updateTaskStatusWithStoryPromotion(task_id, dbStatus, undefined, sprint_run_id)
// For the SPRINT flow: include an explicit sprint_run_status so the
// worker can break its loop on FAILED/PAUSED without an extra query.
let sprintRunStatusChange: string | null = null
if (sprintRunChanged && sprint_run_id) {
const updated = await prisma.sprintRun.findUnique({
where: { id: sprint_run_id },
select: { status: true },
})
sprintRunStatusChange = updated?.status ?? null
}
return toolJson({
id: task.id,
status: taskStatusToApi(task.status),
implementation_plan: task.implementation_plan,
story_status_change: storyStatusChange,
sprint_run_status_change: sprintRunStatusChange,
})
}),
)


@@ -0,0 +1,151 @@
// PBI-50 F3-T1: verify_sprint_task
//
// Execution-aware verify tool for the SPRINT_IMPLEMENTATION flow.
// Differs from verify_task_against_plan in:
// - input via execution_id (not task_id)
// - base_sha comes from SprintTaskExecution.base_sha; for task[1..N] without
//   base_sha the tool fills it dynamically with the head_sha of the previous
//   DONE execution
// - plan_snapshot comes from execution.plan_snapshot (frozen at claim time)
// - the result is stored on the execution row, not on ClaudeJob.verify_result
// - the response includes allowed_for_done directly
import { execFile } from 'node:child_process'
import { promisify } from 'node:util'
import { z } from 'zod'
import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
import { prisma } from '../prisma.js'
import { requireWriteAccess } from '../auth.js'
import { toolError, toolJson, withToolErrors } from '../errors.js'
import { classifyDiffAgainstPlan } from '../verify/classify.js'
import { checkVerifyGate } from './update-job-status.js'
const exec = promisify(execFile)
const inputSchema = z.object({
execution_id: z.string().min(1),
worktree_path: z.string().min(1),
summary: z.string().max(2000).optional(),
})
export function registerVerifySprintTaskTool(server: McpServer) {
server.registerTool(
'verify_sprint_task',
{
title: 'Verify SprintTaskExecution against frozen plan',
description:
'Run `git diff <base_sha>...HEAD` in the worktree and classify against the ' +
'frozen plan_snapshot of this SprintTaskExecution. Returns ALIGNED|PARTIAL|EMPTY|' +
'DIVERGENT plus reasoning + allowed_for_done (computed via the standard verify-gate ' +
'with the execution\'s frozen verify_required/verify_only). ' +
'For task[1..N] without base_sha the tool fills it in from the head_sha of the ' +
'previous DONE execution. The optional summary is stored as PARTIAL/DIVERGENT rationale ' +
'and used by the gate. ' +
'Call this BEFORE update_task_execution(DONE) for each task in the sprint batch. ' +
'Forbidden for demo accounts.',
inputSchema,
annotations: { readOnlyHint: false },
},
async ({ execution_id, worktree_path, summary }) =>
withToolErrors(async () => {
const auth = await requireWriteAccess()
const execution = await prisma.sprintTaskExecution.findUnique({
where: { id: execution_id },
select: {
id: true,
sprint_job_id: true,
order: true,
base_sha: true,
plan_snapshot: true,
verify_required_snapshot: true,
verify_only_snapshot: true,
sprint_job: {
select: { claimed_by_token_id: true, status: true, kind: true },
},
},
})
if (!execution) {
return toolError(`SprintTaskExecution ${execution_id} not found`)
}
if (execution.sprint_job.kind !== 'SPRINT_IMPLEMENTATION') {
return toolError(
`Execution ${execution_id} belongs to job kind ${execution.sprint_job.kind}, expected SPRINT_IMPLEMENTATION`,
)
}
if (execution.sprint_job.claimed_by_token_id !== auth.tokenId) {
return toolError(
`Forbidden: token does not own SPRINT_IMPLEMENTATION job for execution ${execution_id}`,
)
}
// Resolve base_sha. For task[0] this is filled at claim time. For
// task[1..N] it is filled dynamically from the previous DONE
// execution's head_sha. Persist after filling so repeated calls
// use the same base.
let baseSha = execution.base_sha
if (!baseSha) {
const previousDone = await prisma.sprintTaskExecution.findFirst({
where: {
sprint_job_id: execution.sprint_job_id,
order: { lt: execution.order },
status: 'DONE',
head_sha: { not: null },
},
orderBy: { order: 'desc' },
select: { head_sha: true },
})
if (!previousDone?.head_sha) {
return toolError(
`MISSING_BASE_SHA: execution ${execution_id} has no base_sha and no previous DONE-execution with head_sha. Did you skip update_task_execution(DONE) on a prior task?`,
)
}
baseSha = previousDone.head_sha
await prisma.sprintTaskExecution.update({
where: { id: execution_id },
data: { base_sha: baseSha },
})
}
let diff: string
try {
const { stdout } = await exec('git', ['diff', `${baseSha}...HEAD`], {
cwd: worktree_path,
})
diff = stdout
} catch (err) {
return toolError(
`git diff failed in worktree (${worktree_path}): ${(err as Error).message ?? 'unknown error'}`,
)
}
const { result, reasoning } = classifyDiffAgainstPlan({
diff,
plan: execution.plan_snapshot,
})
await prisma.sprintTaskExecution.update({
where: { id: execution_id },
data: {
verify_result: result,
...(summary !== undefined ? { verify_summary: summary } : {}),
},
})
const gate = checkVerifyGate(
result,
execution.verify_only_snapshot,
execution.verify_required_snapshot,
summary,
)
return toolJson({
execution_id: execution.id,
result: result.toLowerCase() as 'aligned' | 'partial' | 'empty' | 'divergent',
reasoning,
base_sha: baseSha,
allowed_for_done: gate.allowed,
reason: gate.allowed ? null : gate.error,
})
}),
)
}
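Taken together with update_task_execution, the description above ("Call this BEFORE update_task_execution(DONE)") implies this per-task call order on the worker side. The `callTool` indirection is a stand-in for an MCP client invocation and is not part of the shown code; tool names and fields match the definitions above:

```typescript
// Hypothetical worker loop body for one execution in the sprint batch:
// RUNNING -> implement -> verify_sprint_task -> DONE or FAILED.
type ToolCall = (name: string, args: Record<string, unknown>) => Promise<Record<string, unknown>>

async function runOneExecution(callTool: ToolCall, executionId: string, worktree: string) {
  await callTool('update_task_execution', { execution_id: executionId, status: 'RUNNING' })
  // ... implement the task inside the worktree and commit ...
  const verdict = await callTool('verify_sprint_task', {
    execution_id: executionId,
    worktree_path: worktree,
  })
  // The verify-gate decides DONE vs FAILED; verify_sprint_task has already
  // persisted verify_result on the execution row.
  await callTool('update_task_execution', {
    execution_id: executionId,
    status: verdict.allowed_for_done ? 'DONE' : 'FAILED',
  })
  return verdict.allowed_for_done
}
```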


@@ -15,8 +15,15 @@ const inputSchema = z.object({
worktree_path: z.string().min(1),
})
- export async function getDiffInWorktree(worktreePath: string): Promise<string> {
-   const { stdout } = await exec('git', ['diff', 'origin/main...HEAD'], { cwd: worktreePath })
export async function getDiffInWorktree(
worktreePath: string,
baseSha?: string,
): Promise<string> {
// PBI-47 (P0): when base_sha is provided, diff against the per-job base
// captured at claim-time so verify only sees the current task's changes.
// Falls back to origin/main only for legacy callers without base_sha.
const range = baseSha ? `${baseSha}...HEAD` : 'origin/main...HEAD'
const { stdout } = await exec('git', ['diff', range], { cwd: worktreePath })
return stdout
}
@@ -58,7 +65,7 @@ export function registerVerifyTaskAgainstPlanTool(server: McpServer) {
where: { status: { in: ['CLAIMED', 'RUNNING'] } },
orderBy: { created_at: 'desc' },
take: 1,
- select: { id: true, plan_snapshot: true },
select: { id: true, plan_snapshot: true, base_sha: true },
},
},
})
@@ -67,9 +74,19 @@ export function registerVerifyTaskAgainstPlanTool(server: McpServer) {
const activeJob = task.claude_jobs[0] ?? null
// PBI-47 (P0): require base_sha so diff is scoped to this job's work,
// not the full origin/main...HEAD which would include sibling commits
// on a reused story/sprint branch.
if (activeJob && !activeJob.base_sha) {
return toolError(
'MISSING_BASE_SHA: This claim has no base_sha. '
+ 'Re-claim the task (cancel + wait_for_job) so a fresh base_sha is captured.',
)
}
let diff: string
try {
- diff = await getDiffInWorktree(worktree_path)
diff = await getDiffInWorktree(worktree_path, activeJob?.base_sha ?? undefined)
} catch (err) {
return toolError(
`git diff failed in worktree (${worktree_path}): ${(err as Error).message ?? 'unknown error'}`,
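The range rule in getDiffInWorktree, isolated for illustration (`diffRange` is a hypothetical name): three-dot git syntax diffs from the merge-base, so a claim-time base_sha scopes verify to this job's own commits even on a reused story/sprint branch.

```typescript
// Per-job base_sha when captured at claim time, legacy origin/main fallback
// otherwise. Mirrors the branch added to getDiffInWorktree above.
function diffRange(baseSha?: string): string {
  return baseSha ? `${baseSha}...HEAD` : 'origin/main...HEAD'
}
```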


@ -7,10 +7,18 @@ import { Client } from 'pg'
import * as fs from 'node:fs/promises' import * as fs from 'node:fs/promises'
import * as os from 'node:os' import * as os from 'node:os'
import * as path from 'node:path' import * as path from 'node:path'
import { execFile } from 'node:child_process'
import { promisify } from 'node:util'
import { prisma } from '../prisma.js' import { prisma } from '../prisma.js'
const execFileP = promisify(execFile)
import { requireWriteAccess } from '../auth.js' import { requireWriteAccess } from '../auth.js'
import { toolJson, toolError, withToolErrors } from '../errors.js' import { toolJson, toolError, withToolErrors } from '../errors.js'
import { createWorktreeForJob } from '../git/worktree.js' import { createWorktreeForJob } from '../git/worktree.js'
import { getWorktreeRoot } from '../git/worktree-paths.js'
import { setupProductWorktrees, releaseLocksOnTerminal } from '../git/job-locks.js'
import { pushBranchForJob } from '../git/push.js'
import { resolveJobConfig } from '../lib/job-config.js'
/** Parse `https://github.com/<owner>/<name>(.git)?` → `<name>`. */
export function repoNameFromUrl(repoUrl: string | null | undefined): string | null {
@@ -111,6 +119,35 @@ export async function resolveBranchForJob(
jobId: string,
storyId: string,
): Promise<{ branchName: string; reused: boolean }> {
// Sprint flow (PBI-46): if this job belongs to a SprintRun, pick the branch
// based on Product.pr_strategy:
//   SPRINT → feat/sprint-<sprint_run_id-suffix> (one branch for the whole run)
//   STORY  → feat/story-<story_id-suffix> (one branch per story; sibling tasks share it)
// Legacy task-jobs without a sprint_run_id fall back to the existing
// feat/story-<storyId> path.
const job = await prisma.claudeJob.findUnique({
where: { id: jobId },
select: {
sprint_run_id: true,
sprint_run: { select: { id: true, pr_strategy: true } },
},
})
if (job?.sprint_run && job.sprint_run.pr_strategy === 'SPRINT') {
const branchName = `feat/sprint-${job.sprint_run.id.slice(-8)}`
const sibling = await prisma.claudeJob.findFirst({
where: {
sprint_run_id: job.sprint_run_id,
branch: branchName,
id: { not: jobId },
},
orderBy: { created_at: 'asc' },
select: { branch: true },
})
return { branchName, reused: sibling !== null }
}
// STORY-mode (default) of legacy: branch per story
const sibling = await prisma.claudeJob.findFirst({
where: {
task: { story_id: storyId },
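The strategy-dependent branch naming above can be sketched as a pure helper (hypothetical name `branchNameFor`; the 8-character suffix mirrors the `.slice(-8)` used for sprint branches and is assumed for story branches too):

```typescript
// Sketch of the PBI-46 branch rules, under the assumptions stated above.
type PrStrategy = 'SPRINT' | 'STORY'

function branchNameFor(opts: {
  strategy: PrStrategy | null // null = legacy task-job without a SprintRun
  sprintRunId?: string
  storyId: string
}): string {
  if (opts.strategy === 'SPRINT' && opts.sprintRunId) {
    // One branch for the whole run: suffix = last 8 chars of the run id.
    return `feat/sprint-${opts.sprintRunId.slice(-8)}`
  }
  // STORY mode or legacy: one branch per story, shared by sibling tasks.
  return `feat/story-${opts.storyId.slice(-8)}`
}
```

Reuse detection then reduces to "does a sibling job already carry this branch name", as the queries above implement.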
@@ -152,6 +189,32 @@ export async function attachWorktreeToJob(
branchName,
reuseBranch: reused,
})
// PBI-47 (P0): capture base_sha so verify_task_against_plan can diff
// against the claim-time HEAD instead of origin/main. For reused branches
// (siblings already pushed), base_sha = SHA of the worktree HEAD now.
// For fresh branches, base_sha = origin/main HEAD which createWorktreeForJob
// just checked out.
let baseSha: string | null = null
try {
const { stdout } = await execFileP('git', ['rev-parse', 'HEAD'], { cwd: worktreePath })
baseSha = stdout.trim()
} catch (err) {
console.warn(`[attachWorktreeToJob] failed to resolve base_sha for ${jobId}:`, err)
}
// Persist branch + base_sha. update_job_status (prepareDoneUpdate) reads
// claudeJob.branch to push to the right ref — without this update it falls
// back to the legacy `feat/job-<8>` pattern and the push fails with
// "src refspec ... does not match any" for sprint/story strategy branches.
await prisma.claudeJob.update({
where: { id: jobId },
data: {
branch: actualBranch,
...(baseSha ? { base_sha: baseSha } : {}),
},
})
return { worktree_path: worktreePath, branch_name: actualBranch, reused_branch: reused }
} catch (err) {
await rollbackClaim(jobId)
@@ -171,40 +234,96 @@ const inputSchema = z.object({
const STALE_ERROR_MSG = 'agent did not complete job within 2 attempts'
export async function resetStaleClaimedJobs(userId: string): Promise<void> {
// PBI-50: lease-driven stale detection. Jobs in CLAIMED or RUNNING whose
// lease_until has expired (default 5 min, extended by job_heartbeat) are
// reset. Legacy jobs without lease_until fall back to the old
// claimed_at + 30-minute rule.
type StaleRow = {
id: string
task_id: string | null
product_id: string
kind: string
sprint_run_id: string | null
branch: string | null
}
const failedRows = await prisma.$queryRaw<StaleRow[]>`
UPDATE claude_jobs
SET status = 'FAILED',
    finished_at = NOW(),
    error = ${STALE_ERROR_MSG}
WHERE user_id = ${userId}
  AND status IN ('CLAIMED', 'RUNNING')
  AND retry_count >= 2
  AND (
lease_until < NOW()
OR (lease_until IS NULL AND claimed_at < NOW() - INTERVAL '30 minutes')
)
RETURNING id, task_id, product_id, kind::text AS kind, sprint_run_id, branch
`
const requeuedRows = await prisma.$queryRaw<
(StaleRow & { retry_count: number })[]
>`
UPDATE claude_jobs
SET status = 'QUEUED',
    claimed_by_token_id = NULL,
    claimed_at = NULL,
    plan_snapshot = NULL,
lease_until = NULL,
    retry_count = retry_count + 1
WHERE user_id = ${userId}
  AND status IN ('CLAIMED', 'RUNNING')
  AND retry_count < 2
  AND (
lease_until < NOW()
OR (lease_until IS NULL AND claimed_at < NOW() - INTERVAL '30 minutes')
)
RETURNING id, task_id, product_id, kind::text AS kind, sprint_run_id, branch, retry_count
`
if (failedRows.length === 0 && requeuedRows.length === 0) return
// PBI-9: release any product-worktree locks held by these stale jobs.
for (const j of failedRows) await releaseLocksOnTerminal(j.id)
for (const j of requeuedRows) await releaseLocksOnTerminal(j.id)
// PBI-50: for stale FAILED SPRINT_IMPLEMENTATION jobs — push the branch
// so the work is not lost (no mark-ready / PR promotion), and set
// SprintRun.failure_reason with a reference to the last RUNNING
// execution for diagnosis.
for (const j of failedRows.filter((r) => r.kind === 'SPRINT_IMPLEMENTATION')) {
if (j.branch && j.product_id) {
const repoRoot = await resolveRepoRoot(j.product_id).catch(() => null)
if (repoRoot) {
const worktreeDir = getWorktreeRoot()
const worktreePath = path.join(worktreeDir, j.id)
try {
await pushBranchForJob({ worktreePath, branchName: j.branch })
} catch (err) {
console.warn(`[stale-reset] push failed for stale sprint-job ${j.id}:`, err)
}
}
}
if (j.sprint_run_id) {
const lastRunning = await prisma.sprintTaskExecution.findFirst({
where: { sprint_job_id: j.id, status: 'RUNNING' },
orderBy: { order: 'desc' },
select: { order: true, task_id: true },
})
const reasonSuffix = lastRunning
? `, last execution: order ${lastRunning.order} task ${lastRunning.task_id}`
: ''
await prisma.sprintRun.update({
where: { id: j.sprint_run_id },
data: {
status: 'FAILED',
failure_reason: `stale: lease expired${reasonSuffix}`,
},
})
}
}
// Notify UI via SSE for each transition (best-effort)
try {
const pg = new Client({ connectionString: process.env.DATABASE_URL })
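The staleness rule applied by the two UPDATE statements above (expired lease, or the legacy 30-minute claimed_at fallback) can be stated as a pure predicate — a sketch with the hypothetical name `isStale`, not the actual implementation:

```typescript
// Sketch of the PBI-50 staleness rule: a CLAIMED/RUNNING job is stale when
// its lease has expired, or (for legacy rows without a lease) when it was
// claimed more than 30 minutes ago.
const THIRTY_MIN_MS = 30 * 60 * 1000

function isStale(
  job: { lease_until: Date | null; claimed_at: Date | null },
  now: Date = new Date(),
): boolean {
  if (job.lease_until) return job.lease_until.getTime() < now.getTime()
  if (job.claimed_at) return now.getTime() - job.claimed_at.getTime() > THIRTY_MIN_MS
  return false
}
```

A job_heartbeat call that bumps lease_until forward keeps the predicate false, which is exactly why long-running jobs survive the reset sweep.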
@@ -247,29 +366,54 @@ export async function tryClaimJob(
tokenId: string,
productId?: string,
): Promise<string | null> {
// Atomic claim in a single transaction — also captures plan_snapshot from task.
//
// PBI-50: the claim filter discriminates on cj.kind:
// - IDEA_GRILL/IDEA_MAKE_PLAN/PLAN_CHAT: standalone idea-jobs.
// - TASK_IMPLEMENTATION/SPRINT_IMPLEMENTATION: only via an active SprintRun
//   (status QUEUED or RUNNING). Legacy task-jobs without a sprint_run_id and
//   jobs in PAUSED/FAILED/CANCELLED/DONE SprintRuns are skipped.
//   The first claim of a still-QUEUED SprintRun moves it to RUNNING.
//
// PBI-50 lease: lease_until = NOW() + 5min on claim. resetStaleClaimedJobs
// resets jobs whose lease has expired.
const rows = await prisma.$transaction(async (tx) => {
// SELECT FOR UPDATE OF claude_jobs SKIP LOCKED — LEFT JOIN tasks so that
// idea-jobs (task_id IS NULL, M12) are found too. plan_snapshot then just
// stays NULL/'' for idea-jobs — not needed (no verify flow).
const found = productId
? await tx.$queryRaw<
Array<{ id: string; implementation_plan: string | null; sprint_run_id: string | null }>
>`
SELECT cj.id, t.implementation_plan, cj.sprint_run_id
FROM claude_jobs cj
LEFT JOIN tasks t ON t.id = cj.task_id
LEFT JOIN sprint_runs sr ON sr.id = cj.sprint_run_id
WHERE cj.user_id = ${userId}
  AND cj.product_id = ${productId}
  AND cj.status = 'QUEUED'
AND (
cj.kind IN ('IDEA_GRILL', 'IDEA_MAKE_PLAN', 'PLAN_CHAT')
OR (cj.kind IN ('TASK_IMPLEMENTATION', 'SPRINT_IMPLEMENTATION')
AND cj.sprint_run_id IS NOT NULL
AND sr.status IN ('QUEUED', 'RUNNING'))
)
ORDER BY cj.created_at ASC
LIMIT 1
FOR UPDATE OF cj SKIP LOCKED
`
: await tx.$queryRaw<
Array<{ id: string; implementation_plan: string | null; sprint_run_id: string | null }>
>`
SELECT cj.id, t.implementation_plan, cj.sprint_run_id
FROM claude_jobs cj
LEFT JOIN tasks t ON t.id = cj.task_id
LEFT JOIN sprint_runs sr ON sr.id = cj.sprint_run_id
WHERE cj.user_id = ${userId}
  AND cj.status = 'QUEUED'
AND (
cj.kind IN ('IDEA_GRILL', 'IDEA_MAKE_PLAN', 'PLAN_CHAT')
OR (cj.kind IN ('TASK_IMPLEMENTATION', 'SPRINT_IMPLEMENTATION')
AND cj.sprint_run_id IS NOT NULL
AND sr.status IN ('QUEUED', 'RUNNING'))
)
ORDER BY cj.created_at ASC
LIMIT 1
FOR UPDATE OF cj SKIP LOCKED
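The kind/SprintRun filter in both SELECTs above reduces to one boolean rule per row; restated as a pure predicate (hypothetical name `isClaimable`, the SQL is authoritative):

```typescript
// Sketch of the PBI-50 claim filter: idea-kind jobs are always claimable;
// implementation jobs only through an active (QUEUED/RUNNING) SprintRun.
type JobKind =
  | 'IDEA_GRILL' | 'IDEA_MAKE_PLAN' | 'PLAN_CHAT'
  | 'TASK_IMPLEMENTATION' | 'SPRINT_IMPLEMENTATION'

function isClaimable(kind: JobKind, sprintRun: { status: string } | null): boolean {
  if (kind === 'IDEA_GRILL' || kind === 'IDEA_MAKE_PLAN' || kind === 'PLAN_CHAT') return true
  // TASK_IMPLEMENTATION / SPRINT_IMPLEMENTATION: need an attached, active run.
  return sprintRun !== null && (sprintRun.status === 'QUEUED' || sprintRun.status === 'RUNNING')
}
```

A legacy task-job without a sprint_run_id maps to `sprintRun === null` and is therefore skipped, matching the `cj.sprint_run_id IS NOT NULL` clause.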
@@ -279,21 +423,36 @@ export async function tryClaimJob(
const jobId = found[0].id
const snapshot = found[0].implementation_plan ?? ''
const sprintRunId = found[0].sprint_run_id
await tx.$executeRaw`
UPDATE claude_jobs
SET status = 'CLAIMED',
    claimed_by_token_id = ${tokenId},
    claimed_at = NOW(),
    plan_snapshot = ${snapshot},
lease_until = NOW() + INTERVAL '5 minutes'
WHERE id = ${jobId}
`
// SprintRun QUEUED → RUNNING on first claim, inside the same tx so that
// concurrent claims cannot apply the transition twice (the UPDATE skips
// rows that are already RUNNING).
if (sprintRunId) {
await tx.$executeRaw`
UPDATE sprint_runs
SET status = 'RUNNING',
started_at = COALESCE(started_at, NOW()),
updated_at = NOW()
WHERE id = ${sprintRunId} AND status = 'QUEUED'
`
}
return [{ id: jobId }]
})
return rows.length > 0 ? rows[0].id : null
}
export async function getFullJobContext(jobId: string) {
const job = await prisma.claudeJob.findUnique({
where: { id: jobId },
include: {
@@ -310,23 +469,87 @@ async function getFullJobContext(jobId: string) {
idea: {
include: {
pbi: { select: { id: true, code: true, title: true } },
secondary_products: {
include: { product: { select: { id: true, repo_url: true } } },
},
},
},
product: {
select: {
id: true,
name: true,
repo_url: true,
definition_of_done: true,
preferred_model: true,
thinking_budget_default: true,
preferred_permission_mode: true,
},
},
},
})
if (!job) return null
// PBI-67: model + mode selection. Resolved at claim time; override cascade
// task.requires_opus → job.requested_* → product.preferred_* → kind default.
const config = resolveJobConfig(
{
kind: job.kind,
requested_model: job.requested_model,
requested_thinking_budget: job.requested_thinking_budget,
requested_permission_mode: job.requested_permission_mode,
},
{
preferred_model: job.product.preferred_model,
thinking_budget_default: job.product.thinking_budget_default,
preferred_permission_mode: job.product.preferred_permission_mode,
},
job.task ? { requires_opus: job.task.requires_opus } : undefined,
)
// M12: branch on kind. Idea-jobs have no task/story/pbi/sprint; they
// carry idea + embedded prompt_text instead.
if (job.kind === 'IDEA_GRILL' || job.kind === 'IDEA_MAKE_PLAN' || job.kind === 'IDEA_REVIEW_PLAN') {
if (!job.idea) return null
const { idea } = job
const { getIdeaPromptText } = await import('../lib/kind-prompts.js')
// Setup persistent product-worktrees for this idea-job (PBI-9).
// Primary product is gated by repo_url via resolveRepoRoot returning null.
// Secondary products from IdeaProduct[] need explicit repo_url filter.
const involvedProductIds: string[] = []
if (idea.product_id) involvedProductIds.push(idea.product_id)
for (const ip of idea.secondary_products ?? []) {
if (ip.product?.repo_url && !involvedProductIds.includes(ip.product_id)) {
involvedProductIds.push(ip.product_id)
}
}
// PBI-49 P1: rollback the claim if worktree setup fails so the job
// doesn't hang in CLAIMED until the 30-min stale-reset, and any partial
// locks are released. Mirrors attachWorktreeToJob's task-path behaviour.
let worktrees: Array<{ productId: string; worktreePath: string }> = []
if (involvedProductIds.length > 0) {
try {
worktrees = await setupProductWorktrees(
job.id,
involvedProductIds,
(pid) => resolveRepoRoot(pid),
)
} catch (err) {
console.warn(
`[wait-for-job] product-worktree setup failed for idea-job ${job.id}; rolling back claim:`,
err,
)
await releaseLocksOnTerminal(job.id)
await rollbackClaim(job.id)
return null
}
}
return {
job_id: job.id,
kind: job.kind,
status: 'claimed',
config,
idea: {
id: idea.id,
code: idea.code,
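The PBI-67 override cascade resolved above can be illustrated for the model dimension alone — a sketch with hypothetical names; the real resolveJobConfig also covers thinking budget and permission mode, and its kind defaults are not shown in this diff:

```typescript
// Sketch of the PBI-67 cascade:
// task.requires_opus → job.requested_model → product.preferred_model → kind default.
function resolveModelSketch(
  requestedModel: string | null,
  productPreferred: string | null,
  task?: { requires_opus: boolean },
  kindDefault = 'sonnet', // assumed default, not taken from the source
): string {
  if (task?.requires_opus) return 'opus' // hard override wins over everything
  return requestedModel ?? productPreferred ?? kindDefault
}
```

Each `??` step only fires when every higher-priority source is absent, which is the whole point of resolving the config once at claim time.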
@@ -346,7 +569,188 @@ async function getFullJobContext(jobId: string) {
pbi: idea.pbi,
repo_url: job.product.repo_url,
prompt_text: getIdeaPromptText(job.kind),
branch_suggestion: `feat/idea-${idea.code.toLowerCase()}-${(() => {
if (job.kind === 'IDEA_GRILL') return 'grill'
if (job.kind === 'IDEA_REVIEW_PLAN') return 'review'
return 'plan'
})()}`,
product_worktrees: worktrees.map((w) => ({
product_id: w.productId,
worktree_path: w.worktreePath,
})),
primary_worktree_path: worktrees[0]?.worktreePath ?? null,
}
}
// PBI-50: SPRINT_IMPLEMENTATION — single-session sprint runner.
// One ClaudeJob per SprintRun handles all TO_DO tasks sequentially.
// On claim: create a frozen scope snapshot via SprintTaskExecution rows,
// resolve the worktree (fresh branch, or reused via previous_run_id),
// capture base_sha. The worker operates exclusively on this frozen snapshot.
if (job.kind === 'SPRINT_IMPLEMENTATION') {
if (!job.sprint_run_id) {
await rollbackClaim(job.id)
return null
}
const sprintRun = await prisma.sprintRun.findUnique({
where: { id: job.sprint_run_id },
include: {
sprint: {
include: {
product: true,
stories: {
where: { status: { not: 'DONE' } },
include: {
pbi: {
select: { id: true, code: true, title: true, priority: true, sort_order: true, status: true },
},
tasks: {
where: { status: 'TO_DO' },
orderBy: [{ priority: 'asc' }, { sort_order: 'asc' }],
},
},
orderBy: [{ priority: 'asc' }, { sort_order: 'asc' }],
},
},
},
},
})
if (!sprintRun) {
await rollbackClaim(job.id)
return null
}
const repoRoot = await resolveRepoRoot(sprintRun.sprint.product_id)
if (!repoRoot) {
await rollbackClaim(job.id)
return null
}
// Branch resolution: previous_run_id + branch → reuse; otherwise fresh.
const isResume = !!(sprintRun.previous_run_id && sprintRun.branch)
const branchName = isResume
? sprintRun.branch!
: `feat/sprint-${job.sprint_run_id.slice(-8)}`
let worktreePath: string
let baseSha: string
try {
const wt = await createWorktreeForJob({
repoRoot,
jobId: job.id,
branchName,
reuseBranch: isResume,
})
worktreePath = wt.worktreePath
const { stdout: headSha } = await execFileP('git', ['rev-parse', 'HEAD'], {
cwd: worktreePath,
})
baseSha = headSha.trim()
} catch (err) {
console.warn(`[wait-for-job] sprint-worktree setup failed for ${job.id}:`, err)
await rollbackClaim(job.id)
return null
}
// Collect the ordered tasks into a flat list, preserving order
const orderedTasks = sprintRun.sprint.stories.flatMap((s) =>
s.tasks.map((t) => ({ ...t, story_pbi_id: s.pbi.id })),
)
// Persist branch + base_sha + scope snapshot in a single transaction
await prisma.$transaction([
prisma.claudeJob.update({
where: { id: job.id },
data: { branch: branchName, base_sha: baseSha },
}),
prisma.sprintTaskExecution.createMany({
data: orderedTasks.map((t, idx) => ({
sprint_job_id: job.id,
task_id: t.id,
order: idx,
plan_snapshot: t.implementation_plan ?? '',
verify_required_snapshot: t.verify_required,
verify_only_snapshot: t.verify_only,
base_sha: idx === 0 ? baseSha : null,
status: 'PENDING' as const,
})),
}),
prisma.sprintRun.update({
where: { id: job.sprint_run_id },
data: { branch: branchName },
}),
])
// Look up execution_ids in order for the response
const executions = await prisma.sprintTaskExecution.findMany({
where: { sprint_job_id: job.id },
orderBy: { order: 'asc' },
select: { id: true, task_id: true, order: true, base_sha: true },
})
const execIdByTaskId = new Map(executions.map((e) => [e.task_id, e.id]))
// Dedupe PBIs from the stories (one PBI can have multiple stories)
const pbiMap = new Map<string, typeof sprintRun.sprint.stories[number]['pbi']>()
for (const s of sprintRun.sprint.stories) pbiMap.set(s.pbi.id, s.pbi)
return {
job_id: job.id,
kind: job.kind,
status: 'claimed',
config,
sprint: {
id: sprintRun.sprint.id,
sprint_goal: sprintRun.sprint.sprint_goal,
status: sprintRun.sprint.status,
},
sprint_run: {
id: sprintRun.id,
pr_strategy: sprintRun.pr_strategy,
branch: branchName,
previous_run_id: sprintRun.previous_run_id,
},
product: {
id: sprintRun.sprint.product.id,
name: sprintRun.sprint.product.name,
repo_url: sprintRun.sprint.product.repo_url,
definition_of_done: sprintRun.sprint.product.definition_of_done,
auto_pr: sprintRun.sprint.product.auto_pr,
},
pbis: Array.from(pbiMap.values()).map((p) => ({
id: p.id,
code: p.code,
title: p.title,
priority: p.priority,
sort_order: p.sort_order,
status: p.status,
})),
stories: sprintRun.sprint.stories.map((s) => ({
id: s.id,
code: s.code,
title: s.title,
pbi_id: s.pbi_id,
priority: s.priority,
sort_order: s.sort_order,
status: s.status,
})),
task_executions: orderedTasks.map((t, idx) => ({
execution_id: execIdByTaskId.get(t.id)!,
task_id: t.id,
code: t.code,
title: t.title,
story_id: t.story_id,
order: idx,
plan_snapshot: t.implementation_plan ?? '',
verify_required: t.verify_required,
verify_only: t.verify_only,
base_sha: idx === 0 ? baseSha : null,
})),
worktree_path: worktreePath,
branch_name: branchName,
repo_url: sprintRun.sprint.product.repo_url,
base_sha: baseSha,
heartbeat_interval_seconds: 60,
}
}
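The frozen-scope construction above (flatMap over sorted stories, flat index as execution order, base_sha only on the first row) can be sketched standalone — `freezeScope` is a hypothetical helper name, not a function in the source:

```typescript
// Sketch: stories (already ordered by priority/sort_order) flatten into one
// task list; the flat index becomes SprintTaskExecution.order, and only
// execution 0 carries the claim-time base_sha.
function freezeScope<T extends { id: string }>(
  stories: Array<{ tasks: T[] }>,
  baseSha: string,
): Array<{ task_id: string; order: number; base_sha: string | null }> {
  return stories
    .flatMap((s) => s.tasks)
    .map((t, idx) => ({
      task_id: t.id,
      order: idx,
      base_sha: idx === 0 ? baseSha : null,
    }))
}
```

Because the snapshot is written in the same transaction as branch + base_sha, later edits to tasks or stories cannot change what the running sprint job works on.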
@@ -360,6 +764,7 @@ async function getFullJobContext(jobId: string) {
job_id: job.id,
kind: job.kind,
status: 'claimed',
config,
task: {
id: task.id,
title: task.title,


@@ -5,12 +5,16 @@ export interface ClassifyResult {
reasoning: string
}
// Extract changed file paths from a unified diff. Reads both "+++ b/<path>"
// (created/modified files) and "--- a/<path>" (deleted/modified files), so
// pure-delete commits (where +++ is /dev/null) are still recognised.
function extractDiffPaths(diff: string): string[] {
const paths = new Set<string>()
for (const line of diff.split('\n')) {
const plus = line.match(/^\+\+\+ b\/(.+)$/)
if (plus && plus[1].trim() !== '/dev/null') paths.add(plus[1].trim())
const minus = line.match(/^--- a\/(.+)$/)
if (minus && minus[1].trim() !== '/dev/null') paths.add(minus[1].trim())
}
return [...paths]
}
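Restated standalone, the patched extraction behaves as follows on a diff containing a pure delete (where the `+++` side is `/dev/null` and only the `--- a/` header names the file):

```typescript
// Standalone copy of the patched extractDiffPaths: collects paths from both
// "+++ b/" and "--- a/" headers, skipping /dev/null on either side.
function extractDiffPaths(diff: string): string[] {
  const paths = new Set<string>()
  for (const line of diff.split('\n')) {
    const plus = line.match(/^\+\+\+ b\/(.+)$/)
    if (plus && plus[1].trim() !== '/dev/null') paths.add(plus[1].trim())
    const minus = line.match(/^--- a\/(.+)$/)
    if (minus && minus[1].trim() !== '/dev/null') paths.add(minus[1].trim())
  }
  return [...paths]
}
```

Git writes `--- /dev/null` (no `a/` prefix) for created files and `+++ /dev/null` for deleted ones, so neither regex matches the null side; the Set dedupes modified files that appear in both headers.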
@@ -23,7 +27,7 @@ function extractPlanPaths(plan: string): string[] {
let m: RegExpExecArray | null
while ((m = backtickRe.exec(plan)) !== null) {
const p = m[1].trim()
if (looksLikePath(p)) paths.add(p)
}
const bulletRe = /^[-*]\s+\*{0,2}([^\s*][^\s]*)\.([a-zA-Z]{1,6})\*{0,2}\s*[:\n]/gm
@@ -34,6 +38,20 @@ function extractPlanPaths(plan: string): string[] {
return [...paths]
}
// Heuristic: does this backtick-quoted token look like a file path?
// Excludes code-snippets like `data-debug-label="..."`, `foo()`, `<div>` —
// anything containing operator/quote/bracket chars or an ellipsis is rejected.
// Accepts paths with a slash (multi-segment) or a recognisable file-extension
// suffix (1-6 alphanumeric chars after a final dot, e.g. `.tsx`, `.json`).
function looksLikePath(p: string): boolean {
if (p.length <= 3) return false
if (p.includes(' ')) return false
if (/[="'<>()[\]{};,]/.test(p)) return false
if (/\.{2,}/.test(p)) return false
if (!p.includes('/') && !/\.[a-zA-Z][a-zA-Z0-9]{0,5}$/.test(p)) return false
return true
}
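Copied standalone, the heuristic accepts plain file paths and rejects the code-snippet shapes named in the comment:

```typescript
// Standalone copy of looksLikePath: accepts multi-segment paths or bare
// filenames with a recognisable extension; rejects tokens with operator,
// quote, or bracket characters, ellipses, spaces, or fewer than 4 chars.
function looksLikePath(p: string): boolean {
  if (p.length <= 3) return false
  if (p.includes(' ')) return false
  if (/[="'<>()[\]{};,]/.test(p)) return false
  if (/\.{2,}/.test(p)) return false
  if (!p.includes('/') && !/\.[a-zA-Z][a-zA-Z0-9]{0,5}$/.test(p)) return false
  return true
}
```

So `src/verify/classify.ts` and bare `classify.ts` pass, while `foo()`, `data-debug-label="..."`, and `a..b` are all rejected.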
// Path match: exact or suffix match so "classify.ts" matches "src/verify/classify.ts".
function pathMatches(planPath: string, diffPaths: string[]): boolean {
const norm = planPath.replace(/\\/g, '/')

vendor/scrum4me vendored

@@ -1 +1 @@
-Subproject commit 555ed8fe89f0a3c9e52098fa0590ab8ba16e357a
+Subproject commit 7bb252c528d810584bcb46a56cff3d26ebf392ff