docs: AI-optimized docs restructure (Phases 1–8) (#61)

* docs(dialog-pattern): add generic entity-dialog spec

Introduces docs/patterns/dialog.md as the source of truth for every
create/edit/detail dialog in Scrum4Me, regardless of the underlying
data object. Contains 14 sections: principles, stack, component
architecture, layout, validation, three-layer demo policy, submission,
dialog behaviour, theming, footer, triggers/URL state, per-entity
profile template, out-of-scope, and a verification checklist.

Registers the pattern in the CLAUDE.md "Implementatiepatronen" table
so that Claude (and humans) are required to consult the spec for every
new dialog.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(dialog-pattern): convert task spec + add pbi/story entity-profiles

Reduces docs/scrum4me-task-dialog.md from 507 to ~140 lines: all
shared rules moved to docs/patterns/dialog.md; this document now
contains only Task-specific fields, the URL pattern, the status field,
server actions, triggers, and deliberate out-of-scope choices.

Adds two new entity profiles for existing dialogs:
- docs/scrum4me-pbi-dialog.md (PbiDialog: state-based, code+title row,
  PbiStatusSelect, no delete in v1)
- docs/scrum4me-story-dialog.md (StoryDialog: state-based, header with
  status/priority badges, inline activity log, demo read-only fallback,
  inline delete confirm instead of AlertDialog)

Both profiles explicitly document their "known gaps relative to the
generic spec" so that follow-up PRs can either fix the deviations or
deliberately sign off on them.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* Add development docs

* docs(plans): add docs-restructure plan for AI-optimized lookup

Audit of the existing 39 doc files (~10,700 lines) and a phased
restructure proposal aimed at minimising the tokens an AI agent has to
read to find the right reference. Captures resolved decisions on
language (English), ADR template (Nygard default with MADR escape
hatch), index generator (Node script), and folder taxonomy. Plan
status: Proposal; phase 1 to follow.

* docs(adr): add ADR scaffolding (templates, README, meta-ADR)

Set up docs/adr/ as the canonical home for architecture decisions:

- templates/nygard.md — default four-section format (Status, Context,
  Decision, Consequences) for one-way-door decisions.
- templates/madr.md — MADR v4 with YAML front-matter and explicit
  Considered Options for decisions where rejected alternatives matter.
- README.md — naming convention (NNNN-kebab-case), template-selection
  guidance (Nygard default; MADR for auth, queue mechanics, agent
  integration), status lifecycle, and ADR roster.
- 0000-record-architecture-decisions.md — meta-ADR establishing the
  practice itself, in Nygard format.

Backfilling existing implicit decisions (base-ui-over-radix, float
sort_order, demo-user three-layer policy, etc.) is phase 6 of the
docs-restructure plan.
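As a hedged sketch of the naming convention above (the regex is an
illustrative assumption, not code from this PR), a filename check for
the NNNN-kebab-case rule might look like:

```javascript
// Validate ADR filenames against the NNNN-kebab-case convention
// described in docs/adr/README.md (regex is assumed, illustrative only).
const ADR_NAME_RE = /^\d{4}-[a-z0-9]+(?:-[a-z0-9]+)*\.md$/;

function isValidAdrName(filename) {
  return ADR_NAME_RE.test(filename);
}

console.log(isValidAdrName('0000-record-architecture-decisions.md')); // true
console.log(isValidAdrName('0001-base-ui-over-radix.md'));            // true
console.log(isValidAdrName('base-ui-over-radix.md'));                 // false
```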

* feat(docs): add docs index generator + initial INDEX.md

scripts/generate-docs-index.mjs walks docs/**/*.md, parses YAML
front-matter (or first H1 fallback) and a Nygard-style ## Status
section, then writes docs/INDEX.md with grouped tables for ADRs,
Specs, Plans (with archive subsection), Patterns, and Other.

Pure Node 20 (no external deps); idempotent — running it twice
produces byte-identical output. Excludes adr/templates/, the ADR
README, INDEX.md itself, and any *_*.md sidecar file.
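For illustration, a front-matter block the generator can read might
look like this (the values are hypothetical; the script only consults
title, status, and date/last_updated, falling back to the first H1 and
a Nygard-style ## Status section):

```markdown
---
title: Dialog pattern
status: Accepted
last_updated: 2026-05-01
---

# Dialog pattern
```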

Wire-up:
- package.json: docs:index → node scripts/generate-docs-index.mjs

Initial run indexed 35 docs across the existing structure; the
generated INDEX.md is committed so the table is reviewable in the
PR before hooking generation into a pre-commit step.

* chore: ignore Obsidian vault and personal sidecar files

Add .obsidian/ (Obsidian vault config) and _*.md (personal sidecar
notes) to .gitignore so the docs/ tree can serve as canonical source
of truth while still being usable as an Obsidian vault for personal
authoring. The docs index generator already excludes the same _*.md
pattern from INDEX.md.
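The additions described above amount to two .gitignore entries:

```gitignore
# Obsidian vault config (docs/ doubles as a personal vault)
.obsidian/

# Personal sidecar notes; the index generator skips the same pattern
_*.md
```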

* docs(plans): add PBI bulk-create spec for docs-restructure

Machine-parseable spec for an executor that calls the scrum4me MCP
(create_pbi → create_story → create_task) to seed the docs-restructure
work into the DB.

- Section 1 (Context) is the PBI description; serves as task-context
  via mcp__scrum4me__get_claude_context.
- Section 2 lists the 6 resolved decisions (English, MD3+styling
  merged, solo-paneel merged, .Plans archived, Nygard ADR default,
  node index script).
- Section 3 records what already shipped on this branch so the
  executor doesn't duplicate the ADR scaffolding or index generator.
- Section 4 carries the structured YAML graph: 1 PBI, 8 stories
  (one per phase), 39 tasks. product_id is REPLACE_ME — fill before
  running.
- YAML validated with PyYAML; field schema sanity-checked.
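A hedged sketch of the Section 4 graph shape (field names are
illustrative assumptions; only product_id: REPLACE_ME and the
1 PBI / 8 stories / 39 tasks shape come from the spec itself):

```yaml
pbi:
  product_id: REPLACE_ME          # fill before running
  title: Docs restructure         # illustrative
  stories:                        # 8 stories, one per phase
    - title: "Phase 1: junk cleanup"          # illustrative
      tasks:                      # 39 tasks in total across stories
        - title: Remove stub patterns/test.md # illustrative
```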

* docs(junk-cleanup): remove stub patterns/test.md

* docs(junk-cleanup): archive .Plans/ to docs/plans/archive/

* docs(front-matter): add YAML front-matter to docs/ root

* docs(front-matter): add YAML front-matter to patterns/

* docs(front-matter): add YAML front-matter to plans + agent files

* docs(index): regenerate INDEX.md after front-matter pass

* docs(naming): drop scrum4me- prefix from doc filenames

* docs(naming): lowercase API.md and MD3 filenames

* docs(naming): rename plan file to kebab-case ASCII

* docs(naming): rename middleware.md to proxy.md (next 16)

* docs(naming): polish CLAUDE.md doc-index after renames

* docs(taxonomy): scaffold topical folders under docs/

* docs(taxonomy): move spec files into docs/specs/

* docs(taxonomy): move design/api/qa/backlog/assets into folders

* docs(taxonomy): move agent-instruction-audit into decisions/

* docs(split): break architecture.md into 6 topical files

* docs(split): merge solo-paneel-spec into specs/functional.md

* docs(split): merge md3-color-scheme into design/styling

* docs(trim): extract branch/commit rules into runbook

* docs(trim): extract MCP integration into runbook

* docs(adr): add 0001-base-ui-over-radix

* docs(adr): add 0002-float-sort-order

* docs(adr): add 0003-one-branch-per-milestone

* docs(adr): add 0004-status-enum-mapping

* docs(adr): add 0005-iron-session-over-nextauth

* docs(adr): add 0006-demo-user-three-layer-policy

* docs(adr): add 0007-claude-question-channel-design

* docs(adr): add 0008-agent-instructions-in-claude-md + update README index

* docs(index): regenerate after ADR 0001-0008

* docs(glossary): add docs/glossary.md

* chore(docs): regenerate INDEX.md in pre-commit hook

* docs(readme): link INDEX + glossary + agent instructions

* feat(docs): add doc-link checker script

* chore(docs): wire docs:check-links and docs npm scripts

* ci(docs): block merge on broken doc links

* docs(links): fix broken cross-references after restructure

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Committed by Janpeter Visser on 2026-05-03 03:21:59 +02:00 via GitHub
parent 289bcf9bf0
commit 7e45bbdbc0
81 changed files with 12364 additions and 3154 deletions

scripts/check-doc-links.mjs (new file, 115 lines)
#!/usr/bin/env node
/**
 * Doc-link checker: walks docs/ (and README.md, CLAUDE.md, AGENTS.md),
 * extracts relative markdown links, and verifies that every target file
 * (and optional #anchor) actually exists.
 *
 * Exits 0 if all links are valid, 1 if any are broken.
 */
import { readFileSync, existsSync, readdirSync, statSync } from 'fs';
import { resolve, dirname, extname } from 'path';
import { fileURLToPath } from 'url';

const __dirname = dirname(fileURLToPath(import.meta.url));
const ROOT = resolve(__dirname, '..');

// Collect all .md files under a directory recursively
function collectMd(dir) {
  const results = [];
  for (const entry of readdirSync(dir)) {
    const full = resolve(dir, entry);
    const stat = statSync(full);
    if (stat.isDirectory()) {
      results.push(...collectMd(full));
    } else if (extname(entry) === '.md') {
      results.push(full);
    }
  }
  return results;
}

// Convert a heading text to a GitHub-style anchor slug
function toSlug(text) {
  return text
    .toLowerCase()
    .replace(/[^\w\s-]/g, '')
    .trim()
    .replace(/\s+/g, '-');
}

// Extract all heading slugs from a markdown file
function headingSlugs(filePath) {
  const content = readFileSync(filePath, 'utf8');
  const slugs = new Set();
  for (const line of content.split('\n')) {
    const m = line.match(/^#{1,6}\s+(.+)/);
    if (m) slugs.add(toSlug(m[1]));
  }
  return slugs;
}

const LINK_RE = /\[(?:[^\]]*)\]\(([^)]+)\)/g;

function checkFile(filePath) {
  const content = readFileSync(filePath, 'utf8');
  const failures = [];
  let m;
  while ((m = LINK_RE.exec(content)) !== null) {
    const raw = m[1];
    // Skip external links and anchors-only
    if (/^https?:\/\//.test(raw) || /^mailto:/.test(raw) || raw.startsWith('#')) continue;
    const [pathPart, anchor] = raw.split('#');
    const target = resolve(dirname(filePath), pathPart);
    if (!existsSync(target)) {
      failures.push({ file: filePath, link: raw, reason: 'file not found' });
      continue;
    }
    if (anchor) {
      const slugs = headingSlugs(target);
      if (!slugs.has(anchor)) {
        failures.push({ file: filePath, link: raw, reason: `anchor #${anchor} not found` });
      }
    }
  }
  return failures;
}

const roots = [
  resolve(ROOT, 'docs'),
  resolve(ROOT, 'README.md'),
  resolve(ROOT, 'CLAUDE.md'),
  resolve(ROOT, 'AGENTS.md'),
];

const files = [];
for (const r of roots) {
  if (!existsSync(r)) continue;
  const stat = statSync(r);
  if (stat.isDirectory()) {
    files.push(...collectMd(r));
  } else {
    files.push(r);
  }
}

const allFailures = [];
for (const f of files) {
  allFailures.push(...checkFile(f));
}

if (allFailures.length === 0) {
  console.log(`✓ All doc links valid (${files.length} files checked)`);
  process.exit(0);
} else {
  console.error(`\n✗ Broken doc links (${allFailures.length}):\n`);
  for (const { file, link, reason } of allFailures) {
    const rel = file.replace(ROOT + '/', '');
    console.error(`  ${rel}\n${link} (${reason})`);
  }
  console.error('');
  process.exit(1);
}

scripts/generate-docs-index.mjs (new file, 277 lines)
#!/usr/bin/env node
// Generate docs/INDEX.md from the front-matter and headings of every
// .md file under docs/. Pure Node 20 — no external dependencies.
//
// Usage: `npm run docs:index` (or `node scripts/generate-docs-index.mjs`).
//
// Idempotent: rewriting INDEX.md from the same inputs produces identical
// output (apart from the generation date in the header), so the script
// is safe to run repeatedly and in pre-commit hooks.
import { readdir, readFile, writeFile } from 'node:fs/promises';
import { join, relative, basename, sep } from 'node:path';
import { fileURLToPath } from 'node:url';

const SCRIPT_DIR = fileURLToPath(new URL('.', import.meta.url));
const REPO_ROOT = join(SCRIPT_DIR, '..');
const DOCS_DIR = join(REPO_ROOT, 'docs');
const INDEX_PATH = join(DOCS_DIR, 'INDEX.md');

// Paths (relative to repo root, forward-slashed) that the index should
// skip entirely. Templates and archived plans aren't useful in the live
// roster; sidecar files prefixed with `_` are personal Obsidian scratch.
const EXCLUDE_PATTERNS = [
  /^docs\/adr\/templates\//,
  /^docs\/adr\/README\.md$/,
  /\/_[^/]+\.md$/,
  /^docs\/INDEX\.md$/,
];

async function walk(dir) {
  const entries = await readdir(dir, { withFileTypes: true });
  const files = [];
  for (const e of entries) {
    const full = join(dir, e.name);
    if (e.isDirectory()) {
      files.push(...(await walk(full)));
    } else if (e.isFile() && e.name.endsWith('.md')) {
      files.push(full);
    }
  }
  return files;
}

// Minimal YAML front-matter parser. Front-matter in this repo is restricted
// to flat `key: value` pairs, so a hand-rolled parser is enough — and
// keeps the script dependency-free.
function parseFrontMatter(content) {
  if (!content.startsWith('---\n')) return { data: {}, body: content };
  const end = content.indexOf('\n---\n', 4);
  if (end === -1) return { data: {}, body: content };
  const block = content.slice(4, end);
  const data = {};
  for (const raw of block.split('\n')) {
    const line = raw.trim();
    if (!line || line.startsWith('#')) continue;
    const m = line.match(/^([A-Za-z][\w-]*)\s*:\s*(.*?)\s*$/);
    if (!m) continue;
    let val = m[2];
    if (
      (val.startsWith('"') && val.endsWith('"')) ||
      (val.startsWith("'") && val.endsWith("'"))
    ) {
      val = val.slice(1, -1);
    }
    data[m[1]] = val;
  }
  return { data, body: content.slice(end + 5) };
}

function extractFirstH1(text) {
  const m = text.match(/^#\s+(.+?)\s*$/m);
  return m ? m[1] : null;
}

// For Nygard-style ADRs the status lives under a `## Status` heading
// instead of YAML front-matter. Pull the first non-empty line after the
// heading so the index can still show it.
function extractStatusSection(text) {
  const m = text.match(/^##\s+Status\s*\n+([^\n#].*?)(?:\n|$)/m);
  return m ? m[1].trim() : null;
}

function isExcluded(relPath) {
  return EXCLUDE_PATTERNS.some((rx) => rx.test(relPath));
}

// Map a path under docs/ to one of the four named sections, or "Other".
// Folder-based first; root-level docs fall back to a name-prefix rule
// so legacy `scrum4me-*.md` files still surface under Specs until the
// docs-restructure migrates them into `docs/specs/`.
function categorize(relPath) {
  const parts = relPath.split('/');
  if (parts[0] !== 'docs') return 'Other';
  if (parts.length === 2) {
    return /^scrum4me-/.test(parts[1]) ? 'Specs' : 'Other';
  }
  const sub = parts[1];
  if (sub === 'adr') return 'ADRs';
  if (sub === 'specs') return 'Specs';
  if (sub === 'plans') return 'Plans';
  if (sub === 'patterns') return 'Patterns';
  return 'Other';
}

function adrNumber(filename) {
  const m = filename.match(/^(\d{4})-/);
  return m ? parseInt(m[1], 10) : null;
}

function escapePipe(s) {
  return String(s).replace(/\|/g, '\\|');
}

async function main() {
  const files = await walk(DOCS_DIR);
  const docs = [];
  for (const full of files) {
    const rel = relative(REPO_ROOT, full).split(sep).join('/');
    if (isExcluded(rel)) continue;
    const content = await readFile(full, 'utf8');
    const { data, body } = parseFrontMatter(content);
    const title =
      data.title || extractFirstH1(body) || basename(full, '.md');
    const status = data.status || extractStatusSection(body) || '';
    const date = data.date || data.last_updated || '';
    const linkPath = './' + rel.replace(/^docs\//, '');
    const category = categorize(rel);
    docs.push({
      rel,
      title,
      status,
      date,
      linkPath,
      category,
      basename: basename(full),
    });
  }

  const groups = { ADRs: [], Specs: [], Plans: [], Patterns: [], Other: [] };
  for (const d of docs) {
    if (groups[d.category]) groups[d.category].push(d);
  }
  groups.ADRs.sort((a, b) => {
    const na = adrNumber(a.basename) ?? 9999;
    const nb = adrNumber(b.basename) ?? 9999;
    if (na !== nb) return na - nb;
    return a.basename.localeCompare(b.basename);
  });
  for (const k of ['Specs', 'Plans', 'Patterns', 'Other']) {
    groups[k].sort((a, b) => a.rel.localeCompare(b.rel));
  }

  const lines = [];
  lines.push(
    '<!-- Generated by scripts/generate-docs-index.mjs. Do not edit by hand. Run `npm run docs:index`. -->'
  );
  lines.push('');
  lines.push('# Documentation Index');
  lines.push('');
  lines.push(
    `Auto-generated on ${new Date().toISOString().slice(0, 10)} from front-matter and headings.`
  );
  lines.push('');

  // --- ADRs ---
  lines.push('## Architecture Decision Records');
  lines.push('');
  if (groups.ADRs.length === 0) {
    lines.push('_No ADRs yet._');
    lines.push('');
  } else {
    lines.push('| # | Title | Status |');
    lines.push('|---|---|---|');
    for (const d of groups.ADRs) {
      const n = adrNumber(d.basename);
      const num = n !== null ? String(n).padStart(4, '0') : '—';
      lines.push(
        `| ${num} | [${escapePipe(d.title)}](${d.linkPath}) | ${escapePipe(d.status || '—')} |`
      );
    }
    lines.push('');
  }

  // --- Specs ---
  lines.push('## Specifications');
  lines.push('');
  if (groups.Specs.length === 0) {
    lines.push('_No specs yet._');
    lines.push('');
  } else {
    lines.push('| Title | Status | Updated |');
    lines.push('|---|---|---|');
    for (const d of groups.Specs) {
      lines.push(
        `| [${escapePipe(d.title)}](${d.linkPath}) | ${escapePipe(d.status || '—')} | ${escapePipe(d.date || '—')} |`
      );
    }
    lines.push('');
  }

  // --- Plans (with archive subsection) ---
  lines.push('## Plans');
  lines.push('');
  const plansActive = groups.Plans.filter((d) => !d.rel.includes('/archive/'));
  const plansArchive = groups.Plans.filter((d) => d.rel.includes('/archive/'));
  if (plansActive.length === 0) {
    lines.push('_No active plans._');
    lines.push('');
  } else {
    lines.push('| Title | Status | Updated |');
    lines.push('|---|---|---|');
    for (const d of plansActive) {
      lines.push(
        `| [${escapePipe(d.title)}](${d.linkPath}) | ${escapePipe(d.status || '—')} | ${escapePipe(d.date || '—')} |`
      );
    }
    lines.push('');
  }
  if (plansArchive.length > 0) {
    lines.push('### Archive');
    lines.push('');
    lines.push('| Title | Updated |');
    lines.push('|---|---|');
    for (const d of plansArchive) {
      lines.push(
        `| [${escapePipe(d.title)}](${d.linkPath}) | ${escapePipe(d.date || '—')} |`
      );
    }
    lines.push('');
  }

  // --- Patterns ---
  lines.push('## Patterns');
  lines.push('');
  if (groups.Patterns.length === 0) {
    lines.push('_No patterns yet._');
    lines.push('');
  } else {
    lines.push('| Title | Status | Updated |');
    lines.push('|---|---|---|');
    for (const d of groups.Patterns) {
      lines.push(
        `| [${escapePipe(d.title)}](${d.linkPath}) | ${escapePipe(d.status || '—')} | ${escapePipe(d.date || '—')} |`
      );
    }
    lines.push('');
  }

  // --- Other (catches design/, api/, runbooks/, etc. until they get
  // dedicated sections after the docs-restructure) ---
  if (groups.Other.length > 0) {
    lines.push('## Other Docs');
    lines.push('');
    lines.push('| Title | Path | Status | Updated |');
    lines.push('|---|---|---|---|');
    for (const d of groups.Other) {
      lines.push(
        `| [${escapePipe(d.title)}](${d.linkPath}) | \`${d.rel.replace(/^docs\//, '')}\` | ${escapePipe(d.status || '—')} | ${escapePipe(d.date || '—')} |`
      );
    }
    lines.push('');
  }

  const out = lines.join('\n');
  await writeFile(INDEX_PATH, out, 'utf8');
  console.log(`Wrote ${relative(REPO_ROOT, INDEX_PATH)} (${docs.length} docs indexed)`);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});