CascadingDev - AI–Human Collaboration System
Process & Architecture Design Document (v2.1)
- Feature ID: FR_2025-10-21_initial-feature-request
- Status: Design Approved (Ready for Implementation)
- Date: 2025-10-21 (Updated: 2025-11-01)
- Owners: Rob (maintainer), AI_Moderator (process steward)
- Contributors: AI_Claude, AI_Deepseek, AI_Junie, AI_Chat-GPT, AI_GitHubCopilot
Implementation Status (2025-11-01)
✅ Currently Implemented (Milestone M0-M1)
- Stage 1: Request - Feature request creation and automation
- Stage 2: Feature Discussion - Discussion file generation, voting, summaries
- Cascading Rules System - .ai-rules.yml loading and merging
- AI Patch Generation - Claude API integration, diff application
- Vote Tracking - VOTE: line parsing, summary updates
- Pre-commit Hook - Automation orchestration, file staging
- Setup Script - Project initialization with Ramble GUI
🚧 In Progress (Current Focus)
- Stage 3: Design Discussion Gate - Currently skipped, being implemented
- Will create design.discussion.md when feature status = READY_FOR_DESIGN
- Will maintain design/design.md document
- Requires design_gate_writer and design_discussion_writer rules
📋 Planned (Milestones M2-M4)
- Stage 4: Implementation Discussion - Task tracking, human approval gates
- Stage 5: Testing Discussion - Test planning, bug sub-cycles
- Stage 6: Review Discussion - Final review, release promotion
- Stage 7: Release - Changelog generation, version tagging
- Moderator Protocol - Nudge system, escalation paths
- Bug Sub-Cycles - Integrated bug tracking within features
Table of Contents
- Executive Summary
- Repository Layouts
- Stage Model & Operational Procedure
- Voting, Quorum & Etiquette
- Cascading Rules System
- Orchestration Architecture
- Moderator Protocol
- Error Handling & Resilience
- Security & Secrets Management
- Performance & Scale Considerations
- Testing Strategy
- Implementation Plan
- Risks & Mitigations
- Template Evolution
- Roles & Agent Personas
- Glossary
- Appendices
Executive Summary
We are implementing a Git-native, rules-driven workflow that enables seamless collaboration between humans and multiple AI agents across the entire software development lifecycle. The system uses cascading .ai-rules.yml configurations and a thin Bash pre-commit hook to automatically generate and maintain development artifacts (discussions, design docs, reviews, diagrams, plans). A Python orchestrator provides structured checks and status reporting while preserving the fast Bash execution path.
Scope clarification: The document you are reading is the CascadingDev system design. It is not copied into user projects. End-users get a short USER_GUIDE.md and a create_feature.py tool; their first feature request defines the project, and its later design doc belongs to that project, not to CascadingDev.
Git-Native Philosophy: Every conversation, decision, and generated artifact lives in the same version-controlled environment as the source code. There are no external databases, dashboards, or SaaS dependencies required for the core workflow.
Objective:
Establish a reproducible, self-documenting workflow where 90% of documentation, status artifacts and code changes are generated automatically from development discussions, while maintaining human oversight for all promotion gates and releases.
Core Principles
- Lightweight & Fast: Everything stored in Git as Markdown; minimal external dependencies
- Single Source of Truth: Repository contains all conversations, decisions, and code artifacts
- Self-Driving with Human Safety: AI agents can propose and vote; humans must approve critical stages
- Deterministic & Reversible: All automated actions are diffed, logged, and easily revertible
- Composable Rules: Nearest-folder precedence via cascading .ai-rules.yml configurations
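Nearest-folder precedence can be sketched as follows. This is an illustrative sketch, not the shipped `automation/config.py` loader: `resolve_rules` and its in-memory `rule_files` mapping are hypothetical names, and real loading would parse each `.ai-rules.yml` with a YAML parser first.

```python
from pathlib import PurePosixPath

def resolve_rules(rule_files: dict, file_rel: str) -> dict:
    """Merge rules from repo root down to the file's own folder; deeper
    (nearer) .ai-rules.yml files override ancestor keys."""
    merged = {}
    parts = PurePosixPath(file_rel).parent.parts
    for depth in range(len(parts) + 1):
        directory = "/".join(parts[:depth]) or "."
        merged.update(rule_files.get(directory, {}))
    return merged

# Root rules apply everywhere; Docs/features overrides the writer rule.
rule_files = {
    ".": {"writer": "root_writer", "gate": "root_gate"},
    "Docs/features": {"writer": "feature_discussion_writer"},
}
resolved = resolve_rules(
    rule_files,
    "Docs/features/FR_2025-10-21_x/discussions/feature.discussion.md",
)
```

Because the merge walks root to leaf and updates key by key, a feature-level file only needs to declare the rules it overrides; everything else is inherited.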
Innovative Features
- Stage-Per-Discussion Model: Separate conversation threads for each development phase
- Automated Artifact Generation: Discussions automatically drive corresponding documentation
- Integrated Bug Sub-Cycles: Test failures automatically spawn bug reports with their own mini-lifecycle
- Intelligent Promotion Gates: Status-based transitions with configurable voting thresholds
- Multi-Agent Role Specialization: Different AI personas with stage-specific responsibilities
System Overview:
The architecture consists of a lightweight Bash pre-commit hook for artifact generation, a Python orchestrator for state evaluation and policy enforcement, and optional adapters for model and API integrations (Claude, Gitea, etc.). Together they form a layered, rule-driven automation stack.
Human → Git Commit → Pre-commit Hook → AI Generator → Markdown Artifact
↑ ↓
Orchestrator ← Discussion Summaries ← AI Moderator
Repository Layouts
This section clarifies three different directory structures that are easy to confuse:
Terminology
- CascadingDev Repo — The tooling project (this repository) that builds installers
- Install Bundle — The distributable artifact created by tools/build_installer.py
- User Project — A new repository scaffolded when a user runs the installer
A) CascadingDev Repository (Tooling Source)
This is the development repository where CascadingDev itself is maintained.
CascadingDev/ # This repository
├─ automation/ # Workflow automation scripts
│ ├─ runner.py # AI rules orchestrator invoked from hooks
│ ├─ config.py # Cascading .ai-rules loader
│ ├─ patcher.py # Diff generation + git apply helpers
│ └─ workflow.py # Vote parsing, status reporting
├─ src/cascadingdev/ # Core Python modules
│ ├─ cli.py # Developer CLI (cdev command)
│ ├─ setup_project.py # Installer script (copied to bundle)
│ └─ utils.py # Version management, utilities
├─ assets/ # Single source of truth for shipped files
│ ├─ hooks/
│ │ └─ pre-commit # Git hook template (bash script)
│ ├─ templates/ # Templates copied to user projects
│ │ ├─ USER_GUIDE.md # Daily usage guide
│ │ ├─ feature_request.md # Feature request template
│ │ ├─ feature.discussion.md # Discussion template
│ │ ├─ feature.discussion.sum.md # Summary template
│ │ ├─ design_doc.md # Design document template
│ │ ├─ root_gitignore # Root .gitignore template
│ │ ├─ process/
│ │ │ └─ policies.yml # Machine-readable policies
│ │ └─ rules/
│ │ ├─ root.ai-rules.yml # Root cascading rules
│ │ └─ features.ai-rules.yml # Feature-level rules
│ └─ runtime/ # Scripts copied to bundle & user projects
│ ├─ ramble.py # GUI for feature creation (PySide6/PyQt5)
│ ├─ create_feature.py # CLI for feature creation
│ └─ .gitignore.seed # Gitignore seed patterns
├─ tools/ # Build and test automation
│ ├─ build_installer.py # Creates install bundle
│ ├─ smoke_test.py # Basic validation tests
│ └─ bundle_smoke.py # End-to-end installer testing
├─ install/ # Build output directory (git-ignored)
│ └─ cascadingdev-<version>/ # Generated installer bundle (see section B)
├─ docs/ # System documentation
│ ├─ DESIGN.md # This comprehensive design document
│ └─ INSTALL.md # Installation instructions
├─ tests/ # Test suite (planned, not yet implemented)
│ ├─ unit/
│ ├─ integration/
│ └─ bin/
├─ VERSION # Semantic version (e.g., 0.1.0)
├─ pyproject.toml # Python package configuration
├─ README.md # Public-facing project overview
└─ CLAUDE.md # AI assistant guidance
> **Maintainer vs. user tooling:** the `cdev` CLI (in `src/cascadingdev/`) is only used to build/test the CascadingDev installer. Once a user bootstraps a project, all automation is driven by the pre-commit hook invoking `automation/runner.py` under the control of the project's own `.ai-rules.yml` files.
FUTURE (planned but not yet implemented):
├─ automation/                  # 🚧 M1: extends the existing orchestration layer
│  ├─ adapters/
│  │  ├─ claude_adapter.py      # AI model integration
│  │  └─ gitea_adapter.py       # Gitea API integration
│  └─ agents.yml                # Agent role definitions
Purpose: Development, testing, and building the installer. The assets/ directory is the single source of truth for all files shipped to users.
B) Install Bundle (Distribution Artifact)
This is the self-contained, portable installer created by tools/build_installer.py.
cascadingdev-<version>/ # Distributable bundle
├─ setup_cascadingdev.py # Installer entry point (stdlib only)
├─ ramble.py # GUI for first feature (optional)
├─ create_feature.py # CLI tool for creating features
├─ assets/ # Embedded resources
│ ├─ hooks/
│ │ └─ pre-commit # Pre-commit hook template
│ └─ templates/ # All templates from source assets/
│ ├─ USER_GUIDE.md
│ ├─ feature_request.md
│ ├─ feature.discussion.md
│ ├─ feature.discussion.sum.md
│ ├─ design_doc.md
│ ├─ root_gitignore
│ ├─ process/
│ │ └─ policies.yml
│ └─ rules/
│ ├─ root.ai-rules.yml
│ └─ features.ai-rules.yml
├─ INSTALL.md # Bundle-local instructions
└─ VERSION # Version metadata
Purpose: End-user distribution. Can be zipped and shared. Requires only Python 3.10+ stdlib (PySide6 optional for GUI).
Rationale: Minimal, auditable, portable. No external dependencies for core functionality. Users can inspect all files before running.
C) User Project (Generated by Installer)
This is the structure created when a user runs setup_cascadingdev.py --target /path/to/project.
my-project/ # User's application repository
├─ .git/ # Git repository
│ └─ hooks/
│ └─ pre-commit # Installed automatically from bundle
├─ .gitignore # Generated from root_gitignore template
├─ .ai-rules.yml # Root cascading rules (from templates/rules/)
├─ USER_GUIDE.md # Daily workflow reference
├─ ramble.py # Copied from bundle (optional GUI helper)
├─ create_feature.py # Copied from bundle (CLI tool)
├─ Docs/ # Documentation and feature tracking
│ ├─ features/ # All features live here
│ │ ├─ .ai-rules.yml # Feature-level cascading rules
│ │ └─ FR_YYYY-MM-DD_<slug>/ # Individual feature folders
│ │ ├─ request.md # Original feature request
│ │ └─ discussions/ # Stage-specific conversation threads
│ │ ├─ feature.discussion.md # Feature discussion
│ │ ├─ feature.discussion.sum.md # Auto-maintained summary
│ │ ├─ design.discussion.md # Design discussion
│ │ ├─ design.discussion.sum.md # Auto-maintained summary
│ │ ├─ implementation.discussion.md # Implementation tracking
│ │ ├─ implementation.discussion.sum.md
│ │ ├─ testing.discussion.md # Test planning
│ │ ├─ testing.discussion.sum.md
│ │ ├─ review.discussion.md # Final review
│ │ └─ review.discussion.sum.md
│ │ ├─ design/ # Design artifacts (created during design stage)
│ │ │ ├─ design.md # Evolving design document
│ │ │ └─ diagrams/ # Architecture diagrams
│ │ ├─ implementation/ # Implementation artifacts
│ │ │ ├─ plan.md # Implementation plan
│ │ │ └─ tasks.md # Task checklist
│ │ ├─ testing/ # Testing artifacts
│ │ │ ├─ testplan.md # Test strategy
│ │ │ └─ checklist.md # Test checklist
│ │ ├─ review/ # Review artifacts
│ │ │ └─ findings.md # Code review findings
│ │ └─ bugs/ # Bug sub-cycles (future)
│ │ └─ BUG_YYYYMMDD_<slug>/
│ │ ├─ report.md
│ │ ├─ discussion.md
│ │ └─ fix/
│ │ ├─ plan.md
│ │ └─ tasks.md
│ ├─ discussions/ # Global discussions (future)
│ │ └─ reviews/ # Code reviews from hook
│ └─ diagrams/ # Auto-generated diagrams (future)
│ └─ file_diagrams/ # PlantUML from source files
├─ process/ # Process configuration
│ ├─ policies.yml # Machine-readable policies (voting, gates)
│ └─ templates/ # Local template overrides (optional)
├─ src/ # User's application source code
│ └─ (user's code)
└─ tests/ # User's test suite
├─ unit/
└─ integration/
FUTURE (not currently created, planned for M1+):
├─ automation/ # 🚧 M1: Orchestration layer
│ ├─ workflow.py # Vote parsing, status reporting
│ ├─ adapters/ # Model and platform integrations
│ └─ agents.yml # Agent role configuration
Purpose: This is the user's actual project repository where they develop their application while using the CascadingDev workflow.
Key Points:
- The first feature request defines the entire project's purpose
- All discussions are version-controlled alongside code
- Pre-commit hook maintains summary files automatically
- Templates can be overridden locally in
process/templates/ - The
automation/directory is planned but not yet implemented (M1)
Installation & Distribution Architecture
First-Run Flow (User's Project Initialization)
User runs:
python setup_cascadingdev.py --target /path/to/users-project [--no-ramble] [--provider mock]
Installer actions:
- Creates standard folders (Docs/, process/templates/, etc.)
- Copies templates, ramble.py, and create_feature.py into the user project
- Initializes git (main branch), writes .gitignore
- Installs pre-commit hook
- Optionally launches Ramble (unless --no-ramble) to help collect first Feature Request
- Writes a concise USER_GUIDE.md into the user project root for day-to-day use
Seeds:
Docs/features/FR_<date>_initial-feature-request/request.md
Docs/features/.../discussions/feature.discussion.md
Docs/features/.../discussions/feature.discussion.sum.md
Initial commit message: “bootstrap Cascading Development scaffolding”.
Fallback: If Ramble JSON isn’t returned, installer prints to stderr and optionally falls back to terminal prompts.
Important: The CascadingDev DESIGN.md is not copied into user projects. The first feature’s design doc (created later at the design stage) becomes the project’s own design document.
Pre-Commit Hook (v1 behavior)
- Fast regex secret scan on staged diffs
- Ensures each *.discussion.md has a companion *.discussion.sum.md
- Non-blocking status call to automation/workflow.py --status
Policy: v1 is non-blocking; blocking checks are introduced gradually in later versions.
Template META System & Ramble Integration
Overview
CascadingDev includes a sophisticated template metadata system that allows templates to be self-describing. This enables dynamic GUI generation, field validation, and flexible template rendering without hardcoding form structures in the installer.
Status: ✅ Fully implemented (v0.1.0)
Template META Format
Templates can include JSON metadata inside HTML comments at the top of the file:
<!--META
{
"kind": "feature_request",
"ramble_fields": [
{"name": "Title", "hint": "camelCase, ≤24 chars", "default": "initialProjectDesign"},
{"name": "Intent"},
{"name": "Summary", "hint": "≤2 sentences"}
],
"criteria": {
"Title": "camelCase, <= 24 chars",
"Summary": "<= 2 sentences"
},
"hints": [
"What is it called?",
"Who benefits most?",
"What problem does it solve?"
],
"tokens": ["FeatureId", "CreatedDate", "Title", "Intent", "Summary"]
}
-->
# Feature Request: {Title}
**Intent**: {Intent}
**Summary**: {Summary}
**Meta**: FeatureId: {FeatureId} • Created: {CreatedDate}
META Fields Reference
| Field | Type | Purpose | Example |
|---|---|---|---|
| kind | string | Template type identifier | "feature_request" |
| ramble_fields | array | Field definitions for Ramble GUI | See below |
| criteria | object | Validation rules per field | {"Title": "camelCase, <= 24 chars"} |
| hints | array | User guidance prompts | ["What is it called?"] |
| tokens | array | List of available placeholder tokens | ["FeatureId", "Title"] |
ramble_fields Specification
Each field in ramble_fields is an object with:
{
"name": "FieldName", // Required: field identifier
"hint": "display hint", // Optional: shown to user as guidance
"default": "defaultValue" // Optional: pre-filled value
}
Example:
"ramble_fields": [
{"name": "Title", "hint": "camelCase, ≤24 chars", "default": "initialProjectDesign"},
{"name": "Intent"},
{"name": "ProblemItSolves"},
{"name": "BriefOverview"},
{"name": "Summary", "hint": "≤2 sentences"}
]
How META is Processed
In setup_project.py (src/cascadingdev/setup_project.py:64-115):
1. Parsing (load_template_with_meta()):

   meta, body = load_template_with_meta(template_path)
   # meta = {"ramble_fields": [...], "criteria": {...}, ...}
   # body = template text without META comment

2. Extraction (meta_ramble_config()):

   fields, defaults, criteria, hints = meta_ramble_config(meta)
   # fields = ["Title", "Intent", "Summary", ...]
   # defaults = {"Title": "initialProjectDesign"}
   # criteria = {"Title": "camelCase, <= 24 chars"}
   # hints = ["What is it called?", ...]

3. Rendering (render_placeholders()):

   values = {"Title": "myFeature", "FeatureId": "FR_2025-10-30_...", ...}
   rendered = render_placeholders(body, values)
   # Replaces {Token} and {{Token}} with actual values
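The META split can be sketched in a few lines. This is an illustrative sketch, not the shipped `load_template_with_meta()` in `setup_project.py`, which may handle more edge cases:

```python
import json
import re

# Matches an HTML comment of the form <!--META { ... } --> at the top of a template.
META_RE = re.compile(r"^\s*<!--META\s*(\{.*?\})\s*-->\s*", re.DOTALL)

def load_template_with_meta(text: str):
    """Split template text into (meta dict, body without the META comment)."""
    m = META_RE.match(text)
    if not m:
        return {}, text  # no META block: empty meta, body unchanged
    return json.loads(m.group(1)), text[m.end():]

tmpl = '<!--META\n{"kind": "feature_request", "tokens": ["Title"]}\n-->\n# Feature Request: {Title}\n'
meta, body = load_template_with_meta(tmpl)
```

The lazy `\{.*?\}` plus the trailing `-->` anchor lets the regex cope with nested braces inside the JSON object.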
Token Replacement Rules
The render_placeholders() function supports two-pass replacement:
- First pass: Replace {{Token}} (double braces) - for tokens that shouldn't be re-processed
- Second pass: Replace {Token} (single braces) using Python's .format_map()
System-provided tokens:
- {FeatureId} - Generated feature ID (e.g., FR_2025-10-30_initial-feature-request)
- {CreatedDate} - Current date in YYYY-MM-DD format
- {Title}, {Intent}, etc. - User-provided field values
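The two-pass rule can be sketched like this (illustrative only; the real `render_placeholders()` in `setup_project.py` may differ in details such as error handling):

```python
def render_placeholders(body: str, values: dict) -> str:
    """Two-pass token replacement: {{Token}} first, then {Token}."""
    # Pass 1: double-brace tokens, replaced literally so their values are
    # not re-processed by the format pass below.
    for name, val in values.items():
        body = body.replace("{{" + name + "}}", str(val))

    # Pass 2: single-brace tokens via format_map; unknown tokens survive as-is.
    class _KeepMissing(dict):
        def __missing__(self, key):
            return "{" + key + "}"

    return body.format_map(_KeepMissing(values))

out = render_placeholders(
    "# {Title} ({{FeatureId}}) created {CreatedDate}",
    {"Title": "myFeature", "FeatureId": "FR_2025-10-30_x"},
)
```

Note how `{CreatedDate}` is left intact when no value is supplied, so a template can be rendered incrementally.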
Ramble: AI-Powered Feature Capture GUI
Overview
Ramble is a sophisticated PySide6/PyQt5 GUI application that helps users articulate feature requests through AI-assisted structured input. It supports multiple AI providers, generates PlantUML diagrams, and returns validated JSON output.
Status: ✅ Fully implemented (v0.1.0)
Location: assets/runtime/ramble.py (copied to user projects and install bundle)
Key Features
- Multi-Provider Architecture - Pluggable AI backends
- Dynamic Field Generation - Driven by template META
- Field Locking - Lock fields to preserve context across regenerations
- PlantUML Integration - Auto-generate and render architecture diagrams
- Validation Criteria - Per-field rules from template metadata
- Graceful Fallback - Terminal prompts if GUI fails
Supported Providers
| Provider | Status | Description | Usage |
|---|---|---|---|
| mock | ✅ Stable | No external calls, derives fields from ramble text | Default, no setup required |
| claude | ✅ Stable | Claude CLI integration via subprocess | Requires claude CLI in PATH |
Provider Selection:
# Mock provider (no AI, instant)
python ramble.py --provider mock --fields Title Summary
# Claude CLI provider
python ramble.py --provider claude \
--claude-cmd /path/to/claude \
--fields Title Summary Intent
Provider Protocol
All providers implement the RambleProvider protocol:
class RambleProvider(Protocol):
def generate(
self,
*,
prompt: str, # User's base prompt
ramble_text: str, # User's freeform notes
fields: List[str], # Required field names
field_criteria: Dict[str, str], # Validation rules per field
locked_context: Dict[str, str], # Previously locked field values
) -> Dict[str, Any]:
"""
Returns:
{
"summary": str,
"fields": Dict[str, str],
"uml_blocks": List[Tuple[str, Optional[bytes]]],
"image_descriptions": List[str]
}
"""
...
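As an illustration of the protocol, a toy provider (hypothetical, not shipped with CascadingDev) that passes locked fields through and stubs the rest might look like:

```python
from typing import Any, Dict, List, Optional, Tuple

class EchoProvider:
    """Toy RambleProvider: preserves locked fields, stubs everything else."""

    def generate(
        self,
        *,
        prompt: str,
        ramble_text: str,
        fields: List[str],
        field_criteria: Dict[str, str],
        locked_context: Dict[str, str],
    ) -> Dict[str, Any]:
        out: Dict[str, str] = {}
        for name in fields:
            if name in locked_context:
                out[name] = locked_context[name]  # locked values are preserved
            else:
                crit = field_criteria.get(name)
                out[name] = f"{name}: {ramble_text[:40]}" + (
                    f" [criteria: {crit}]" if crit else ""
                )
        blocks: List[Tuple[str, Optional[bytes]]] = [
            ("@startuml\nactor User\n@enduml", None)  # (uml text, optional PNG)
        ]
        return {
            "summary": " ".join(ramble_text.split()[-10:]),
            "fields": out,
            "uml_blocks": blocks,
            "image_descriptions": [],
        }
```

Because the protocol is structural, any class with a matching `generate` signature satisfies it; no registration beyond the CLI wiring is needed.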
Mock Provider
Purpose: Fast, deterministic testing and offline use.
Behavior:
- Derives summary from last 25 words of ramble text
- Creates placeholder fields with word count
- Generates simple actor-system UML diagram
- Returns generic image descriptions
Example Output:
{
"summary": "User wants to track metrics and export them.",
"fields": {
"Title": "Title: Derived from ramble (42 words). [criteria: camelCase, <=24 chars]",
"Intent": "Intent: Derived from ramble (42 words).",
},
"uml_blocks": [("@startuml\nactor User\n...\n@enduml", None)],
"image_descriptions": ["Illustrate the core actor..."]
}
Claude CLI Provider
Purpose: Production-quality AI-generated structured output.
Setup Requirements:
# Install Claude CLI (npm)
npm install -g @anthropics/claude-cli
# Or provide custom path
python ramble.py --provider claude --claude-cmd /custom/path/to/claude
Features:
- Spawns claude subprocess with structured prompt
- Includes locked field context in prompt
- Enforces per-field criteria
- Extracts PlantUML blocks from response
- Timeout protection (default 120s)
- Debug logging to /tmp/ramble_claude.log
Constructor Options:
ClaudeCLIProvider(
cmd="claude", # Command name or path
extra_args=[], # Additional CLI args
timeout_s=120, # Subprocess timeout
tail_chars=8000, # Max response length
use_arg_p=True, # Use -p flag for prompt
debug=False, # Enable debug logging
log_path="/tmp/ramble_claude.log"
)
Prompt Structure: The provider builds a comprehensive prompt including:
- User's base prompt
- Locked field context (from previously locked fields)
- User's ramble notes
- Required field list with criteria
- PlantUML and image description requests
- JSON output format specification
Integration with Installer
In setup_project.py:151-218 (run_ramble_and_collect()):
def run_ramble_and_collect(target: Path, provider: str = "mock", claude_cmd: str = "claude"):
# 1. Load template META to get field configuration
fr_tmpl = INSTALL_ROOT / "assets" / "templates" / "feature_request.md"
meta, _ = load_template_with_meta(fr_tmpl)
field_names, _defaults, criteria, hints = meta_ramble_config(meta)
# 2. Build dynamic Ramble command from META
args = [
sys.executable, str(ramble),
"--provider", provider,
"--fields", *field_names, # From template META
]
if criteria:
args += ["--criteria", json.dumps(criteria)]
if hints:
args += ["--hints", json.dumps(hints)]
# 3. Launch Ramble, capture JSON output
proc = subprocess.run(args, capture_output=True, text=True)
# 4. Parse JSON or fall back to terminal prompts
try:
return json.loads(proc.stdout)
except json.JSONDecodeError:
# Terminal fallback: collect fields via input()
return collect_via_terminal()
Ramble GUI Workflow
- User writes freeform notes in the "Ramble" text area
- Clicks "Generate" → Provider processes ramble text
- Review generated fields → Edit as needed
- Lock important fields → Prevents overwrite on regenerate
- Regenerate if needed → Locked fields feed back as context
- Review PlantUML diagrams → Auto-rendered if plantuml CLI available
- Click "Submit" → Returns JSON to installer
Output Format:
{
"summary": "One or two sentence summary",
"fields": {
"Title": "metricsExportFeature",
"Intent": "Enable users to track and export usage metrics",
"ProblemItSolves": "Currently no way to analyze usage patterns",
"BriefOverview": "Add metrics collection and CSV/JSON export",
"Summary": "Track usage metrics and export to various formats."
}
}
Terminal Fallback
If Ramble GUI fails (missing PySide6, JSON parse error, etc.), the installer falls back to terminal input:
def ask(label, default=""):
try:
v = input(f"{label}: ").strip()
return v or default
except EOFError:
return default
fields = {
"Title": ask("Title (camelCase, <=24 chars)", "initialProjectDesign"),
"Intent": ask("Intent", "—"),
"Summary": ask("One- or two-sentence summary", ""),
}
Adding New Providers
To add a new provider (e.g., deepseek, openai):
1. Create provider class in ramble.py:

   class DeepseekProvider:
       def generate(self, *, prompt, ramble_text, fields, field_criteria, locked_context):
           # Call Deepseek API
           response = call_deepseek_api(...)
           return {
               "summary": ...,
               "fields": {...},
               "uml_blocks": [...],
               "image_descriptions": [...],
           }

2. Register in CLI parser:

   p.add_argument("--provider",
                  choices=["mock", "claude", "deepseek"],  # Add here
                  default="mock")

3. Instantiate in main():

   if args.provider == "deepseek":
       provider = DeepseekProvider(api_key=os.getenv("DEEPSEEK_API_KEY"))
Advanced Features
PlantUML Support:
- Ramble extracts @startuml...@enduml blocks from provider responses
- Auto-renders to PNG if plantuml CLI available
- Falls back to text display if rendering fails
Image Generation (Optional):
- Supports Stability AI and Pexels APIs
- Requires API keys via environment variables
- Displays images in GUI if generated
Field Locking:
- Checkbox next to each field
- Locked fields are highlighted and included in next generation prompt
- Enables iterative refinement without losing progress
Criteria Validation:
- Displayed alongside each field as hints
- Passed to AI provider to enforce constraints
- No automatic validation (relies on AI compliance)
Configuration Examples
Basic usage (mock provider):
python ramble.py \
--fields Title Intent Summary \
--prompt "Describe your feature idea"
Production usage (Claude):
python ramble.py \
--provider claude \
--claude-cmd ~/.npm-global/bin/claude \
--fields Title Intent ProblemItSolves BriefOverview Summary \
--criteria '{"Title":"camelCase, <=24 chars","Summary":"<=2 sentences"}' \
--hints '["What is it?","Who benefits?","What problem?"]' \
--prompt "Describe your initial feature request"
Installer integration:
python setup_cascadingdev.py \
--target /path/to/project \
--provider claude \
--claude-cmd /usr/local/bin/claude
Benefits of META + Ramble System
- No Hardcoding: Field lists and validation rules live in templates
- Dynamic Forms: GUI adapts to template changes automatically
- Consistent UX: Same Ramble workflow for all template types
- Extensible: Add new providers without changing core logic
- Offline Capable: Mock provider works without network
- AI-Assisted: Users get help articulating complex requirements
- Reversible: All input is stored in git, easily editable later
Limitations & Future Work
Current Limitations:
- No automatic field validation (relies on AI compliance)
- PlantUML rendering requires external CLI tool
- Claude provider requires separate CLI installation
- No streaming/incremental updates during generation
Potential Enhancements (not yet planned):
- Native API providers (no CLI subprocess)
- Real-time field validation
- Multi-turn conversation support
- Provider comparison mode (generate with multiple providers)
- Template validator that checks META integrity
Build & Release Process (repeatable)
Goal: deterministic “unzip + run” artifact for each version.
Always rebuild after edits
# Rebuild bundle every time you change assets/ or installer logic
python tools/build_installer.py
# Run ONLY the bundled copy
python install/cascadingdev-*/setup_cascadingdev.py --target /path/to/new-project
6.1 Versioning
- Update VERSION (semver): MAJOR.MINOR.PATCH
- Tag releases in git to match VERSION
6.2 Build the installer bundle
python3 tools/build_installer.py
# Output: install/cascadingdev-<version>/
6.3 Smoke test the bundle (verifies minimal Python+git system compatibility)
python3 install/cascadingdev-<version>/setup_cascadingdev.py --target /tmp/my-users-project --no-ramble
# then again without --no-ramble once GUI deps are ready
6.4 Package for distribution
cd install
zip -r cascadingdev-<version>.zip cascadingdev-<version>
Success criteria
- Bundle runs on a clean machine with only Python + git
- Creates expected files, installs hook, commits once
- Optional GUI capture works under venv with PySide6/PyQt5
Environment Guidance (Ramble GUI)
Recommended: create and activate a local virtualenv before running the installer.
python3 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip wheel
python -m pip install PySide6 # or PyQt5
The installer calls sys.executable for ramble.py, so using the venv’s Python ensures GUI dependencies are found.
If GUI unavailable, run with --no-ramble or rely on terminal fallback.
Idempotency & Flags
Installer flags:
- --target (required): destination path for the user's project
- --no-ramble: skip GUI prompt; seed using templates (or terminal fallback)
- --provider: specify Ramble provider (default: mock)
Future flags (planned):
- --force: overwrite existing files (or back up)
- --copy-only: skip git init and hook install
- --templates /path: override shipped templates
Reproducibility Playbook
To recreate the same result later or on another machine:
git checkout <tagged release>
python3 tools/build_installer.py
python3 install/cascadingdev-<version>/setup_cascadingdev.py --target <path>
(Optional) activate venv and install PySide6 before running if you want the GUI.
Roadmap (post-v0.1)
- Expand pre-commit checks (summary normalization, rule validation)
- CLI status report for current staged discussions and votes
- Provide a developer CLI (cdev init …) that reuses installer logic
- Unit tests for scaffolding helpers (src/cascadingdev/*)
- Template override mechanism (--templates)
INSTALL.md Template for Bundles
The builder should emit an INSTALL.md alongside the installer bundle:
# CascadingDev Installer
## Requirements
- Python 3.10+ and git
- (Optional) PySide6 or PyQt5 for Ramble GUI
## Quick start
```bash
python setup_cascadingdev.py --target /path/to/users-project
```

## Options
- `--no-ramble`: skip GUI capture, use templates or terminal prompts
- `--provider`: Ramble provider (default: mock)
## Steps performed
- Creates standard folder layout in target
- Copies templates, ramble.py, and create_feature.py
- Initializes git and installs pre-commit hook
- Launches Ramble (unless --no-ramble)
- Seeds first feature request & discussions
- Makes initial commit
If GUI fails, use a virtualenv and `pip install PySide6`, or run with `--no-ramble`.
This ensures every distributed bundle includes explicit usage instructions.
Note: Each stage discussion has a companion summary maintained automatically next to it to provide a live, scannable state of the thread.
Naming Conventions
- Feature Folder: Docs/features/FR_YYYY-MM-DD_<slug>/
- Discussion Files: {stage}.discussion.md in discussions/ subfolder
- Discussion summary: {stage}.discussion.sum.md in discussions/ subfolder
- Bug Reports: bugs/BUG_YYYYMMDD_<slug>/ with standardized contents
- Source Files: Maintain existing patterns in src/
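The conventions above can be checked mechanically. The slug charset below (lowercase words joined by hyphens) is an assumption inferred from the examples in this document; the shipped tooling may be more permissive:

```python
import re

# Assumed slug charset, based on examples like
# FR_2025-10-21_initial-feature-request (hypothetical validator, not shipped).
FEATURE_DIR_RE = re.compile(r"^FR_\d{4}-\d{2}-\d{2}_[a-z][a-z0-9-]*$")
BUG_DIR_RE = re.compile(r"^BUG_\d{8}_[a-z][a-z0-9-]*$")

def is_feature_dir(name: str) -> bool:
    """True if a folder name matches the FR_YYYY-MM-DD_<slug> convention."""
    return bool(FEATURE_DIR_RE.match(name))
```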
Template Variables
Supported in path resolution:
- {basename} — filename with extension (e.g., design.discussion.md)
- {name} — filename without extension (e.g., design.discussion)
- {ext} — file extension without dot (e.g., md)
- {date} — current date in YYYY-MM-DD
- {rel} — repository-relative path to the source file
- {dir} — directory containing the source file
- {feature_id} — nearest FR_* folder name (e.g., FR_2025-10-21_initial-feature-request)
- {stage} — stage inferred from discussion filename (.discussion.md), e.g., feature|design|implementation|testing|review
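The variables above can be derived from a repository-relative path roughly as follows. This is a sketch; `path_tokens` is a hypothetical helper, not the resolver shipped in `automation/`:

```python
from datetime import date
from pathlib import PurePosixPath

def path_tokens(rel: str) -> dict:
    """Derive the path-resolution tokens for a repo-relative file."""
    p = PurePosixPath(rel)
    tokens = {
        "basename": p.name,                # e.g. design.discussion.md
        "name": p.stem,                    # e.g. design.discussion
        "ext": p.suffix.lstrip("."),       # e.g. md
        "date": date.today().isoformat(),  # YYYY-MM-DD
        "rel": rel,
        "dir": str(p.parent),
    }
    for part in reversed(p.parts):         # nearest FR_* ancestor wins
        if part.startswith("FR_"):
            tokens["feature_id"] = part
            break
    if p.name.endswith(".discussion.md"):  # infer {stage} from the filename
        tokens["stage"] = p.name[: -len(".discussion.md")]
    return tokens

t = path_tokens(
    "Docs/features/FR_2025-10-21_initial-feature-request/discussions/design.discussion.md"
)
```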
Automation AI Configuration
Overview
The automation runner (automation/runner.py and automation/patcher.py) supports multiple AI providers with automatic fallback chains and intelligent model selection based on task complexity. This system balances cost, speed, and quality by routing simple tasks to fast models and complex tasks to premium models.
Configuration Architecture
Central Configuration: config/ai.yml
This file is copied to all generated projects and provides a single source of truth for AI provider preferences. It defines three command chains optimized for different use cases:
version: 1
runner:
# Default command chain (balanced speed/quality)
command_chain:
- "claude -p"
- "codex --model gpt-5"
- "gemini --model gemini-2.5-flash"
# Fast command chain (optimized for speed/cost)
# Used when model_hint: fast in .ai-rules.yml
command_chain_fast:
- "claude -p"
- "codex --model gpt-5-mini"
- "gemini --model gemini-2.5-flash"
# Quality command chain (optimized for complex tasks)
# Used when model_hint: quality in .ai-rules.yml
command_chain_quality:
- "claude -p"
- "codex --model o3"
- "gemini --model gemini-2.5-pro"
sentinel: "CASCADINGDEV_NO_CHANGES"
ramble:
default_provider: mock
providers:
mock: { kind: mock }
claude: { kind: claude_cli, command: "claude", args: [] }
Model Hint System
The .ai-rules.yml files can specify a model_hint field per rule to guide model selection:
rules:
feature_discussion_writer:
model_hint: fast # Simple vote counting - use fast models
instruction: |
Maintain feature discussion with votes...
design_discussion_writer:
model_hint: quality # Complex architecture work - use quality models
instruction: |
Propose detailed architecture and design decisions...
implementation_gate_writer:
model_hint: fast # Gate checking is simple - use fast models
instruction: |
Create implementation discussion when design is ready...
Execution Flow
1. Rule Evaluation (`automation/runner.py:90-112`)
   - Runner reads `.ai-rules.yml` and finds the matching rule for the staged file
   - Extracts `model_hint` from the rule config (or an `output_type` override)
   - Passes the hint to `generate_output(source_rel, output_rel, instruction, model_hint)`
2. Prompt Construction (`automation/patcher.py:371-406`)
   - Patcher receives the `model_hint` and builds the prompt
   - Injects a `TASK COMPLEXITY: FAST` or `TASK COMPLEXITY: QUALITY` line
   - This hint helps Claude subagents auto-select an appropriate model
3. Chain Selection (`automation/patcher.py:63-77`)
   - `ModelConfig.get_commands_for_hint(hint)` selects the appropriate chain
   - Returns `command_chain_fast`, `command_chain_quality`, or the default
   - Falls back to the default chain if the hint-specific chain is empty
4. Fallback Execution (`automation/patcher.py:409-450`)
   - `call_model()` iterates through the selected command chain left→right
   - The first successful provider's output is used
   - Errors are logged; subsequent providers are tried
   - If all fail, an error is raised (but pre-commit remains non-blocking)
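A minimal Python sketch of the chain-selection and fallback steps (illustrative only; the real `ModelConfig.get_commands_for_hint()` and `call_model()` in `automation/patcher.py` handle more cases, and the simplified signatures here are assumptions):

```python
import subprocess

class ModelConfig:
    """Minimal sketch of hint-based chain selection."""

    def __init__(self, chains):
        # chains maps "" (default), "fast", "quality" -> list of CLI commands
        self.chains = chains

    def get_commands_for_hint(self, hint):
        # A hint-specific chain wins; fall back to the default chain if empty.
        chain = self.chains.get(hint or "", [])
        return chain or self.chains.get("", [])

def call_model(config, hint, prompt, run=subprocess.run):
    """Try each provider left-to-right; the first success wins."""
    errors = []
    for command in config.get_commands_for_hint(hint):
        try:
            result = run(command.split() + [prompt], capture_output=True,
                         text=True, check=True)
            return result.stdout
        except Exception as exc:  # log and try the next provider
            errors.append(f"{command}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))
```

Passing `run=` makes the fallback loop testable without invoking any real CLI.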
Example Prompt with Hint
When model_hint: fast is set, the generated prompt includes:
```text
You are a specialized patch generator for CascadingDev automation.

SOURCE FILE: Docs/features/FR_2025-11-01_auth/discussions/feature.discussion.md
OUTPUT FILE: Docs/features/FR_2025-11-01_auth/discussions/feature.discussion.sum.md
TASK COMPLEXITY: FAST

=== SOURCE FILE CHANGES (staged) ===
[diff content]

=== CURRENT OUTPUT FILE CONTENT ===
[summary file content]

INSTRUCTIONS:
Update the vote summary section...
```
The TASK COMPLEXITY: FAST line signals to Claude's subagent system to select cdev-patch (Haiku) instead of cdev-patch-quality (Sonnet).
Supported Providers
| Provider | CLI Tool | Fast Model | Default Model | Quality Model |
|---|---|---|---|---|
| Claude | `claude -p` | Haiku (via subagent) | Auto-select | Sonnet (via subagent) |
| OpenAI Codex | `codex` | gpt-5-mini | gpt-5 | o3 |
| Google Gemini | `gemini` | gemini-2.5-flash | gemini-2.5-flash | gemini-2.5-pro |
Claude Subagent Setup:
```bash
# Create ~/.claude/agents/cdev-patch.md and cdev-patch-quality.md
./tools/setup_claude_agents.sh

# Verify installation
claude agents list
```
The setup script creates two subagent files with descriptions that include "MUST BE USED when TASK COMPLEXITY is FAST/QUALITY", enabling Claude's agent selection to respond to the hint in the prompt.
Implementation Modules
Core modules:
- `automation/ai_config.py` - Configuration loader (`AISettings`, `RunnerSettings`, `RambleSettings`)
  - `load_ai_settings(repo_root)` - Loads and validates config/ai.yml
  - `RunnerSettings.get_chain_for_hint(hint)` - Returns the appropriate command chain
  - `parse_command_chain(raw)` - Splits "||"-delimited command strings
- `automation/runner.py` - Orchestrates rule evaluation and output generation
  - Reads `model_hint` from the cascaded rule config (line 94)
  - Passes the hint through to `generate_output()` (line 112)
  - Supports `output_type` hint overrides (lines 100-102)
- `automation/patcher.py` - Generates and applies AI patches
  - `ModelConfig.get_commands_for_hint(hint)` - Selects the command chain (lines 63-77)
  - `build_prompt(..., model_hint)` - Injects the TASK COMPLEXITY line (lines 371-406)
  - `call_model(..., model_hint)` - Executes the chain with fallback (lines 409-450)
Cost Optimization Examples
Before (all tasks use same model):
- Vote counting: Sonnet @ $3/M tokens → $0.003/commit
- Design discussion: Sonnet @ $3/M tokens → $0.15/commit
- Gate checking: Sonnet @ $3/M tokens → $0.002/commit
After (intelligent routing):
- Vote counting: Haiku @ $0.25/M tokens → $0.0002/commit (93% savings)
- Design discussion: Sonnet @ $3/M tokens → $0.15/commit (unchanged)
- Gate checking: Haiku @ $0.25/M tokens → $0.00017/commit (91% savings)
For a project with 100 commits where roughly 70% are simple tasks, saving ~$0.30 per routed commit yields about $21 saved, while quality is maintained for complex tasks.
Environment Overrides
Users can temporarily override config/ai.yml via environment variables:
```bash
# Override command for a single commit
CDEV_AI_COMMAND="claude -p" git commit -m "message"

# Chain multiple providers
CDEV_AI_COMMAND="claude -p || codex --model gpt-5" git commit -m "message"

# Use only quality models for critical work
CDEV_AI_COMMAND="claude -p" git commit -m "critical refactor"
```
Environment variables take precedence over config/ai.yml but don't persist.
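The precedence rule can be sketched as follows (a simplified illustration; `parse_command_chain` mirrors the loader helper named earlier, while `effective_chain` is a hypothetical helper introduced only for this example):

```python
import os

def parse_command_chain(raw):
    """Split a '||'-delimited command string into an ordered provider list."""
    return [part.strip() for part in raw.split("||") if part.strip()]

def effective_chain(config_chain, env=os.environ):
    """CDEV_AI_COMMAND (if set) takes precedence over config/ai.yml."""
    override = env.get("CDEV_AI_COMMAND")
    if override:
        return parse_command_chain(override)
    return list(config_chain)
```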
Stage Model & Operational Procedure
Complete Stage Lifecycle
Request → Feature Discussion → Design Discussion → Implementation Discussion → Testing Discussion → Review Discussion → Release
Stage Overview
| Stage | Primary File | Promotion Trigger | Human Gate | Key Artifacts Created |
|---|---|---|---|---|
| 1 Request | request.md | Created from template | – | feature.discussion.md, feature.discussion.sum.md |
| 2 Feature Discussion | discussions/feature.discussion.md | Votes → READY_FOR_DESIGN | – | design.discussion.md, design/design.md |
| 3 Design Discussion | discussions/design.discussion.md | Votes → READY_FOR_IMPLEMENTATION | – | implementation.discussion.md, implementation/plan.md, implementation/tasks.md |
| 4 Implementation Discussion | discussions/implementation.discussion.md | All required tasks complete → READY_FOR_TESTING | ✅ ≥1 human READY | testing.discussion.md, testing/testplan.md, testing/checklist.md |
| 5 Testing Discussion | discussions/testing.discussion.md | All tests pass → READY_FOR_REVIEW | – | review.discussion.md, review/findings.md |
| 6 Review Discussion | discussions/review.discussion.md | Human READY → READY_FOR_RELEASE | ✅ ≥1 human READY | Release notes, follow-up FRs/Bugs |
| 7 Release | – | Tag & changelog generated | ✅ maintainer | Changelog, version bump, rollback notes |
Stage 1: Request
Entry Criteria
- Docs/features/FR_*/request.md created from template
- Template completeness: intent, motivation, constraints, open questions
Artifacts Generated
- request.md: Source feature request document
Automated Actions
- Creates discussions/feature.discussion.md with standard header
- Adds Summary and Participation sections
- Appends initial AI comment with vote
Exit Criteria
- Discussion file created and populated
- Ready for feature discussion phase
Stage 2: Feature Discussion
- File: discussions/feature.discussion.md
Header Template
```yaml
---
type: discussion
stage: feature
status: OPEN   # OPEN | READY_FOR_DESIGN | REJECTED
feature_id: FR_YYYY-MM-DD_<slug>
stage_id: FR_YYYY-MM-DD_<slug>_feature
created: YYYY-MM-DD
promotion_rule:
  allow_agent_votes: true
  ready_min_eligible_votes: all
  reject_min_eligible_votes: all
participation:
  instructions: |
    - Append your input at the end as: "YourName: your comment…"
    - Every comment must end with a vote line: "VOTE: READY|CHANGES|REJECT"
    - Agents/bots must prefix names with "AI_"
voting:
  values: [READY, CHANGES, REJECT]
---
```
Operational Flow
- Participants append comments ending with vote lines
- Latest vote per participant counts toward thresholds
- AI_Moderator tracks unanswered questions and missing votes
- When READY threshold met: status → READY_FOR_DESIGN
Promotion decisions obey thresholds in process/policies.yml (voting.quorum). The orchestrator re-evaluates eligibility before any status change.
Automated Actions on commit
- Appends AI comment with vote
- Moderates discussion
- Establishes objectives with consensus
- Delegates in-conversation tasks
- Creates/maintains discussions/feature.discussion.sum.md
Automation boundary: All automated changes are staged into the current commit; the orchestrator never creates commits of its own.
Promotion Actions
- Creates discussions/design.discussion.md (OPEN)
- Creates design/design.md seeded from request + feature discussion
Stage 3: Design Discussion
- File: discussions/design.discussion.md
Header
```yaml
---
type: discussion
stage: design
status: OPEN   # OPEN | READY_FOR_IMPLEMENTATION | NEEDS_MORE_INFO
feature_id: FR_YYYY-MM-DD_<slug>
stage_id: FR_YYYY-MM-DD_<slug>_design
# ... same promotion_rule, participation, voting as feature
---
```
Operational Flow
- AI_Architect updates design/design.md on each commit
- Design doc evolves with discussion: options, decisions, risks, acceptance criteria
- Participants vote on design completeness
- When READY threshold met: status → READY_FOR_IMPLEMENTATION
Promotion decisions obey thresholds in process/policies.yml (voting.quorum). The orchestrator re-evaluates eligibility before any status change.
Automated Actions on commit
- Appends AI comment with vote
- Moderates discussion
- Establishes objectives
- Delegates in-conversation tasks
- Creates/maintains discussions/design.discussion.sum.md
- Creates/maintains design/design.md
- Creates/maintains design/diagrams/*.puml files if any are produced during the discussion.
Design Document Structure
- Context & Goals
- Non-Goals & Constraints
- Options Considered with Trade-offs
- Decision & Rationale
- Architecture Diagrams
- Risks & Mitigations
- Measurable Acceptance Criteria
Promotion Actions
- Creates discussions/implementation.discussion.md (OPEN)
- Creates implementation/plan.md
- Creates implementation/tasks.md with task checkboxes aligned to acceptance criteria
Stage 4: Implementation Discussion
- File: discussions/implementation.discussion.md
Header
```yaml
---
type: discussion
stage: implementation
status: OPEN   # OPEN | READY_FOR_TESTING
feature_id: FR_YYYY-MM-DD_<slug>
stage_id: FR_YYYY-MM-DD_<slug>_implementation
promotion_rule:
  allow_agent_votes: true
  ready_min_eligible_votes: 1_human   # HUMAN GATE
  reject_min_eligible_votes: all
# ...
---
```
Operational Flow
- AI_Implementer syncs implementation/tasks.md with discussion
- Parse checkboxes and PR mentions from discussion posts
- Link commits/PRs to tasks when mentioned ([#123], commit shas)
- When all required tasks complete: status → READY_FOR_TESTING
Promotion decisions obey thresholds in process/policies.yml (voting.quorum). The orchestrator re-evaluates eligibility before any status change.
Automated Actions on commit
- Appends AI comment with vote
- Moderates discussion
- Establishes implementation objectives from implementation/plan.md
- Delegates implementation tasks from implementation/tasks.md
- Creates/maintains discussions/implementation.discussion.sum.md
- Creates/maintains src/* files if any are produced during the discussion.
Task Management
- tasks.md maintained as the single source of truth
- Checkbox completion tracked automatically
- PR and commit references linked automatically
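A sketch of the checkbox and PR-mention parsing described above (the regexes are assumptions based on the `[#123]` convention, not the exact patterns used by `impl_tasks_maintainer`):

```python
import re

CHECKBOX = re.compile(r"^- \[(x| )\]\s+(.*)$", re.MULTILINE)
PR_REF = re.compile(r"\[#(\d+)\]")

def parse_tasks(text):
    """Extract (done, description, pr_numbers) tuples from a task list."""
    tasks = []
    for mark, desc in CHECKBOX.findall(text):
        prs = PR_REF.findall(desc)
        tasks.append((mark == "x", desc.strip(), prs))
    return tasks
```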
Promotion Actions
- Creates discussions/testing.discussion.md (OPEN)
- Creates testing/testplan.md and testing/checklist.md
- Test checklist derived from acceptance criteria + edge cases
Stage 5: Testing Discussion
- File: discussions/testing.discussion.md
Header
```yaml
---
type: discussion
stage: testing
status: OPEN   # OPEN | READY_FOR_REVIEW
feature_id: FR_YYYY-MM-DD_<slug>
stage_id: FR_YYYY-MM-DD_<slug>_testing
promotion_rule:
  allow_agent_votes: true
  ready_min_eligible_votes: all
  reject_min_eligible_votes: all
---
```
Operational Flow
- AI_Tester syncs testing/checklist.md with discussion posts
- Parse result blocks: [RESULT] PASS/FAIL: description
- Mark corresponding checklist items pass/fail
- On test failure: auto-create bug report with full sub-cycle
Promotion decisions obey thresholds in process/policies.yml (voting.quorum). The orchestrator re-evaluates eligibility before any status change.
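Parsing the `[RESULT]` blocks can be sketched as follows (the regex is an assumption based on the format shown above):

```python
import re

RESULT_LINE = re.compile(r"^\[RESULT\]\s+(PASS|FAIL):\s*(.+)$", re.MULTILINE)

def parse_results(text):
    """Return {description: 'PASS'|'FAIL'} from a testing discussion."""
    return {desc.strip(): status for status, desc in RESULT_LINE.findall(text)}
```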
Automated Actions on commit
- Appends AI comment with vote
- Moderates discussion
- Establishes testing objectives from testing/testplan.md and testing/checklist.md
- Delegates testing tasks from testing/checklist.md
- Creates/maintains discussions/testing.discussion.sum.md
- Creates/maintains test/* files if any are produced during the discussion.
Bug Sub-Cycle Creation
```text
bugs/BUG_YYYYMMDD_<slug>/
├─ report.md           # Steps, expected/actual, environment
├─ discussion.md       # Bug discussion (OPEN)
├─ discussion.sum.md   # Summary of bug discussion
└─ fix/
   ├─ plan.md          # Fix implementation plan
   ├─ tasks.md         # Fix tasks checklist
   └─ src/
```
Bug Resolution Flow
- Bug follows mini Implementation→Testing cycle
- On bug closure, return to main testing discussion
- Bug results integrated into main test checklist
The bug sub-cycle mirrors Stages 4–6 (Implementation → Testing → Review) and inherits the same promotion and voting policies.
Automated Actions on commit
- Appends AI comment with vote to discussion.md
- Moderates discussion
- Establishes fix objectives from plan.md
- Delegates fix tasks from tasks.md
- Maintains discussion.sum.md
- Creates/maintains fix/src/* files if any are produced during the discussion.
Promotion Actions
- Creates or reports to discussions/review.discussion.md (OPEN)
- Creates review/findings_BUG_YYYYMMDD_.md with verification summary
Stage 6: Review Discussion
- File: discussions/review.discussion.md
Header
```yaml
---
type: discussion
stage: review
status: OPEN   # OPEN | READY_FOR_RELEASE | CHANGES_REQUESTED
feature_id: FR_YYYY-MM-DD_<slug>
stage_id: FR_YYYY-MM-DD_<slug>_review
promotion_rule:
  allow_agent_votes: true
  ready_min_eligible_votes: 1_human   # HUMAN GATE
  reject_min_eligible_votes: all
# ...
---
```
Operational Flow
- AI_Reviewer summarizes into review/findings.md
- Review covers: changes, risks, test evidence, deployment considerations
- Can spawn follow-up feature requests or bugs from findings
- When human READY present and no blockers: status → READY_FOR_RELEASE
Promotion decisions obey thresholds in process/policies.yml (voting.quorum). The orchestrator re-evaluates eligibility before any status change.
Follow-up Artifact Creation
- New FR: ../../FR_YYYY-MM-DD_followup/request.md
- New Bug: bugs/BUG_YYYYMMDD_review/report.md
Stage 7: Release
Entry Criteria
- Review discussion status is READY_FOR_RELEASE
Automated Actions
- Generate release notes from feature changes
- Semver bump based on change type
- Create git tag
- Update changelog
- Document rollback procedure
Post-Release
- Queue post-release validation tasks
- Update documentation as needed
- Archive feature folder if complete
State machine summary: All stage transitions are governed by the orchestrator and thresholds defined in process/policies.yml. Human gates remain mandatory for Implementation and Release.
Voting, Quorum & Etiquette
Voting System
Vote Values: READY | CHANGES | REJECT
Format Requirements:
- Each comment must end with: VOTE: READY|CHANGES|REJECT
- Last line of comment, exact format
- Multiple votes by same participant: latest wins
- Trailing spaces are ignored; the vote must be the final non-empty line
Vote Parsing Examples:
```text
Rob: I agree with this approach.
VOTE: READY
→ Counted as Rob: READY

AI_Claude: Here's my analysis...
VOTE: CHANGES
→ Counted as AI_Claude: CHANGES (if allow_agent_votes=true)

User: I have concerns...
VOTE: CHANGES
Later: User: Actually, addressed now.
VOTE: READY
→ Counted as User: READY (latest vote wins)
```
Update Rules
- Latest vote per participant supersedes all prior votes in the same stage.
- If a comment has multiple “VOTE:” lines, only the last valid line counts.
- Empty or malformed vote lines are ignored (no implicit abstain).
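The update rules above can be sketched in Python (illustrative; the production parser in `automation/workflow.py` may differ):

```python
import re

VOTE_RE = re.compile(r"^VOTE:\s*(READY|CHANGES|REJECT)\s*$")

def latest_votes(comments):
    """comments: ordered (participant, comment_text) pairs.

    Returns the latest valid vote per participant. The vote must be the
    final non-empty line of the comment; malformed votes are ignored.
    """
    votes = {}
    for name, text in comments:
        lines = [line for line in text.splitlines() if line.strip()]
        if not lines:
            continue
        match = VOTE_RE.match(lines[-1].rstrip())
        if match:
            votes[name] = match.group(1)  # later comments overwrite earlier ones
    return votes
```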
Eligibility & Quorum
Default Policy (machine-readable in process/policies.yml):
```yaml
version: 1
voting:
  values: [READY, CHANGES, REJECT]
  allow_agent_votes: true
  quorum:
    discussion: { ready: all, reject: all }
    implementation: { ready: 1_human, reject: all }
    release: { ready: 1_human, reject: all }
eligibility:
  agents_allowed: true
  require_human_for: [implementation, release]
etiquette:
  name_prefix_agents: "AI_"
  vote_line_regex: "^VOTE:\\s*(READY|CHANGES|REJECT)\\s*$"
timeouts:
  discussion_stale_days: 3
  nudge_interval_hours: 24
```
Human Safety Gates:
- Implementation promotion: ≥1 human READY required
- Release promotion: ≥1 human READY required
- Agent votes count toward discussion but cannot satisfy human requirements
Promotion Evaluation Algorithm
1. Parse the latest vote per participant (apply `vote_line_regex` to the final non-empty line).
2. Filter by eligibility (humans/agents per stage policy).
3. Check the human-gate requirement (if configured for the stage).
4. Evaluate quorum thresholds from `voting.quorum[stage]`:
   - `ready: all` → all eligible voters are READY (or no CHANGES/REJECT)
   - `ready: 1_human` → at least one human READY
   - `reject: all` → if any REJECT, fail promotion
5. If thresholds are met → flip `status` to the target (e.g., READY_FOR_DESIGN) within the same commit and generate the next artifacts.
6. If not met → append a moderator summary and keep the status unchanged.
Eligibility Definition
- Eligible voter: any participant (human or agent) who posted in the current stage discussion and conforms to eligibility.* policy.
- Human gate: stages listed in eligibility.require_human_for require ≥ 1 human READY regardless of agent votes.
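A sketch of the promotion check combining quorum and the human gate (simplified; eligibility filtering beyond the `AI_` name prefix is omitted, and `evaluate_promotion` is a hypothetical name):

```python
def evaluate_promotion(votes, stage, policy):
    """votes: {participant: vote}; agents are prefixed 'AI_'.

    Returns True when the stage's quorum and human gate are satisfied.
    """
    quorum = policy["voting"]["quorum"][stage]
    humans = {n: v for n, v in votes.items() if not n.startswith("AI_")}
    if any(v == "REJECT" for v in votes.values()):
        return False  # reject: all -- any REJECT blocks promotion
    if stage in policy["eligibility"]["require_human_for"]:
        if not any(v == "READY" for v in humans.values()):
            return False  # human gate not satisfied
    if quorum["ready"] == "all":
        return all(v == "READY" for v in votes.values())
    if quorum["ready"] == "1_human":
        return any(v == "READY" for v in humans.values())
    return False
```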
Participation Etiquette
- Conciseness: Keep comments action-oriented and focused
- References: Link to files/sections when possible (design.md#architecture)
- Naming: Agents must prefix with AI_ (e.g., AI_Architect)
- Ownership: Suggest explicit owners for next steps (@AI_Architect: please draft...)
- Timeliness: Respond to direct questions within 24 hours
- Staleness & Nudges: If a stage has no new comments within `discussion_stale_days`, the AI_Moderator posts a nudge every `nudge_interval_hours` listing missing votes and open questions.
Tie-breaks & Deadlocks
- If votes include both READY and CHANGES/REJECT beyond the promotion timeout (`promotion_timeout_days`), the AI_Moderator escalates:
  - Summarize blocking points and owners,
  - Request an explicit human decision,
  - If still unresolved after one more nudge window, maintain the status and open a follow-up item in the summary’s ACTION_ITEMS.
Cascading Rules System
The Cascading Rules System defines how automation instructions are discovered and applied
for any file committed in the repository. The nearest .ai-rules.yml file to a changed
file determines how it will be processed. Rules can exist at three scopes:
| Scope | Typical Path | Purpose |
|---|---|---|
| Global | `/.ai-rules.yml` | Default behavior for all files (e.g., code, diagrams) |
| Feature-Scoped | `Docs/features/.ai-rules.yml` | Rules specific to feature discussions and artifacts |
| Local / Experimental | `<feature>/local/.ai-rules.yml` (optional) | Overrides for prototypes or nested modules |
Rule lookup always starts in the source file’s directory and walks upward until it finds
a .ai-rules.yml, then merges settings from outer scopes, with the nearest directory winning.
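The upward walk can be sketched as follows (a minimal illustration; merging of the collected files is left to the caller, nearest file first, and `resolve_rules_files` is a hypothetical name):

```python
from pathlib import Path

def resolve_rules_files(source: Path, repo_root: Path):
    """Walk from the source file's directory up to the repo root and
    return every .ai-rules.yml found, nearest first."""
    found = []
    directory = source.parent.resolve()
    root = repo_root.resolve()
    while True:
        candidate = directory / ".ai-rules.yml"
        if candidate.exists():
            found.append(candidate)
        # Stop at the repo root (or the filesystem root, as a safety net).
        if directory == root or directory == directory.parent:
            break
        directory = directory.parent
    return found
```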
Global Rules (Root .ai-rules.yml)
```yaml
version: 1
# Map file extensions to rule names
file_associations:
  "*.js": "js-file"
  "*.ts": "js-file"
  "*.puml": "puml-file"
  "*.md": "md-file"

rules:
  js-file:
    description: "Generate PlantUML + review for JS/TS files"
    outputs:
      diagram:
        enabled: true
        path: "Docs/diagrams/file_diagrams/{basename}.puml"
        output_type: "puml-file"
        instruction: |
          Update the PlantUML diagram to reflect staged code changes.
          Focus on: key functions, control flow, data transformations, dependencies.
          Keep architectural elements clear and focused.
      review:
        enabled: true
        path: "Docs/discussions/reviews/{date}_{basename}.md"
        output_type: "md-file"
        instruction: |
          Create technical review of code changes.
          Include: summary of changes, potential risks, edge cases,
          testing considerations, performance implications.
          Use concise bullet points.
  puml-file:
    description: "Rules for PlantUML diagram files"
    instruction: |
      Maintain readable, consistent diagrams.
      Use descriptive element names, consistent arrow styles.
      Include brief legend for complex diagrams.
  md-file:
    description: "Rules for Markdown documentation"
    instruction: |
      Use proper Markdown syntax with concise paragraphs.
      Use code fences for examples, lists for multiple points.
      Maintain technical, clear tone.

settings:
  max_tokens: 4000
  temperature: 0.1
  model: "claude-sonnet-4-5-20250929"
```
Validation & Schema
Each .ai-rules.yml must pass a lightweight YAML schema check before execution:
- A `version` key is required (integer or semver)
- `file_associations` maps glob patterns → rule names
- Each rule under `rules:` must include at least one of: `description`, or `outputs:` with `path`, `output_type`, and `instruction`
- Unknown keys are ignored but logged as warnings.
Schema validation prevents mis-typed keys from silently breaking automation.
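A sketch of such a check (the exact schema rules enforced by the runner may differ; `validate_rules` is a hypothetical name, and the set of known keys is an assumption):

```python
def validate_rules(doc):
    """Lightweight structural check for a parsed .ai-rules.yml.

    Returns (errors, warnings); unknown keys are warnings, not errors.
    """
    errors, warnings = [], []
    if "version" not in doc:
        errors.append("missing required 'version' key")
    for pattern, rule_name in doc.get("file_associations", {}).items():
        if rule_name not in doc.get("rules", {}):
            errors.append(f"{pattern!r} maps to unknown rule {rule_name!r}")
    known = {"description", "outputs", "instruction", "model_hint"}
    for name, rule in doc.get("rules", {}).items():
        if not (set(rule) & {"description", "outputs"}):
            errors.append(f"rule {name!r} needs 'description' or 'outputs'")
        for key in set(rule) - known:
            warnings.append(f"rule {name!r}: unknown key {key!r} ignored")
    return errors, warnings
```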
Feature-Scoped Rules (Docs/features/.ai-rules.yml)
```yaml
version: 1
file_associations:
  "request.md": "feature_request"
  # discussions
  "feature.discussion.md": "feature_discussion"
  "design.discussion.md": "design_discussion"
  "implementation.discussion.md": "impl_discussion"
  "testing.discussion.md": "test_discussion"
  "review.discussion.md": "review_discussion"
  # summaries (companions)
  "feature.discussion.sum.md": "discussion_summary"
  "design.discussion.sum.md": "discussion_summary"
  "implementation.discussion.sum.md": "discussion_summary"
  "testing.discussion.sum.md": "discussion_summary"
  "review.discussion.sum.md": "discussion_summary"

rules:
  feature_request:
    outputs:
      feature_discussion:
        path: "{dir}/discussions/feature.discussion.md"
        output_type: "feature_discussion_writer"
        instruction: |
          If missing: create with standard header (stage: feature, status: OPEN),
          add Summary and Participation sections, then append initial AI comment with vote.
          If exists: no op.
      # Also create the companion summary file if missing (blank sections with markers)
      feature_summary_init:
        path: "{dir}/discussions/feature.discussion.sum.md"
        output_type: "discussion_summary_init"
        instruction: |
          If missing, create companion summary with stable markers:
          DECISIONS, OPEN_QUESTIONS, AWAITING, ACTION_ITEMS, VOTES, TIMELINE, LINKS.
          If exists, do not modify.

  feature_discussion:
    outputs:
      # 1) Append the new AI comment to the discussion (append-only)
      self_append:
        path: "{dir}/discussions/feature.discussion.md"
        output_type: "feature_discussion_writer"
        instruction: |
          Append concise comment signed with AI name, ending with a single vote line.
          Evaluate votes against header thresholds. If READY threshold met:
          - Flip status to READY_FOR_DESIGN (or FEATURE_REJECTED)
          Clearly state promotion decision. Append-only with minimal diff.
      # 2) Update the companion summary (marker-bounded sections only)
      summary_companion:
        path: "{dir}/discussions/feature.discussion.sum.md"
        output_type: "discussion_summary_writer"
        instruction: |
          Create or update the summary file. Replace ONLY content between markers:
          DECISIONS, OPEN_QUESTIONS, AWAITING, ACTION_ITEMS, VOTES, TIMELINE, LINKS.
          Inputs: the entire feature.discussion.md and current header.
          Keep diffs minimal.
      # 3) Promotion artifacts when READY_FOR_DESIGN
      design_discussion:
        path: "{dir}/discussions/design.discussion.md"
        output_type: "design_discussion_writer"
        instruction: |
          Create ONLY if feature discussion status is READY_FOR_DESIGN.
          Seed with standard header (stage: design, status: OPEN).
      design_doc:
        path: "{dir}/design/design.md"
        output_type: "design_doc_writer"
        instruction: |
          Create ONLY if feature discussion status is READY_FOR_DESIGN.
          Seed from request.md and feature discussion.
          Include: Context, Options, Decision, Risks, Acceptance Criteria.
      # Ensure design summary exists once design discussion begins
      design_summary_init:
        path: "{dir}/discussions/design.discussion.sum.md"
        output_type: "discussion_summary_init"
        instruction: |
          If missing, create companion summary with standard markers.
          If exists, do not modify unless via discussion_summary_writer.

  design_discussion:
    outputs:
      design_update:
        path: "{dir}/design/design.md"
        output_type: "design_doc_writer"
        instruction: |
          Update design document to reflect latest design discussion.
          Ensure acceptance criteria are measurable and complete.
          Maintain all standard sections. Minimal diffs.
      # Always keep the design summary in sync (marker-bounded)
      summary_companion:
        path: "{dir}/discussions/design.discussion.sum.md"
        output_type: "discussion_summary_writer"
        instruction: |
          Update only the marker-bounded sections from the discussion content.
      impl_discussion:
        path: "{dir}/discussions/implementation.discussion.md"
        output_type: "impl_discussion_writer"
        instruction: |
          Create ONLY if design discussion status is READY_FOR_IMPLEMENTATION.
      impl_plan:
        path: "{dir}/implementation/plan.md"
        output_type: "impl_plan_writer"
        instruction: |
          Create ONLY if design status is READY_FOR_IMPLEMENTATION.
          Draft implementation milestones and scope.
      impl_tasks:
        path: "{dir}/implementation/tasks.md"
        output_type: "impl_tasks_writer"
        instruction: |
          Create ONLY if design status is READY_FOR_IMPLEMENTATION.
          Generate task checklist aligned to acceptance criteria.
      # Ensure implementation summary exists at the moment implementation starts
      impl_summary_init:
        path: "{dir}/discussions/implementation.discussion.sum.md"
        output_type: "discussion_summary_init"
        instruction: |
          If missing, create companion summary with standard markers.

  impl_discussion:
    outputs:
      tasks_sync:
        path: "{dir}/implementation/tasks.md"
        output_type: "impl_tasks_maintainer"
        instruction: |
          Parse checkboxes and PR mentions from implementation discussion.
          Synchronize tasks.md accordingly.
          When all required tasks complete, mark implementation discussion READY_FOR_TESTING.
      summary_companion:
        path: "{dir}/discussions/implementation.discussion.sum.md"
        output_type: "discussion_summary_writer"
        instruction: |
          Update only the marker-bounded sections from the discussion content.
          Include unchecked items from ../implementation/tasks.md in ACTION_ITEMS.
      test_discussion:
        path: "{dir}/discussions/testing.discussion.md"
        output_type: "test_discussion_writer"
        instruction: |
          Create ONLY if implementation status is READY_FOR_TESTING.
      test_plan:
        path: "{dir}/testing/testplan.md"
        output_type: "testplan_writer"
        instruction: |
          Create ONLY if implementation status is READY_FOR_TESTING.
          Derive strategy from acceptance criteria.
      test_checklist:
        path: "{dir}/testing/checklist.md"
        output_type: "testchecklist_writer"
        instruction: |
          Create ONLY if implementation status is READY_FOR_TESTING.
          Generate test checklist covering acceptance criteria and edge cases.
      test_summary_init:
        path: "{dir}/discussions/testing.discussion.sum.md"
        output_type: "discussion_summary_init"
        instruction: |
          If missing, create companion summary with standard markers.

  test_discussion:
    outputs:
      checklist_update:
        path: "{dir}/testing/checklist.md"
        output_type: "testchecklist_maintainer"
        instruction: |
          Parse [RESULT] PASS/FAIL blocks from test discussion.
          Update checklist accordingly with evidence links.
          On test failure, create appropriate bug report.
      summary_companion:
        path: "{dir}/discussions/testing.discussion.sum.md"
        output_type: "discussion_summary_writer"
        instruction: |
          Update marker-bounded sections from the discussion content.
          Surface FAILs in OPEN_QUESTIONS or AWAITING with owners.
      bug_report:
        path: "{dir}/bugs/BUG_{date}_auto/report.md"
        output_type: "bug_report_writer"
        instruction: |
          Create bug report ONLY when test failure has clear reproduction steps.
          Initialize bug discussion and fix plan in the same folder.
      review_discussion:
        path: "{dir}/discussions/review.discussion.md"
        output_type: "review_discussion_writer"
        instruction: |
          Create ONLY if all test checklist items pass.
          Set testing discussion status to READY_FOR_REVIEW.
      review_findings:
        path: "{dir}/review/findings.md"
        output_type: "review_findings_writer"
        instruction: |
          Create summary of verified functionality, risks, and noteworthy changes.
      review_summary_init:
        path: "{dir}/discussions/review.discussion.sum.md"
        output_type: "discussion_summary_init"
        instruction: |
          If missing, create companion summary with standard markers.

  review_discussion:
    outputs:
      summary_companion:
        path: "{dir}/discussions/review.discussion.sum.md"
        output_type: "discussion_summary_writer"
        instruction: |
          Update marker-bounded sections from the discussion content.
          Decisions should include READY_FOR_RELEASE with date and follow-ups.
      followup_feature:
        path: "../../FR_{date}_followup/request.md"
        output_type: "feature_request_writer"
        instruction: |
          Create follow-up feature request ONLY when review identifies an enhancement.
      followup_bug:
        path: "{dir}/bugs/BUG_{date}_review/report.md"
        output_type: "bug_report_writer"
        instruction: |
          Create bug report ONLY when review identifies a defect.
          Seed discussion and fix plan.

  # Generic writer invoked when a *.discussion.sum.md file itself is staged/edited
  discussion_summary:
    outputs:
      normalize:
        path: "{dir}/{basename}.sum.md"
        output_type: "discussion_summary_normalizer"
        instruction: |
          Ensure standard header exists and marker blocks are present.
          Do not rewrite content outside markers.
```
The shipped defaults focus on the feature → implementation flow; downstream stages (design, testing, review) reuse the same pattern and can be enabled by extending `.ai-rules.yml` inside the generated project.
Rule Resolution Precedence
- Nearest Directory: Check source file directory and parents upward
- Feature Scope: Docs/features/.ai-rules.yml for feature artifacts
- Global Fallback: Root .ai-rules.yml for code files
- Conflict Resolution: Nearest rule wins, with logging of override decisions
Orchestration Architecture
Principles
- Single-commit boundary: automation only stages changes within the current commit; it never creates new commits or loops.
- Deterministic prompts: identical inputs produce identical patches (prompt hashing + stable sorting of inputs).
- Nearest-rule wins: rule resolution favors the closest `.ai-rules.yml`.
- Fail fast, explain: on any failure, keep the index untouched and write actionable diagnostics to `.git/ai-rules-debug/`.
Bash Pre-commit Hook
Core Responsibilities:
- Collect staged files (Added/Modified only)
- Resolve rules via cascading lookup
- Build context prompts from staged content
- Call AI model via CLI for patch generation
- Apply patches with robust error handling
Prompt Envelope (deterministic)
```text
BEGIN ENVELOPE
VERSION: 1
SOURCE_FILE: <rel_path>
RULE: <rule_name>/<output_key>
FEATURE_ID: <feature_id>
STAGE: <stage>
POLICY_SHA256: <sha of process/policies.yml>
CONTEXT_FILES: <sorted list>
PROMPT_SHA256: <sha of everything above + inputs>
--- INPUT:FILE ---
<trimmed content or staged diff>
--- INPUT:POLICY ---
<process/policies.yml relevant subset>
--- INSTRUCTION ---
<rules.outputs[*].instruction>
END ENVELOPE
```
On output, the model must return only a unified diff between
<<<AI_DIFF_START>>> and <<<AI_DIFF_END>>>. The orchestrator records
PROMPT_SHA256 alongside the patch for reproducibility.
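The hashing and sentinel-extraction steps can be sketched as follows (illustrative; the real orchestrator is a Bash hook, so this Python rendering and the helper names are assumptions):

```python
import hashlib
import re

DIFF_RE = re.compile(r"<<<AI_DIFF_START>>>\n(.*?)\n?<<<AI_DIFF_END>>>", re.DOTALL)

def prompt_sha256(envelope: str) -> str:
    """Hash the full envelope text so identical inputs yield identical ids."""
    return hashlib.sha256(envelope.encode("utf-8")).hexdigest()

def extract_diff(model_output: str):
    """Return the unified diff between the sentinels, or None if the model
    returned anything else (the hook then aborts with diagnostics)."""
    match = DIFF_RE.search(model_output)
    return match.group(1) if match else None
```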
Execution Order (per staged file)
1. `resolve_rules(rel_path)` → pick the nearest `.ai-rules.yml`, match `file_associations`, assemble outputs.
2. `build_prompt(ctx)` → gather file content/diff, parsed headers, policy, `{feature_id}`/`{stage}`, and neighboring artifacts.
3. `invoke_model(prompt)` → receive a unified diff envelope (no raw text rewrites).
4. `sanitize_diff()` → enforce patch constraints (no path traversal, within the repo, size limits).
5. `apply_patch()` → try a 3-way apply, then a strict apply; stage only on success.
6. `log_diagnostics()` → write `resolution.log` plus raw/clean/sanitized/final diffs.
Enhanced Template Support:
```bash
# Add/extend in resolve_template() function
resolve_template() {
  local tmpl="$1" rel_path="$2"
  local today dirpath basename name ext feature_id stage
  today="$(date +%F)"
  dirpath="$(dirname "$rel_path")"
  basename="$(basename "$rel_path")"
  name="${basename%.*}"
  ext="${basename##*.}"
  # nearest FR_* ancestor as feature_id
  feature_id="$(echo "$rel_path" | sed -n 's|.*Docs/features/\(FR_[^/]*\).*|\1|p')"
  # infer stage from <stage>.discussion.md when applicable
  stage="$(echo "$basename" | sed -n 's/^\([A-Za-z0-9_-]\+\)\.discussion\.md$/\1/p')"

  echo "$tmpl" \
    | sed -e "s|{date}|$today|g" \
          -e "s|{rel}|$rel_path|g" \
          -e "s|{dir}|$dirpath|g" \
          -e "s|{basename}|$basename|g" \
          -e "s|{name}|$name|g" \
          -e "s|{ext}|$ext|g" \
          -e "s|{feature_id}|$feature_id|g" \
          -e "s|{stage}|$stage|g"
}
```
Patch Application Strategy:
- Preserve Index Lines: Enable 3-way merge capability
- Try 3-way First: git apply --index --3way --recount --whitespace=nowarn
- Fallback to Strict: git apply --index if 3-way fails
- Debug Artifacts: Save raw/clean/sanitized/final patches to .git/ai-rules-debug/
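The fallback strategy can be sketched as follows (a simplified wrapper; real error handling and debug-artifact capture are omitted, and `apply_patch` here is a hypothetical Python rendering of the Bash hook's behavior):

```python
import subprocess

def apply_patch(patch_path, run=subprocess.run):
    """Try a 3-way apply first, then fall back to a strict apply.

    Returns the name of the strategy that succeeded, or None if both failed
    (the caller then saves debug artifacts to .git/ai-rules-debug/).
    """
    attempts = [
        ("3way", ["git", "apply", "--index", "--3way", "--recount",
                  "--whitespace=nowarn", patch_path]),
        ("strict", ["git", "apply", "--index", patch_path]),
    ]
    for name, cmd in attempts:
        if run(cmd).returncode == 0:
            return name
    return None
```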
Additional Safeguards:
- Reject patches that:
- create or edit files outside the repo root
- exceed 200 KB per artifact (configurable)
- modify binary or non-targeted files for the current output
- Normalize line endings; ensure new files include headers when required.
- Abort on conflicting hunks; do not partially apply a file’s patch.
Discussion File Optimization:
- Prefer append-only edits with optional header flips
- For large files: generate full new content and compute diff locally
- Minimize hunk drift through careful patch construction
- Enforce append-only: refuse hunks that modify prior lines except header keys explicitly allowed (`status`, timestamps, `feature_id`, `stage_id`).
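A simplified append-only guard over unified-diff lines (illustrative; it only inspects deletions, treats any removed line outside the allowed header keys as a violation, and omits the timestamp keys for brevity):

```python
def is_append_only(diff_lines, allowed_header_keys=("status", "feature_id", "stage_id")):
    """Return True when the diff only adds lines, apart from changes to
    explicitly allowed header keys."""
    for line in diff_lines:
        # A leading '-' marks a removed line; '---' is the diff file header.
        if line.startswith("-") and not line.startswith("---"):
            removed = line[1:].strip()
            key = removed.split(":", 1)[0].strip()
            if key not in allowed_header_keys:
                return False
    return True
```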
Python Orchestrator (automation/workflow.py)
Phase 1 (Non-blocking Status):
```python
#!/usr/bin/env python3
import json, os, sys, subprocess, re
from pathlib import Path

def main():
    # read_changed_files() and analyze_discussion_status() are defined
    # elsewhere in this module.
    changed_files = read_changed_files()
    status_report = analyze_discussion_status(changed_files)
    if status_report:
        print("AI-Workflow Status Report")
        print(json.dumps(status_report, indent=2))
    sys.exit(0)  # Always non-blocking in v1

if __name__ == "__main__":
    main()
```
Core Functions:
- Vote Parsing: Parse discussion files, track latest votes per participant
- Threshold Evaluation: Compute eligibility and quorum status
- Status Reporting: JSON output of current discussion state
- Decision Hints: Suggest promotion based on policy rules
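Latest-vote-wins parsing can be sketched as follows. The real `vote_line_regex` lives in process/policies.yml, so the pattern here is an assumed stand-in:

```python
import re

# Assumed stand-in for vote_line_regex from process/policies.yml.
VOTE_LINE = re.compile(
    r"^(?P<who>[A-Za-z0-9_@-]+):.*\bVOTE:\s*(?P<vote>READY|CHANGES|REJECT)\s*$"
)

def tally_votes(lines):
    """Latest vote per participant wins; returns {participant: vote}."""
    latest = {}
    for line in lines:
        m = VOTE_LINE.match(line.strip())
        if m:
            latest[m.group("who")] = m.group("vote")
    return latest
```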
Optimization Notes:
- Memoize `load_policy()` and `parse_front_matter()` with LRU caches.
- Reuse a single regex object for `vote_line_regex`.
- Avoid re-reading unchanged files by comparing `git hash-object` results.
CLI (v1):
- workflow.py --status # print stage/vote status for staged files
- workflow.py --summarize # regenerate summary sections to stdout (no write)
- workflow.py --dry-run # run full pipeline but do not stage patches
- Outputs are written to stdout and .git/ai-rules-debug/orchestrator.log.
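The v1 flags map naturally onto `argparse`; a hypothetical skeleton (flag names from the list above, everything else assumed):

```python
import argparse

def build_cli():
    """CLI surface matching the v1 flags described above."""
    parser = argparse.ArgumentParser(prog="workflow.py")
    parser.add_argument("--status", action="store_true",
                        help="print stage/vote status for staged files")
    parser.add_argument("--summarize", action="store_true",
                        help="regenerate summary sections to stdout (no write)")
    parser.add_argument("--dry-run", action="store_true",
                        help="run full pipeline but do not stage patches")
    return parser
```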
Future Enhancements:
- Policy enforcement based on process/policies.yml
- Gitea API integration for issue/PR management
- Advanced agent coordination and task routing
Model Invocation (env-configured)
- `AI_MODEL_CMD` (default: `claude`) and `AI_MODEL_OPTS` are read from the environment.
- `AI_RULES_SEED` allows deterministic sampling where supported.
- If the model returns non-diff output, the hook aborts with diagnostics.
Environment toggles:
- `AI_RULES_MAX_JOBS` caps parallel workers (default 4)
- `AI_RULES_CACHE_DIR` overrides `.git/ai-rules-cache`
- `AI_RULES_DISABLE_CACHE=1` forces re-generation
- `AI_RULES_CI=1` enables dry-run & cache-only in CI
Gitea Integration (Future)
Label System:
- stage/*: stage/discussion, stage/design, stage/implementation, etc.
- blocked/*: blocked/needs-votes, blocked/needs-human
- needs/*: needs/design, needs/review, needs/tests
Automated Actions:
- Open/label PRs for implementation transitions
- Post status summaries to PR threads
- Create tracking issues for feature implementation
- Report status checks to PRs
Happy Path (single changed discussion file)
git add Docs/features/FR_x/.../feature.discussion.md
└─ pre-commit
├─ resolve_rules → feature_discussion + summary_companion
├─ build_prompt (PROMPT_SHA256=…)
├─ invoke_model → <<<AI_DIFF_START>>>…<<<AI_DIFF_END>>>
├─ sanitize_diff + guards
├─ apply_patch (3-way → strict)
└─ write logs under .git/ai-rules-debug/
Moderator Protocol
AI_Moderator Responsibilities
Conversation Tracking:
- Monitor unanswered questions (>24 hours)
- Track missing votes from active participants
- Identify stale threads needing attention
- Flag direct mentions that need responses
Signals & Triggers
- Unanswered Qs: any line ending with `?` or prefixed `Q:` with an `@owner` and no reply within `response_timeout_hours`.
- Missing Votes: participants who posted in the stage but whose last non-empty line does not match `vote_line_regex`.
- Stale Discussion: no new comments within `discussion_stale_days`.
- Promotion Drift: conflicting votes present beyond `promotion_timeout_days`.
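The unanswered-question signal can be approximated as below. Timestamp comparison against `response_timeout_hours` is omitted for brevity, so the caller is assumed to pass the set of authors who replied within the window:

```python
import re

def find_unanswered(lines, answered_authors):
    """Flag Q:/? lines mentioning an @owner who has not replied.

    answered_authors: set of names who replied within response_timeout_hours
    (timestamp bookkeeping is assumed to happen upstream).
    Returns [(line_number, owner), ...].
    """
    hits = []
    for i, line in enumerate(lines, 1):
        if line.rstrip().endswith("?") or line.lstrip().startswith("Q:"):
            for owner in re.findall(r"@([A-Za-z0-9_-]+)", line):
                if owner not in answered_authors:
                    hits.append((i, owner))
    return hits
```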
Progress Reporting:
- Compute current vote tallies and thresholds
- List participants who haven't voted recently
- Summarize promotion status and remaining requirements
- Highlight blocking issues or concerns
Comment Template & Constraints:
- Max 10 lines, each ≤ 120 chars.
- Sections in order: UNANSWERED, VOTES, ACTION ITEMS, STATUS.
- Always end with `VOTE: CHANGES` (so it never promotes by itself).
Task Allocation:
- Suggest explicit owners for pending tasks
- Example: "AI_Architect: please draft the acceptance criteria section"
- Example: "Rob: could you clarify the deployment timeline?"
Escalation Path:
- If blockers persist past `promotion_timeout_days`, ping owners + maintainer.
- If still unresolved after one more nudge interval, create a follow-up entry in the summary’s ACTION_ITEMS with owner + due date.
Moderator Implementation
Rule Definition (in Docs/features/.ai-rules.yml):
discussion_moderator_nudge:
outputs:
self_append:
path: "{dir}/discussions/{stage}.feature.discussion.md"
output_type: "discussion_moderator_writer"
instruction: |
Act as AI_Moderator. Analyze the entire discussion and:
UNANSWERED QUESTIONS:
- List any direct questions unanswered for >24 hours (mention @names)
- Flag questions that need clarification or follow-up
VOTE STATUS:
- Current tally: READY: X, CHANGES: Y, REJECT: Z
- Missing votes from: [list of participants without recent votes]
- Promotion status: [based on header thresholds]
ACTION ITEMS:
- Suggest specific next owners for pending tasks
- Propose concrete next steps with deadlines
Keep comment under 10 lines. End with "VOTE: CHANGES".
Append-only; minimal diff; update nothing else.
UNANSWERED: list @owners for Qs > response_timeout_hours.
VOTES: READY=X, CHANGES=Y, REJECT=Z; Missing=[@a, @b]
ACTION: concrete next steps with @owner and a due date.
STATUS: promotion readiness per process/policies.yml (voting.quorum).
Constraints: ≤10 lines; ≤120 chars/line; append-only; end with:
VOTE: CHANGES
Nudge Frequency: Controlled by nudge_interval_hours in policies
Automation boundary: Moderator comments are appended within the current commit; no auto-commits are created.
Error Handling & Resilience
Safety Invariants
- No auto-commits: automation only stages changes in the current commit.
- Atomic per-file: a patch for a file applies all-or-nothing; no partial hunks.
- Append-first for discussions: prior lines are immutable except allowed header keys.
- Inside repo only: patches cannot create/modify files outside the repository root.
- Deterministic retry: identical inputs → identical patches (same prompt hash).
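Deterministic retry hinges on a stable prompt hash. One way to compute it over the envelope inputs (field order and separator here are illustrative assumptions, not the hook's actual encoding):

```python
import hashlib

def prompt_sha256(rule_name, instruction, source_text):
    """Stable hash over prompt-envelope inputs: same inputs -> same hash."""
    h = hashlib.sha256()
    for part in (rule_name, instruction, source_text):
        h.update(part.encode("utf-8"))
        h.update(b"\x00")  # unambiguous field separator
    return h.hexdigest()
```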
Common Failure Modes
Patch Application Issues:
- Symptom: Hunk drift on large files, merge conflicts
- Mitigation: 3-way apply with index preservation, append-only strategies
- Fallback: Local diff computation from full new content
- Exit code: 2 (apply failure); write `final.diff` and `apply.stderr`
Model Output Problems:
- Symptom: Malformed diff, missing markers, invalid patch format
- Mitigation: Extract between markers, validate with git apply --check
- Fallback: Clear diagnostics with patch validation output
- Exit code: 3 (invalid diff); write `raw.out`, `clean.diff`, `sanitize.log`
Tooling Dependencies:
- Symptom: Missing yq, claude, or other required tools
- Mitigation: Pre-flight checks with clear error messages
- Fallback: Graceful degradation with feature-specific disabling
- Exit code: 4 (missing dependency); write `preflight.log`
Rule Conflicts:
- Symptom: Multiple rules matching same file with conflicting instructions
- Mitigation: Nearest-directory precedence with conflict logging
- Fallback: Global rule application with warning
- Exit code: 5 (rule resolution); write `resolution.log`
Guardrail Violations:
- Symptom: Patch touches forbidden paths, exceeds size, or edits outside markers
- Mitigation: Reject patch, print exact guard name and offending path/line count
- Exit code: 6 (guardrail); write `guards.json`
Retry & Idempotency
- Re-run the same commit contents → identical `PROMPT_SHA256` and identical patch.
- To force a new generation, change only the source file content or the rule instruction.
- `--dry-run` prints the unified diff without staging; useful for CI and reproduction.
Recovery Procedures
Quick Triage Map
| Failure | Where to Look | Fast Fix |
|---|---|---|
| Patch won’t apply | `.git/ai-rules-debug/*/apply.stderr` | Rebase or re-run after pulling; if discussion, ensure append-only |
| Invalid diff envelope | `raw.out`, `clean.diff`, `sanitize.log` | Check that model returned `<<<AI_DIFF_START/END>>>`; shorten file context |
| Rule not found | `resolution.log` | Verify `file_associations` and `{stage}`/`{feature_id}` resolution |
| Guardrail breach | `guards.json` | Reduce patch size, keep edits within markers, or adjust config limit |
| Missing dependency | `preflight.log` | Install tool or disable rule until available |
Manual Override:
# Bypass hook for emergency edits
git commit --no-verify -m "Emergency fix: manually overriding discussion status"
# Manually update the discussion header (the file with type: discussion):
# status: READY_FOR_IMPLEMENTATION
Debug Artifacts:
- All patch variants saved to .git/ai-rules-debug/
- Timestamped files: raw, clean, sanitized, final patches
- Commit-specific directories for correlation
- Rule resolution decisions saved to `.git/ai-rules-debug/resolution.log`, including matched rule, output keys, and template-expanded paths.
Rollback Strategy:
- All generated artifacts are staged separately
- Easy partial staging: git reset HEAD for specific artifacts
- Full reset: git reset HEAD~1 to undo entire commit with generations
Regenerate Safely:
# See what would be generated without staging anything
automation/workflow.py --dry-run
# Apply only after inspection
git add -p
Bypass & Minimal Patch:
# Temporarily bypass the hook for urgent hand-edits
git commit --no-verify -m "Hotfix: manual edit, will reconcile with rules later"
Audit Trail
Patch Sanitization & Guards (summary)
- Validate unified diff headers; reject non-diff content.
- Enforce append-only on discussions; allow header keys: status, feature_id, stage_id, timestamps.
- Enforce marker-bounded edits for *.discussion.sum.md.
- Limit per-artifact patch size (default 200 KB; configurable).
- Reject paths escaping repo root or targeting binaries.
- See Appendix B for the normative, full rule set.
Execution Logging:
- All rule invocations logged with source→output mapping
- Patch application attempts and outcomes recorded
- Vote calculations and promotion decisions documented
Debug Bundle:
.git/ai-rules-debug/
├─ 20251021-143022-12345-feature.discussion.md/
│ ├─ raw.out # Raw model output
│ ├─ clean.diff # Extracted patch
│ ├─ sanitized.diff # After sanitization
│ └─ final.diff # Final applied patch
└─ execution.log # Chronological action log
Operator Checklist (1-minute)
- git status → confirm only intended files are staged.
- Open .git/ai-rules-debug/…/apply.stderr (if failed) or final.diff.
- If discussion file: ensure your change is append-only.
- Re-run automation/workflow.py --dry-run and compare diffs.
- If still blocked, bypass with --no-verify, commit, and open a follow-up to reconcile.
Security & Secrets Management
Security Principles
- No plaintext secrets in Git — ever.
- Scan before stage — block secrets at pre-commit, not in CI.
- Redact on write — debug logs and prompts never store raw secrets.
- Least scope — env vars loaded only for the current process; not persisted.
Secret Protection
Never Commit:
- API keys, authentication tokens
- Personal identifying information
- Internal system credentials
- Private configuration data
Secret Scanning & Blocking (pre-commit):
- Run lightweight detectors before rule execution; fail fast on matches.
- Suggested tools (any one is fine): `git-secrets`, `gitleaks`, or `trufflehog` (regex mode).
- Provide a repo-local config at `process/secrets.allowlist` to suppress false positives.
Redaction Policy:
- If a candidate secret is detected in an input file, the hook aborts.
- If a secret appears only in model output or logs, it is replaced with `***REDACTED***` before writing artifacts.
Inbound/Outbound Data Handling:
- Inbound (source & discussions): if a suspected secret is present, the hook blocks the commit and points to the line numbers.
- Outbound (logs, envelopes, diffs): redact values and include a `[REDACTED:<key>]` tag to aid debugging without leakage.
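A minimal redaction sketch for outbound artifacts, using the denylist from the Configuration Management section below; the `[REDACTED:<key>]` tag format follows the policy above, while the `KEY=value` matching is an assumption about how secrets appear in text:

```python
import re

# Denylisted key-name fragments per the Configuration Management section.
DENYLIST = ("API_KEY", "ACCESS_TOKEN", "SECRET", "PASSWORD", "PRIVATE_KEY")

def redact(text):
    """Mask values of denylisted KEY=value pairs before writing artifacts."""
    pattern = re.compile(
        r"\b(\w*(?:%s)\w*)\s*=\s*\S+" % "|".join(DENYLIST), re.IGNORECASE
    )
    return pattern.sub(lambda m: m.group(1) + "=[REDACTED:" + m.group(1) + "]", text)
```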
Environment Variables:
# Current approach
export CLAUDE_API_KEY="your_key"
# Future .env approach (git-ignored)
# .env file loaded via python-dotenv in Python components
.gitignore (additions):
.env
.env.*
# .env.local, .env.prod, etc.
*.key
*.pem
*.p12
secrets/*.json
secrets/*.yaml
Provide non-sensitive examples as *.sample:
- .env.sample with placeholder keys
- automation/config.sample.yml showing structure without values
Configuration Management:
- Keep sensitive endpoints in automation/config.yml
- Use environment variable substitution in configuration
- Validate no secrets in discussions, rules, or generated artifacts
- Substitution happens in-memory during prompt build; no expanded values are written back to disk.
- Maintain a denylist of key names that must never appear in artifacts:
API_KEY, ACCESS_TOKEN, SECRET, PASSWORD, PRIVATE_KEY.
Access Control
Repository Security:
- Assume all repository contents are potentially exposed
- No sensitive business logic in prompt instructions
- Regular security reviews of rule definitions
- Guardrails: outputs cannot target paths outside repo root; writes to `secrets/` are blocked.
Agent Permissions:
- Limit file system access to repository scope
- Validate output paths stay within repository
- Sanitize all file operations for path traversal
- Prompt Redaction: when building the model prompt, mask env-like values with `***REDACTED***` for any key matching the denylist or high-entropy detector.
- See Appendix B: Diff Application Rules (Normative) for the full list of path/size/marker guardrails enforced during patch application.
Incident Response & Rotation
- If a secret is accidentally committed, immediately:
- Rotate the key at the provider,
- Purge it from Git history (e.g., `git filter-repo`),
- Invalidate caches and re-run the secret scanner.
- Track rotations in a private runbook (outside the repo).
Preflight Checks (hook)
- Verify required tools present: `git`, `python3`, `yq` (optional), chosen secret scanner.
- Run secret scanner against staged changes; on hit → exit 11.
- Validate `.ai-rules.yml` schema; on error → exit 12.
- Confirm patch guards (size/paths); violations → exit 13.
- Diagnostics: write to `.git/ai-rules-debug/preflight.log`.
Performance & Scale Considerations
Optimization Strategies
Deterministic Caching & Batching:
- Prompt cache: reuse model outputs when `PROMPT_SHA256` is identical.
- Batch compatible files: same rule/output pairs with small contexts can be grouped.
- Stable ordering: sort staged files + outputs before batching to keep results repeatable.
- Cache location: `.git/ai-rules-cache/` (keys by `PROMPT_SHA256` + rule/output).
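Cache lookup keyed by `PROMPT_SHA256` plus the rule/output pair can be sketched as below; sharding the cache directory by key prefix is an assumption, not a documented layout:

```python
import hashlib
from pathlib import Path

def cache_path(cache_dir, prompt_sha, rule, output):
    """Cache entry location, keyed by PROMPT_SHA256 + rule/output pair."""
    key = hashlib.sha256((prompt_sha + ":" + rule + ":" + output).encode()).hexdigest()
    # Shard by the first two hex chars to keep directories small (assumption).
    return cache_dir / key[:2] / (key + ".diff")

def cached_or_none(cache_dir, prompt_sha, rule, output):
    """Return the cached diff text, or None on a cache miss."""
    p = cache_path(cache_dir, prompt_sha, rule, output)
    return p.read_text() if p.is_file() else None
```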
Prompt Efficiency:
- Pass staged diffs instead of full file contents when possible
- Use concise, structured instructions with clear formatting
- Limit context to relevant sections for large files
- Preload policy once per run; inject only relevant subsections into the prompt
- Memoize parsed front-matter (YAML) and ASTs across files in the same run
- Trim discussion context to the last N lines (configurable) + stable summary
Discussion Management:
- Append-only edits with periodic summarization
- Compact status reporting in moderator comments
- Archive completed discussions if they become too large
- Sliding-window summarization: regenerate `{stage}.discussion.sum.md` when diff > threshold lines
- Limit TIMELINE to the last 15 entries (configurable)
Batch Operations:
- Process multiple related files in single model calls when beneficial
- Cache rule resolutions for multiple files in same directory
- Parallelize independent output generations
- Cap parallelism with `AI_RULES_MAX_JOBS` (default 4) to avoid CPU thrash.
- Deduplicate prompts for identical contexts across multiple outputs.
Scaling Limits
File Size Considerations:
- Small (<100KB): Full content in prompts
- Medium (100KB-1MB): Diff-only with strategic context
- Large (>1MB): Chunked processing or summary-only approaches
- Very large (>5MB): refuse inline context; require pre-summarized artifacts
Context Window Strategy:
- Hard cap prompt body at 200 KB per output (configurable)
- If over cap: (1) include diff; (2) include header + last 200 lines; (3) link to file path
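Step (2) of the fallback ladder, header plus last 200 lines, might look like this; the 10-line header window and the truncation sentinel are assumed values:

```python
def build_context(body, cap_bytes=200 * 1024, tail_lines=200):
    """Full body if under the cap, else header + last N lines (ladder step 2)."""
    if len(body.encode()) <= cap_bytes:
        return body
    lines = body.splitlines()
    header = lines[:10]                       # keep front-matter/header (assumed window)
    tail = lines[-tail_lines:]
    return "\n".join(header + ["[... context truncated ...]"] + tail)
```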
AST/Diagram Work:
- Cache ASTs in `.git/ai-rules-cache/ast/` keyed by `<rel_path>:<blob_sha>`
- Rate-limit diagram updates to once per file per commit (guard duplicate runs)
Repository Size:
- Current approach suitable for medium-sized repositories
- For very large codebases: scope rules to specific directories
- Consider rule disabling for generated/binary assets
Rate Limiting:
- Model API calls: implement throttling and retry logic
- Gitea API: respect rate limits with exponential backoff
- File operations: batch where possible to reduce I/O
Performance Telemetry (optional)
- Write `.git/ai-rules-debug/perf.json` with per-output timings: `{ resolve_ms, prompt_ms, model_ms, sanitize_ms, apply_ms, bytes_in, bytes_out }`
- Summarize totals at the end of the run for quick regression spotting.
Testing Strategy
Goals
- Prove determinism (same inputs → same patch).
- Prove guardrails (append-only, marker-bounded, path/size limits).
- Prove promotion math (votes, quorum, human gates).
- Keep runs fast and hermetic (temp repo, mock clock, seeded RNG).
Testing Tiers
Unit Tests (Python):
- Vote parsing and eligibility calculation
- Policy evaluation and quorum determination
- Rules resolution and conflict handling
- Template variable substitution
Integration Tests (Bash + Python):
- End-to-end rule → prompt → patch → apply cycle
- Discussion status transitions and promotion logic
- Error handling and recovery procedures
- Multi-file rule processing
Artifact Validation:
- PlantUML syntax checking: plantuml -checkonly
- Markdown structure validation
- Template completeness checks
- YAML syntax validation
Golden & Snapshot Tests:
- Prompt Envelope Golden: compare against `tests/gold/envelopes/<case>.txt`
- Diff Output Golden: compare unified diffs in `tests/gold/diffs/<case>.diff`
- Summary Snapshot: write `{stage}.discussion.sum.md` and compare against `tests/snapshots/<case>/<stage>.discussion.sum.md` (markers only)
Property-Based Tests:
- Use `hypothesis` to fuzz discussion comments; invariants:
  - last non-empty line drives the vote
  - `vote_line_regex` never matches malformed lines
  - marker-bounded writer never edits outside markers
Mutation Tests (optional):
- Run `mutmut` on `automation/workflow.py` vote math and ensure tests fail when logic is mutated.
Test Architecture
tests/
├─ unit/
│ ├─ test_votes.py
│ ├─ test_policies.py
│ ├─ test_rules_resolution.py
│ └─ test_template_variables.py
├─ integration/
│ ├─ run.sh # Main test runner
│ ├─ lib.sh # Test utilities
│ ├─ fixtures/
│ │ └─ repo_skeleton/ # Minimal test repository
│ │ ├─ .ai-rules.yml
│ │ ├─ Docs/features/.ai-rules.yml
│ │ └─ Docs/features/FR_test/
│ │ ├─ request.md
│ │ └─ discussions/
│ │ └─ data/
│ │ ├─ bigfile.md # >1MB to trigger chunking
│ │ ├─ bad.diff # malformed diff for sanitizer tests
│ │ ├─ secrets.txt # simulated secrets for scanner tests
│ │ └─ envelopes/ # golden prompt envelopes
│ ├─ gold/
│ │ ├─ envelopes/
│ │ └─ diffs/
│ └─ test_cases/
│ ├─ test_feature_promotion.sh
│ ├─ test_design_generation.sh
│ ├─ test_bug_creation.sh
│ ├─ test_append_only_guard.sh
│ ├─ test_summary_snapshot.sh
│ ├─ test_secret_scanner_block.sh
│ ├─ test_ci_cache_only_mode.sh
│ ├─ test_moderator_nudge.sh
│ └─ test_rule_precedence.sh
├─ bin/
│ └─ claude # Fake deterministic model
├─ snapshots/
│ └─ FR_test_case/
│ ├─ feature.discussion.sum.md
│ └─ design.discussion.sum.md
└─ README.md
Hermetic Test Utilities
- Mock clock: set SOURCE_DATE_EPOCH to freeze {date} expansions.
- Temp repo: each test case creates a fresh TMP_REPO with isolated .git.
- Seeded RNG: set AI_RULES_SEED for deterministic model variants.
- Filesystem isolation: tests write only under TMPDIR and .git/ai-rules-*.
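The mock-clock convention can be honored by a small helper that every `{date}` expansion routes through; a sketch (the helper name is an assumption):

```python
import os
import time
from datetime import datetime, timezone

def today_str():
    """Honor SOURCE_DATE_EPOCH so {date} expansion is stable in tests."""
    epoch = os.environ.get("SOURCE_DATE_EPOCH")
    ts = int(epoch) if epoch else time.time()
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d")
```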
Fake Model Implementation
Purpose: Deterministic testing without external API dependencies
Implementation (tests/bin/claude):
#!/bin/bash
# Fake Claude CLI for testing
# Reads prompt envelope from stdin, outputs a unified diff or injected error.
# Controls:
# AI_FAKE_ERR=diff|apply|malformed (force error modes)
# AI_FAKE_SEED=<int> (deterministic variant)
# AI_FAKE_MODE=discussion|design (which template to emit)
set -euo pipefail
prompt="$(cat)"
if [[ "${AI_FAKE_ERR:-}" == "malformed" ]]; then
echo "this is not a diff"
exit 0
fi
target_file=$(echo "$prompt" | awk '/^SOURCE_FILE:/ {print $2}')
if echo "$prompt" | grep -q "RULE: .*feature_discussion/self_append"; then
cat << 'EOF'
<<<AI_DIFF_START>>>
diff --git a/Docs/features/FR_test/discussions/feature.discussion.md b/Docs/features/FR_test/discussions/feature.discussion.md
index 1234567..890abcd 100644
--- a/Docs/features/FR_test/discussions/feature.discussion.md
+++ b/Docs/features/FR_test/discussions/feature.discussion.md
@@ -15,2 +15,5 @@
 ## Summary
 Test feature for validation
+
+## Participation
+AI_Test: This is a test comment. VOTE: READY
<<<AI_DIFF_END>>>
EOF
elif echo "$prompt" | grep -q "RULE: .*discussion_summary_writer"; then
cat << 'EOF'
<<<AI_DIFF_START>>>
diff --git a/Docs/features/FR_test/discussions/feature.discussion.sum.md b/Docs/features/FR_test/discussions/feature.discussion.sum.md
index 1111111..2222222 100644
--- a/Docs/features/FR_test/discussions/feature.discussion.sum.md
+++ b/Docs/features/FR_test/discussions/feature.discussion.sum.md
@@ -5,4 +5,5 @@
 <!-- SUMMARY:VOTES START -->
 ## Votes (latest per participant)
-READY: 0 • CHANGES: 0 • REJECT: 0
+READY: 1 • CHANGES: 0 • REJECT: 0
+- Rob
 <!-- SUMMARY:VOTES END -->
<<<AI_DIFF_END>>>
EOF
else
# Default patch for other file types
echo "<<<AI_DIFF_START>>>"
echo "diff --git a/README.md b/README.md"
echo "index 0000000..0000001 100644"
echo "--- a/README.md"
echo "+++ b/README.md"
echo "@@ -0,0 +1,1 @@"
echo "+Generated by fake model"
echo "<<<AI_DIFF_END>>>"
fi
Integration Test Runner
Key Test Scenarios
- Feature Promotion: request.md → feature.discussion.md → READY_FOR_DESIGN
- Design Generation: design.discussion.md → design.md updates
- Bug Creation: test failure → auto bug report generation
- Error Recovery: Malformed patch → graceful failure with diagnostics
- Rule Conflicts: Multiple rule matches → nearest-directory resolution
- Append-Only Guard: attempt to edit earlier lines in discussion → reject
- Summary Snapshot: only markers mutate; outside text preserved
- Secret Scanner: staged secret blocks commit with exit 11
- CI Cache-only: with AI_RULES_CI=1 and cache miss → exit 20
- Moderator Nudge: comment ≤10 lines, ends with `VOTE: CHANGES`
- Rule Precedence: local overrides feature, feature overrides global
Test Execution
# Run full test suite
cd tests/integration
./run.sh
# Run specific test case
./test_cases/test_feature_promotion.sh
Makefile (optional)
.PHONY: test unit integ lint
test: unit integ
unit:
	pytest -q tests/unit
integ:
	cd tests/integration && ./run.sh
lint:
	ruff check automation src || true
Continuous Validation
Pre-commit Checks:
- PlantUML syntax validation for generated diagrams
- Markdown link validation
- YAML syntax checking for rule files
- Template variable validation
Performance Benchmarks:
- Rule resolution time for typical commit
- Patch generation and application duration
- Memory usage during large file processing
- CI Mode (`AI_RULES_CI=1`):
  - Default to `--dry-run` and cache-only model lookups.
  - On cache miss, print the missing `PROMPT_SHA256`, skip invocation, and exit 20.
  - Use to keep CI fast and reproducible.
Coverage Targets:
- ≥90% line coverage on `automation/workflow.py` vote/quorum logic
- ≥80% branch coverage on rule resolution and guards
Success Criteria:
- All golden prompts/diffs stable across runs (no drift)
- Guardrail tests fail if append-only/marker or path checks are removed
Source Intelligence Automation (Auto-Review + Auto-Diagram)
Purpose
To keep technical documentation and diagrams in sync with evolving source code. On every staged change to src/**/*.js|ts|py, the automation layer:
- Analyzes the diff and AST to produce a concise review summary
- Extracts structure and updates a PlantUML diagram in Docs/diagrams/file_diagrams/
A) Folder Layout
src/
├─ automation/
│ ├─ __init__.py
│ ├─ analyzer.py # parses diffs, extracts structure & metrics
│ ├─ reviewer.py # writes review summaries (md)
│ ├─ diagrammer.py # emits PUML diagrams
│ └─ utils/
│ ├─ git_tools.py # staged diff, blob lookup
│ ├─ code_parser.py # AST helpers (JS/TS/Python)
│ └─ plantuml_gen.py # renders PlantUML text
B) Operational Flow (Triggered by Hook)
┌────────────────────────────────────────────────────────┐
│ pre-commit hook (bash) │
│ └──> detect src/**/*.js|ts|py changes │
│ ├─> call automation/analyzer.py --file <path> │
│ │ ├─ parse diff + AST │
│ │ ├─ collect functions, classes, calls │
│ │ └─ emit JSON summary │
│ ├─> reviewer.py → Docs/discussions/reviews/ │
│ └─> diagrammer.py → Docs/diagrams/file_diagrams/│
└────────────────────────────────────────────────────────┘
Each stage emits a unified diff so the same patch-application rules (3-way apply, append-only) still apply.
C) Sample Rule (Root .ai-rules.yml)
js-file:
description: "Generate PlantUML + review for JS/TS files"
outputs:
diagram:
path: "Docs/diagrams/file_diagrams/{name}.puml"
output_type: "puml-file"
instruction: |
Parse code structure and update a PlantUML diagram:
- Modules, classes, functions
- Control-flow edges between major functions
review:
path: "Docs/discussions/reviews/{date}_{name}.md"
output_type: "md-file"
instruction: |
Summarize this commit’s code changes:
- What changed and why
- Possible risks / performance / security notes
- Suggested tests or TODOs
Similar rules exist for py-file, ts-file, etc.
D) Core Algorithms (pseudocode)
# 1 analyzer.py
def analyze_source(path):
diff = git_diff(path)
tree = parse_ast(path)
funcs, classes = extract_symbols(tree)
flows = extract_calls(tree)
metrics = compute_metrics(tree)
return {
"file": path,
"functions": funcs,
"classes": classes,
"flows": flows,
"metrics": metrics,
"diff_summary": summarize_diff(diff),
}
# 2 diagrammer.py
def generate_puml(analysis):
nodes = [*analysis["classes"], *analysis["functions"]]
edges = analysis["flows"]
puml = "@startuml\n"
for n in nodes:
puml += f"class {n}\n"
for a, b in edges:
puml += f"{a} --> {b}\n"
puml += "@enduml\n"
return puml
# 3 reviewer.py
def generate_review(analysis):
return f"""# Auto Review — {analysis['file']}
## Summary
{analysis['diff_summary']}
## Key Functions
{', '.join(analysis['functions'][:10])}
## Potential Risks
- TODO: evaluate complexity or security implications
## Suggested Tests
- Unit tests for new/modified functions
"""
E) Outputs
- .puml → Docs/diagrams/file_diagrams/{basename}.puml (keeps architecture maps current)
- .md → Docs/discussions/reviews/{date}_{basename}.md (rolling code review history)

Each output follows 3-way apply / append-only rules; every commit leaves a diff trail in .git/ai-rules-debug/.
F) Integration with Orchestrator
# automation/workflow.py (aggregation example)
if src_changed():
from automation import analyzer, reviewer, diagrammer
for f in changed_src_files:
data = analyzer.analyze_source(f)
diagrammer.update_puml(data)
reviewer.update_review(data)
Future versions can post summaries to the feature’s implementation discussion and link diagrams into design/design.md.
G) Testing the Source Automation Layer
- Unit: tests/unit/test_code_parser.py, tests/unit/test_puml_gen.py
- Integration: tests/integration/test_cases/test_auto_review.sh, test_auto_diagram.sh
- Fixtures: tests/integration/fixtures/repo_skeleton/src/ with fake commits to verify generation
H) Security & Performance Notes
- Sandbox analysis only — no execution of user code
- AST parsing limited to static structure
- Large files (>5k lines): partial summarization
- Output capped to ≤ 200 KB per artifact
I) Deliverables Added to Milestones
- M0 → create src/automation/ skeleton
- M1 → functional auto-review + auto-diagram for JS/TS files
- M2 → extend to Python + PlantUML cross-linking in design docs
Discussion Summaries (Companion Artifacts per Stage)
What it is
For every {stage}.discussion.md, maintain a sibling {stage}.discussion.sum.md. It is append-minimal with bounded section rewrites only (between stable markers). Contents: decisions, vote tallies, open questions, awaiting replies, action items, compact timeline.
Where it lives
Docs/features/FR_.../
└─ discussions/
├─ feature.discussion.md
├─ feature.discussion.sum.md
├─ design.discussion.md
├─ design.discussion.sum.md
├─ implementation.discussion.md
├─ implementation.discussion.sum.md
├─ testing.discussion.md
├─ testing.discussion.sum.md
├─ review.discussion.md
└─ review.discussion.sum.md
Header (machine-readable)
---
type: discussion-summary
stage: feature # feature|design|implementation|testing|review
status: ACTIVE # ACTIVE|SNAPSHOT|ARCHIVED
source_discussion: feature.discussion.md
feature_id: FR_YYYY-MM-DD_<slug>
updated: YYYY-MM-DDTHH:MM:SSZ
policy:
allow_agent_votes: true
require_human_for: [implementation, review]
---
Stable section markers (for tiny diffs)
# Summary — <Stage Title>
<!-- SUMMARY:DECISIONS START -->
## Decisions (ADR-style)
- (none yet)
<!-- SUMMARY:DECISIONS END -->
<!-- SUMMARY:OPEN_QUESTIONS START -->
## Open Questions
- (none yet)
<!-- SUMMARY:OPEN_QUESTIONS END -->
<!-- SUMMARY:AWAITING START -->
## Awaiting Replies
- (none yet)
<!-- SUMMARY:AWAITING END -->
<!-- SUMMARY:ACTION_ITEMS START -->
## Action Items
- (none yet)
<!-- SUMMARY:ACTION_ITEMS END -->
<!-- SUMMARY:VOTES START -->
## Votes (latest per participant)
READY: 0 • CHANGES: 0 • REJECT: 0
- (no votes yet)
<!-- SUMMARY:VOTES END -->
<!-- SUMMARY:TIMELINE START -->
## Timeline (most recent first)
- <YYYY-MM-DD HH:MM> <name>: <one-liner>
<!-- SUMMARY:TIMELINE END -->
<!-- SUMMARY:LINKS START -->
## Links
- Related PRs: –
- Commits: –
- Design/Plan: ../design/design.md
<!-- SUMMARY:LINKS END -->
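Marker-bounded rewriting, the invariant the summary writer must obey, can be sketched as a single regex replace over the section between a START/END pair:

```python
import re

def replace_section(doc, section, new_body):
    """Rewrite only the text between SUMMARY:<section> START/END markers."""
    start = "<!-- SUMMARY:" + section + " START -->"
    end = "<!-- SUMMARY:" + section + " END -->"
    pattern = re.compile(re.escape(start) + r".*?" + re.escape(end), re.S)
    # Lambda avoids backslash-escape processing in the replacement text.
    return pattern.sub(lambda _: start + "\n" + new_body + "\n" + end, doc, count=1)
```

Everything outside the marker pair is untouched, which is what keeps the resulting diffs tiny and the append-only guards satisfied.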
How it updates
Trigger: whenever {stage}.discussion.md is staged, the hook also updates/creates {stage}.discussion.sum.md.
Deterministic logic:
- Votes: parse latest vote per participant (eligibility per policy)
- Decisions: if header status flips (e.g., READY_FOR_IMPLEMENTATION), append an ADR entry
- Open Questions: lines ending with ? or flagged Q: with @owner if present
- Awaiting Replies: mentions with no response from that participant within response_timeout_hours
- Action Items: unchecked tasks (`- [ ]`) with @owner remain tracked until checked
- Timeline: last N (default 15) comment one-liners with timestamp and name
- Links: auto-add PRs (#123), SHAs, and cross-file links
Rotation / snapshots (optional): when the discussion grows large or on a schedule, write discussions/summaries/.md (status: SNAPSHOT) and keep {stage}.discussion.sum.md trimmed while retaining Decisions/Open Questions/Actions/Votes.
Rules (additions in Docs/features/.ai-rules.yml)
file_associations:
"feature.discussion.md": "feature_discussion"
"design.discussion.md": "design_discussion"
"implementation.discussion.md": "impl_discussion"
"testing.discussion.md": "test_discussion"
"review.discussion.md": "review_discussion"
rules:
feature_discussion:
outputs:
summary_companion:
path: "{dir}/discussions/feature.discussion.sum.md"
output_type: "discussion_summary_writer"
instruction: |
Create or update the summary file. Replace ONLY content between
these markers: DECISIONS, OPEN_QUESTIONS, AWAITING, ACTION_ITEMS,
VOTES, TIMELINE, LINKS. Do not touch other lines.
Inputs: the entire feature.discussion.md.
design_discussion:
outputs:
summary_companion:
path: "{dir}/discussions/design.discussion.sum.md"
output_type: "discussion_summary_writer"
instruction: |
Same summary policy as feature.sum.md; also add link to ../design/design.md.
impl_discussion:
outputs:
summary_companion:
path: "{dir}/discussions/implementation.discussion.sum.md"
output_type: "discussion_summary_writer"
instruction: |
Same summary policy; include unchecked items from ../implementation/tasks.md.
test_discussion:
outputs:
summary_companion:
path: "{dir}/discussions/testing.discussion.sum.md"
output_type: "discussion_summary_writer"
instruction: |
Same summary policy; include failing test artifacts and ensure FAILs surface in Open Questions or Awaiting.
review_discussion:
outputs:
summary_companion:
path: "{dir}/discussions/review.discussion.sum.md"
output_type: "discussion_summary_writer"
instruction: |
Same summary policy; Decisions should note READY_FOR_RELEASE with date and follow-ups.
Orchestrator support (nice-to-have)
Provide workflow.py --summarize to output regenerated sections for tests/CI. Track awaiting replies via timestamps per author; if absent, mark as awaiting.
Testing additions
- Unit: parsing of votes, questions, mentions, action items
- Integration: commit a discussion with constructs → verify summary sections updated and only marker-bounded hunks changed
- Failure: malformed discussion / huge file → generator still writes sections; timeline truncates; no crash
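For instance, the vote-parsing unit could target a helper like this (the regex mirrors the etiquette policy's vote line; attributing a `VOTE:` line to the most recent comment author is our assumption):

```python
import re

VOTE_RE = re.compile(r"^VOTE:\s*(READY|CHANGES|REJECT)\s*$")
AUTHOR_RE = re.compile(r"^(\w[\w-]*):\s")

def tally_votes(discussion):
    """Return the latest vote per participant, e.g. {'Rob': 'READY'}."""
    latest, author = {}, None
    for line in discussion.splitlines():
        vote = VOTE_RE.match(line)
        if vote:
            if author:
                latest[author] = vote.group(1)  # latest vote per participant wins
            continue
        named = AUTHOR_RE.match(line)
        if named and named.group(1) != "VOTE":
            author = named.group(1)
    return latest
```

The summary's VOTES section can then be rendered directly from the returned mapping.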
Why this helps
Newcomers can open {stage}.sum.md and immediately see the state. Humans keep talking in the discussion; the system curates the signal in the summary. Promotions are transparent via Decisions. Open loops are visible and assigned.
Implementation Plan
Milestone M0: Process Foundation
Deliverables:
- setup_project.py (Initialize Cascading Development repo)
- process/design.md (this document)
- process/policies.md + process/policies.yml
- process/templates/ (all four core templates)
- automation/agents.yml (role mappings)
- src/automation/ skeleton (analyzer.py, reviewer.py, diagrammer.py, utils/*)
Success Criteria:
- All process documentation in place
- Policy definitions machine-readable
- Templates provide clear starting points
Milestone M1: Orchestrator MVP + Hook Enhancements
Deliverables:
- automation/workflow.py (non-blocking status reporter)
- Bash hook: {dir} template variable support
- Bash hook: Index preservation for 3-way apply
- Bash hook: Append-only optimization for discussions
- Auto-review + auto-diagram operational for JS/TS via root rules (js-file)
Success Criteria:
- Python orchestrator reports discussion status
- Template variables work for feature folder paths
- 3-way apply handles merge conflicts gracefully
Milestone M2: Stage Automation & Moderator
Deliverables:
- Enhanced Docs/features/.ai-rules.yml with stage rules
- AI_Moderator implementation via discussion rules
- Python orchestrator: policy-based decision hints
- Test suite for feature promotion flow
- Discussion summaries: rules (discussion_summary_writer) + tests
Success Criteria:
- Feature requests auto-create discussions
- Discussions promote through stages based on votes
- Moderator provides useful conversation guidance
Milestone M3: Gitea Integration
Deliverables:
- automation/adapters/gitea_adapter.py
- Automated PR creation and labeling
- Status reporting to PR threads
- Issue tracking integration
Success Criteria:
- Implementation stage auto-creates PRs
- Review status visible in PR discussions
- Labels reflect current stage and blockers
Milestone M4: Bash to Python Migration
Deliverables:
- Core rule resolution logic in Python
- Patch generation and application in Python
- Bash hook as thin wrapper calling Python
- Enhanced error handling and diagnostics
Success Criteria:
- Maintains current functionality with better maintainability
- Improved error messages and recovery options
- Consistent behavior across all operations
Risks & Mitigations
Technical Risks
Over-Automation Bypassing Humans:
- Risk: Critical decisions made without human oversight
- Mitigation: Human READY gates for Implementation and Release stages
- Control: Manual override capability for all automated promotions

Patch Instability on Large Files:
- Risk: Hunk drift and merge conflicts in long discussions
- Mitigation: 3-way apply with index preservation, append-only strategies
- Fallback: Local diff computation from full content regeneration

Tooling Dependency Management:
- Risk: Version conflicts or missing dependencies break the system
- Mitigation: Pre-flight validation with clear error messages
- Recovery: Graceful degradation with feature flags

Context Limit Exceeded:
- Risk: AI models cannot process very large discussions
- Mitigation: Structured summarization, chunked processing
- Alternative: Focus on recent changes with references to history
13.2 Process Risks

Vote Manipulation or Gaming:
- Risk: Participants exploit voting system for unwanted outcomes
- Mitigation: Clear etiquette policies, human override capability
- Oversight: Moderator monitoring for voting patterns

Discussion Fragmentation:
- Risk: Conversations become scattered across multiple files
- Mitigation: Clear stage boundaries, cross-references between discussions
- Tooling: Search and navigation aids for related artifacts

Agent Coordination Conflicts:
- Risk: Multiple agents making conflicting changes
- Mitigation: Clear role definitions, sequential processing
- Resolution: Human maintainer as final arbiter

13.3 Adoption Risks

Learning Curve:
- Risk: New contributors struggle with system complexity
- Mitigation: Comprehensive documentation, template guidance
- Support: AI_Moderator provides onboarding assistance

Process Overhead:
- Risk: System creates too much ceremony for small changes
- Mitigation: Configurable rule enabling/disabling
- Flexibility: Bypass options for trivial changes
Initial Setup & Bootstrapping
To streamline project onboarding and ensure every repository begins with a structured, traceable starting point, this system includes:
- a one-time setup script (`setup_project.py`) that initializes the folder structure and installs the hook,
- a `create_feature.py` tool for creating feature requests (with or without Ramble),
- and a concise `USER_GUIDE.md` in the user project for daily guidance.

Steps Performed:
- Create the canonical folder structure under `Docs/` and seed the initial FR folder.
- Install the pre-commit hook and default configuration files.
- Copy `create_feature.py` and (optionally) `ramble.py` into the user project root.
- Optionally run Ramble to help collect the first feature; otherwise prompt via CLI.
- Generate the first Feature Request folder and the initial discussion + summary.
Example Implementation
```python
#!/usr/bin/env python3
"""
setup_project.py — Initialize AI–Human Collaboration repo
"""
import datetime
import os
import subprocess


def run_ramble():
    """Launch the Ramble dialog to collect the initial feature request."""
    # ramble.py is optional; skip the interactive step when it is absent.
    if not os.path.exists("ramble.py"):
        print("ramble.py not found — fill in request.md manually.")
        return
    subprocess.run(
        ["python3", "ramble.py", "--prompt", "Describe your initial feature request"],
        check=False,
    )


def main():
    today = datetime.date.today().isoformat()
    feature_dir = f"Docs/features/FR_{today}_initial-feature-request"
    os.makedirs(f"{feature_dir}/discussions", exist_ok=True)
    print(f"Creating {feature_dir}/ ...")

    # Generate the initial request file from the template (never overwrite).
    request_md = os.path.join(feature_dir, "request.md")
    if not os.path.exists(request_md):
        with open(request_md, "w") as f:
            f.write("# Feature Request: Initial Project Setup\n\n"
                    "**Intent:** Describe project goals and first milestone.\n"
                    "**Motivation / Problem:** Why this system is needed.\n"
                    "**Constraints / Non-Goals:** ...\n"
                    "**Open Questions:** ...\n")

    # Run the Ramble dialog to fill in details interactively.
    print("Launching Ramble interactive prompt...")
    run_ramble()
    print("Setup complete — run create_feature.py to add more features.")


if __name__ == "__main__":
    main()
```
Rationale
This setup process ensures that every repository starts with a consistent structure and a human-authored origin document, created in a conversational way. It also guarantees that the automation and templates are initialized before any feature work begins.
14 Template Evolution
14.1 Versioning Strategy
Template Location as Version:
- Current templates always in process/templates/
- Breaking changes require new feature request and migration plan
- Existing features use templates current at their creation
User Guide
- The authoritative `USER_GUIDE.md` lives in CascadingDev’s source (assets/templates/USER_GUIDE.md) and is copied into the user project root at install time. Update the source and rebuild the bundle to propagate changes.
Migration Guidance:
- Document template changes in release notes
- Provide automated migration scripts for simple changes
- Flag features using deprecated templates
14.2 Core Templates
Feature Request Template (process/templates/feature_request.md):

```markdown
# Feature Request: <title>

**Feature ID**: <FR_YYYY-MM-DD_slug>
**Intent**: <one paragraph describing purpose>
**Motivation / Problem**: <why this is needed now>
**Constraints / Non-Goals**: <bulleted list of limitations>
**Rough Proposal**: <short implementation outline>
**Open Questions**: <bulleted list of uncertainties>
**Meta**: Created: <date> • Author: <name>
```
Discussion Template (process/templates/discussion.md):

```markdown
---
type: discussion
stage: <feature|design|implementation|testing|review>
status: OPEN
feature_id: <FR_...>
created: <YYYY-MM-DD>
promotion_rule:
  allow_agent_votes: true
  ready_min_eligible_votes: all
  reject_min_eligible_votes: all
participation:
  instructions: |
    - Append your input at the end as: "YourName: your comment…"
    - Every comment must end with a vote line: "VOTE: READY|CHANGES|REJECT"
    - Agents/bots must prefix names with "AI_"
voting:
  values: [READY, CHANGES, REJECT]
---

## Summary
2-4 sentence summary of current state

## Participation
comments appended below
```
Design Document Template (process/templates/design_doc.md):

```markdown
# Design — <FR id / Title>

## Context & Goals
## Non-Goals & Constraints
## Options Considered
## Decision & Rationale
## Architecture Diagram(s)
## Risks & Mitigations
## Acceptance Criteria (measurable)
```
Implementation Plan Template (process/templates/implementation_plan.md):

```markdown
# Implementation Plan — <FR id / Title>

## Scope

## Milestones
- [ ] M0: Initial setup and scaffolding
- [ ] M1: Core functionality
- [ ] M2: Testing and refinement

## Tasks
- [ ] specific actionable task

## Test Strategy

## Done When
- <clear, verifiable completion criteria>
```
Discussion Summary Template (process/templates/summary.md):

```markdown
# Summary — <Stage Title>
(Automatically maintained. Edit the discussion; this file will follow.)

<!-- SUMMARY:DECISIONS START -->
## Decisions (ADR-style)
- (none yet)
<!-- SUMMARY:DECISIONS END -->

<!-- SUMMARY:OPEN_QUESTIONS START -->
## Open Questions
- (none yet)
<!-- SUMMARY:OPEN_QUESTIONS END -->

<!-- SUMMARY:AWAITING START -->
## Awaiting Replies
- (none yet)
<!-- SUMMARY:AWAITING END -->

<!-- SUMMARY:ACTION_ITEMS START -->
## Action Items
- (none yet)
<!-- SUMMARY:ACTION_ITEMS END -->

<!-- SUMMARY:VOTES START -->
## Votes (latest per participant)
READY: 0 • CHANGES: 0 • REJECT: 0
- (no votes yet)
<!-- SUMMARY:VOTES END -->

<!-- SUMMARY:TIMELINE START -->
## Timeline (most recent first)
- <timestamp> <name>: <one-liner>
<!-- SUMMARY:TIMELINE END -->

<!-- SUMMARY:LINKS START -->
## Links
- Related PRs: –
- Commits: –
- Design/Plan: ../design/design.md
<!-- SUMMARY:LINKS END -->
```
Roles & Agent Personas
Human Roles
- Maintainer:
- Final approval authority for critical stages
- System configuration and rule definition
- Conflict resolution and manual overrides
- Contributor:
- Feature authorship and implementation
- Participation in discussions and voting
- Review and testing responsibilities
AI Agent Roles
Defined in automation/agents.yml:

```yaml
agents:
  AI_Researcher:
    role: research_specialist
    stages: [request, discussion, design]
    capabilities: [web_search, documentation_review, best_practices]
    voting_weight: 0.5
  AI_Architect:
    role: software_architect
    stages: [design, implementation]
    capabilities: [system_design, tradeoff_analysis, diagram_generation]
    voting_weight: 0.8
  AI_Implementer:
    role: senior_developer
    stages: [implementation, review]
    capabilities: [code_generation, refactoring, testing_strategy]
    voting_weight: 0.7
  AI_Reviewer:
    role: quality_engineer
    stages: [review, test]
    capabilities: [code_review, risk_assessment, security_analysis]
    voting_weight: 0.9
  AI_Moderator:
    role: discussion_moderator
    stages: [discussion, design, review]
    capabilities: [progress_tracking, question_routing, vote_monitoring]
    voting_weight: 0.3
```
Role-Specific Responsibilities
- AI_Researcher:
- Find prior art, RFCs, and reference implementations
- Research technical constraints and dependencies
- Identify potential risks and mitigation strategies
- AI_Architect:
- Translate requirements into technical design plan
- Create and maintain architecture diagrams
- Evaluate trade-offs and make design recommendations
- AI_Implementer:
- Propose code structure and implementation approaches
- Generate code snippets and refactoring suggestions
- Develop testing strategies and fixture plans
- AI_Reviewer:
- Conduct adversarial code and design review
- Identify edge cases and failure modes
- Assess security and performance implications
- AI_Moderator:
- Track discussion progress and participation
- Identify unanswered questions and missing votes
- Suggest next steps and task ownership
Glossary
- FR: Feature Request — the initial document proposing new functionality
- Gate: Promotion decision point between stages based on policy thresholds
- Append-Only: Edit strategy that only adds new content to end of file, with optional header updates
- 3-way Apply: Patch application technique using blob IDs to reconcile content drift
- Cascading Rules: Rule resolution system where nearest directory's rules override parent directories
- Stage-Per-Discussion: Organizational pattern with separate conversation files for each development phase
- Human Gate: Promotion requirement that cannot be satisfied by AI votes alone
- Bug Sub-Cycle: Mini feature lifecycle automatically created for test failures
- Template Variables: Placeholders ({basename}, {name}, {ext}, {date}, {rel}, {dir}, {feature_id}, {stage}) resolved in rule paths
- Vote Threshold: Minimum number or type of votes required for promotion
- Status Machine: Defined state transitions for discussion files (OPEN → READY_FOR_* → etc.)
- Orchestrator: Central coordination component managing rule execution and status tracking
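The template-variable placeholders above can be resolved with a plain substitution map; a minimal sketch (the helper name is ours, and treating `{rel}` as the repo-relative path of the committed file is our assumption):

```python
import datetime
import os

def resolve_template(template, filepath, feature_id="", stage=""):
    """Expand rule-path placeholders for a given committed file."""
    base = os.path.basename(filepath)
    name, ext = os.path.splitext(base)
    return template.format(
        basename=base,                          # e.g. feature.discussion.md
        name=name,                              # e.g. feature.discussion
        ext=ext.lstrip("."),                    # e.g. md
        date=datetime.date.today().isoformat(),
        rel=filepath,                           # repo-relative path (assumed)
        dir=os.path.dirname(filepath),
        feature_id=feature_id,
        stage=stage,
    )
```

For example, the `{dir}/discussions/{stage}.discussion.sum.md` paths in the summary rules resolve relative to the feature folder of the committed discussion file.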
Appendices
Appendix A: Complete Policy Configuration
```yaml
# process/policies.yml
version: 1
voting:
  values: [READY, CHANGES, REJECT]
  allow_agent_votes: true
  quorum:
    discussion: { ready: all, reject: all }
    design: { ready: all, reject: all }
    implementation: { ready: 1_human, reject: all }
    testing: { ready: all, reject: all }
    review: { ready: 1_human, reject: all }
  eligibility:
    agents_allowed: true
    require_human_for: [implementation, review]
etiquette:
  name_prefix_agents: "AI_"
  vote_line_regex: "^VOTE:\\s*(READY|CHANGES|REJECT)\\s*$"
  response_timeout_hours: 24
timeouts:
  discussion_stale_days: 3
  nudge_interval_hours: 24
  promotion_timeout_days: 14
moderation:
  max_lines: 10
  max_line_length: 120
security:
  scanners:
    enabled: true
    tool: gitleaks  # or git-secrets, trufflehog
    allowlist_file: process/secrets.allowlist
  redaction:
    apply_to:
      - logs
      - prompts
      - patches
    denylist_keys:
      - API_KEY
      - ACCESS_TOKEN
      - SECRET
      - PASSWORD
      - PRIVATE_KEY
  guards:
    block_paths:
      - secrets/
    max_patch_kb: 200
    forbid_binary_edits: true
performance:
  max_jobs: 4
  prompt_kb_cap: 200
  discussion_timeline_limit: 15
  cache:
    enabled: true
    dir: .git/ai-rules-cache
  batching:
    enabled: true
    max_batch: 4
  ast_cache:
    enabled: true
    dir: .git/ai-rules-cache/ast
```
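A sketch of how the orchestrator might evaluate these quorum values — interpreting `all` as "every eligible voter's latest vote is READY" and `1_human` as "at least one non-`AI_` participant voted READY"; the latter reading is our assumption:

```python
def quorum_met(rule, ready_voters, eligible_voters):
    """Evaluate a READY-direction quorum value from policies.yml."""
    if rule == "all":
        # Every eligible participant must have a latest vote of READY.
        return bool(eligible_voters) and set(eligible_voters) <= set(ready_voters)
    if rule == "1_human":
        # At least one human (non-AI_ prefixed) participant voted READY.
        return any(not v.startswith("AI_") for v in ready_voters)
    raise ValueError(f"unknown quorum rule: {rule!r}")
```

The REJECT direction would be evaluated the same way over the set of REJECT voters.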
Appendix B: Diff Application Rules (Normative)
Patch Sanitization Rules:
- Preserve index lines for 3-way merge capability
- Remove only fragile metadata (similarity, rename info)
- Keep file mode lines (new file mode, deleted file mode)
- Ensure proper header formatting for new files

Application Order:
- Attempt 3-way apply with recount and whitespace ignore
- Fall back to strict apply if 3-way fails
- For new files: use -p0 patch level with header rewriting
- Validate patch with --check before application

Error Handling:
- Save all patch variants for debugging
- Provide clear diagnostics on failure
- Suggest manual resolution steps when automated recovery fails
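The application order could be driven by a small fallback chain; a sketch under stated assumptions (the `run` parameter is injected for testability, and the strategy list is simplified — real `-p0` handling also rewrites headers as noted above):

```python
import subprocess

def apply_patch(patch_path, run=subprocess.run):
    """Try the normative apply strategies in order; return the name of the one that succeeded."""
    strategies = [
        ("3way", ["git", "apply", "--3way", "--recount", "--ignore-whitespace", patch_path]),
        ("strict", ["git", "apply", patch_path]),
        ("p0", ["git", "apply", "-p0", patch_path]),  # new-file patches
    ]
    for name, cmd in strategies:
        # Exit status 0 means git applied the patch cleanly.
        if run(cmd, capture_output=True).returncode == 0:
            return name
    raise RuntimeError(f"all apply strategies failed for {patch_path}")
```

On failure, the surrounding hook would save the patch variants and print diagnostics before suggesting manual resolution.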
Appendix C: Complete Agent Definitions
```yaml
# automation/agents.yml
version: 1
agent_defaults:
  voting_weight: 0.5
  require_human_approval: false
agents:
  AI_Researcher:
    role: research_specialist
    description: "Finds prior art, documentation, and best practices"
    stages: [request, discussion, design]
    capabilities:
      - web_search
      - documentation_review
      - best_practices_research
      - risk_identification
    voting_weight: 0.5
  AI_Architect:
    role: software_architect
    description: "Designs system architecture and evaluates trade-offs"
    stages: [design, implementation]
    capabilities:
      - system_design
      - tradeoff_analysis
      - diagram_generation
      - technology_selection
    voting_weight: 0.8
  AI_Implementer:
    role: senior_developer
    description: "Implements features and creates testing strategies"
    stages: [implementation, review]
    capabilities:
      - code_generation
      - refactoring
      - testing_strategy
      - performance_optimization
    voting_weight: 0.7
  AI_Reviewer:
    role: quality_engineer
    description: "Conducts security, risk, and quality analysis"
    stages: [review, test]
    capabilities:
      - code_review
      - risk_assessment
      - security_analysis
      - quality_validation
    voting_weight: 0.9
  AI_Moderator:
    role: discussion_moderator
    description: "Tracks progress and ensures participation"
    stages: [discussion, design, review]
    capabilities:
      - progress_tracking
      - question_routing
      - vote_monitoring
      - task_allocation
    voting_weight: 0.3
```
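A load-time validation pass over the parsed agents.yml (e.g. the result of `yaml.safe_load`) could check the fields the orchestrator relies on; the required-field list mirrors the schema above, while the helper itself is illustrative:

```python
REQUIRED_FIELDS = ("role", "stages", "capabilities", "voting_weight")

def validate_agents(config):
    """Return a list of problems found in a parsed agents.yml; empty means valid."""
    problems = []
    defaults = config.get("agent_defaults", {})
    for name, spec in config.get("agents", {}).items():
        if not name.startswith("AI_"):
            problems.append(f"{name}: agent names must carry the AI_ prefix")
        # agent_defaults fill in any field an agent entry omits.
        merged = {**defaults, **spec}
        for field in REQUIRED_FIELDS:
            if field not in merged:
                problems.append(f"{name}: missing {field}")
        weight = merged.get("voting_weight")
        if weight is not None and not 0.0 <= weight <= 1.0:
            problems.append(f"{name}: voting_weight out of [0, 1]")
    return problems
```

Running this at hook start-up keeps misconfigured agent definitions from silently skewing vote tallies.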
Final Implementation Note
This v2.1 design document incorporates all insights from our collaborative discussion, providing a comprehensive framework for AI-human development collaboration. The system balances automation with appropriate human oversight, maintains the lightweight Git-native philosophy, and provides clear escalation paths from feature conception through release.
The stage-per-discussion model with automated artifact generation creates a self-documenting development process that scales from small features to large, complex implementations. The integrated bug sub-cycle ensures that quality issues are handled systematically without disrupting the main development flow.
Implementation Priority: Begin with Milestone M0 (process foundation) and proceed sequentially through the implementation plan, validating each milestone before proceeding to the next.
Document Version: 2.1 • Last Updated: 2025-10-22 • Status: READY_FOR_IMPLEMENTATION
Build Reference: This document (v2.1) applies to CascadingDev installer version matching VERSION in the repository root.