# CascadingDev Automation System

## Overview
The CascadingDev automation system processes discussion files during git commits to maintain structured summaries. It operates in two phases:
**Phase 1 (Basic):** Always runs, no dependencies
- Parses `VOTE:` lines from discussions
- Tracks the latest vote per participant
- Updates `.sum.md` summary files automatically
**Phase 2 (AI-Enhanced):** Optional, requires an AI provider (Claude, Codex, or Gemini)
- Extracts questions, action items, and decisions using AI
- Tracks @mentions and awaiting replies
- Processes only incremental changes (git diff)
- Maintains timeline and structured summaries
## Architecture

```
automation/
├── workflow.py   # Main orchestrator, called by pre-commit hook
├── agents.py     # Claude-powered extraction agents
├── summary.py    # Summary file formatter and updater
└── __init__.py
```
## Phase 1: Vote Tracking (Always Enabled)

### How It Works

1. Pre-commit hook triggers `automation/workflow.py --status`
2. Finds all staged `.discussion.md` files
3. Parses the entire file for `VOTE:` lines
4. Maintains the latest vote per participant
5. Updates the corresponding `.sum.md` file's VOTES section
6. Auto-stages updated summaries
### Vote Format

```
- ParticipantName: Any comment text. VOTE: READY|CHANGES|REJECT
```

Rules:
- Only the latest vote per participant counts
- Vote tokens are case-insensitive (`vote:`, `VOTE:`, `Vote:`)
- Three valid values: READY, CHANGES, REJECT
- Must follow the participant bullet format: `- Name: ...`
### Example

Discussion file:

```
- Alice: I like this approach. VOTE: READY
- Bob: We need tests first. VOTE: CHANGES
- Alice: Good point, updated the plan. VOTE: CHANGES
```

Resulting summary:

```
<!-- SUMMARY:VOTES START -->
## Votes (latest per participant)
READY: 0 • CHANGES: 2 • REJECT: 0
- Alice: CHANGES
- Bob: CHANGES
<!-- SUMMARY:VOTES END -->
```
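For reference, a minimal sketch of the latest-vote-wins parsing rules in Python (the regex and function names are illustrative, not the actual `workflow.py` API):

```python
import re

# Latest vote per participant wins; later lines overwrite earlier ones.
VOTE_RE = re.compile(
    r"^-\s*(?P<name>[^:]+):.*\bvote:\s*(?P<value>READY|CHANGES|REJECT)\b",
    re.IGNORECASE,
)

def latest_votes(discussion: str) -> dict[str, str]:
    votes: dict[str, str] = {}
    for line in discussion.splitlines():
        match = VOTE_RE.match(line)
        if match:
            votes[match.group("name").strip()] = match.group("value").upper()
    return votes

example = """\
- Alice: I like this approach. VOTE: READY
- Bob: We need tests first. VOTE: CHANGES
- Alice: Good point, updated the plan. VOTE: CHANGES
"""
print(latest_votes(example))  # {'Alice': 'CHANGES', 'Bob': 'CHANGES'}
```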
## Phase 2: AI-Enhanced Processing (Optional)

### Requirements

Phase 2 supports multiple AI providers with automatic fallback chains and intelligent model selection. Configuration is managed through `config/ai.yml` (copied into every generated project).

### Configuration: `config/ai.yml` (Primary Method)
Edit the configuration file to set your preferred AI providers and fallback chain:

```yaml
# config/ai.yml
runner:
  # Default command chain (balanced speed/quality)
  command_chain:
    - "claude -p"
    - "codex --model gpt-5"
    - "gemini --model gemini-2.5-flash"

  # Fast command chain (simple tasks: vote counting, gate checks)
  # Used when model_hint: fast in .ai-rules.yml
  command_chain_fast:
    - "claude -p"                        # Auto-selects Haiku via subagent
    - "codex --model gpt-5-mini"
    - "gemini --model gemini-2.5-flash"

  # Quality command chain (complex tasks: design, implementation planning)
  # Used when model_hint: quality in .ai-rules.yml
  command_chain_quality:
    - "claude -p"                        # Auto-selects Sonnet via subagent
    - "codex --model o3"
    - "gemini --model gemini-2.5-pro"

  sentinel: "CASCADINGDEV_NO_CHANGES"
```
How it works:

1. The automation runner (`patcher.py`) iterates through the `command_chain` from top to bottom.
2. It attempts to generate a patch or summary with the first provider.
3. If the provider fails (e.g., API error, invalid output), it automatically retries with the next provider in the chain.
4. This continues until a provider succeeds or the chain is exhausted.

This resilience ensures that the automation can proceed even if one AI provider is temporarily unavailable or performs poorly on a given task.

Model selection:

- `.ai-rules.yml` can specify `model_hint: fast` or `model_hint: quality` per rule
- Fast models (Haiku, GPT-5-mini) handle simple tasks (vote counting, gate checks)
- Quality models (Sonnet, O3) handle complex tasks (design discussions, planning)
- This optimization reduces costs by ~70% while maintaining quality
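A hedged sketch of the fallback loop described above (the real `patcher.py` also validates output and honors the `CASCADINGDEV_NO_CHANGES` sentinel; the helper name and passing the prompt on stdin are assumptions):

```python
import shlex
import subprocess

def run_with_fallback(command_chain: list[str], prompt: str) -> str:
    last_error = None
    for command in command_chain:
        try:
            result = subprocess.run(
                shlex.split(command),
                input=prompt,          # assumes the provider accepts the prompt on stdin
                capture_output=True,
                text=True,
                timeout=120,
            )
            if result.returncode == 0 and result.stdout.strip():
                return result.stdout   # first provider to succeed wins
            last_error = RuntimeError(result.stderr.strip() or "empty output")
        except (OSError, subprocess.TimeoutExpired) as exc:
            last_error = exc           # CLI missing or hung; try the next provider
    raise RuntimeError(f"all providers in the chain failed: {last_error}")
```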
### Environment Overrides (Optional)

Temporarily override `config/ai.yml` for a single commit:

```bash
# Override command for this commit only
CDEV_AI_COMMAND="claude -p" git commit -m "message"

# Chain multiple providers with the || delimiter
CDEV_AI_COMMAND="claude -p || codex --model gpt-5" git commit -m "message"
```

Environment variables take precedence but don't modify the config file.
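A small sketch of that precedence rule, splitting `CDEV_AI_COMMAND` on the `||` delimiter (the helper name is illustrative):

```python
import os

def resolve_command_chain(config_chain: list[str]) -> list[str]:
    # The env override wins for this process only; config/ai.yml is untouched.
    override = os.environ.get("CDEV_AI_COMMAND")
    if override:
        return [part.strip() for part in override.split("||") if part.strip()]
    return config_chain
```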
Common non-interactive setups:

| Provider | CLI Tool | Command Example | Authentication | Notes |
|---|---|---|---|---|
| Claude | `claude` | `claude -p` | Run `claude` and follow prompts to sign in | Supports custom subagents in `~/.claude/agents/`. Create with `./tools/setup_claude_agents.sh`. Uses Haiku (fast) or Sonnet (quality). |
| OpenAI | `codex` | `codex --model gpt-5` | Run `codex` and sign in with ChatGPT account | Codex CLI is OpenAI's terminal coding agent. Default model: GPT-5. Use `gpt-5-mini` for faster, cheaper responses. |
| Google | `gemini` | `gemini --model gemini-2.5-flash` | Run `gemini` and sign in with Google account | Free tier: 60 req/min. Use `gemini-2.5-flash` (fast) or `gemini-2.5-pro` (1M context, quality). Open source (Apache 2.0). |
**Recommended Setup:** Use the provided setup script to create Claude subagents:

```bash
# One-time setup (creates ~/.claude/agents/cdev-patch.md and cdev-patch-quality.md)
./tools/setup_claude_agents.sh
```
### Automated Discussion Status Promotion
A key feature of the automation system is its ability to automatically update the status of a discussion file based on participant votes. This moves a feature, design, or other discussion through its lifecycle without manual intervention.
How it works:
1. The `workflow.py` script is triggered on commit.
2. It scans staged discussion files for YAML front matter containing a `promotion_rule`.
3. It tallies `VOTE: READY` and `VOTE: REJECT` votes from eligible participants.
4. If the vote counts meet the configured thresholds, the script updates the `status:` field in the file's front matter (e.g., from `OPEN` to `READY_FOR_DESIGN`).
5. The updated discussion file is automatically staged.
Configuration (in discussion file front matter):

The entire process is controlled by a `promotion_rule` block within the YAML front matter of a discussion file (e.g., `feature.discussion.md`).

```markdown
---
type: feature-discussion
stage: feature
status: OPEN
promotion_rule:
  allow_agent_votes: false
  ready_min_eligible_votes: 2
  reject_min_eligible_votes: 1
  ready_status: "READY_FOR_DESIGN"
  reject_status: "FEATURE_REJECTED"
---
- Alice: Looks good. VOTE: READY
- Bob: I agree. VOTE: READY
```
Parameters:

- `allow_agent_votes` (boolean, optional): If `true`, votes from participants named `AI_*` are counted. Defaults to `false`.
- `ready_min_eligible_votes` (integer | `"all"`, optional): The number of `READY` votes required to promote the status. Can be an integer or the string `"all"` (requiring all eligible voters to vote `READY`). Defaults to `2`.
- `reject_min_eligible_votes` (integer | `"all"`, optional): The number of `REJECT` votes required to reject the discussion. Defaults to `1`.
- `ready_status` (string, optional): The target status to set when the ready threshold is met. Defaults to stage-specific values (e.g., `READY_FOR_DESIGN` for the `feature` stage).
- `reject_status` (string, optional): The target status to set when the reject threshold is met. Defaults to stage-specific values (e.g., `FEATURE_REJECTED`).
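Putting the parameters together, a sketch of how the thresholds could be evaluated (illustrative only; the real tallying and front-matter rewrite live in `workflow.py`, and the reject-before-ready ordering here is an assumption):

```python
def evaluate_promotion(votes: dict[str, str], rule: dict) -> str | None:
    # Filter out AI_* participants unless agent votes are allowed.
    eligible = {
        name: value
        for name, value in votes.items()
        if rule.get("allow_agent_votes", False) or not name.startswith("AI_")
    }
    if not eligible:
        return None

    ready = sum(1 for v in eligible.values() if v == "READY")
    rejected = sum(1 for v in eligible.values() if v == "REJECT")

    def threshold(key: str, default: int) -> int:
        raw = rule.get(key, default)
        return len(eligible) if raw == "all" else int(raw)

    if rejected >= threshold("reject_min_eligible_votes", 1):
        return rule.get("reject_status", "FEATURE_REJECTED")
    if ready >= threshold("ready_min_eligible_votes", 2):
        return rule.get("ready_status", "READY_FOR_DESIGN")
    return None  # thresholds not met; status stays unchanged
```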
This automation turns the discussion and voting process into a state machine, enabling a self-driving project workflow.
The setup script (`./tools/setup_claude_agents.sh`) creates two subagent files:

- `cdev-patch.md` - Uses the Haiku model (fast, cost-efficient); auto-selected when TASK COMPLEXITY: FAST
- `cdev-patch-quality.md` - Uses the Sonnet model (higher quality); auto-selected when TASK COMPLEXITY: QUALITY
The `claude -p` command automatically selects the appropriate subagent based on the TASK COMPLEXITY hint in the prompt:

- Simple tasks (vote counting, gate checks) → `command_chain_fast` → Haiku
- Complex tasks (design, implementation planning) → `command_chain_quality` → Sonnet
- Default tasks → `command_chain` → auto-select based on complexity
### Option 2: Direct API (Alternative)

```bash
pip install anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
```
If no AI provider is configured, or none responds, Phase 2 features are silently skipped and only Phase 1 (votes) runs.
## Features

### 1. @Mention Tracking

Extracts `@Name` and `@all` mentions from discussions to track who's waiting for replies.

Format:

```
- Alice: @Bob what do you think about OAuth2?
- Carol: @all please review by Friday
```

Summary section:
```
<!-- SUMMARY:AWAITING START -->
## Awaiting Replies
### @Bob
- @Alice: What do you think about OAuth2?
### @all
- @Carol: Please review by Friday
<!-- SUMMARY:AWAITING END -->
```
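Mention extraction is described later as a simple regex pass that works without AI; a sketch of what that could look like (the exact pattern in the real code may differ):

```python
import re

MENTION_RE = re.compile(r"@([A-Za-z_][\w-]*)")  # matches @Bob, @all, @AI_visual

def extract_mentions(line: str) -> list[str]:
    return MENTION_RE.findall(line)

print(extract_mentions("- Alice: @Bob what do you think about OAuth2?"))  # ['Bob']
print(extract_mentions("- Carol: @all please review by Friday"))          # ['all']
```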
### 2. Question Tracking

Identifies questions and tracks their answers.

Markers (optional but recommended):

```
- Alice: Q: Should we use OAuth2 or JWT?
- Bob: A: I'd recommend OAuth2 for third-party auth.
```

Also detects:
- Lines ending with `?`
- `Question:` prefix
- `Re:` replies (indicating partial answers)

Summary section:

```
<!-- SUMMARY:OPEN_QUESTIONS START -->
## Open Questions
- @Alice: Should we cache API responses?
### Partially Answered:
- @Bob: What about rate limiting?
  - Partial answer: We'll use token bucket algorithm
<!-- SUMMARY:OPEN_QUESTIONS END -->
```
### 3. Action Item Management

Tracks tasks from creation → assignment → completion.

Markers (optional but recommended):

```
- Alice: TODO: Research OAuth2 libraries
- Bob: I'll handle the JWT implementation.
- Alice: DONE: Completed library research, recommending authlib.
- Dave: ACTION: Review security implications
```

Summary section:

```
<!-- SUMMARY:ACTION_ITEMS START -->
## Action Items
### TODO (unassigned):
- [ ] Document the authentication flow (suggested by @Carol)
### In Progress:
- [ ] Implement JWT token validation (@Bob)
### Completed:
- [x] Research OAuth2 libraries (@Alice)
<!-- SUMMARY:ACTION_ITEMS END -->
```
### 4. Decision Logging (ADR-Style)

Captures architectural decisions with rationale.

Markers (optional but recommended):

```
- Alice: DECISION: Use OAuth2 + JWT hybrid approach.
  Rationale: OAuth2 for robust third-party auth, JWT for stateless sessions.
```

Also detects:
- "We decided to..."
- "Going with X because..."
- Vote consensus (multiple READY votes)

Summary section:

```
<!-- SUMMARY:DECISIONS START -->
## Decisions (ADR-style)
### Decision 1: Use OAuth2 + JWT hybrid approach
- **Proposed by:** @Alice
- **Supported by:** @Bob, @Carol
- **Rationale:** OAuth2 for robust third-party auth, JWT for stateless sessions
- **Alternatives considered:**
  - Pure JWT authentication
  - Session-based auth with cookies
<!-- SUMMARY:DECISIONS END -->
```
## Conversation Guidelines

### Natural Conversation (Recommended)

Write naturally - AI normalization extracts markers automatically:

```
# Examples of natural conversation that AI understands:
- Alice: I think we should use OAuth2. Does anyone know if we need OAuth 2.1 specifically?
  VOTE: READY
- Bob: Good question Alice. I'm making a decision here - we'll use OAuth 2.0 for now.
  @Carol can you research migration paths to 2.1? VOTE: CHANGES
- Carol: I've completed the OAuth research. We can upgrade later without breaking changes.
  VOTE: READY
```
AI normalization (via `agents.py`) extracts:

- Decisions from natural language ("I'm making a decision here - ...")
- Questions from conversational text ("Does anyone know if...")
- Action items with @mentions ("@Carol can you research...")
- Votes (always tracked: `VOTE: READY|CHANGES|REJECT`)
### Explicit Markers (Fallback)

If AI is unavailable, these explicit line-start markers work as a fallback:

```
# Markers (✅ = works without AI as simple fallback)
QUESTION: <question>          # ✅ Explicit question marker
Q: <question>                 # ✅ Short form
TODO: <action>                # ✅ New unassigned task
ACTION: <action>              # ✅ Task with implied ownership
ASSIGNED: <task> @name        # ✅ Claimed task
DONE: <completion>            # ✅ Mark task complete
DECISION: <choice>            # ✅ Architectural decision
VOTE: READY|CHANGES|REJECT    # ✅ ALWAYS tracked (with or without AI)
@Name                         # ✅ Mention extraction (simple regex)
```
Example with explicit markers:

```
- Alice: QUESTION: Should we support OAuth2?
- Bob: TODO: Research OAuth2 libraries
- Bob: ASSIGNED: OAuth2 library research
- Carol: DECISION: Use OAuth2 for authentication
- Dave: @all Please review. VOTE: READY
```
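A sketch of how such a fallback parser might treat these markers (illustrative; the shipped fallback may differ in details such as which prefixes it strips):

```python
MARKERS = ("QUESTION:", "Q:", "TODO:", "ACTION:", "ASSIGNED:", "DONE:", "DECISION:")

def strip_participant(line: str) -> str:
    body = line.lstrip("- ").strip()
    name, sep, rest = body.partition(":")
    # Drop an "Alice:" prefix, but keep marker words like "TODO:" intact.
    if sep and name.upper() + ":" not in MARKERS:
        return rest.strip()
    return body

def extract_items(new_lines: list[str]) -> list[tuple[str, str]]:
    items = []
    for line in new_lines:
        body = strip_participant(line)
        for marker in MARKERS:
            if body.upper().startswith(marker):
                items.append((marker.rstrip(":"), body[len(marker):].strip()))
                break
    return items

lines = [
    "- Alice: QUESTION: Should we support OAuth2?",
    "- Bob: TODO: Research OAuth2 libraries",
]
print(extract_items(lines))
# [('QUESTION', 'Should we support OAuth2?'), ('TODO', 'Research OAuth2 libraries')]
```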
### Two-Tier Architecture
- AI Normalization (Primary): Handles natural conversation, embedded markers, context understanding
- Simple Fallback: Handles explicit line-start markers when AI unavailable
Benefits:
- ✅ Participants write naturally without strict formatting
- ✅ Resilient (multi-provider fallback: claude → codex → gemini)
- ✅ Works offline/API-down with explicit markers
- ✅ Cost-effective (uses fast models for extraction)
## Implementation Details

### Incremental Processing

The system only processes new content added since the last commit:

1. Uses `git diff HEAD <file>` to get changes
2. Extracts only lines starting with `+` (added lines)
3. Feeds the incremental content to the AI agents
4. Updates summary sections non-destructively
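A minimal sketch of that incremental step (illustrative; the actual helper lives in `automation/workflow.py`):

```python
import subprocess

def added_lines(path: str) -> list[str]:
    # Compare the staged/working file against HEAD, as described above.
    diff = subprocess.run(
        ["git", "diff", "HEAD", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    # Keep only added lines, skipping the "+++ b/<file>" header.
    return [
        line[1:] for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    ]
```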
### Participant Agent Scripts

Before any output writers run, the runner checks the matched rule for a `participants` list. Each entry is a Python script path that will be executed with the staged file:

```yaml
rules:
  feature_discussion_update:
    participants:
      - path: "agents/moderator.py"
      - path: "agents/visualizer.py"
        background: true
    outputs:
      self_append:
        path: "{path}"
        output_type: "feature_discussion_writer"
```
Key points:

- Agent scripts live under `agents/` and reuse the shared SDK in `src/cascadingdev/agent/`.
- `AgentContext` exposes helpers like `read_text()` and `append_block()` to work safely against the repo root.
- `ProviderClient` calls the configured AI chain (Claude → Codex → Gemini) and can return free-form or JSON responses.
- The included moderator agent appends a single guided comment per commit, guarded by `<!-- AUTO:MODERATOR START -->` to remain idempotent.
- The visualizer agent watches for `@AI_visual` mentions, generates PlantUML diagrams, and links them back to the discussion.
- Add `background: true` for tool-style agents (researcher, visualizer) so the runner launches them asynchronously; their follow-up work lands in the working tree after the commit and can be included in the next commit. These service agents provide information only and intentionally omit `VOTE:` lines to avoid influencing promotion thresholds.

Adding new personas is as simple as dropping a script into `agents/` and registering it in `.ai-rules.yml`, as in the sketch below.
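A hypothetical persona script using those SDK pieces (the `complete()` call and `marker=` keyword are assumed signatures based on the description above, not the verbatim API):

```python
#!/usr/bin/env python3
"""Hypothetical risk-reviewer persona for agents/risk_reviewer.py."""
from cascadingdev.agent import AgentContext, ProviderClient

def main() -> None:
    ctx = AgentContext()              # resolves the repo root and staged file
    discussion = ctx.read_text()      # full discussion contents
    reply = ProviderClient().complete(
        f"List open risks in this discussion:\n{discussion}"
    )
    # A marker guard (like the moderator's) keeps the append idempotent.
    ctx.append_block(reply, marker="AUTO:RISK_REVIEWER")

if __name__ == "__main__":
    main()
```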
### Marker Block System

Summary files use HTML comment markers for non-destructive updates. In addition to the content sections, a special state marker is used to persist structured data across runs.

#### Persistent Summary State

To enable robust, incremental updates, the system stores the aggregated state of questions, action items, decisions, and mentions in a JSON blob within a comment at the top of the summary file.

```
<!-- SUMMARY:STATE {"action_items": [...], "decisions": [...], "mentions": [...], "questions": [...]} -->
```
How it works:

1. Before processing new discussion content, the `summary.py` script reads this state blob from the `.sum.md` file.
2. New items extracted from the latest discussion changes are merged with the existing state.
3. The merging logic deduplicates items and updates their status (e.g., a question moving from `OPEN` to `ANSWERED`).
4. The updated state is written back to the marker.
5. The visible summary sections (like `OPEN_QUESTIONS`) are then regenerated from this canonical state.
This approach prevents information loss and ensures that the summary accurately reflects the cumulative history of the discussion, even as the discussion file itself grows.
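A sketch of the state round-trip (the real merge logic in `summary.py` is richer; the regex and defaults here are assumptions):

```python
import json
import re

STATE_RE = re.compile(r"<!-- SUMMARY:STATE (?P<blob>\{.*?\}) -->", re.DOTALL)
EMPTY = {"action_items": [], "decisions": [], "mentions": [], "questions": []}

def load_state(summary_text: str) -> dict:
    match = STATE_RE.search(summary_text)
    return json.loads(match.group("blob")) if match else dict(EMPTY)

def save_state(summary_text: str, state: dict) -> str:
    marker = f"<!-- SUMMARY:STATE {json.dumps(state, sort_keys=True)} -->"
    if STATE_RE.search(summary_text):
        # Lambda replacement avoids re-escaping backslashes in the JSON.
        return STATE_RE.sub(lambda _: marker, summary_text, count=1)
    return marker + "\n" + summary_text
```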
#### Content Sections

```
<!-- SUMMARY:SECTION_NAME START -->
## Section Header
<content>
<!-- SUMMARY:SECTION_NAME END -->
```

Sections:

- `VOTES` - Vote counts and participants
- `DECISIONS` - ADR-style decisions
- `OPEN_QUESTIONS` - Unanswered questions
- `AWAITING` - Unresolved @mentions
- `ACTION_ITEMS` - TODO → ASSIGNED → DONE
- `TIMELINE` - Chronological updates (future)
- `LINKS` - Related PRs/commits (future)
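Updating a section then reduces to replacing the text between its START/END markers, e.g. (illustrative helper):

```python
import re

def replace_section(summary: str, section: str, content: str) -> str:
    start = f"<!-- SUMMARY:{section} START -->"
    end = f"<!-- SUMMARY:{section} END -->"
    pattern = re.compile(re.escape(start) + r".*?" + re.escape(end), re.DOTALL)
    block = f"{start}\n{content.rstrip()}\n{end}"
    # Only the marked block changes; the rest of the file is untouched.
    return pattern.sub(lambda _: block, summary, count=1)
```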
### Implementation Stage Enhancements

- Checkbox items in `implementation.discussion.md` are parsed and normalized into `implementation/tasks.md` and the summary's Tasks block.
- Promotion to testing checks two conditions before updating the YAML front matter:
  - All detected checkboxes are `[x]`.
  - At least one human participant has a `VOTE: READY`.
- These safeguards run even when AI providers fail, ensuring the stage cannot advance on agent votes alone or with unfinished work.
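A sketch of those two checks (illustrative; the checkbox regex and the human/agent split via the `AI_` prefix follow the conventions described above):

```python
import re

CHECKBOX_RE = re.compile(r"^\s*-\s*\[(?P<mark>[ xX])\]", re.MULTILINE)

def can_promote_to_testing(discussion: str, votes: dict[str, str]) -> bool:
    marks = [m.group("mark").lower() for m in CHECKBOX_RE.finditer(discussion)]
    all_done = bool(marks) and all(mark == "x" for mark in marks)
    human_ready = any(
        value == "READY" and not name.startswith("AI_")
        for name, value in votes.items()
    )
    return all_done and human_ready
```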
### Error Handling
The workflow is non-blocking:
- Always exits 0 (never blocks commits)
- Prints warnings to stderr for missing dependencies
- Falls back to Phase 1 (votes only) if API key missing
- Continues on individual agent failures
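The non-blocking contract could be summarized in code as (sketch; `run_phase1`/`run_phase2` are placeholders for the real entry points):

```python
import sys

def run_phase1() -> None:
    """Placeholder for vote tracking (always on)."""

def run_phase2() -> None:
    """Placeholder for AI extraction (best-effort)."""

def main() -> int:
    # Warnings go to stderr; the hook still exits 0 so commits never block.
    try:
        run_phase1()
        run_phase2()
    except Exception as exc:
        print(f"[workflow] warning: {exc}", file=sys.stderr)
    return 0

if __name__ == "__main__":
    sys.exit(main())
```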
## Testing

```bash
# Run all tests
pytest tests/

# Test vote parsing
pytest tests/test_workflow.py -v

# Manual test in a project
cd /path/to/cascadingdev-project
echo "- Test: Comment with vote. VOTE: READY" >> Docs/features/test/discussions/test.discussion.md
git add Docs/features/test/discussions/test.discussion.md
git commit -m "Test workflow"  # Triggers automation
```
## Configuration

### AI Provider Options

#### 1. Claude CLI (Default)

```bash
# No configuration needed if you have the 'claude' command available
# The system defaults to: claude -p '{prompt}'

# To customize:
git config cascadingdev.aicommand "claude -p '{prompt}'"
```

#### 2. Gemini CLI

```bash
git config cascadingdev.aiprovider "gemini-cli"
git config cascadingdev.aicommand "gemini '{prompt}'"
```

#### 3. OpenAI Codex CLI

```bash
git config cascadingdev.aiprovider "codex-cli"
git config cascadingdev.aicommand "codex '{prompt}'"
```

#### 4. Direct API (Anthropic)

```bash
pip install anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
# CLI commands will fall back to the API if available
```

#### 5. Custom AI Command

```bash
# Use any command that reads a prompt and returns JSON
git config cascadingdev.aicommand "my-ai-tool --prompt '{prompt}' --format json"
```
### Disabling AI Features

Simply don't configure any AI provider. The system will:

- Log a warning: `[agents] warning: No AI provider configured, skipping AI processing`
- Continue with Phase 1 (vote tracking only)
- Still extract @mentions (doesn't require AI)
## Future Enhancements
🚧 Phase 3 (Planned):
- Timeline auto-population from git commits
- Link tracking (related PRs, commits)
- Multi-file decision tracking
- Slack/Discord notification integration
- Summary diffs between commits
- Natural language summary generation
## Troubleshooting

### "agents module not available"

Cause: Import path issue when `workflow.py` runs from the pre-commit hook.

Solution: Already fixed in `workflow.py` with a dual import style:

```python
try:
    from automation import agents
except ImportError:
    import agents  # Fallback for different execution contexts
```
"No AI provider configured"
Cause: No AI CLI command or API key configured.
Solution: Choose one:
# Option 1: Use Claude CLI (default)
git config cascadingdev.aicommand "claude -p '{prompt}'"
# Option 2: Use Gemini CLI
git config cascadingdev.aiprovider "gemini-cli"
git config cascadingdev.aicommand "gemini '{prompt}'"
# Option 3: Use Anthropic API
pip install anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
Or accept Phase 1 only (votes only, still useful)
"AI command failed with code X"
Cause: AI CLI command returned error.
Solution:
- Test command manually:
claude -p "test prompt" - Check command is in PATH:
which claude - Verify command syntax in config:
git config cascadingdev.aicommand - Check stderr output in warning message for details
### Summary sections not updating
Cause: Markers might be malformed or missing.
Solution:
- Check summary file has proper markers (see "Marker Block System")
- Regenerate from template if needed
- File should be created automatically by pre-commit hook
### Votes not being detected

Cause: Format doesn't match parser expectations.

Solution: Ensure the format is:

```
- ParticipantName: Comment text. VOTE: READY
```

Common issues:
- Missing `-` bullet
- Missing `:` after the name
- Typo in the vote value (must be READY, CHANGES, or REJECT)
- Multiple participants on one line (not supported)
## See Also

- `DESIGN.md` - Overall system architecture
- `CLAUDE.md` - Guide for AI assistants
- `USER_GUIDE.md` - User-facing documentation