# CascadingDev Automation System

## Overview

The CascadingDev automation system processes discussion files during git commits to maintain structured summaries. It operates in two phases:
**Phase 1 (Basic):** Always runs, no dependencies
- Parses `VOTE:` lines from discussions
- Tracks the latest vote per participant
- Updates `.sum.md` summary files automatically
**Phase 2 (AI-Enhanced):** Optional, requires an AI provider (Claude by default)
- Extracts questions, action items, and decisions using AI
- Tracks @mentions and awaiting replies
- Processes only incremental changes (`git diff`)
- Maintains timeline and structured summaries
## Architecture

```
automation/
├── workflow.py   # Main orchestrator, called by pre-commit hook
├── agents.py     # Claude-powered extraction agents
├── summary.py    # Summary file formatter and updater
└── __init__.py
```
## Phase 1: Vote Tracking (Always Enabled)

### How It Works

1. Pre-commit hook triggers `automation/workflow.py --status`
2. Finds all staged `.discussion.md` files
3. Parses the entire file for `VOTE:` lines
4. Maintains the latest vote per participant
5. Updates the corresponding `.sum.md` file's VOTES section
6. Auto-stages the updated summaries
### Vote Format

```
- ParticipantName: Any comment text. VOTE: READY|CHANGES|REJECT
```

Rules:
- Only the latest vote per participant counts
- Vote tokens are case-insensitive (`vote:`, `VOTE:`, `Vote:`)
- Three valid values: `READY`, `CHANGES`, `REJECT`
- Must follow the participant bullet format: `- Name: ...`
### Example

Discussion file:

```
- Alice: I like this approach. VOTE: READY
- Bob: We need tests first. VOTE: CHANGES
- Alice: Good point, updated the plan. VOTE: CHANGES
```

Resulting summary:

```
<!-- SUMMARY:VOTES START -->
## Votes (latest per participant)
READY: 0 • CHANGES: 2 • REJECT: 0
- Alice: CHANGES
- Bob: CHANGES
<!-- SUMMARY:VOTES END -->
```
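The latest-vote-wins rule can be sketched in a few lines of Python. This is a hypothetical helper for illustration, not the actual `workflow.py` implementation:

```python
import re

# Matches "- Name: any comment text. VOTE: VALUE"; the vote token is case-insensitive.
VOTE_RE = re.compile(
    r"^-\s*(?P<name>[^:]+):.*\bvote:\s*(?P<value>READY|CHANGES|REJECT)\b",
    re.IGNORECASE,
)

def latest_votes(discussion_text: str) -> dict:
    """Return the latest vote per participant (later lines overwrite earlier ones)."""
    votes = {}
    for line in discussion_text.splitlines():
        m = VOTE_RE.match(line.strip())
        if m:
            votes[m.group("name").strip()] = m.group("value").upper()
    return votes

discussion = """\
- Alice: I like this approach. VOTE: READY
- Bob: We need tests first. VOTE: CHANGES
- Alice: Good point, updated the plan. VOTE: CHANGES
"""
print(latest_votes(discussion))  # {'Alice': 'CHANGES', 'Bob': 'CHANGES'}
```

Because the dict is keyed by participant, Alice's second vote simply replaces her first, which is exactly the latest-vote-wins behavior the summary reports.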
## Phase 2: AI-Enhanced Processing (Optional)

### Requirements

Phase 2 supports multiple AI providers via CLI commands or a direct API. The preferred way to set defaults is to edit `config/ai.yml` (copied into every generated project). The `runner.command_chain` list is evaluated left to right until a provider succeeds, and the `ramble` section controls the GUI defaults. Environment variables or CLI flags still override the shared config for ad-hoc runs:
#### Option 1: CLI-based (Recommended)

```
# Uses whatever AI CLI tool you have installed
# Default: claude -p "prompt"

# Configure via git config (persistent)
git config cascadingdev.aiprovider "claude-cli"
git config cascadingdev.aicommand "claude -p '{prompt}'"

# Or via environment variables (per session). These temporarily override
# config/ai.yml for the current shell.
export CDEV_AI_PROVIDER="claude-cli"
export CDEV_AI_COMMAND="claude -p '{prompt}'"
```
Common non-interactive setups:

| Provider | CLI Tool | Command Example | Authentication | Notes |
|---|---|---|---|---|
| Claude | `claude` | `claude --agent cdev-patch -p` | Run `claude` and follow the prompts to sign in | Supports custom subagents in `~/.claude/agents/`. Create with `./tools/setup-claude-agents.sh`. Uses Haiku (fast) or Sonnet (quality). |
| OpenAI | `codex` | `codex --model gpt-5` | Run `codex` and sign in with a ChatGPT account | Codex CLI is OpenAI's terminal coding agent. Default model: GPT-5. Use `gpt-5-mini` for faster, cheaper responses. |
| Google | `gemini` | `gemini --model gemini-2.5-flash` | Run `gemini` and sign in with a Google account | Free tier: 60 req/min. Use `gemini-2.5-flash` (fast) or `gemini-2.5-pro` (1M context, quality). Open source (Apache 2.0). |
**Recommended Setup:** Use the provided setup script to create the Claude subagents:

```
# One-time setup (creates ~/.claude/agents/cdev-patch.md and cdev-patch-quality.md)
./tools/setup-claude-agents.sh
```

This creates two subagent files:
- `cdev-patch.md` - uses the Haiku model (fast, cost-efficient)
- `cdev-patch-quality.md` - uses the Sonnet model (higher quality, deeper analysis)

The default `config/ai.yml` uses the fast version. To temporarily use the quality version for complex changes:

```
# Override for a single commit
CDEV_AI_COMMAND="claude --agent cdev-patch-quality -p" git commit -m "complex refactor"
```
#### Option 2: Direct API (Alternative)

```
pip install anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
```

If no AI provider is configured, or none responds, Phase 2 features are silently skipped and only Phase 1 (votes) runs.
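That fallback behavior (try each configured provider in order, skip Phase 2 if all fail) might look roughly like this. This is a sketch under assumptions; the real runner and the `config/ai.yml` schema may differ:

```python
from typing import Callable, Optional, Sequence

def run_chain(commands: Sequence[str],
              run: Callable[[str], Optional[str]]) -> Optional[str]:
    """Try each provider command left to right; return the first successful output."""
    for cmd in commands:
        result = run(cmd)  # None means this provider failed or is unavailable
        if result is not None:
            return result
    return None  # every provider failed: caller falls back to Phase 1 (votes only)

# Hypothetical chain mirroring runner.command_chain in config/ai.yml:
chain = ["claude -p '{prompt}'", "gemini '{prompt}'", "codex '{prompt}'"]
available = {"gemini '{prompt}'": "ok"}   # pretend only the gemini CLI succeeds
print(run_chain(chain, available.get))    # ok
```

Passing the runner as a callable keeps the chain logic testable without invoking any real CLI.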
## Features

### 1. @Mention Tracking

Extracts `@Name` and `@all` mentions from discussions to track who is waiting for replies.

Format:

```
- Alice: @Bob what do you think about OAuth2?
- Carol: @all please review by Friday
```

Summary section:

```
<!-- SUMMARY:AWAITING START -->
## Awaiting Replies
### @Bob
- @Alice: What do you think about OAuth2?
### @all
- @Carol: Please review by Friday
<!-- SUMMARY:AWAITING END -->
```
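Mention extraction does not need AI; a regex pass over the bullets is enough. A minimal sketch (hypothetical helper, not the production code):

```python
import re

MENTION_RE = re.compile(r"@(\w+)")          # matches @Bob, @all, etc.
BULLET_RE = re.compile(r"^-\s*([^:]+):\s*(.*)")  # "- Author: text"

def extract_mentions(discussion_text: str) -> dict:
    """Map each mentioned name (or 'all') to the (author, text) bullets mentioning it."""
    mentions = {}
    for line in discussion_text.splitlines():
        m = BULLET_RE.match(line.strip())
        if not m:
            continue
        author, text = m.group(1).strip(), m.group(2)
        for name in MENTION_RE.findall(text):
            mentions.setdefault(name, []).append((author, text))
    return mentions

text = ("- Alice: @Bob what do you think about OAuth2?\n"
        "- Carol: @all please review by Friday")
print(sorted(extract_mentions(text)))  # ['Bob', 'all']
```

Resolving an "awaiting reply" (e.g. removing @Bob's entry once Bob posts) is where the AI pass adds value on top of this.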
### 2. Question Tracking

Identifies questions and tracks their answers.

Markers (optional but recommended):

```
- Alice: Q: Should we use OAuth2 or JWT?
- Bob: A: I'd recommend OAuth2 for third-party auth.
```

Also detects:
- Lines ending with `?`
- The `Question:` prefix
- `Re:` replies (indicate partial answers)

Summary section:

```
<!-- SUMMARY:OPEN_QUESTIONS START -->
## Open Questions
- @Alice: Should we cache API responses?
### Partially Answered:
- @Bob: What about rate limiting?
  - Partial answer: We'll use token bucket algorithm
<!-- SUMMARY:OPEN_QUESTIONS END -->
```
### 3. Action Item Management

Tracks tasks from creation → assignment → completion.

Markers (optional but recommended):

```
- Alice: TODO: Research OAuth2 libraries
- Bob: I'll handle the JWT implementation.
- Alice: DONE: Completed library research, recommending authlib.
- Dave: ACTION: Review security implications
```

Summary section:

```
<!-- SUMMARY:ACTION_ITEMS START -->
## Action Items
### TODO (unassigned):
- [ ] Document the authentication flow (suggested by @Carol)
### In Progress:
- [ ] Implement JWT token validation (@Bob)
### Completed:
- [x] Research OAuth2 libraries (@Alice)
<!-- SUMMARY:ACTION_ITEMS END -->
```
### 4. Decision Logging (ADR-Style)

Captures architectural decisions with their rationale.

Markers (optional but recommended):

```
- Alice: DECISION: Use OAuth2 + JWT hybrid approach.
  Rationale: OAuth2 for robust third-party auth, JWT for stateless sessions.
```

Also detects:
- "We decided to..."
- "Going with X because..."
- Vote consensus (multiple READY votes)

Summary section:

```
<!-- SUMMARY:DECISIONS START -->
## Decisions (ADR-style)
### Decision 1: Use OAuth2 + JWT hybrid approach
- **Proposed by:** @Alice
- **Supported by:** @Bob, @Carol
- **Rationale:** OAuth2 for robust third-party auth, JWT for stateless sessions
- **Alternatives considered:**
  - Pure JWT authentication
  - Session-based auth with cookies
<!-- SUMMARY:DECISIONS END -->
```
## Conversation Guidelines (Optional)

Using these markers helps extract information accurately. Many work without AI, using regex alone:

```
# Markers (✅ = works without AI)
Q: <question>              # ✅ Mark questions explicitly (also: "Question:", or ending with ?)
A: <answer>                # Mark answers explicitly (AI tracks these)
Re: <response>             # Partial answers or follow-ups (AI tracks these)
TODO: <action>             # ✅ New unassigned task
ACTION: <action>           # ✅ Task with implied ownership (alias for TODO)
ASSIGNED: <task> @name     # ✅ Claimed task (extracts @mention as assignee)
DONE: <completion>         # ✅ Mark task complete
DECISION: <choice>         # ✅ Architectural decision (AI adds rationale/alternatives)
Rationale: <why>           # Explain the reasoning (AI extracts this)
VOTE: READY|CHANGES|REJECT # ✅ REQUIRED for voting (always tracked)
@Name                      # ✅ Mention someone specifically
@all                       # ✅ Mention everyone
```
Example Workflow:

```
- Alice: Q: Should we support OAuth2?
- Bob: TODO: Research OAuth2 libraries
- Bob: ASSIGNED: OAuth2 library research (@Bob taking ownership)
- Carol: DECISION: Use OAuth2 for authentication. Rationale: Industry standard with good library support.
- Carol: DONE: Completed OAuth2 comparison document
- Dave: @all Please review the comparison by Friday. VOTE: READY
```
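The regex-detectable markers (✅ above) amount to a prefix check plus a trailing-`?` heuristic. A rough sketch of the classification step, not the production parser:

```python
import re

# Markers detectable without AI, per the guideline table above.
MARKER_RE = re.compile(r"^(Q|TODO|ACTION|ASSIGNED|DONE|DECISION):\s*(.*)", re.IGNORECASE)

def classify(comment: str):
    """Return (marker, payload) for a recognized marker, else (None, comment)."""
    m = MARKER_RE.match(comment.strip())
    if m:
        return m.group(1).upper(), m.group(2)
    if comment.rstrip().endswith("?"):   # bare questions also count as Q
        return "Q", comment.strip()
    return None, comment

print(classify("TODO: Research OAuth2 libraries"))  # ('TODO', 'Research OAuth2 libraries')
print(classify("Should we support OAuth2?"))        # ('Q', 'Should we support OAuth2?')
```

Unmarked comments (the `None` case) are exactly what the AI pass is for: assignment by implication ("I'll handle..."), rationale extraction, and answer matching.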
## Implementation Details

### Incremental Processing

The system only processes content added since the last commit:

1. Uses `git diff HEAD <file>` to get the changes
2. Extracts only lines starting with `+` (added lines)
3. Feeds the incremental content to the AI agents
4. Updates summary sections non-destructively
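The first two steps can be sketched as a pure diff filter plus a thin git wrapper (illustrative only; the real `workflow.py` may differ in details such as diff flags):

```python
import subprocess

def added_lines(diff_text: str) -> list:
    """Keep only the added lines of a unified diff, stripping the leading '+'."""
    return [line[1:] for line in diff_text.splitlines()
            if line.startswith("+") and not line.startswith("+++")]

def new_content(path: str) -> list:
    """Lines added to <path> since HEAD -- the content Phase 2 feeds to the AI agents."""
    diff = subprocess.run(["git", "diff", "HEAD", "--", path],
                          capture_output=True, text=True).stdout
    return added_lines(diff)
```

Note the `+++` guard: the diff's file header (`+++ b/path`) also starts with `+` and must not be mistaken for content.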
### Marker Block System

Summary files use HTML comment markers for non-destructive updates:

```
<!-- SUMMARY:SECTION_NAME START -->
## Section Header
<content>
<!-- SUMMARY:SECTION_NAME END -->
```

Sections:
- `VOTES` - Vote counts and participants
- `DECISIONS` - ADR-style decisions
- `OPEN_QUESTIONS` - Unanswered questions
- `AWAITING` - Unresolved @mentions
- `ACTION_ITEMS` - TODO → ASSIGNED → DONE
- `TIMELINE` - Chronological updates (future)
- `LINKS` - Related PRs/commits (future)
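A non-destructive section update boils down to replacing whatever sits between a section's START/END markers while leaving the rest of the file alone. A minimal sketch (hypothetical helper; the real `summary.py` may handle missing markers differently):

```python
import re

def update_section(summary: str, section: str, body: str) -> str:
    """Replace the content between a section's START/END markers; leave the rest untouched."""
    start = f"<!-- SUMMARY:{section} START -->"
    end = f"<!-- SUMMARY:{section} END -->"
    pattern = re.compile(re.escape(start) + r".*?" + re.escape(end), re.DOTALL)
    # A lambda replacement avoids backslashes in `body` being read as group references.
    return pattern.sub(lambda _: f"{start}\n{body}\n{end}", summary)

doc = "intro\n<!-- SUMMARY:VOTES START -->\nold\n<!-- SUMMARY:VOTES END -->\noutro"
print(update_section(doc, "VOTES", "## Votes\nREADY: 1"))
```

The non-greedy `.*?` with `re.DOTALL` keeps the match confined to one section even when several marker blocks appear in the same file.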
### Error Handling

The workflow is non-blocking:
- Always exits 0 (never blocks commits)
- Prints warnings to stderr for missing dependencies
- Falls back to Phase 1 (votes only) if no AI provider is available
- Continues past individual agent failures
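That contract can be sketched as a wrapper that swallows per-agent failures and always reports success to git (illustrative shape, not the actual orchestrator):

```python
import sys

def run_non_blocking(steps) -> int:
    """Run each (name, callable) step; warn on stderr on failure, always return 0."""
    for name, step in steps:
        try:
            step()
        except Exception as exc:
            # Individual agent failures are tolerated: warn and keep going.
            print(f"[workflow] warning: {name} failed: {exc}", file=sys.stderr)
    return 0  # exit code 0 means the commit is never blocked

def broken_agent():
    raise RuntimeError("no API provider available")

print(run_non_blocking([("phase2", broken_agent)]))  # warning on stderr, then prints 0
```

The pre-commit hook would pass this return value to `sys.exit()`, so a failed agent degrades the summary rather than aborting the commit.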
## Testing

```
# Run all tests
pytest tests/

# Test vote parsing
pytest tests/test_workflow.py -v

# Manual test in a project
cd /path/to/cascadingdev-project
echo "- Test: Comment with vote. VOTE: READY" >> Docs/features/test/discussions/test.discussion.md
git add Docs/features/test/discussions/test.discussion.md
git commit -m "Test workflow"  # Triggers the automation
```
## Configuration

### AI Provider Options

**1. Claude CLI (Default)**

```
# No configuration needed if the 'claude' command is available;
# the system defaults to: claude -p '{prompt}'
# To customize:
git config cascadingdev.aicommand "claude -p '{prompt}'"
```

**2. Gemini CLI**

```
git config cascadingdev.aiprovider "gemini-cli"
git config cascadingdev.aicommand "gemini '{prompt}'"
```

**3. OpenAI Codex CLI**

```
git config cascadingdev.aiprovider "codex-cli"
git config cascadingdev.aicommand "codex '{prompt}'"
```

**4. Direct API (Anthropic)**

```
pip install anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
# CLI commands fall back to the API if it is available
```

**5. Custom AI Command**

```
# Use any command that reads a prompt and returns JSON
git config cascadingdev.aicommand "my-ai-tool --prompt '{prompt}' --format json"
```
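Each configured command is a template with a `{prompt}` placeholder. One safe way to expand it (splitting before substitution, so prompts with spaces or quotes never go through a shell) looks like this; a sketch, not necessarily how `agents.py` does it:

```python
import shlex

def build_command(template: str, prompt: str) -> list:
    """Split the configured template, then substitute {prompt} into each argument."""
    return [arg.replace("{prompt}", prompt) for arg in shlex.split(template)]

cmd = build_command("my-ai-tool --prompt '{prompt}' --format json", "extract decisions")
print(cmd)  # ['my-ai-tool', '--prompt', 'extract decisions', '--format', 'json']
# The workflow would then run it, e.g.:
# subprocess.run(cmd, capture_output=True, text=True)
```

Substituting after `shlex.split` means the prompt text stays a single argv entry, so discussion content cannot inject extra shell arguments.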
### Disabling AI Features

Simply don't configure any AI provider. The system will:
- Log a warning: `[agents] warning: No AI provider configured, skipping AI processing`
- Continue with Phase 1 (vote tracking only)
- Still extract @mentions (does not require AI)
## Future Enhancements
🚧 Phase 3 (Planned):
- Timeline auto-population from git commits
- Link tracking (related PRs, commits)
- Multi-file decision tracking
- Slack/Discord notification integration
- Summary diffs between commits
- Natural language summary generation
## Troubleshooting

### "agents module not available"

**Cause:** Import path issue when `workflow.py` runs from the pre-commit hook.

**Solution:** Already fixed in `workflow.py` with a dual import style:

```python
try:
    from automation import agents
except ImportError:
    import agents  # Fallback for different execution contexts
```
### "No AI provider configured"

**Cause:** No AI CLI command or API key is configured.

**Solution:** Choose one:

```
# Option 1: Use Claude CLI (default)
git config cascadingdev.aicommand "claude -p '{prompt}'"

# Option 2: Use Gemini CLI
git config cascadingdev.aiprovider "gemini-cli"
git config cascadingdev.aicommand "gemini '{prompt}'"

# Option 3: Use the Anthropic API
pip install anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
```

Or accept Phase 1 only (votes only, still useful).
### "AI command failed with code X"

**Cause:** The AI CLI command returned an error.

**Solution:**
- Test the command manually: `claude -p "test prompt"`
- Check that the command is on PATH: `which claude`
- Verify the command syntax in config: `git config cascadingdev.aicommand`
- Check the stderr output in the warning message for details
### Summary sections not updating

**Cause:** Markers might be malformed or missing.

**Solution:**
- Check that the summary file has proper markers (see "Marker Block System")
- Regenerate from the template if needed
- The file should be created automatically by the pre-commit hook
### Votes not being detected

**Cause:** The format doesn't match parser expectations.

**Solution:** Ensure the format is:

```
- ParticipantName: Comment text. VOTE: READY
```

Common issues:
- Missing `-` bullet
- Missing `:` after the name
- Typo in the vote value (must be READY, CHANGES, or REJECT)
- Multiple participants on one line (not supported)
## See Also

- DESIGN.md - Overall system architecture
- CLAUDE.md - Guide for AI assistants
- USER_GUIDE.md - User-facing documentation