# CascadingDev Automation

Automated discussion summarization with AI-powered extraction.
## Quick Start

**Phase 1 (Always Works):**

```bash
# Just commit! Vote tracking happens automatically
git commit -m "Discussion updates"
```

**Phase 2 (AI-Enhanced):**

```bash
# Option 1: Use Claude CLI (default - no config needed)
# Just have the 'claude' command available in PATH

# Option 2: Use a different AI
git config cascadingdev.aicommand "gemini '{prompt}'"

# Option 3: Use the API directly
pip install anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
```
## What Gets Automated

### Phase 1 (Basic)

- ✅ Vote counting (READY/CHANGES/REJECT)
- ✅ Latest vote per participant
- ✅ Auto-update summary files

### Phase 2 (AI-Powered)

- ✅ @Mention tracking
- ✅ Question identification (OPEN/PARTIAL/ANSWERED) — falls back to `Q:`/`?` marker regex when no AI is configured
- ✅ Action items (TODO → ASSIGNED → DONE) — recognizes `TODO:`/`DONE:` markers out of the box
- ✅ Decision logging (ADR-style with rationale)
- ✅ Timeline entries — newest discussion snippets appear in `## Timeline` even without an AI provider
- ✅ Stage gating — feature discussions flip `status` based on vote thresholds and spawn implementation discussions when `.ai-rules.yml` says so
## Configuration Examples

### Claude Code (Default)

```bash
# No configuration needed if the 'claude' command works
claude -p "test"  # Verify the command works

# Or customize:
git config cascadingdev.aicommand "claude -p '{prompt}'"
```

### Gemini CLI

```bash
git config cascadingdev.aiprovider "gemini-cli"
git config cascadingdev.aicommand "gemini '{prompt}'"
```

### OpenAI Codex

```bash
git config cascadingdev.aiprovider "codex-cli"
git config cascadingdev.aicommand "codex '{prompt}'"
```

### Custom AI Tool

```bash
# Use any command that accepts a prompt and returns JSON
git config cascadingdev.aicommand "my-ai '{prompt}' --format json"
```

### Check Current Config

```bash
git config cascadingdev.aiprovider  # Defaults to: claude-cli
git config cascadingdev.aicommand   # Defaults to: claude -p '{prompt}'
```
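The lookup above can be sketched in Python. This is illustrative only: the function names and the fallback map are assumptions for this sketch, not the actual `config.py` API.

```python
import subprocess

# Defaults as documented in this README; the real resolver may differ.
DEFAULTS = {
    "cascadingdev.aiprovider": "claude-cli",
    "cascadingdev.aicommand": "claude -p '{prompt}'",
}

def get_config(key):
    """Read a git config value, falling back to the documented default."""
    try:
        result = subprocess.run(
            ["git", "config", "--get", key],
            capture_output=True, text=True,
        )
        value = result.stdout.strip()
    except (OSError, subprocess.SubprocessError):
        value = ""
    return value if value else DEFAULTS.get(key, "")

def build_command(prompt):
    """Substitute the prompt into the configured command template."""
    template = get_config("cascadingdev.aicommand")
    return template.replace("{prompt}", prompt)
```

Because `git config --get` exits non-zero with empty output when a key is unset, the defaults apply automatically in a fresh clone.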
## File Structure

```
automation/
├── runner.py     # AI rules engine entrypoint (invoked from pre-commit)
├── config.py     # Cascading .ai-rules loader and template resolver
├── patcher.py    # Unified diff pipeline + git apply wrapper
├── workflow.py   # Vote/timeline status reporter
├── agents.py     # AI extraction agents
├── summary.py    # Summary file formatter
└── README.md     # This file
```
## How It Works

- Pre-commit hook triggers `workflow.py --status`
- Finds staged `.discussion.md` files
- Phase 1: Parses VOTE: lines → updates summary
- Phase 2: Calls AI agent → extracts questions/actions/decisions
- Updates the corresponding `.sum.md` files
- Auto-stages updated summaries
- Commit continues (never blocks, always exits 0)
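The "finds staged `.discussion.md` files" step can be sketched as below; `staged_discussion_files` is a hypothetical helper for illustration, not necessarily how `workflow.py` implements it.

```python
import subprocess

def filter_discussion_files(paths):
    """Keep only discussion files from a list of repo-relative paths."""
    return [p for p in paths if p.endswith(".discussion.md")]

def staged_discussion_files():
    """List staged discussion files (added/copied/modified only)."""
    result = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True,
    )
    return filter_discussion_files(result.stdout.splitlines())
```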
## Vote Format

```
- ParticipantName: Any comment. VOTE: READY|CHANGES|REJECT
```

Rules:

- Latest vote per participant wins
- Must follow the `- Name: ...` bullet format
- Case-insensitive: VOTE:, vote:, Vote:
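A minimal sketch of the latest-vote-wins rule, assuming a regex roughly matching the documented format (the real parser in `workflow.py` may differ):

```python
import re

# Matches the documented bullet format: "- Name: comment. VOTE: READY"
# IGNORECASE covers VOTE:, vote:, and Vote: per the rules above.
VOTE_RE = re.compile(
    r"^-\s*(?P<name>[^:]+):.*\bVOTE:\s*(?P<vote>READY|CHANGES|REJECT)\b",
    re.IGNORECASE,
)

def tally_votes(lines):
    """Latest vote per participant wins: later lines overwrite earlier ones."""
    votes = {}
    for line in lines:
        m = VOTE_RE.match(line)
        if m:
            votes[m.group("name").strip()] = m.group("vote").upper()
    return votes
```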
## Markers (Recognized Without AI)

The system recognizes these markers via regex patterns, without requiring AI:

```
Q: <question>               # Question (also: "Question:", or ending with ?)
A: <answer>                 # Answer (not yet tracked)
TODO: <task>                # Unassigned action item
ACTION: <task>              # Unassigned action item (alias for TODO)
ASSIGNED: <task> @name      # Claimed action item (extracts @mention as assignee)
DONE: <completion>          # Completed task
DECISION: <choice>          # Decision (AI can add rationale/alternatives)
VOTE: READY|CHANGES|REJECT  # Vote (REQUIRED - always tracked)
@Name, @all                 # Mentions (tracked automatically)
```

Examples:

```
- Alice: Q: Should we support OAuth2?
- Bob: TODO: Research OAuth2 libraries
- Bob: ASSIGNED: OAuth2 research (@Bob taking this)
- Carol: DONE: Completed OAuth2 comparison
- Dave: DECISION: Use OAuth2 + JWT hybrid approach
- Eve: @all please review by Friday
```
Note: These markers work immediately without any AI configuration. AI enhancement adds:
- Question answer tracking (A: responses)
- Decision rationale and alternatives
- Action item status transitions
- More sophisticated context understanding
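A rough sketch of the regex-only recognition described above; the pattern set mirrors the marker table but is an assumption of this sketch, and the real patterns in the code may differ (for example, the "ends with ?" question heuristic is omitted here).

```python
import re

# Assumed patterns mirroring the marker table above.
MARKERS = {
    "question": re.compile(r"\b(?:Q|Question):\s*(.+)"),
    "todo":     re.compile(r"\b(?:TODO|ACTION):\s*(.+)"),
    "assigned": re.compile(r"\bASSIGNED:\s*(.+)"),
    "done":     re.compile(r"\bDONE:\s*(.+)"),
    "decision": re.compile(r"\bDECISION:\s*(.+)"),
}
MENTION_RE = re.compile(r"@(\w+)")

def extract_markers(line):
    """Return (kind, text) pairs plus any @mentions found on the line."""
    found = [(kind, m.group(1).strip())
             for kind, rx in MARKERS.items() if (m := rx.search(line))]
    mentions = MENTION_RE.findall(line)
    return found, mentions
```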
## Testing

```bash
# Test workflow vote parsing & staged-diff handling
pytest tests/test_workflow.py -v

# Manual test
echo "- Test: Comment. VOTE: READY" >> Docs/features/test/discussions/test.discussion.md
git add Docs/features/test/discussions/test.discussion.md
git commit -m "Test"  # Automation runs
```
## Troubleshooting

### No AI Processing

```bash
# Check that the AI command works
claude -p "Return JSON: {\"test\": true}"

# Check git config
git config --list | grep cascadingdev

# Inspect the shared config (repo-wide defaults)
cat config/ai.yml

# Try an environment variable override
CDEV_AI_COMMAND="claude -p '{prompt}'" git commit -m "test"

# Other provider notes
# codex-cli --stdin --model gpt-4.1-mini (requires OPENAI_API_KEY)
# gemini-cli --model=gemini-1.5-pro --stdin (requires GEMINI_API_KEY or GOOGLE_API_KEY)
# customize claude with --model or --agent to select a saved persona
```
### Votes Not Updating

- Ensure the format: `- Name: text. VOTE: READY`
- Check that the summary file has markers: `<!-- SUMMARY:VOTES START -->`
- Look for warnings in the commit output
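Assuming the votes section is delimited by a matching END marker (`<!-- SUMMARY:VOTES END -->` is an assumption of this sketch, not confirmed by this README), rewriting the block in a summary file might look like:

```python
import re

START = "<!-- SUMMARY:VOTES START -->"
END = "<!-- SUMMARY:VOTES END -->"  # assumed counterpart to the START marker

def replace_votes_block(summary_text, new_block):
    """Swap the text between the vote markers, keeping the markers intact."""
    pattern = re.compile(re.escape(START) + r".*?" + re.escape(END), re.DOTALL)
    replacement = f"{START}\n{new_block}\n{END}"
    # A callable replacement avoids backslash-escape surprises in new_block.
    return pattern.sub(lambda _: replacement, summary_text, count=1)
```

Keeping the markers in place means the update is idempotent: the next commit can find and rewrite the same block again.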
## Documentation

- AUTOMATION.md - Complete system documentation
- DESIGN.md - Overall architecture
- CLAUDE.md - Guide for AI assistants