# CascadingDev Automation System

## Overview

The CascadingDev automation system processes discussion files during git commits to maintain structured summaries. It operates in two phases:

**Phase 1 (Basic):** Always runs, no dependencies

- Parses `VOTE:` lines from discussions
- Tracks the latest vote per participant
- Updates summary `.sum.md` files automatically

**Phase 2 (AI-Enhanced):** Optional, requires Claude API

- Extracts questions, action items, and decisions using AI
- Tracks `@mentions` and awaiting replies
- Processes only incremental changes (via `git diff`)
- Maintains timeline and structured summaries

## Architecture

```
automation/
├── workflow.py      # Main orchestrator, called by pre-commit hook
├── agents.py        # Claude-powered extraction agents
├── summary.py       # Summary file formatter and updater
└── __init__.py
```

## Phase 1: Vote Tracking (Always Enabled)

### How It Works

1. The pre-commit hook triggers `automation/workflow.py --status`
2. Finds all staged `.discussion.md` files
3. Parses the entire file for `VOTE:` lines
4. Keeps the latest vote per participant
5. Updates the corresponding `.sum.md` file's VOTES section
6. Auto-stages the updated summaries

### Vote Format

```
- ParticipantName: Any comment text. VOTE: READY|CHANGES|REJECT
```

Rules:

- Only the latest vote per participant counts
- Vote tokens are case-insensitive (`vote:`, `VOTE:`, `Vote:`)
- Three valid values: `READY`, `CHANGES`, `REJECT`
- Must follow the participant bullet format: `- Name: ...`

### Example

Discussion file:

```
- Alice: I like this approach. VOTE: READY
- Bob: We need tests first. VOTE: CHANGES
- Alice: Good point, updated the plan. VOTE: CHANGES
```

Resulting summary:

```markdown
<!-- SUMMARY:VOTES START -->
## Votes (latest per participant)
READY: 0 • CHANGES: 2 • REJECT: 0
- Alice: CHANGES
- Bob: CHANGES
<!-- SUMMARY:VOTES END -->
```
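Under these rules, the latest-vote logic is small enough to sketch in Python (the regex and function name here are illustrative, not the actual parser in `workflow.py`):

```python
import re

# Participant bullet followed by a case-insensitive VOTE: token (illustrative regex)
VOTE_RE = re.compile(
    r"^- (?P<name>[^:]+):.*\bVOTE:\s*(?P<vote>READY|CHANGES|REJECT)\b",
    re.IGNORECASE,
)

def latest_votes(lines):
    """Return {participant: vote}, keeping only each participant's latest vote."""
    votes = {}
    for line in lines:
        m = VOTE_RE.match(line)
        if m:
            # Later lines overwrite earlier ones, so only the latest vote survives
            votes[m.group("name").strip()] = m.group("vote").upper()
    return votes
```

Applied to the discussion lines above, this yields `{"Alice": "CHANGES", "Bob": "CHANGES"}`, matching the example summary.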

## Phase 2: AI-Enhanced Processing (Optional)

### Requirements

Phase 2 supports multiple AI providers with automatic fallback chains and intelligent model selection. Configuration is managed through config/ai.yml (copied into every generated project).

### Configuration: config/ai.yml (Primary Method)

Edit the configuration file to set your preferred AI providers and fallback chain:

```yaml
# config/ai.yml
runner:
  # Default command chain (balanced speed/quality)
  command_chain:
    - "claude -p"
    - "codex --model gpt-5"
    - "gemini --model gemini-2.5-flash"

  # Fast command chain (simple tasks: vote counting, gate checks)
  # Used when model_hint: fast in .ai-rules.yml
  command_chain_fast:
    - "claude -p"                      # Auto-selects Haiku via subagent
    - "codex --model gpt-5-mini"
    - "gemini --model gemini-2.5-flash"

  # Quality command chain (complex tasks: design, implementation planning)
  # Used when model_hint: quality in .ai-rules.yml
  command_chain_quality:
    - "claude -p"                      # Auto-selects Sonnet via subagent
    - "codex --model o3"
    - "gemini --model gemini-2.5-pro"

  sentinel: "CASCADINGDEV_NO_CHANGES"
```

How it works:

- Each chain is tried left → right until a provider succeeds
- `.ai-rules.yml` can specify `model_hint: fast` or `model_hint: quality` per rule
- Fast models (Haiku, GPT-5-mini) handle simple tasks (vote counting, gate checks)
- Quality models (Sonnet, O3) handle complex tasks (design discussions, planning)
- This optimization reduces costs by ~70% while maintaining quality
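The chain selection and left-to-right fallback described above can be sketched as follows (the function names and error handling are illustrative, not the real runner):

```python
import subprocess

def pick_chain(config, model_hint=None):
    """Select a command chain from the parsed config/ai.yml based on model_hint."""
    runner = config["runner"]
    if model_hint == "fast":
        return runner.get("command_chain_fast", runner["command_chain"])
    if model_hint == "quality":
        return runner.get("command_chain_quality", runner["command_chain"])
    return runner["command_chain"]

def run_with_fallback(chain, prompt):
    """Try each provider command left to right; return the first success."""
    for command in chain:
        try:
            result = subprocess.run(command.split() + [prompt],
                                    capture_output=True, text=True, timeout=120)
            if result.returncode == 0:
                return result.stdout
        except (OSError, subprocess.TimeoutExpired):
            continue  # provider missing or hung; fall through to the next one
    return None  # no provider succeeded; caller skips Phase 2
```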

### Environment Overrides (Optional)

Temporarily override config/ai.yml for a single commit:

```bash
# Override command for this commit only
CDEV_AI_COMMAND="claude -p" git commit -m "message"

# Chain multiple providers with || delimiter
CDEV_AI_COMMAND="claude -p || codex --model gpt-5" git commit -m "message"
```

Environment variables take precedence but don't modify the config file.
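A sketch of that precedence rule (the `CDEV_AI_COMMAND` variable and `||` delimiter come from the docs above; the helper itself is hypothetical):

```python
import os

def resolve_chain(config_chain, env=os.environ):
    """CDEV_AI_COMMAND overrides config/ai.yml for this run only."""
    override = env.get("CDEV_AI_COMMAND")
    if override:
        # '||' delimits a fallback chain, tried left to right
        return [cmd.strip() for cmd in override.split("||")]
    return config_chain
```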

Common non-interactive setups:

| Provider | CLI Tool | Command Example | Authentication | Notes |
|----------|----------|-----------------|----------------|-------|
| Claude | `claude` | `claude -p` | Run `claude` and follow the prompts to sign in | Supports custom subagents in `~/.claude/agents/`. Create with `./tools/setup_claude_agents.sh`. Uses Haiku (fast) or Sonnet (quality). |
| OpenAI | `codex` | `codex --model gpt-5` | Run `codex` and sign in with a ChatGPT account | Codex CLI is OpenAI's terminal coding agent. Default model: GPT-5. Use `gpt-5-mini` for faster, cheaper responses. |
| Google | `gemini` | `gemini --model gemini-2.5-flash` | Run `gemini` and sign in with a Google account | Free tier: 60 req/min. Use `gemini-2.5-flash` (fast) or `gemini-2.5-pro` (1M context, quality). Open source (Apache 2.0). |

Recommended Setup: Use the provided setup script to create Claude subagents:

```bash
# One-time setup (creates ~/.claude/agents/cdev-patch.md and cdev-patch-quality.md)
./tools/setup_claude_agents.sh
```

This creates two subagent files:

- `cdev-patch.md` - uses the Haiku model (fast, cost-efficient); auto-selected when `TASK COMPLEXITY: FAST`
- `cdev-patch-quality.md` - uses the Sonnet model (higher quality); auto-selected when `TASK COMPLEXITY: QUALITY`

The `claude -p` command automatically selects the appropriate subagent based on the `TASK COMPLEXITY` hint in the prompt:

- Simple tasks (vote counting, gate checks) → `command_chain_fast` → Haiku
- Complex tasks (design, implementation planning) → `command_chain_quality` → Sonnet
- Default tasks → `command_chain` → auto-selected based on complexity

### Direct API (Alternative)

```bash
pip install anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
```

If no AI provider is configured, or none responds, Phase 2 features are silently skipped and only Phase 1 (votes) runs.

## Features

### 1. @Mention Tracking

Extracts `@Name` and `@all` mentions from discussions to track which messages are still awaiting a reply.

Format:

```
- Alice: @Bob what do you think about OAuth2?
- Carol: @all please review by Friday
```

Summary section:

```markdown
<!-- SUMMARY:AWAITING START -->
## Awaiting Replies

### @Bob
- @Alice: What do you think about OAuth2?

### @all
- @Carol: Please review by Friday
<!-- SUMMARY:AWAITING END -->
```
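Mention extraction itself needs no AI; a regex-only sketch (the helper names are illustrative):

```python
import re

MENTION_RE = re.compile(r"@(\w+)")  # matches @Bob, @all, etc.

def extract_mentions(lines):
    """Map each mentioned name (or 'all') to the (author, text) bullets mentioning it."""
    mentions = {}
    for line in lines:
        m = re.match(r"^- ([^:]+):\s*(.*)", line)
        if not m:
            continue
        author, text = m.group(1), m.group(2)
        for name in MENTION_RE.findall(text):
            mentions.setdefault(name, []).append((author, text))
    return mentions
```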

### 2. Question Tracking

Identifies questions and tracks their answers.

Markers (optional but recommended):

```
- Alice: Q: Should we use OAuth2 or JWT?
- Bob: A: I'd recommend OAuth2 for third-party auth.
```

Also detects:

- Lines ending with `?`
- A `Question:` prefix
- `Re:` replies (indicating partial answers)

Summary section:

```markdown
<!-- SUMMARY:OPEN_QUESTIONS START -->
## Open Questions
- @Alice: Should we cache API responses?

### Partially Answered:
- @Bob: What about rate limiting?
  - Partial answer: We'll use token bucket algorithm
<!-- SUMMARY:OPEN_QUESTIONS END -->
```

### 3. Action Item Management

Tracks tasks from creation → assignment → completion.

Markers (optional but recommended):

```
- Alice: TODO: Research OAuth2 libraries
- Bob: I'll handle the JWT implementation.
- Alice: DONE: Completed library research, recommending authlib.
- Dave: ACTION: Review security implications
```

Summary section:

```markdown
<!-- SUMMARY:ACTION_ITEMS START -->
## Action Items

### TODO (unassigned):
- [ ] Document the authentication flow (suggested by @Carol)

### In Progress:
- [ ] Implement JWT token validation (@Bob)

### Completed:
- [x] Research OAuth2 libraries (@Alice)
<!-- SUMMARY:ACTION_ITEMS END -->
```

### 4. Decision Logging (ADR-Style)

Captures architectural decisions with rationale.

Markers (optional but recommended):

```
- Alice: DECISION: Use OAuth2 + JWT hybrid approach.
  Rationale: OAuth2 for robust third-party auth, JWT for stateless sessions.
```

Also detects:

- "We decided to..."
- "Going with X because..."
- Vote consensus (multiple READY votes)

Summary section:

```markdown
<!-- SUMMARY:DECISIONS START -->
## Decisions (ADR-style)

### Decision 1: Use OAuth2 + JWT hybrid approach
- **Proposed by:** @Alice
- **Supported by:** @Bob, @Carol
- **Rationale:** OAuth2 for robust third-party auth, JWT for stateless sessions
- **Alternatives considered:**
  - Pure JWT authentication
  - Session-based auth with cookies
<!-- SUMMARY:DECISIONS END -->
```

## Conversation Guidelines (Optional)

Using these markers helps extract information accurately. Many work without AI using regex:

```
# Markers (✅ = works without AI)

Q: <question>          # ✅ Mark questions explicitly (also: "Question:", or ending with ?)
A: <answer>            # Mark answers explicitly (AI tracks these)
Re: <response>         # Partial answers or follow-ups (AI tracks these)

TODO: <action>         # ✅ New unassigned task
ACTION: <action>       # ✅ Task with implied ownership (alias for TODO)
ASSIGNED: <task> @name # ✅ Claimed task (extracts @mention as assignee)
DONE: <completion>     # ✅ Mark task complete

DECISION: <choice>     # ✅ Architectural decision (AI adds rationale/alternatives)
Rationale: <why>       # Explain reasoning (AI extracts this)

VOTE: READY|CHANGES|REJECT  # ✅ REQUIRED for voting (always tracked)

@Name                  # ✅ Mention someone specifically
@all                   # ✅ Mention everyone
```

Example Workflow:

```
- Alice: Q: Should we support OAuth2?
- Bob: TODO: Research OAuth2 libraries
- Bob: ASSIGNED: OAuth2 library research (@Bob taking ownership)
- Carol: DECISION: Use OAuth2 for authentication. Rationale: Industry standard with good library support.
- Carol: DONE: Completed OAuth2 comparison document
- Dave: @all Please review the comparison by Friday. VOTE: READY
```

## Implementation Details

### Incremental Processing

The system only processes new content added since the last commit:

1. Uses `git diff HEAD <file>` to get the changes
2. Extracts only lines starting with `+` (added lines)
3. Feeds the incremental content to the AI agents
4. Updates summary sections non-destructively
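Steps 1-2 above can be sketched like this (the helper names are illustrative; the real logic lives in `workflow.py`):

```python
import subprocess

def parse_added(diff_text):
    """Extract the content of added lines from unified diff output."""
    return [
        line[1:] for line in diff_text.splitlines()
        if line.startswith("+") and not line.startswith("+++")  # skip the +++ file header
    ]

def added_lines(path):
    """Return only the lines added to `path` since the last commit."""
    diff = subprocess.run(["git", "diff", "HEAD", "--", path],
                          capture_output=True, text=True, check=True).stdout
    return parse_added(diff)
```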

### Marker Block System

Summary files use HTML comment markers for non-destructive updates:

```markdown
<!-- SUMMARY:SECTION_NAME START -->
## Section Header
<content>
<!-- SUMMARY:SECTION_NAME END -->
```

Sections:

- `VOTES` - Vote counts and participants
- `DECISIONS` - ADR-style decisions
- `OPEN_QUESTIONS` - Unanswered questions
- `AWAITING` - Unresolved @mentions
- `ACTION_ITEMS` - TODO → ASSIGNED → DONE
- `TIMELINE` - Chronological updates (future)
- `LINKS` - Related PRs/commits (future)
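A marker-based, non-destructive update can be sketched as a single regex replace (an illustrative helper, not the actual `summary.py` API):

```python
import re

def replace_section(summary_text, section, new_body):
    """Replace the content between SUMMARY markers, leaving the rest untouched."""
    start = f"<!-- SUMMARY:{section} START -->"
    end = f"<!-- SUMMARY:{section} END -->"
    pattern = re.compile(re.escape(start) + r".*?" + re.escape(end), re.DOTALL)
    replacement = f"{start}\n{new_body}\n{end}"
    if pattern.search(summary_text):
        # Lambda avoids backslash-escape surprises in the replacement text
        return pattern.sub(lambda _: replacement, summary_text)
    return summary_text + "\n" + replacement + "\n"  # append if markers are missing
```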

### Error Handling

The workflow is non-blocking:

- Always exits 0 (never blocks commits)
- Prints warnings to stderr for missing dependencies
- Falls back to Phase 1 (votes only) if the API key is missing
- Continues on individual agent failures
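The "always exits 0" contract amounts to a top-level guard of roughly this shape (a sketch; the real entry point in `workflow.py` may differ):

```python
import sys

def run_safely(fn):
    """Run a workflow phase: warn on failure, but never block the commit."""
    try:
        fn()
    except Exception as exc:
        # Failures go to stderr as warnings rather than raised errors
        print(f"[workflow] warning: {exc}", file=sys.stderr)
    return 0  # always 0, so the pre-commit hook never blocks the commit

# Entry-point shape: sys.exit(run_safely(main)), where main() is the real workflow
```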

## Testing

```bash
# Run all tests
pytest tests/

# Test vote parsing
pytest tests/test_workflow.py -v

# Manual test in a project
cd /path/to/cascadingdev-project
echo "- Test: Comment with vote. VOTE: READY" >> Docs/features/test/discussions/test.discussion.md
git add Docs/features/test/discussions/test.discussion.md
git commit -m "Test workflow"  # Triggers automation
```

## Configuration

### AI Provider Options

#### 1. Claude CLI (Default)

```bash
# No configuration needed if you have the 'claude' command available
# The system defaults to: claude -p '{prompt}'

# To customize:
git config cascadingdev.aicommand "claude -p '{prompt}'"
```

#### 2. Gemini CLI

```bash
git config cascadingdev.aiprovider "gemini-cli"
git config cascadingdev.aicommand "gemini '{prompt}'"
```

#### 3. OpenAI Codex CLI

```bash
git config cascadingdev.aiprovider "codex-cli"
git config cascadingdev.aicommand "codex '{prompt}'"
```

#### 4. Direct API (Anthropic)

```bash
pip install anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
# CLI commands fall back to the API if it is available
```

#### 5. Custom AI Command

```bash
# Use any command that reads a prompt and returns JSON
git config cascadingdev.aicommand "my-ai-tool --prompt '{prompt}' --format json"
```

### Disabling AI Features

Simply don't configure any AI provider. The system will:

- Log a warning: `[agents] warning: No AI provider configured, skipping AI processing`
- Continue with Phase 1 (vote tracking only)
- Still extract @mentions (this doesn't require AI)

## Future Enhancements

🚧 Phase 3 (Planned):

- Timeline auto-population from git commits
- Link tracking (related PRs, commits)
- Multi-file decision tracking
- Slack/Discord notification integration
- Summary diffs between commits
- Natural language summary generation

## Troubleshooting

### "agents module not available"

Cause: Import path issue when workflow.py runs from pre-commit hook.

Solution: Already fixed in workflow.py with dual import style:

```python
try:
    from automation import agents
except ImportError:
    import agents  # Fallback for different execution contexts
```

### "No AI provider configured"

Cause: No AI CLI command or API key configured.

Solution: Choose one:

```bash
# Option 1: Use Claude CLI (default)
git config cascadingdev.aicommand "claude -p '{prompt}'"

# Option 2: Use Gemini CLI
git config cascadingdev.aiprovider "gemini-cli"
git config cascadingdev.aicommand "gemini '{prompt}'"

# Option 3: Use Anthropic API
pip install anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
```

Or accept Phase 1 only (vote tracking alone is still useful).

### "AI command failed with code X"

Cause: AI CLI command returned error.

Solution:

1. Test the command manually: `claude -p "test prompt"`
2. Check the command is on your PATH: `which claude`
3. Verify the command syntax in config: `git config cascadingdev.aicommand`
4. Check the stderr output in the warning message for details

### Summary sections not updating

Cause: Markers might be malformed or missing.

Solution:

1. Check that the summary file has proper markers (see "Marker Block System")
2. Regenerate it from the template if needed
3. The file should be created automatically by the pre-commit hook

### Votes not being detected

Cause: Format doesn't match parser expectations.

Solution: Ensure format is:

```
- ParticipantName: Comment text. VOTE: READY
```

Common issues:

- Missing `-` bullet
- Missing `:` after the name
- A typo in the vote value (must be `READY`, `CHANGES`, or `REJECT`)
- Multiple participants on one line (not supported)

## See Also