Updated all documentation to reflect the new two-tier extraction system:
**workflow-marker-extraction.puml:**
- Completely rewritten to show AI normalization flow
- Documents agents.normalize_discussion() as primary method
- Shows simple line-start fallback for explicit markers
- Includes examples contrasting natural conversation with explicit markers
- Demonstrates resilience and cost-effectiveness
**AUTOMATION.md:**
- Restructured "Conversation Guidelines" section
- Emphasizes natural conversation as recommended approach
- Clarifies that AI normalization extracts items from conversational text
- Documents explicit markers as fallback when AI unavailable
- Explains two-tier architecture benefits
**diagrams-README.md:**
- Already updated in previous commit
All documentation now accurately reflects:
✅ AI-powered extraction (agents.py) for natural conversation
✅ Simple fallback parsing (workflow.py) for explicit markers
✅ Multi-provider resilience (claude → codex → gemini)
✅ No strict formatting requirements for participants
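For orientation only, the two tiers roughly compose as follows; the marker strings and the normalize_discussion() signature are assumptions for the sketch, not the documented API:

```python
# Sketch only: AI normalization first, explicit line-start markers as fallback.
# Marker names and the normalize_discussion() return shape are assumed here.
from automation import agents

FALLBACK_MARKERS = ("DECISION:", "ACTION:", "QUESTION:")  # hypothetical markers

def extract_items(discussion: str):
    # Tier 1: AI-powered normalization of free-form conversation.
    try:
        items = agents.normalize_discussion(discussion)
        if items:
            return items
    except Exception:
        pass  # AI unavailable or the provider chain was exhausted

    # Tier 2: simple parsing of explicit markers at the start of a line.
    return [line.strip() for line in discussion.splitlines()
            if line.strip().startswith(FALLBACK_MARKERS)]
```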
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Major documentation updates to align with multi-provider AI system:
1. Update CLAUDE.md (lines 213-332)
- Add new "AI Configuration System" section
- Document config/ai.yml structure and three optimization levels (see the sketch after this list)
- Explain model hint propagation pipeline (rule → runner → patcher)
- Add provider setup table (Claude, Codex, Gemini)
- Document Claude subagent setup with ./tools/setup_claude_agents.sh
- List implementation modules with line number references
- Explain environment variable overrides
- Document fallback behavior when all providers fail
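For orientation, a minimal sketch of resolving a provider chain from config/ai.yml; the key names (`chains`, `fast`/`default`/`quality`) are assumptions for illustration, not the documented schema:

```python
# Sketch only: load config/ai.yml and pick a provider chain for an
# optimization level. Key names are assumed, not the real schema.
import yaml

def load_chain(level="default", path="config/ai.yml"):
    with open(path) as fh:
        config = yaml.safe_load(fh) or {}
    chains = config.get("chains", {})          # e.g. fast / default / quality
    return chains.get(level) or chains.get("default", [])

# Each chain entry might name a provider and a model, e.g.:
# for step in load_chain("fast"):
#     print(step["provider"], step.get("model"))
```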
2. Update docs/DESIGN.md (lines 894-1077)
- Add "Automation AI Configuration" section before Stage Model
- Document configuration architecture with full YAML example
- Explain model hint system with .ai-rules.yml examples
- Detail execution flow through 4 steps (rule eval → prompt → chain → fallback; see the sketch after this list)
- Show example prompt with TASK COMPLEXITY hint injection
- Add provider comparison table with fast/default/quality models
- Document implementation modules with line references
- Add cost optimization examples (93% savings on simple tasks)
- Explain environment overrides and persistence
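A minimal sketch of that four-step flow, with purely illustrative names (the real interfaces live in the modules cited in DESIGN.md):

```python
# Sketch only: rule eval -> prompt hint -> provider chain -> fallback.
import fnmatch

def hint_for(path, rules):
    # Step 1: evaluate rules (as loaded from .ai-rules.yml) to pick a hint.
    for rule in rules:
        if fnmatch.fnmatch(path, rule.get("pattern", "*")):
            return rule.get("hint", "default")
    return "default"

def run_task(task, path, rules, providers):
    # Step 2: inject the hint into the prompt.
    prompt = f"TASK COMPLEXITY: {hint_for(path, rules)}\n\n{task}"
    # Steps 3-4: walk the chain (e.g. claude -> codex -> gemini) and fall back.
    for call in providers:            # each entry wraps one provider CLI
        try:
            return call(prompt)
        except RuntimeError:
            continue
    return None                       # all providers failed
```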
3. Update docs/AUTOMATION.md (lines 70-148)
- Restructure Phase 2 requirements to emphasize config/ai.yml
- Add full YAML configuration example with three chains
- Explain how model hints work (fast vs quality)
- Update Claude subagent documentation
- Clarify auto-selection based on TASK COMPLEXITY
- Move git config to deprecated status
- Emphasize environment variables as optional overrides
4. Update README.md (line 10)
- Add "Multi-Provider AI System" to key features
- Brief mention of fallback chains and model selection
Impact:
- AI assistants can now discover the multi-provider system
- Users understand how to configure providers via config/ai.yml
- Clear explanation of cost optimization through model hints
- Complete documentation of the execution pipeline
- All major docs now reference the same configuration approach
Resolves documentation gap identified in project review.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Fix missing space after colon in features.ai-rules.yml
- Add tools/mock_ai.sh for testing automation without real AI
- Ensures installer has valid YAML templates
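The shipped tool is a shell script; as a rough Python-flavoured sketch of the same idea, a mock AI command simply ignores its prompt and prints canned JSON so the automation can run offline (the output shape below is an assumption):

```python
#!/usr/bin/env python3
# Sketch only: a mock "AI" that ignores its prompt and emits fixed JSON.
# The JSON shape here is an assumption, not the tool's actual output.
import json
import sys

_prompt = " ".join(sys.argv[1:])  # ignored by the mock
print(json.dumps({"decisions": [], "action_items": [], "questions": []}))
```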
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Major refactoring to support flexible AI provider configuration instead of
requiring direct API access. Users can now use whatever AI CLI tool they have
installed (claude, gemini, codex, etc.) without API keys.
## Changes to automation/agents.py
**New Functions:**
- `get_ai_config()` - Reads config from env vars or git config (see the precedence sketch below)
- Environment: CDEV_AI_PROVIDER, CDEV_AI_COMMAND (highest priority)
- Git config: cascadingdev.aiprovider, cascadingdev.aicommand
- Default: claude-cli with "claude -p '{prompt}'"
- `call_ai_cli()` - Execute AI via CLI command
- Passes content via temp file to avoid shell escaping
- Supports {prompt} placeholder in command template
- 60s timeout with error handling
- Parses JSON from response (with/without code blocks)
- `call_ai_api()` - Direct API access (renamed from call_claude)
- Unchanged functionality
- Now used as fallback option
- `call_ai()` - Unified AI caller
- Try CLI first (if configured)
- Fall back to API (if ANTHROPIC_API_KEY set)
- Graceful failure with warnings
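A hedged sketch of the CLI path and the unified caller described above; helper names and the exact temp-file/`{prompt}` handshake are assumptions, and the real implementations live in automation/agents.py:

```python
# Sketch only, not the actual automation/agents.py implementation.
import json
import os
import re
import subprocess
import tempfile

def call_ai_cli(command_template, prompt, content):
    # Bulk content goes through a temp file so it never needs shell escaping.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
        tmp.write(content)
        path = tmp.name
    try:
        cmd = command_template.replace("{prompt}", f"{prompt}\n\nSee input file: {path}")
        result = subprocess.run(cmd, shell=True, capture_output=True,
                                text=True, timeout=60)
        if result.returncode != 0:
            raise RuntimeError(result.stderr.strip() or "AI command failed")
        return _parse_json(result.stdout)
    finally:
        os.unlink(path)

def _parse_json(text):
    # Accept raw JSON or JSON wrapped in a Markdown code fence.
    match = re.search(r"`{3}(?:json)?\s*(.*?)`{3}", text, re.DOTALL)
    return json.loads(match.group(1) if match else text)

def call_ai(prompt, content, config):
    # Try the configured CLI first; fall back to the direct API if a key is set.
    if config.get("command"):
        try:
            return call_ai_cli(config["command"], prompt, content)
        except Exception as exc:
            print(f"Warning: AI CLI failed: {exc}")
    if os.environ.get("ANTHROPIC_API_KEY"):
        return call_ai_api(prompt, content)      # renamed from call_claude()
    print("Warning: no AI provider available")
    return None

def call_ai_api(prompt, content):
    raise NotImplementedError("direct API path omitted from this sketch")
```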
**Updated Functions:**
- `normalize_discussion()` - calls call_ai() instead of call_claude()
- `track_questions()` - calls call_ai() instead of call_claude()
- `track_action_items()` - calls call_ai() instead of call_claude()
- `track_decisions()` - calls call_ai() instead of call_claude()
**Configuration Precedence:**
1. Environment variables (session-scoped)
2. Git config (repo-scoped)
3. Defaults (claude-cli)
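A minimal sketch of that precedence (helper names are illustrative; the real get_ai_config() lives in automation/agents.py):

```python
# Sketch only: env vars win, then git config, then the claude-cli default.
import os
import subprocess

DEFAULTS = {"provider": "claude-cli", "command": "claude -p '{prompt}'"}

def _git_config(key):
    result = subprocess.run(["git", "config", "--get", key],
                            capture_output=True, text=True)
    return result.stdout.strip() or None

def get_ai_config():
    provider = (os.environ.get("CDEV_AI_PROVIDER")
                or _git_config("cascadingdev.aiprovider")
                or DEFAULTS["provider"])
    command = (os.environ.get("CDEV_AI_COMMAND")
               or _git_config("cascadingdev.aicommand")
               or DEFAULTS["command"])
    return {"provider": provider, "command": command}
```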
## Changes to docs/AUTOMATION.md
**Updated Sections:**
- "Requirements" - Now lists CLI as Option 1 (recommended), API as Option 2
- "Configuration" - Complete rewrite with 5 provider examples:
1. Claude CLI (default)
2. Gemini CLI
3. OpenAI Codex CLI
4. Direct API (Anthropic)
5. Custom AI command
- "Troubleshooting" - Added "AI command failed" section, updated error messages
**New Configuration Examples:**
```bash
# Claude Code (default)
git config cascadingdev.aicommand "claude -p '{prompt}'"
# Gemini
git config cascadingdev.aiprovider "gemini-cli"
git config cascadingdev.aicommand "gemini '{prompt}'"
# Custom
git config cascadingdev.aicommand "my-ai-tool --prompt '{prompt}' --format json"
```
## Benefits
1. **No API Key Required**: Use existing CLI tools (claude, gemini, etc.)
2. **Flexible Configuration**: Git config (persistent) or env vars (session)
3. **Provider Agnostic**: Works with any CLI that returns JSON
4. **Backward Compatible**: Still supports direct API if ANTHROPIC_API_KEY set
5. **User-Friendly**: Defaults to "claude -p" if available
## Testing
- ✅ get_ai_config() tests:
- Default: claude-cli with "claude -p '{prompt}'"
- Git config override: gemini-cli with "gemini '{prompt}'"
- Env var override: codex-cli with "codex '{prompt}'"
- ✅ extract_mentions() still works (no AI required)
- ✅ All 6 workflow tests pass
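A hypothetical pytest sketch of the get_ai_config() precedence checks above, assuming it returns a dict with "provider" and "command" keys and that no repo-level git config overrides are present in the test environment:

```python
from automation import agents

def test_default_is_claude_cli(monkeypatch):
    monkeypatch.delenv("CDEV_AI_PROVIDER", raising=False)
    monkeypatch.delenv("CDEV_AI_COMMAND", raising=False)
    config = agents.get_ai_config()
    assert config["provider"] == "claude-cli"
    assert "claude -p" in config["command"]

def test_env_vars_override_git_and_defaults(monkeypatch):
    monkeypatch.setenv("CDEV_AI_PROVIDER", "codex-cli")
    monkeypatch.setenv("CDEV_AI_COMMAND", "codex '{prompt}'")
    config = agents.get_ai_config()
    assert config["provider"] == "codex-cli"
    assert config["command"] == "codex '{prompt}'"
```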
## Impact
Users with Claude Code installed can now use the automation without any
configuration - it just works! Same for users with gemini or codex CLIs.
Git config setup is only required when using a non-default command.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>