Simplified marker extraction architecture:
- AI normalization (agents.py) handles natural conversation
- Simple line-start matching for explicit markers as fallback
- Removed complex regex patterns (DECISION_PATTERN, QUESTION_PATTERN, ACTION_PATTERN)
- Participants can now write naturally without strict formatting rules
This implements the original design intent: a fast AI model normalizes conversational
text into a structured format, then simple parsing logic extracts the markers.
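As a rough illustration (the marker set and helper name are hypothetical, not the exact agents.py code), the fallback reduces to something like:

```python
# Illustrative sketch of line-start matching; the real marker set may differ.
MARKERS = ("DECISION:", "QUESTION:", "ACTION:")

def extract_markers(text: str) -> list[tuple[str, str]]:
    """Return (marker, payload) pairs for lines that start with a known marker."""
    found = []
    for line in text.splitlines():
        stripped = line.lstrip()
        for marker in MARKERS:
            if stripped.startswith(marker):
                found.append((marker.rstrip(":"), stripped[len(marker):].strip()))
                break
    return found
```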
Benefits:
- More flexible for participants (no strict formatting required)
- Simpler code (startswith() instead of regex)
- Clear separation: AI for understanding, code for mechanical parsing
- Cost-effective (fast models for simple extraction task)
Updated workflow-marker-extraction.puml to show patterns in notes
instead of inline text (fixes PlantUML syntax error).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Added DECISION_PATTERN, QUESTION_PATTERN, ACTION_PATTERN regexes
- Support both plain (DECISION:) and markdown bold (**DECISION**:) formats
- Markers now detected anywhere in text, not just at line start
- Removed analysis_normalized since regex handles both variants directly
- Kept legacy support for ASSIGNED: and DONE: at line start
- Updated docstring to reflect regex-based approach
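For illustration only, the patterns could look roughly like this (the actual regexes may differ):

```python
import re

# Matches plain "DECISION:" and markdown-bold "**DECISION**:", anywhere in the text.
DECISION_PATTERN = re.compile(r"\*{0,2}DECISION\*{0,2}:\s*(.+)")
QUESTION_PATTERN = re.compile(r"\*{0,2}QUESTION\*{0,2}:\s*(.+)")
ACTION_PATTERN = re.compile(r"\*{0,2}ACTION\*{0,2}:\s*(.+)")

text = "After some back and forth, **DECISION**: ship behind a feature flag."
match = DECISION_PATTERN.search(text)  # search() finds markers mid-line, not just at line start
if match:
    print(match.group(1))  # -> ship behind a feature flag.
```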
Fix critical bug in patcher.py where patches failed to apply to staged
files during pre-commit hooks.
**Root Cause:**
The apply_patch() function was unstaging files before applying patches:
1. File gets unstaged (git reset HEAD)
2. Patch tries to apply with --index flag
3. But patch was generated from STAGED content
4. Base state mismatch causes patch application to fail
5. Original changes get re-staged, AI changes are lost
**The Fix:**
Remove the unstaging logic entirely (lines 599-610, 639-641).
- Patches are generated from staged content (git diff --cached)
- The --index flag correctly applies to both working tree and index
- No need to unstage first - that changes the base state
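A simplified sketch of the corrected flow (function shape is illustrative, not the exact patcher.py code):

```python
import subprocess

def apply_patch(patch_path: str, repo_root: str) -> bool:
    """Apply a patch generated from staged content (git diff --cached)."""
    # No unstaging beforehand: the patch base IS the staged content, and
    # `git apply --index` updates both the index and the working tree.
    result = subprocess.run(
        ["git", "apply", "--index", patch_path],
        cwd=repo_root,
        capture_output=True,
        text=True,
    )
    return result.returncode == 0
```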
**Changes:**
- Deleted 19 lines of problematic unstaging code
- Added clear comment explaining why unstaging is harmful
- Simplified apply_patch() function
**Impact:**
- Patches now apply correctly during pre-commit hooks
- Status changes (OPEN → READY_FOR_DESIGN) work properly
- Gate creation (design_gate_writer) will trigger correctly
- No behavior change for non-staged files
**Testing:**
- All 18 existing tests still pass
- Bundle rebuilt and verified
Discovered during end-to-end testing when AI-generated status promotion
patches failed with "Failed to apply patch (strict and 3-way both failed)".
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Fix question extraction bug where AI-generated questions were being
filtered out due to missing 'status' field.
**Root Cause:**
The AI agents module returns questions without a 'status' field:
{'participant': 'Bob', 'question': 'text', 'line': 'original'}
But format_questions_section() filtered for status == "OPEN":
open_questions = [q for q in questions if q.get("status") == "OPEN"]
Since AI questions had no status, q.get("status") returned None,
which didn't match "OPEN", so questions were filtered out.
**Fix:**
Default missing status to "OPEN" in the filter:
open_questions = [q for q in questions if q.get("status", "OPEN") == "OPEN"]
This makes the function defensive - questions without explicit status
are treated as OPEN, which matches the basic extraction behavior.
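For illustration (hypothetical data), both question shapes now survive the filter:

```python
questions = [
    {"participant": "Bob", "question": "What is the rollout plan?"},          # AI path: no status
    {"participant": "Ann", "question": "Which database?", "status": "OPEN"},  # basic extraction
    {"participant": "Eve", "question": "Old issue?", "status": "RESOLVED"},   # still excluded
]
open_questions = [q for q in questions if q.get("status", "OPEN") == "OPEN"]
assert len(open_questions) == 2
```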
**Impact:**
- Questions extracted by AI agents now appear in summaries
- Maintains backward compatibility with basic extraction (has status)
- All 18 tests now pass (was 17/18)
**Testing:**
- Verified with test_run_status_updates_summary_sections
- Question "What is the rollout plan?" now correctly appears in summary
- No regressions in other tests
Resolves test failure identified in comprehensive project review.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Major automation enhancements for flexible AI provider configuration:
1. Add config/ai.yml - Centralized AI configuration
- Three command chains: default, fast, quality
- Multi-provider fallback (Claude → Codex → Gemini)
- Configurable per optimization level
- Sentinel token configuration
2. Extend automation/ai_config.py
- Add RunnerSettings with three chain support
   - Add get_chain_for_hint() method (see the sketch after this list)
- Load and validate all three command chains
- Proper fallback to defaults
3. Update automation/runner.py
- Read model_hint from .ai-rules.yml
- Pass model_hint to generate_output()
- Support output_type hint overrides
4. Update automation/patcher.py
- Add model_hint parameter throughout pipeline
- Inject TASK COMPLEXITY hint into prompts
- ModelConfig.get_commands_for_hint() selects chain
- Fallback mechanism tries all commands in chain
5. Add design discussion stage to features.ai-rules.yml
- New design_gate_writer rule (model_hint: fast)
- New design_discussion_writer rule (model_hint: quality)
- Update feature_request to create design gate
- Update feature_discussion to create design gate
- Add design.discussion.md file associations
- Proper status transitions: READY_FOR_DESIGN → READY_FOR_IMPLEMENTATION
6. Add assets/templates/design.discussion.md
- Template for Stage 3 design discussions
- META header with tokens support
- Design goals and participation instructions
7. Update tools/setup_claude_agents.sh
- Agent descriptions reference TASK COMPLEXITY hint
- cdev-patch: "MUST BE USED when TASK COMPLEXITY is FAST"
- cdev-patch-quality: "MUST BE USED when TASK COMPLEXITY is QUALITY"
8. Fix assets/hooks/pre-commit
- Correct template path comment (process/templates not assets/templates)
9. Update tools/mock_ai.sh
- Log prompts to /tmp/mock_ai_prompts.log for debugging
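For reference, a minimal sketch of the chain selection described in (1), (2), and (4) above; the class layout and config schema are illustrative, and the placeholder chains below are not the real config/ai.yml values:

```python
class RunnerSettings:
    """Holds the three command chains loaded from config/ai.yml (illustrative)."""

    def __init__(self, chains: dict[str, list[str]]):
        self.chains = chains  # expected keys: "default", "fast", "quality"

    def get_chain_for_hint(self, model_hint: str | None) -> list[str]:
        """Map a model_hint from .ai-rules.yml onto a chain, falling back to default."""
        if model_hint and model_hint.lower() in self.chains:
            return self.chains[model_hint.lower()]
        return self.chains["default"]

# Placeholder chains: each command is tried in order (Claude -> Codex -> Gemini fallback).
settings = RunnerSettings({
    "default": ["claude -p '{prompt}'", "codex '{prompt}'", "gemini '{prompt}'"],
    "fast":    ["claude -p '{prompt}'", "gemini '{prompt}'"],
    "quality": ["claude -p '{prompt}'", "codex '{prompt}'"],
})
commands = settings.get_chain_for_hint("fast")  # the runner passes this to the patcher pipeline
```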
Impact:
- Users can configure AI providers via config/ai.yml
- Automatic fallback between Claude, Codex, Gemini
- Fast models for simple tasks (vote counting, gate checks)
- Quality models for complex tasks (design, implementation planning)
- Reduced costs through intelligent model selection
- Design stage now properly integrated into workflow
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Changed temp file location from /tmp to .git/ai-agents-temp/ to resolve
Claude CLI permission errors when reading discussion content.
Problem:
- agents.py created temp files in /tmp/tmp*.md
- Asked Claude CLI to read these files
- Claude CLI couldn't access /tmp without explicit permission grant
- Error: "I don't have permission to read that file"
- Fell back to basic pattern matching (degraded functionality)
Solution:
- Create temp files in .git/ai-agents-temp/ directory
- Claude CLI has permission to read files in the project directory
- Use PID for unique filenames to avoid conflicts
- Cleanup still handled by finally block
Benefits:
- No user configuration needed
- Claude CLI can now read discussion files
- AI agents work properly for structured extraction
- Temp files automatically gitignored (.git/ directory)
- Easy debugging (files visible in project)
The finally block at lines 154-158 still cleans up temp files after use.
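A rough sketch of the new temp-file handling (function name and layout illustrative; see agents.py for the real code):

```python
import os
from pathlib import Path

def write_discussion_temp(repo_root: str, content: str) -> Path:
    """Write discussion content under .git/ so the Claude CLI is allowed to read it."""
    temp_dir = Path(repo_root) / ".git" / "ai-agents-temp"
    temp_dir.mkdir(parents=True, exist_ok=True)
    temp_path = temp_dir / f"discussion-{os.getpid()}.md"  # PID keeps concurrent runs apart
    temp_path.write_text(content, encoding="utf-8")
    return temp_path
```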
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Changed approach from disabling outputs to properly handling the AI's decision
not to generate changes (e.g., gated outputs, conditional rules).
Changes:
1. patcher.py - Allow empty diffs
- sanitize_unified_patch() returns empty string instead of raising error
- generate_output() returns early for empty patches (silent skip)
- Common case: implementation_gate_writer when status != READY_FOR_IMPLEMENTATION
- AI can now return explanatory text without a diff (no error)
2. features.ai-rules.yml - Override README rule
- Add README.md → "readme_skip" association
- Creates empty rule to disable README updates in Docs/features/
- Prevents unnecessary AI calls during feature discussions
- README automation still works in root directory
3. root.ai-rules.yml - Restore default README rule
- Removed "enabled: false" flag (back to default enabled)
- Features directory overrides this with empty rule
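A simplified sketch of the empty-diff handling in (1); the function signatures differ from the real patcher.py:

```python
def sanitize_unified_patch(raw_output: str) -> str:
    """Return the unified diff, or "" when the AI produced only explanatory text."""
    if "--- " not in raw_output or "+++ " not in raw_output:
        return ""  # no diff present; no longer treated as an error
    return raw_output  # real code keeps sanitizing the diff body here

def generate_output(raw_ai_output: str) -> str | None:
    patch = sanitize_unified_patch(raw_ai_output)
    if not patch.strip():
        return None  # silent skip: the AI chose not to make changes
    return patch  # real code goes on to apply the patch

# e.g. implementation_gate_writer when status != READY_FOR_IMPLEMENTATION
assert generate_output("No changes needed; status is still OPEN.") is None
```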
Benefits:
- implementation_gate now calls AI but AI returns empty diff (as designed)
- No more "[runner] error generating ...implementation.discussion.md"
- No more "[runner] error generating README.md"
- Clean separation: AI decides vs. config disables
- Instructions to the AI are still executed; the AI just chooses to make no changes
Testing:
Setup completes cleanly with no [runner] errors. The automation
runs and the AI correctly returns no diff for the implementation file
when status is OPEN.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Detect Claude API 500 Overloaded errors
- Continue processing other files on error instead of aborting
- Log errors to stderr for visibility
This allows commits to succeed even if some AI requests fail due to overload or rate limiting.
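A rough sketch of the continue-on-error pattern (names illustrative):

```python
import sys

def process_files(paths, generate):
    """Run AI generation per file; log failures and keep going instead of aborting."""
    for path in paths:
        try:
            generate(path)
        except RuntimeError as exc:  # e.g. a 500 "Overloaded" response surfaced by the AI call
            print(f"[runner] error generating {path}: {exc}", file=sys.stderr)
            continue  # remaining files still get processed, so the commit can succeed
```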
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Helps debugging by preserving raw AI output when markers are missing.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Claude CLI returns exit code 1 even when successfully generating output.
Check for stdout content before failing on non-zero exit codes.
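A minimal sketch of the check, assuming a subprocess-based invocation (not the exact runner code):

```python
import subprocess

def run_claude_cli(cmd: list[str]) -> str:
    result = subprocess.run(cmd, capture_output=True, text=True)
    # The CLI may exit non-zero even after printing a valid result,
    # so trust stdout content before treating the exit code as fatal.
    if result.stdout.strip():
        return result.stdout
    if result.returncode != 0:
        raise RuntimeError(result.stderr or f"exit code {result.returncode}")
    return result.stdout
```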
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Fix missing space after colon in features.ai-rules.yml
- Add tools/mock_ai.sh for testing automation without real AI
- Ensures installer has valid YAML templates
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add automation/README.md with:
- Quick start instructions for Phase 1 and Phase 2
- Configuration examples for all supported providers
- How it works explanation
- Vote format and optional markers
- Testing and troubleshooting sections
Provides a concise reference so users don't need to read the full AUTOMATION.md.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Major refactoring to support flexible AI provider configuration instead of
requiring direct API access. Users can now use whatever AI CLI tool they have
installed (claude, gemini, codex, etc.) without API keys.
## Changes to automation/agents.py
**New Functions:**
- `get_ai_config()` - Reads config from env vars or git config
- Environment: CDEV_AI_PROVIDER, CDEV_AI_COMMAND (highest priority)
- Git config: cascadingdev.aiprovider, cascadingdev.aicommand
- Default: claude-cli with "claude -p '{prompt}'"
- `call_ai_cli()` - Execute AI via CLI command
- Passes content via temp file to avoid shell escaping
- Supports {prompt} placeholder in command template
- 60s timeout with error handling
- Parses JSON from response (with/without code blocks)
- `call_ai_api()` - Direct API access (renamed from call_claude)
- Unchanged functionality
- Now used as fallback option
- `call_ai()` - Unified AI caller
- Try CLI first (if configured)
- Fall back to API (if ANTHROPIC_API_KEY set)
- Graceful failure with warnings
**Updated Functions:**
- `normalize_discussion()` - calls call_ai() instead of call_claude()
- `track_questions()` - calls call_ai() instead of call_claude()
- `track_action_items()` - calls call_ai() instead of call_claude()
- `track_decisions()` - calls call_ai() instead of call_claude()
**Configuration Precedence:**
1. Environment variables (session-scoped)
2. Git config (repo-scoped)
3. Defaults (claude-cli)
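For illustration, a condensed sketch of the precedence and CLI-then-API fallback (simplified relative to the real agents.py; call_ai_cli() and call_ai_api() are the module functions described above, with assumed signatures):

```python
import os
import subprocess

def _git_config(key: str) -> str | None:
    out = subprocess.run(["git", "config", "--get", key], capture_output=True, text=True)
    return out.stdout.strip() or None

def get_ai_config() -> dict:
    """Resolve provider/command: env vars, then git config, then the claude-cli default."""
    provider = os.environ.get("CDEV_AI_PROVIDER") or _git_config("cascadingdev.aiprovider")
    command = os.environ.get("CDEV_AI_COMMAND") or _git_config("cascadingdev.aicommand")
    return {
        "provider": provider or "claude-cli",
        "command": command or "claude -p '{prompt}'",
    }

def call_ai(prompt: str) -> str | None:
    """Try the configured CLI first, fall back to the API if ANTHROPIC_API_KEY is set."""
    config = get_ai_config()
    response = call_ai_cli(config["command"], prompt)   # defined in agents.py
    if response is None and os.environ.get("ANTHROPIC_API_KEY"):
        response = call_ai_api(prompt)                  # renamed from call_claude()
    return response
```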
## Changes to docs/AUTOMATION.md
**Updated Sections:**
- "Requirements" - Now lists CLI as Option 1 (recommended), API as Option 2
- "Configuration" - Complete rewrite with 5 provider examples:
1. Claude CLI (default)
2. Gemini CLI
3. OpenAI Codex CLI
4. Direct API (Anthropic)
5. Custom AI command
- "Troubleshooting" - Added "AI command failed" section, updated error messages
**New Configuration Examples:**
```bash
# Claude Code (default)
git config cascadingdev.aicommand "claude -p '{prompt}'"
# Gemini
git config cascadingdev.aiprovider "gemini-cli"
git config cascadingdev.aicommand "gemini '{prompt}'"
# Custom
git config cascadingdev.aicommand "my-ai-tool --prompt '{prompt}' --format json"
```
## Benefits
1. **No API Key Required**: Use existing CLI tools (claude, gemini, etc.)
2. **Flexible Configuration**: Git config (persistent) or env vars (session)
3. **Provider Agnostic**: Works with any CLI that returns JSON
4. **Backward Compatible**: Still supports direct API if ANTHROPIC_API_KEY set
5. **User-Friendly**: Defaults to "claude -p" if available
## Testing
- ✅ get_ai_config() tests:
- Default: claude-cli with "claude -p '{prompt}'"
- Git config override: gemini-cli with "gemini '{prompt}'"
- Env var override: codex-cli with "codex '{prompt}'"
- ✅ extract_mentions() still works (no AI required)
- ✅ All 6 workflow tests pass
## Impact
Users with Claude Code installed can now use the automation without any
configuration - it just works! Same for users with gemini or codex CLIs.
Only requires git config setup if using a non-default command.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>