Updated all documentation to reflect the new two-tier extraction system:
**workflow-marker-extraction.puml:**
- Completely rewritten to show AI normalization flow
- Documents agents.normalize_discussion() as primary method
- Shows simple line-start fallback for explicit markers
- Includes examples contrasting natural conversation with explicit markers
- Demonstrates resilience and cost-effectiveness
**AUTOMATION.md:**
- Restructured "Conversation Guidelines" section
- Emphasizes natural conversation as recommended approach
- Clarifies AI normalization extracts from conversational text
- Documents explicit markers as fallback when AI unavailable
- Explains two-tier architecture benefits
**diagrams-README.md:**
- Already updated in previous commit
All documentation now accurately reflects:
✅ AI-powered extraction (agents.py) for natural conversation
✅ Simple fallback parsing (workflow.py) for explicit markers
✅ Multi-provider resilience (claude → codex → gemini)
✅ No strict formatting requirements for participants
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Updated workflow-marker-extraction diagram description to emphasize:
- AI-powered normalization for natural conversation (agents.py)
- Simple line-start fallback for explicit markers (workflow.py)
- Two-tier extraction system design
Benefits of this approach:
- Participants write naturally without strict formatting rules
- Fast AI model handles conversation → structured data conversion
- Simple fallback code when AI unavailable
- Clear separation of concerns
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Simplified marker extraction architecture:
- AI normalization (agents.py) handles natural conversation
- Simple line-start matching for explicit markers as fallback
- Removed complex regex patterns (DECISION_PATTERN, QUESTION_PATTERN, ACTION_PATTERN)
- Participants can now write naturally without strict formatting rules
This implements the original design intent: fast AI model normalizes conversational
text into structured format, then simple parsing logic extracts it.
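A minimal sketch of the flow, assuming `normalize_discussion()` from agents.py as tier one (the extraction helper and its signature here are illustrative):

```python
def extract_markers(text, normalize_discussion=None):
    """Two-tier extraction: AI normalization first, line-start matching as fallback."""
    if normalize_discussion is not None:
        try:
            # Tier 1: a fast AI model rewrites conversational text into
            # structured DECISION:/QUESTION:/ACTION: lines.
            text = normalize_discussion(text)
        except Exception:
            pass  # Tier 2 still works on whatever explicit markers exist

    markers = {"DECISION": [], "QUESTION": [], "ACTION": []}
    for line in text.splitlines():
        stripped = line.strip()
        for key in markers:
            prefix = f"{key}:"
            if stripped.startswith(prefix):
                markers[key].append(stripped[len(prefix):].strip())
    return markers
```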
Benefits:
- More flexible for participants (no strict formatting required)
- Simpler code (startswith() instead of regex)
- Clear separation: AI for understanding, code for mechanical parsing
- Cost-effective (fast models for simple extraction task)
Updated workflow-marker-extraction.puml to show patterns in notes
instead of inline text (fixes PlantUML syntax error).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Added missing outer if statement for Claude output check to fix
"Cannot find if" syntax error. The else clause on line 56 now
properly matches the provider output check.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Updated commit-workflow.puml to show claude→codex→gemini fallback chain
- Updated patcher-pipeline.puml with provider fallback logic and model hints
- Updated voting-system.puml for multi-stage promotions (READY_FOR_DESIGN)
- Created ai-provider-fallback.puml documenting provider chain in detail
- Created discussion-stages.puml showing complete feature lifecycle
- Created workflow-marker-extraction.puml documenting regex patterns
- Updated diagrams-README.md with all new diagrams and workflows
- Increased diagram count from 7 to 10 total
- All diagrams now reflect current system architecture
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Added DECISION_PATTERN, QUESTION_PATTERN, ACTION_PATTERN regexes
- Support both plain (DECISION:) and markdown bold (**DECISION**:) formats
- Markers now detected anywhere in text, not just at line start
- Removed analysis_normalized since the regexes handle both variants directly
- Kept legacy support for ASSIGNED: and DONE: at line start
- Updated docstring to reflect regex-based approach
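For illustration, patterns along these lines satisfy the description above (the exact regexes in workflow.py may differ):

```python
import re

# Match plain (DECISION:) and markdown bold (**DECISION**:) markers,
# anywhere in the text rather than only at line start.
DECISION_PATTERN = re.compile(r"\*{0,2}DECISION\*{0,2}\s*:\s*(.+)")
QUESTION_PATTERN = re.compile(r"\*{0,2}QUESTION\*{0,2}\s*:\s*(.+)")
ACTION_PATTERN = re.compile(r"\*{0,2}ACTION\*{0,2}\s*:\s*(.+)")

def find_decisions(text):
    return [m.group(1).strip() for m in DECISION_PATTERN.finditer(text)]
```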
Fix critical bug in patcher.py where patches failed to apply to staged
files during pre-commit hooks.
**Root Cause:**
The apply_patch() function was unstaging files before applying patches:
1. File gets unstaged (git reset HEAD)
2. Patch tries to apply with --index flag
3. But patch was generated from STAGED content
4. Base state mismatch causes patch application to fail
5. Original changes get re-staged, AI changes are lost
**The Fix:**
Remove the unstaging logic entirely (lines 599-610, 639-641).
- Patches are generated from staged content (git diff --cached)
- The --index flag correctly applies to both working tree and index
- No need to unstage first - that changes the base state
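A simplified sketch of the corrected flow (the real apply_patch() carries more error handling; the key point is applying against the index without unstaging first):

```python
import subprocess

def apply_patch(patch_path):
    # The patch was generated from staged content (git diff --cached),
    # so apply it to the index and working tree directly.
    result = subprocess.run(
        ["git", "apply", "--index", patch_path],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # Fall back to a 3-way merge before giving up.
        result = subprocess.run(
            ["git", "apply", "--index", "--3way", patch_path],
            capture_output=True, text=True,
        )
    return result.returncode == 0
```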
**Changes:**
- Deleted 19 lines of problematic unstaging code
- Added clear comment explaining why unstaging is harmful
- Simplified apply_patch() function
**Impact:**
- Patches now apply correctly during pre-commit hooks
- Status changes (OPEN → READY_FOR_DESIGN) work properly
- Gate creation (design_gate_writer) will trigger correctly
- No behavior change for non-staged files
**Testing:**
- All 18 existing tests still pass
- Bundle rebuilt and verified
Discovered during end-to-end testing when AI-generated status promotion
patches failed with "Failed to apply patch (strict and 3-way both failed)".
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add docs/PROGRESS.md - a living document that tracks implementation
status across all milestones with detailed checklists.
**Structure:**
- Quick status overview table (Milestone completion %)
- Milestone-by-milestone breakdown (M0-M4)
- Checkbox lists for every deliverable
- Stage-by-stage breakdown for M2
- File/line references for verification
- Next steps sections for each component
- Update instructions for maintainers
**Key Features:**
- Visual progress tracking (✅❌🚧)
- Completion percentages per section
- Specific missing items highlighted
- Clear next steps for each stage
- Easy to update (just check boxes)
- 431 lines covering all aspects
**Current Status Snapshot:**
- M0: Process Foundation - 100% ✅
- M1: Orchestrator MVP - 100% ✅
- M2: Stage Automation - 40% 🚧 (3/7 stages)
- M3: Gitea Integration - 0% ❌
- M4: Python Migration - 100% ✅
- Overall: ~55% complete
**Benefits:**
- No need to reassess entire project each time
- Quick reference for what's done vs. what's left
- Clear roadmap for contributors
- Tracks implementation vs. design intent
- Shows where to focus effort next
This replaces ad-hoc status assessments with a maintained living document.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Fix question extraction bug where AI-generated questions were being
filtered out due to missing 'status' field.
**Root Cause:**
The AI agents module returns questions without a 'status' field:
{'participant': 'Bob', 'question': 'text', 'line': 'original'}
But format_questions_section() filtered for status == "OPEN":
open_questions = [q for q in questions if q.get("status") == "OPEN"]
Since AI questions had no status, q.get("status") returned None,
which didn't match "OPEN", so questions were filtered out.
**Fix:**
Default missing status to "OPEN" in the filter:
open_questions = [q for q in questions if q.get("status", "OPEN") == "OPEN"]
This makes the function defensive - questions without explicit status
are treated as OPEN, which matches the basic extraction behavior.
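For illustration, both shapes of question dict now survive the filter (the AI-shaped dict follows the example above; the first entry is an invented basic-extraction example):

```python
questions = [
    # Basic extraction sets an explicit status.
    {"participant": "Alice", "question": "Do we need a migration?", "status": "OPEN"},
    # AI extraction omits the status field entirely.
    {"participant": "Bob", "question": "What is the rollout plan?", "line": "original"},
]

# Missing status defaults to OPEN, so both questions appear in the summary.
open_questions = [q for q in questions if q.get("status", "OPEN") == "OPEN"]
assert len(open_questions) == 2
```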
**Impact:**
- Questions extracted by AI agents now appear in summaries
- Maintains backward compatibility with basic extraction (which sets the status field)
- All 18 tests now pass (was 17/18)
**Testing:**
- Verified with test_run_status_updates_summary_sections
- Question "What is the rollout plan?" now correctly appears in summary
- No regressions in other tests
Resolves test failure identified in comprehensive project review.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Major documentation updates to align with multi-provider AI system:
1. Update CLAUDE.md (lines 213-332)
- Add new "AI Configuration System" section
- Document config/ai.yml structure and three optimization levels
- Explain model hint propagation pipeline (rule → runner → patcher)
- Add provider setup table (Claude, Codex, Gemini)
- Document Claude subagent setup with ./tools/setup_claude_agents.sh
- List implementation modules with line number references
- Explain environment variable overrides
- Document fallback behavior when all providers fail
2. Update docs/DESIGN.md (lines 894-1077)
- Add "Automation AI Configuration" section before Stage Model
- Document configuration architecture with full YAML example
- Explain model hint system with .ai-rules.yml examples
- Detail execution flow through 4 steps (rule eval → prompt → chain → fallback)
- Show example prompt with TASK COMPLEXITY hint injection
- Add provider comparison table with fast/default/quality models
- Document implementation modules with line references
- Add cost optimization examples (93% savings on simple tasks)
- Explain environment overrides and persistence
3. Update docs/AUTOMATION.md (lines 70-148)
- Restructure Phase 2 requirements to emphasize config/ai.yml
- Add full YAML configuration example with three chains
- Explain how model hints work (fast vs quality)
- Update Claude subagent documentation
- Clarify auto-selection based on TASK COMPLEXITY
- Move git config to deprecated status
- Emphasize environment variables as optional overrides
4. Update README.md (line 10)
- Add "Multi-Provider AI System" to key features
- Brief mention of fallback chains and model selection
Impact:
- AI assistants can now discover the multi-provider system
- Users understand how to configure providers via config/ai.yml
- Clear explanation of cost optimization through model hints
- Complete documentation of the execution pipeline
- All major docs now reference the same configuration approach
Resolves documentation gap identified in project review.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Major automation enhancements for flexible AI provider configuration:
1. Add config/ai.yml - Centralized AI configuration
- Three command chains: default, fast, quality
- Multi-provider fallback (Claude → Codex → Gemini)
- Configurable per optimization level
- Sentinel token configuration
2. Extend automation/ai_config.py
- Add RunnerSettings with three chain support
- Add get_chain_for_hint() method
- Load and validate all three command chains
- Proper fallback to defaults
3. Update automation/runner.py
- Read model_hint from .ai-rules.yml
- Pass model_hint to generate_output()
- Support output_type hint overrides
4. Update automation/patcher.py
- Add model_hint parameter throughout pipeline
- Inject TASK COMPLEXITY hint into prompts
- ModelConfig.get_commands_for_hint() selects chain
- Fallback mechanism tries all commands in chain
5. Add design discussion stage to features.ai-rules.yml
- New design_gate_writer rule (model_hint: fast)
- New design_discussion_writer rule (model_hint: quality)
- Update feature_request to create design gate
- Update feature_discussion to create design gate
- Add design.discussion.md file associations
- Proper status transitions: READY_FOR_DESIGN → READY_FOR_IMPLEMENTATION
6. Add assets/templates/design.discussion.md
- Template for Stage 3 design discussions
- META header with tokens support
- Design goals and participation instructions
7. Update tools/setup_claude_agents.sh
- Agent descriptions reference TASK COMPLEXITY hint
- cdev-patch: "MUST BE USED when TASK COMPLEXITY is FAST"
- cdev-patch-quality: "MUST BE USED when TASK COMPLEXITY is QUALITY"
8. Fix assets/hooks/pre-commit
- Correct template path comment (process/templates not assets/templates)
9. Update tools/mock_ai.sh
- Log prompts to /tmp/mock_ai_prompts.log for debugging
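A condensed sketch tying items 2 and 4 together - get_chain_for_hint(), get_commands_for_hint(), and the TASK COMPLEXITY marker are from this change; the class layout, signatures, and shell handling are illustrative (the real pipeline is more careful about escaping):

```python
import subprocess
from dataclasses import dataclass, field

@dataclass
class RunnerSettings:
    # Ordered provider commands (Claude -> Codex -> Gemini) per chain,
    # as loaded from config/ai.yml.
    default: list = field(default_factory=list)
    fast: list = field(default_factory=list)
    quality: list = field(default_factory=list)

    def get_chain_for_hint(self, model_hint=None):
        if model_hint == "fast" and self.fast:
            return self.fast
        if model_hint == "quality" and self.quality:
            return self.quality
        return self.default

def generate_output(prompt, model_hint, settings):
    if model_hint:
        # Injected so Claude subagents can auto-select cdev-patch (FAST)
        # or cdev-patch-quality (QUALITY).
        prompt = f"TASK COMPLEXITY: {model_hint.upper()}\n\n{prompt}"

    # Try each provider command in the chain until one produces output.
    for template in settings.get_chain_for_hint(model_hint):
        command = template.replace("{prompt}", prompt)
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        if result.returncode == 0 and result.stdout.strip():
            return result.stdout
    return None
```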
Impact:
- Users can configure AI providers via config/ai.yml
- Automatic fallback between Claude, Codex, Gemini
- Fast models for simple tasks (vote counting, gate checks)
- Quality models for complex tasks (design, implementation planning)
- Reduced costs through intelligent model selection
- Design stage now properly integrated into workflow
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Added clear implementation status tracking to DESIGN.md to show what's
currently working vs. what's planned.
Changes:
- Updated document version to v2.1
- Added "Implementation Status (2025-11-01)" section at top
- Created DESIGN.md.old backup for easy comparison
- Categorized features into: Implemented, In Progress, Planned
Current Status:
✅ Implemented: Stages 1-2, cascading rules, AI patch generation, voting
🚧 In Progress: Stage 3 (Design Discussion Gate) - being implemented now
📋 Planned: Stages 4-7, moderator protocol, bug sub-cycles
The three-stage workflow (Feature → Design → Implementation) was always
documented correctly in DESIGN.md. The current implementation just skips
the Design stage, which we're now fixing.
This status section will be updated as each milestone is completed.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Changed temp file location from /tmp to .git/ai-agents-temp/ to resolve
Claude CLI permission errors when reading discussion content.
Problem:
- agents.py created temp files in /tmp/tmp*.md
- Asked Claude CLI to read these files
- Claude CLI couldn't access /tmp without explicit permission grant
- Error: "I don't have permission to read that file"
- Fell back to basic pattern matching (degraded functionality)
Solution:
- Create temp files in .git/ai-agents-temp/ directory
- Claude CLI has permission to read files in the project directory
- Use PID for unique filenames to avoid conflicts
- Cleanup still handled by finally block
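Roughly, the new temp-file handling (the directory name and PID scheme are from this change; the helper name and details are illustrative):

```python
import os
from pathlib import Path

def write_discussion_tempfile(repo_root, content):
    temp_dir = Path(repo_root) / ".git" / "ai-agents-temp"
    temp_dir.mkdir(parents=True, exist_ok=True)
    # PID keeps filenames unique when several hook invocations overlap.
    temp_path = temp_dir / f"discussion-{os.getpid()}.md"
    temp_path.write_text(content)
    return temp_path

# Caller pattern - cleanup stays in the finally block:
#   temp_path = write_discussion_tempfile(repo_root, discussion_text)
#   try:
#       ...ask the Claude CLI to read temp_path...
#   finally:
#       temp_path.unlink(missing_ok=True)
```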
Benefits:
- No user configuration needed
- Claude CLI can now read discussion files
- AI agents work properly for structured extraction
- Temp files stay out of version control (contents of .git/ are never tracked)
- Easy debugging (files visible in project)
The finally block at lines 154-158 still cleans up temp files after use.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Changed the approach from disabling outputs to properly handling the AI's decision
not to generate changes (e.g., gated outputs, conditional rules).
Changes:
1. patcher.py - Allow empty diffs
- sanitize_unified_patch() returns empty string instead of raising error
- generate_output() returns early for empty patches (silent skip)
- Common case: implementation_gate_writer when status != READY_FOR_IMPLEMENTATION
- AI can now return explanatory text without a diff (no error)
2. features.ai-rules.yml - Override README rule
- Add README.md → "readme_skip" association
- Creates empty rule to disable README updates in Docs/features/
- Prevents unnecessary AI calls during feature discussions
- README automation still works in root directory
3. root.ai-rules.yml - Restore default README rule
- Removed "enabled: false" flag (back to default enabled)
- Features directory overrides this with empty rule
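A sketch of the empty-diff handling from item 1 (function name from patcher.py; the body is illustrative):

```python
def sanitize_unified_patch(raw_output: str) -> str:
    """Return '' when the AI produced no diff, instead of raising."""
    if "diff --git" not in raw_output:
        # Explanatory text without a patch (e.g. a gated output whose
        # condition isn't met) is a valid "no changes" response.
        return ""
    return raw_output  # real sanitization continues from here

# generate_output() then skips silently on an empty patch:
patch = sanitize_unified_patch("Status is OPEN, so no gate file is needed.")
if not patch:
    print("[patcher] no changes requested - skipping")
```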
Benefits:
- implementation_gate now calls AI but AI returns empty diff (as designed)
- No more "[runner] error generating ...implementation.discussion.md"
- No more "[runner] error generating README.md"
- Clean separation: AI decides vs. config disables
- The AI instructions are still executed; the AI simply chooses to make no changes
Testing:
Setup completes cleanly with no [runner] errors. The automation runs,
and the AI correctly returns no diff for the implementation file while
the status is OPEN.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Disabled two outputs that were causing errors during first commit:
1. implementation_gate (features.ai-rules.yml)
- Was trying to generate implementation.discussion.md on every request.md commit
- Should only run when feature status = READY_FOR_IMPLEMENTATION
- Error: "Sanitized patch missing diff header"
- Fix: Set enabled: false by default
- Users can enable in project .ai-rules.yml when needed
2. readme normalizer (root.ai-rules.yml)
- Was trying to update README.md whenever policies.yml was staged
- Caused errors during initial commit
- Error: "Sanitized patch missing diff header"
- Fix: Set enabled: false by default
- Users can enable when they want AI to maintain README
Benefits:
- Clean setup with no [runner] errors
- Faster first commit (fewer AI calls)
- Users can enable features incrementally as needed
- Only essential automation runs by default (feature discussions)
Remaining warnings are expected:
- [agents] warnings: Claude CLI permission prompts (normal behavior)
- [summary] warnings: Template markers not found (handled gracefully)
Testing:
Setup now completes cleanly with only feature.discussion.md generated.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Changed setup_project.py to only create request.md during setup, allowing
the pre-commit hook automation to generate discussion and summary files.
Problem (before):
- setup_project.py created request.md, feature.discussion.md, and .sum.md
- git commit staged ALL files and triggered pre-commit hook
- runner.py saw request.md and tried to generate feature.discussion.md
- But feature.discussion.md was already in the index → race condition
- workflow.py also tried to update .sum.md → more conflicts
Solution (now):
- setup_project.py creates ONLY request.md
- discussions/ directory is created but empty
- First commit triggers automation:
- runner.py sees request.md → generates feature.discussion.md (AI)
- ensure_summary in pre-commit hook → creates .sum.md from template
- workflow.py → updates .sum.md with vote data
- No more conflicts between setup and automation
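A sketch of the slimmed-down seeding step (the file names are from this change; the helper and directory layout are illustrative):

```python
from pathlib import Path

def seed_feature(feature_dir: Path, request_text: str) -> None:
    # Setup writes only request.md; discussions/ stays empty so the
    # pre-commit automation generates feature.discussion.md and the .sum.md.
    feature_dir.mkdir(parents=True, exist_ok=True)
    (feature_dir / "request.md").write_text(request_text)
    (feature_dir / "discussions").mkdir(exist_ok=True)
```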
Benefits:
1. No race condition - each file has one source of truth
2. Actually exercises the automation system on first commit
3. Generated files always match current automation rules
4. Simpler setup code (67 lines removed)
Testing:
The automation will now properly run on first commit instead of conflicting
with pre-seeded files.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Detect Claude API 500 Overloaded errors
- Continue processing other files on error instead of aborting
- Log errors to stderr for visibility
This allows commits to succeed even if some AI requests fail due to rate limiting.
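A sketch of the intended loop (names and structure are illustrative; the stderr format mirrors the runner's existing error messages):

```python
import sys

def process_files(paths, generate):
    for path in paths:
        try:
            generate(path)  # hypothetical per-file AI call
        except Exception as exc:
            # e.g. Claude API "500 Overloaded" - log it and keep going so
            # the commit still succeeds for the files that did work.
            print(f"[runner] error generating {path}: {exc}", file=sys.stderr)
```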
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Helps debugging by preserving raw AI output when markers are missing.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The Claude CLI sometimes returns exit code 1 even after successfully generating
output. Check for stdout content before failing on a non-zero exit code.
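A sketch of that check, assuming the default claude -p invocation (the wrapper itself is illustrative):

```python
import subprocess

def run_claude(prompt: str) -> str:
    result = subprocess.run(["claude", "-p", prompt], capture_output=True, text=True)
    # The CLI can exit non-zero even after producing usable output,
    # so trust non-empty stdout over the exit code.
    if result.stdout.strip():
        return result.stdout
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip() or "AI command failed")
    return result.stdout
```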
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Fix missing space after colon in features.ai-rules.yml
- Add tools/mock_ai.sh for testing automation without real AI
- Ensures installer has valid YAML templates
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add automation/README.md with:
- Quick start instructions for Phase 1 and Phase 2
- Configuration examples for all supported providers
- How it works explanation
- Vote format and optional markers
- Testing and troubleshooting sections
Provides a concise reference for users who don't want to read the full AUTOMATION.md.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Major refactoring to support flexible AI provider configuration instead of
requiring direct API access. Users can now use whatever AI CLI tool they have
installed (claude, gemini, codex, etc.) without API keys.
## Changes to automation/agents.py
**New Functions:**
- `get_ai_config()` - Reads config from env vars or git config
- Environment: CDEV_AI_PROVIDER, CDEV_AI_COMMAND (highest priority)
- Git config: cascadingdev.aiprovider, cascadingdev.aicommand
- Default: claude-cli with "claude -p '{prompt}'"
- `call_ai_cli()` - Execute AI via CLI command
- Passes content via temp file to avoid shell escaping
- Supports {prompt} placeholder in command template
- 60s timeout with error handling
- Parses JSON from response (with/without code blocks)
- `call_ai_api()` - Direct API access (renamed from call_claude)
- Unchanged functionality
- Now used as fallback option
- `call_ai()` - Unified AI caller
- Try CLI first (if configured)
- Fall back to API (if ANTHROPIC_API_KEY set)
- Graceful failure with warnings
**Updated Functions:**
- `normalize_discussion()` - calls call_ai() instead of call_claude()
- `track_questions()` - calls call_ai() instead of call_claude()
- `track_action_items()` - calls call_ai() instead of call_claude()
- `track_decisions()` - calls call_ai() instead of call_claude()
**Configuration Precedence:**
1. Environment variables (session-scoped)
2. Git config (repo-scoped)
3. Defaults (claude-cli)
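A condensed sketch of get_ai_config() honoring that precedence (the environment variables and git config keys are from this change; the body is illustrative):

```python
import os
import subprocess

def get_ai_config():
    def git_config(key):
        result = subprocess.run(
            ["git", "config", "--get", key], capture_output=True, text=True
        )
        return result.stdout.strip() or None

    # Env vars (session) > git config (repo) > claude-cli defaults.
    provider = (
        os.environ.get("CDEV_AI_PROVIDER")
        or git_config("cascadingdev.aiprovider")
        or "claude-cli"
    )
    command = (
        os.environ.get("CDEV_AI_COMMAND")
        or git_config("cascadingdev.aicommand")
        or "claude -p '{prompt}'"
    )
    return provider, command
```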
## Changes to docs/AUTOMATION.md
**Updated Sections:**
- "Requirements" - Now lists CLI as Option 1 (recommended), API as Option 2
- "Configuration" - Complete rewrite with 5 provider examples:
1. Claude CLI (default)
2. Gemini CLI
3. OpenAI Codex CLI
4. Direct API (Anthropic)
5. Custom AI command
- "Troubleshooting" - Added "AI command failed" section, updated error messages
**New Configuration Examples:**
```bash
# Claude Code (default)
git config cascadingdev.aicommand "claude -p '{prompt}'"
# Gemini
git config cascadingdev.aiprovider "gemini-cli"
git config cascadingdev.aicommand "gemini '{prompt}'"
# Custom
git config cascadingdev.aicommand "my-ai-tool --prompt '{prompt}' --format json"
```
## Benefits
1. **No API Key Required**: Use existing CLI tools (claude, gemini, etc.)
2. **Flexible Configuration**: Git config (persistent) or env vars (session)
3. **Provider Agnostic**: Works with any CLI that returns JSON
4. **Backward Compatible**: Still supports direct API if ANTHROPIC_API_KEY set
5. **User-Friendly**: Defaults to "claude -p" if available
## Testing
- ✅ get_ai_config() tests:
- Default: claude-cli with "claude -p '{prompt}'"
- Git config override: gemini-cli with "gemini '{prompt}'"
- Env var override: codex-cli with "codex '{prompt}'"
- ✅ extract_mentions() still works (no AI required)
- ✅ All 6 workflow tests pass
## Impact
Users with Claude Code installed can now use the automation without any
configuration - it just works! The same applies to the gemini and codex CLIs.
Git config setup is only required when using a non-default command.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>