Remove old docs from tracking (now symlinked to project-docs)

rob 2026-01-12 02:47:19 -04:00
parent 15b03e83d4
commit b71376f4d4
17 changed files with 0 additions and 6245 deletions


@@ -1,623 +0,0 @@
# CascadingDev Automation System
## Overview
The CascadingDev automation system processes discussion files during git commits to maintain structured summaries. It operates in two phases:
**Phase 1 (Basic)**: Always runs, no dependencies
- Parses `VOTE:` lines from discussions
- Tracks latest vote per participant
- Updates summary `.sum.md` files automatically
**Phase 2 (AI-Enhanced)**: Optional, requires Claude API
- Extracts questions, action items, and decisions using AI
- Tracks @mentions and awaiting replies
- Processes only incremental changes (git diff)
- Maintains timeline and structured summaries
## Architecture
```
automation/
├── workflow.py # Main orchestrator, called by pre-commit hook
├── agents.py # Claude-powered extraction agents
├── summary.py # Summary file formatter and updater
└── __init__.py
```
## Phase 1: Vote Tracking (Always Enabled)
### How It Works
1. **Pre-commit hook** triggers `automation/workflow.py --status`
2. Finds all staged `.discussion.md` files
3. Parses entire file for `VOTE:` lines
4. Maintains latest vote per participant
5. Updates corresponding `.sum.md` file's VOTES section
6. Auto-stages updated summaries
### Vote Format
```markdown
- ParticipantName: Any comment text. VOTE: READY|CHANGES|REJECT
```
**Rules:**
- Only the **latest** vote per participant counts
- Case-insensitive vote tokens (vote:, VOTE:, Vote:)
- Three valid values: READY, CHANGES, REJECT
- Must follow participant bullet format: `- Name: ...`
### Example
**Discussion file:**
```markdown
- Alice: I like this approach. VOTE: READY
- Bob: We need tests first. VOTE: CHANGES
- Alice: Good point, updated the plan. VOTE: CHANGES
```
**Resulting summary:**
```markdown
<!-- SUMMARY:VOTES START -->
## Votes (latest per participant)
READY: 0 • CHANGES: 2 • REJECT: 0
- Alice: CHANGES
- Bob: CHANGES
<!-- SUMMARY:VOTES END -->
```
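The rules above can be captured in a few lines. The following is an illustrative sketch of Phase 1 vote parsing, not the actual `workflow.py` implementation:

```python
import re

# Matches "- Name: ... VOTE: READY|CHANGES|REJECT" (vote token is case-insensitive).
VOTE_RE = re.compile(
    r"^-\s+(?P<name>[^:]+):.*\bvote:\s*(?P<value>READY|CHANGES|REJECT)\b",
    re.IGNORECASE,
)

def latest_votes(discussion_text: str) -> dict[str, str]:
    """Return the latest vote per participant; later lines override earlier ones."""
    votes: dict[str, str] = {}
    for line in discussion_text.splitlines():
        m = VOTE_RE.match(line)
        if m:
            votes[m.group("name").strip()] = m.group("value").upper()
    return votes
```

Running this over the discussion above yields `{"Alice": "CHANGES", "Bob": "CHANGES"}`, which matches the summary counts.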
## Phase 2: AI-Enhanced Processing (Optional)
### Requirements
Phase 2 supports multiple AI providers with automatic fallback chains and intelligent model selection. Configuration is managed through `config/ai.yml` (copied into every generated project).
### Configuration: config/ai.yml (Primary Method)
**Edit the configuration file** to set your preferred AI providers and fallback chain:
```yaml
# config/ai.yml
runner:
  # Default command chain (balanced speed/quality)
  command_chain:
    - "claude -p"
    - "codex --model gpt-5"
    - "gemini --model gemini-2.5-flash"

  # Fast command chain (simple tasks: vote counting, gate checks)
  # Used when model_hint: fast in .ai-rules.yml
  command_chain_fast:
    - "claude -p"                        # Auto-selects Haiku via subagent
    - "codex --model gpt-5-mini"
    - "gemini --model gemini-2.5-flash"

  # Quality command chain (complex tasks: design, implementation planning)
  # Used when model_hint: quality in .ai-rules.yml
  command_chain_quality:
    - "claude -p"                        # Auto-selects Sonnet via subagent
    - "codex --model o3"
    - "gemini --model gemini-2.5-pro"

  sentinel: "CASCADINGDEV_NO_CHANGES"
```
### How it works:
- The automation runner (`patcher.py`) iterates through the `command_chain` from top to bottom.
- It attempts to generate a patch or summary with the first provider.
- If the provider fails (e.g., API error, invalid output), it automatically retries with the next provider in the chain.
- This continues until a provider succeeds or the chain is exhausted.
- This resilience ensures that the automation can proceed even if one AI provider is temporarily unavailable or performs poorly on a given task.
- `.ai-rules.yml` can specify `model_hint: fast` or `model_hint: quality` per rule
- Fast models (Haiku, GPT-5-mini) handle simple tasks (vote counting, gate checks)
- Quality models (Sonnet, O3) handle complex tasks (design discussions, planning)
- This optimization reduces costs by ~70% while maintaining quality
### Environment Overrides (Optional)
**Temporarily override** `config/ai.yml` for a single commit:
```bash
# Override command for this commit only
CDEV_AI_COMMAND="claude -p" git commit -m "message"
# Chain multiple providers with || delimiter
CDEV_AI_COMMAND="claude -p || codex --model gpt-5" git commit -m "message"
```
Environment variables take precedence but don't modify the config file.
Common non-interactive setups:
| Provider | CLI Tool | Command Example | Authentication | Notes |
| --- | --- | --- | --- | --- |
| Claude | `claude` | `claude -p` | Run `claude` and follow prompts to sign in | Supports custom subagents in `~/.claude/agents/`. Create with `./tools/setup_claude_agents.sh`. Uses Haiku (fast) or Sonnet (quality). |
| OpenAI | `codex` | `codex --model gpt-5` | Run `codex` and sign in with ChatGPT account | Codex CLI is OpenAI's terminal coding agent. Default model: GPT-5. Use `gpt-5-mini` for faster, cheaper responses. |
| Google | `gemini` | `gemini --model gemini-2.5-flash` | Run `gemini` and sign in with Google account | Free tier: 60 req/min. Use `gemini-2.5-flash` (fast) or `gemini-2.5-pro` (1M context, quality). Open source (Apache 2.0). |
**Recommended Setup:** Use the provided setup script to create Claude subagents:
```bash
# One-time setup (creates ~/.claude/agents/cdev-patch.md and cdev-patch-quality.md)
./tools/setup_claude_agents.sh
```
### Automated Discussion Status Promotion
A key feature of the automation system is its ability to automatically update the status of a discussion file based on participant votes. This moves a feature, design, or other discussion through its lifecycle without manual intervention.
**How it works:**
1. The `workflow.py` script is triggered on commit.
2. It scans staged discussion files for YAML front matter containing a `promotion_rule`.
3. It tallies `VOTE: READY` and `VOTE: REJECT` votes from eligible participants.
4. If the vote counts meet the configured thresholds, the script updates the `status:` field in the file's front matter (e.g., from `OPEN` to `READY_FOR_DESIGN`).
5. The updated discussion file is automatically staged.
**Configuration (in discussion file front matter):**
The entire process is controlled by a `promotion_rule` block within the YAML front matter of a discussion file (e.g., `feature.discussion.md`).
```yaml
---
type: feature-discussion
stage: feature
status: OPEN
promotion_rule:
  allow_agent_votes: false
  ready_min_eligible_votes: 2
  reject_min_eligible_votes: 1
  ready_status: "READY_FOR_DESIGN"
  reject_status: "FEATURE_REJECTED"
---
- Alice: Looks good. VOTE: READY
- Bob: I agree. VOTE: READY
```
**Parameters:**
* `allow_agent_votes` (boolean, optional): If `true`, votes from participants named `AI_*` will be counted. Defaults to `false`.
* `ready_min_eligible_votes` (integer | "all", optional): The number of `READY` votes required to promote the status. Can be an integer or the string `"all"` (requiring all eligible voters to vote `READY`). Defaults to `2`.
* `reject_min_eligible_votes` (integer | "all", optional): The number of `REJECT` votes required to reject the discussion. Defaults to `1`.
* `ready_status` (string, optional): The target status to set when the `ready` threshold is met. Defaults to stage-specific values (e.g., `READY_FOR_DESIGN` for the `feature` stage).
* `reject_status` (string, optional): The target status to set when the `reject` threshold is met. Defaults to stage-specific values (e.g., `FEATURE_REJECTED`).
This automation turns the discussion and voting process into a state machine, enabling a self-driving project workflow.
The setup script above (`./tools/setup_claude_agents.sh`) creates two subagent files:
- `cdev-patch.md` - Uses Haiku model (fast, cost-efficient) - auto-selected when TASK COMPLEXITY: FAST
- `cdev-patch-quality.md` - Uses Sonnet model (higher quality) - auto-selected when TASK COMPLEXITY: QUALITY
The `claude -p` command automatically selects the appropriate subagent based on the `TASK COMPLEXITY` hint in the prompt:
- Simple tasks (vote counting, gate checks) → `command_chain_fast` → Haiku
- Complex tasks (design, implementation planning) → `command_chain_quality` → Sonnet
- Default tasks → `command_chain` → auto-select based on complexity
**Alternative: Direct API**
```bash
pip install anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
```
If no AI provider is configured or none responds, Phase 2 features are skipped (with a warning on stderr) and only Phase 1 (vote tracking) runs.
### Features
#### 1. @Mention Tracking
Extracts `@Name` and `@all` mentions from discussions to track who's waiting for replies.
**Format:**
```markdown
- Alice: @Bob what do you think about OAuth2?
- Carol: @all please review by Friday
```
**Summary section:**
```markdown
<!-- SUMMARY:AWAITING START -->
## Awaiting Replies
### @Bob
- @Alice: What do you think about OAuth2?
### @all
- @Carol: Please review by Friday
<!-- SUMMARY:AWAITING END -->
```
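Mention extraction needs no AI; a simple regex pass along these lines suffices (a sketch, not the shipped implementation):

```python
import re

MENTION_RE = re.compile(r"@([A-Za-z][\w-]*)")

def extract_mentions(discussion_text: str) -> dict[str, list[str]]:
    """Map each mentioned name (including 'all') to the authors awaiting a reply."""
    awaiting: dict[str, list[str]] = {}
    for line in discussion_text.splitlines():
        m = re.match(r"^-\s+(?P<author>[^:]+):\s*(?P<body>.*)", line)
        if not m:
            continue  # not a participant bullet line
        for target in MENTION_RE.findall(m.group("body")):
            awaiting.setdefault(target, []).append(m.group("author").strip())
    return awaiting
```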
#### 2. Question Tracking
Identifies questions and tracks their answers.
**Markers (optional but recommended):**
```markdown
- Alice: Q: Should we use OAuth2 or JWT?
- Bob: A: I'd recommend OAuth2 for third-party auth.
```
**Also detects:**
- Lines ending with `?`
- `Question:` prefix
- `Re:` replies (indicate partial answers)
**Summary section:**
```markdown
<!-- SUMMARY:OPEN_QUESTIONS START -->
## Open Questions
- @Alice: Should we cache API responses?
### Partially Answered:
- @Bob: What about rate limiting?
- Partial answer: We'll use token bucket algorithm
<!-- SUMMARY:OPEN_QUESTIONS END -->
```
#### 3. Action Item Management
Tracks tasks from creation → assignment → completion.
**Markers (optional but recommended):**
```markdown
- Alice: TODO: Research OAuth2 libraries
- Bob: I'll handle the JWT implementation.
- Alice: DONE: Completed library research, recommending authlib.
- Dave: ACTION: Review security implications
```
**Summary section:**
```markdown
<!-- SUMMARY:ACTION_ITEMS START -->
## Action Items
### TODO (unassigned):
- [ ] Document the authentication flow (suggested by @Carol)
### In Progress:
- [ ] Implement JWT token validation (@Bob)
### Completed:
- [x] Research OAuth2 libraries (@Alice)
<!-- SUMMARY:ACTION_ITEMS END -->
```
#### 4. Decision Logging (ADR-Style)
Captures architectural decisions with rationale.
**Markers (optional but recommended):**
```markdown
- Alice: DECISION: Use OAuth2 + JWT hybrid approach.
Rationale: OAuth2 for robust third-party auth, JWT for stateless sessions.
```
**Also detects:**
- "We decided to..."
- "Going with X because..."
- Vote consensus (multiple READY votes)
**Summary section:**
```markdown
<!-- SUMMARY:DECISIONS START -->
## Decisions (ADR-style)
### Decision 1: Use OAuth2 + JWT hybrid approach
- **Proposed by:** @Alice
- **Supported by:** @Bob, @Carol
- **Rationale:** OAuth2 for robust third-party auth, JWT for stateless sessions
- **Alternatives considered:**
- Pure JWT authentication
- Session-based auth with cookies
<!-- SUMMARY:DECISIONS END -->
```
## Conversation Guidelines
### Natural Conversation (Recommended)
**Write naturally - AI normalization extracts markers automatically:**
```markdown
# Examples of natural conversation that AI understands:
- Alice: I think we should use OAuth2. Does anyone know if we need OAuth 2.1 specifically?
VOTE: READY
- Bob: Good question Alice. I'm making a decision here - we'll use OAuth 2.0 for now.
@Carol can you research migration paths to 2.1? VOTE: CHANGES
- Carol: I've completed the OAuth research. We can upgrade later without breaking changes.
VOTE: READY
```
**AI normalization (via `agents.py`) extracts:**
- Decisions from natural language ("I'm making a decision here - ...")
- Questions from conversational text ("Does anyone know if...")
- Action items with @mentions ("@Carol can you research...")
- Votes (always tracked: `VOTE: READY|CHANGES|REJECT`)
### Explicit Markers (Fallback)
**If AI is unavailable, these explicit line-start markers work as fallback:**
```markdown
# Markers (✅ = works without AI as simple fallback)
QUESTION: <question> # ✅ Explicit question marker
Q: <question> # ✅ Short form
TODO: <action> # ✅ New unassigned task
ACTION: <action> # ✅ Task with implied ownership
ASSIGNED: <task> @name # ✅ Claimed task
DONE: <completion> # ✅ Mark task complete
DECISION: <choice> # ✅ Architectural decision
VOTE: READY|CHANGES|REJECT # ✅ ALWAYS tracked (with or without AI)
@Name # ✅ Mention extraction (simple regex)
```
**Example with explicit markers:**
```markdown
- Alice: QUESTION: Should we support OAuth2?
- Bob: TODO: Research OAuth2 libraries
- Bob: ASSIGNED: OAuth2 library research
- Carol: DECISION: Use OAuth2 for authentication
- Dave: @all Please review. VOTE: READY
```
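The fallback tier that handles these markers can be approximated with a small line-start scan (illustrative sketch only):

```python
import re

# Markers recognised by the regex fallback, checked longest-first so that
# "QUESTION:" is not shadowed by "Q:".
MARKERS = ("QUESTION:", "ASSIGNED:", "DECISION:", "ACTION:", "TODO:", "DONE:", "Q:")

def extract_markers(discussion_text: str) -> list[tuple[str, str, str]]:
    """Return (author, marker, text) triples for explicit line-start markers."""
    items = []
    for line in discussion_text.splitlines():
        m = re.match(r"^-\s+(?P<author>[^:]+):\s*(?P<body>.*)", line)
        if not m:
            continue
        body = m.group("body")
        for marker in MARKERS:
            if body.upper().startswith(marker):
                items.append((m.group("author").strip(),
                              marker.rstrip(":"),
                              body[len(marker):].strip()))
                break
    return items
```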
### Two-Tier Architecture
1. **AI Normalization (Primary):** Handles natural conversation, embedded markers, context understanding
2. **Simple Fallback:** Handles explicit line-start markers when AI unavailable
Benefits:
- ✅ Participants write naturally without strict formatting
- ✅ Resilient (multi-provider fallback: claude → codex → gemini)
- ✅ Works offline/API-down with explicit markers
- ✅ Cost-effective (uses fast models for extraction)
## Implementation Details
### Incremental Processing
The system only processes **new content** added since the last commit:
1. Uses `git diff HEAD <file>` to get changes
2. Extracts only lines starting with `+` (added lines)
3. Feeds incremental content to AI agents
4. Updates summary sections non-destructively
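Steps 1–2 can be sketched as below. `added_lines` is a hypothetical helper (assuming a git checkout), not the actual workflow code:

```python
import subprocess

def added_from_diff(diff_text: str) -> list[str]:
    """Keep '+' lines from unified diff output, dropping the '+++' file header."""
    return [line[1:] for line in diff_text.splitlines()
            if line.startswith("+") and not line.startswith("+++")]

def added_lines(path: str) -> list[str]:
    """Lines added to `path` since the last commit (the incremental content)."""
    diff = subprocess.run(["git", "diff", "HEAD", "--", path],
                          capture_output=True, text=True, check=True).stdout
    return added_from_diff(diff)
```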
### Participant Agent Scripts
Before any output writers run, the runner checks the matched rule for a `participants` list. Each entry is a Python script path that will be executed with the staged file:
```yaml
rules:
  feature_discussion_update:
    participants:
      - path: "agents/moderator.py"
      - path: "agents/visualizer.py"
        background: true
    outputs:
      self_append:
        path: "{path}"
        output_type: "feature_discussion_writer"
```
Key points:
- Agent scripts live under `agents/` and reuse the shared SDK in `src/cascadingdev/agent/`.
- `AgentContext` exposes helpers like `read_text()` and `append_block()` to work safely against the repo root.
- `ProviderClient` calls the configured AI chain (Claude → Codex → Gemini) and can return free-form or JSON responses.
- The included moderator agent appends a single guided comment per commit, guarded by `<!-- AUTO:MODERATOR START -->` to remain idempotent.
- The visualizer agent watches for `@AI_visual` mentions, generates PlantUML diagrams, and links them back to the discussion.
- Add `background: true` for tool-style agents (researcher, visualizer) so the runner launches them asynchronously; their follow-up work lands in the working tree after the commit and can be included in the next commit. These service agents provide information only and intentionally omit `VOTE:` lines to avoid influencing promotion thresholds.
Adding new personas is as simple as dropping a script into `agents/` and registering it in `.ai-rules.yml`.
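A minimal participant agent might look like this. The `read_text()`/`append_block()` names mirror the `AgentContext` helpers mentioned above, but their exact signatures here are assumptions:

```python
# Sketch of a custom participant agent; the ctx object stands in for AgentContext.
MARKER = "<!-- AUTO:MODERATOR START -->"

def run(ctx) -> None:
    """Append one guided comment per commit, guarded by a marker for idempotency."""
    text = ctx.read_text()
    if MARKER in text:
        return  # already commented: stay idempotent across repeated runs
    ctx.append_block(f"{MARKER}\n- AI_moderator: Please resolve open questions.\n")
```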
### Marker Block System
Summary files use HTML comment markers for non-destructive updates. In addition to the content sections, a special state marker is used to persist structured data across runs.
#### Persistent Summary State
To enable robust, incremental updates, the system stores the aggregated state of questions, action items, decisions, and mentions in a JSON blob within a comment at the top of the summary file.
```markdown
<!-- SUMMARY:STATE {"action_items": [...], "decisions": [...], "mentions": [...], "questions": [...]} -->
```
**How it works:**
1. Before processing new discussion content, the `summary.py` script reads this state blob from the `.sum.md` file.
2. New items extracted from the latest discussion changes are merged with the existing state.
3. This merging logic deduplicates items and updates their status (e.g., a question moving from `OPEN` to `ANSWERED`).
4. The updated state is written back to the marker.
5. The visible summary sections (like `OPEN_QUESTIONS`) are then regenerated from this canonical state.
This approach prevents information loss and ensures that the summary accurately reflects the cumulative history of the discussion, even as the discussion file itself grows.
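The read/merge/write-back cycle might be sketched as follows (hypothetical helpers; the real `summary.py` merge logic is richer):

```python
import json
import re

STATE_RE = re.compile(r"<!-- SUMMARY:STATE (?P<json>\{.*\}) -->")

def read_state(summary_text: str) -> dict:
    """Load the persisted JSON state from the summary, or start empty."""
    m = STATE_RE.search(summary_text)
    if m:
        return json.loads(m.group("json"))
    return {key: [] for key in ("action_items", "decisions", "mentions", "questions")}

def write_state(summary_text: str, state: dict) -> str:
    """Write the state blob back, replacing an existing marker or prepending one."""
    marker = f"<!-- SUMMARY:STATE {json.dumps(state, sort_keys=True)} -->"
    if STATE_RE.search(summary_text):
        return STATE_RE.sub(lambda _: marker, summary_text, count=1)
    return marker + "\n" + summary_text
```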
#### Content Sections
```markdown
<!-- SUMMARY:SECTION_NAME START -->
## Section Header
<content>
<!-- SUMMARY:SECTION_NAME END -->
```
**Sections:**
- `VOTES` - Vote counts and participants
- `DECISIONS` - ADR-style decisions
- `OPEN_QUESTIONS` - Unanswered questions
- `AWAITING` - Unresolved @mentions
- `ACTION_ITEMS` - TODO → ASSIGNED → DONE
- `TIMELINE` - Chronological updates (future)
- `LINKS` - Related PRs/commits (future)
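Non-destructive replacement between the START/END markers can be sketched as:

```python
import re

def replace_section(summary_text: str, section: str, content: str) -> str:
    """Regenerate one SUMMARY:<section> block without touching the rest of the file."""
    start = f"<!-- SUMMARY:{section} START -->"
    end = f"<!-- SUMMARY:{section} END -->"
    pattern = re.compile(re.escape(start) + r".*?" + re.escape(end), re.DOTALL)
    if not pattern.search(summary_text):
        return summary_text  # missing/malformed markers: leave the file untouched
    return pattern.sub(lambda _: f"{start}\n{content}\n{end}", summary_text, count=1)
```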
### Implementation Stage Enhancements
- Checkbox items in `implementation.discussion.md` are parsed and normalized into
  `implementation/tasks.md` and the summary's **Tasks** block.
- Promotion to testing now checks two conditions before updating YAML frontmatter:
1. All detected checkboxes are `[x]`.
2. At least one human participant has a `VOTE: READY`.
- These safeguards run even when AI providers fail, ensuring the stage cannot
advance on agent votes alone or with unfinished work.
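Both safeguards can be sketched together (a hypothetical helper; `votes` maps each participant to their latest vote):

```python
import re

CHECKBOX_RE = re.compile(r"^\s*-\s*\[(?P<done>[ xX])\]", re.MULTILINE)

def can_promote_to_testing(discussion_text: str, votes: dict[str, str]) -> bool:
    """True only when every checkbox is ticked AND a human has voted READY."""
    boxes = CHECKBOX_RE.findall(discussion_text)
    all_done = bool(boxes) and all(b.lower() == "x" for b in boxes)
    human_ready = any(vote == "READY" and not name.startswith("AI_")
                      for name, vote in votes.items())
    return all_done and human_ready
```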
### Error Handling
The workflow is **non-blocking**:
- Always exits 0 (never blocks commits)
- Prints warnings to stderr for missing dependencies
- Falls back to Phase 1 (votes only) if API key missing
- Continues on individual agent failures
### Testing
```bash
# Run all tests
pytest tests/
# Test vote parsing
pytest tests/test_workflow.py -v
# Manual test in a project
cd /path/to/cascadingdev-project
echo "- Test: Comment with vote. VOTE: READY" >> Docs/features/test/discussions/test.discussion.md
git add Docs/features/test/discussions/test.discussion.md
git commit -m "Test workflow" # Triggers automation
```
## Configuration
### AI Provider Options
**1. Claude CLI (Default)**
```bash
# No configuration needed if you have 'claude' command available
# The system defaults to: claude -p '{prompt}'
# To customize:
git config cascadingdev.aicommand "claude -p '{prompt}'"
```
**2. Gemini CLI**
```bash
git config cascadingdev.aiprovider "gemini-cli"
git config cascadingdev.aicommand "gemini '{prompt}'"
```
**3. OpenAI Codex CLI**
```bash
git config cascadingdev.aiprovider "codex-cli"
git config cascadingdev.aicommand "codex '{prompt}'"
```
**4. Direct API (Anthropic)**
```bash
pip install anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
# CLI commands will fall back to the API if available
```
**5. Custom AI Command**
```bash
# Use any command that reads a prompt and returns JSON
git config cascadingdev.aicommand "my-ai-tool --prompt '{prompt}' --format json"
```
### Disabling AI Features
Simply don't configure any AI provider. The system will:
- Log a warning: `[agents] warning: No AI provider configured, skipping AI processing`
- Continue with Phase 1 (vote tracking only)
- Still extract @mentions (doesn't require AI)
## Future Enhancements
🚧 **Phase 3 (Planned):**
- Timeline auto-population from git commits
- Link tracking (related PRs, commits)
- Multi-file decision tracking
- Slack/Discord notification integration
- Summary diffs between commits
- Natural language summary generation
## Troubleshooting
### "agents module not available"
**Cause:** Import path issue when workflow.py runs from pre-commit hook.
**Solution:** Already fixed in workflow.py with dual import style:
```python
try:
from automation import agents
except ImportError:
import agents # Fallback for different execution contexts
```
### "No AI provider configured"
**Cause:** No AI CLI command or API key configured.
**Solution:** Choose one:
```bash
# Option 1: Use Claude CLI (default)
git config cascadingdev.aicommand "claude -p '{prompt}'"
# Option 2: Use Gemini CLI
git config cascadingdev.aiprovider "gemini-cli"
git config cascadingdev.aicommand "gemini '{prompt}'"
# Option 3: Use Anthropic API
pip install anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
```
Or accept Phase 1 only (votes only, still useful).
### "AI command failed with code X"
**Cause:** AI CLI command returned error.
**Solution:**
1. Test command manually: `claude -p "test prompt"`
2. Check command is in PATH: `which claude`
3. Verify command syntax in config: `git config cascadingdev.aicommand`
4. Check stderr output in warning message for details
### Summary sections not updating
**Cause:** Markers might be malformed or missing.
**Solution:**
1. Check summary file has proper markers (see "Marker Block System")
2. Regenerate from template if needed
3. File should be created automatically by pre-commit hook
### Votes not being detected
**Cause:** Format doesn't match parser expectations.
**Solution:** Ensure format is:
```markdown
- ParticipantName: Comment text. VOTE: READY
```
Common issues:
- Missing `- ` bullet
- Missing `: ` after name
- Typo in vote value (must be READY, CHANGES, or REJECT)
- Multiple participants on one line (not supported)
## See Also
- [DESIGN.md](DESIGN.md) - Overall system architecture
- [CLAUDE.md](../CLAUDE.md) - Guide for AI assistants
- [USER_GUIDE.md](../assets/templates/USER_GUIDE.md) - User-facing documentation



@@ -1,28 +0,0 @@
# CascadingDev Installer
## Requirements
- Python 3.10+ and git
- (Optional) PySide6 for GUI (`pip install PySide6`)
## Quick start
```bash
python setup_cascadingdev.py --target /path/to/new-project
```
### Skip GUI
```bash
python setup_cascadingdev.py --target /path/to/new-project --no-ramble
```
> After installation, open `USER_GUIDE.md` in your new project for daily usage.
## Rebuild & Run (for maintainers)
Rebuild the bundle every time you change assets/ or the installer:
```bash
python tools/build_installer.py
```
Then run only the bundled copy:
```bash
python install/cascadingdev-*/setup_cascadingdev.py --target /path/to/new-project
```


@@ -1,442 +0,0 @@
# CascadingDev Implementation Progress
**Last Updated:** 2025-11-03
**Overall Completion:** ~57% (M0✅ M1✅ M2🚧 M3❌ M4✅)
---
## 📊 Quick Status Overview
| Milestone | Target | Status | Completion |
|-----------|--------|--------|------------|
| **M0: Process Foundation** | Foundation docs and setup | ✅ Complete | 100% |
| **M1: Orchestrator MVP** | Core automation working | ✅ Complete | 100% |
| **M2: Stage Automation** | All 7 stages + moderator | 🚧 In Progress | 40% |
| **M3: Gitea Integration** | PR/issue automation | ❌ Not Started | 0% |
| **M4: Python Migration** | Bash → Python hook | ✅ Complete | 100% |
**Current Focus:** Starting Stage 4 (Implementation)
---
## ✅ Milestone M0: Process Foundation (100%)
**Goal:** Establish project structure, documentation, and templates
### Core Documentation
- [x] `docs/DESIGN.md` - System architecture (3,018 lines)
- [x] `docs/AUTOMATION.md` - User-facing automation guide
- [x] `docs/INSTALL.md` - Installation instructions
- [x] `CLAUDE.md` - AI assistant guidance
- [x] `AGENTS.md` - Developer guidelines
- [x] `README.md` - Project overview
- [x] `VERSION` - Semantic versioning
### Setup Infrastructure
- [x] `src/cascadingdev/setup_project.py` - Project installer (478 lines)
- [x] `src/cascadingdev/cli.py` - CLI commands (doctor, build, smoke, release, pack)
- [x] `tools/build_installer.py` - Bundle builder
- [x] `tools/bundle_smoke.py` - End-to-end installer test
- [x] `pyproject.toml` - Package configuration
### Templates (6 files)
- [x] `assets/templates/feature_request.md` - Feature request template
- [x] `assets/templates/feature.discussion.md` - Feature discussion template
- [x] `assets/templates/feature.discussion.sum.md` - Summary template
- [x] `assets/templates/design.discussion.md` - Design discussion template
- [x] `assets/templates/design_doc.md` - Design document template (needs enhancement)
- [x] `assets/templates/USER_GUIDE.md` - User guide shipped to projects
### Policy Files
- [x] `config/ai.yml` - AI provider configuration
- [x] `assets/templates/process/policies.yml` - Process policies template
### Ramble GUI
- [x] `assets/runtime/ramble.py` - Feature capture GUI (PySide6/PyQt5)
- [x] `assets/runtime/create_feature.py` - CLI feature creation
- [x] Template META system - JSON metadata in HTML comments
---
## ✅ Milestone M1: Orchestrator MVP + Hook Enhancements (100%)
**Goal:** Python automation engine with cascading rules
### Core Automation Modules (2,469 lines total)
- [x] `automation/config.py` (182 lines) - Rule loading, merging, path resolution
- [x] `automation/runner.py` (146 lines) - Rule evaluation, output generation
- [x] `automation/patcher.py` (720 lines) - AI patch generation and application
- [x] `automation/workflow.py` (533 lines) - Vote tracking, status reporting
- [x] `automation/summary.py` (301 lines) - Summary file formatting
- [x] `automation/agents.py` (438 lines) - AI agent integration
- [x] `automation/ai_config.py` (149 lines) - Multi-provider configuration
- [x] `src/cascadingdev/agent/` (sdk/providers/patcher) - Shared helpers for participant scripts
### Pre-commit Hook
- [x] `assets/hooks/pre-commit` (192 lines bash)
- [x] Secret detection (regex patterns)
- [x] Append-only validation for discussions
- [x] Summary file template creation
- [x] Python module orchestration
### Cascading Rules System
- [x] Hierarchical .ai-rules.yml loading (nearest file wins)
- [x] Template variable support (`{feature_id}`, `{dir}`, `{basename}`, etc.)
- [x] Path normalization and security (blocks `../` escapes)
- [x] Rule merging with override semantics
- [x] 10 rule types defined in `assets/templates/rules/features.ai-rules.yml`
### Multi-Provider AI System
- [x] Three optimization levels (fast/default/quality)
- [x] Fallback chains (Claude → Codex → Gemini)
- [x] Model hint propagation (rule → runner → patcher)
- [x] Cost optimization via intelligent routing
- [x] Environment variable overrides
- [x] Claude subagent setup script (`tools/setup_claude_agents.sh`)
- [x] **AI normalization system** (agents.normalize_discussion())
- [x] Natural conversation → structured JSON extraction
- [x] Fast model usage for cost-effective extraction
- [x] Simple fallback for explicit markers when AI unavailable
- [x] Two-tier architecture (AI primary, regex fallback)
### Testing Infrastructure
- [x] `tests/test_workflow.py` - Workflow automation tests (vote parsing, participant agents, summaries)
- [x] `tests/test_patcher.py` - Patch generation tests
- [x] `tests/test_runner.py` - Rule evaluation tests
- [x] `tests/test_config.py` - Config loading tests
- [x] `tests/test_utils.py` - Utility tests
- [x] `tests/test_template_meta.py` - Template metadata tests
- [x] `tests/test_build.py` - Build system tests
- [x] **Total: 39 tests (pytest -q), 100% passing**
---
## 🚧 Milestone M2: Stage Automation & Moderator (45%)
**Goal:** Implement all 7 stages of the development lifecycle
### Stage 1: Request (100% ✅)
- [x] Template: `assets/templates/feature_request.md`
- [x] Rule: `feature_request` in features.ai-rules.yml
- [x] Automation: Creates `feature.discussion.md` on commit
- [x] Tested: Working in production
### Stage 2: Feature Discussion (100% ✅)
- [x] Template: `assets/templates/feature.discussion.md`
- [x] Summary template: `assets/templates/feature.discussion.sum.md`
- [x] Rules: `feature_discussion_update`, `feature_discussion_writer`
- [x] Automation:
- [x] Vote tracking (VOTE: READY/CHANGES/REJECT)
- [x] **AI normalization** for natural conversation (agents.normalize_discussion())
- [x] Question extraction from natural language
- [x] Participant agent scripts (`agents/moderator.py`, `agents/visualizer.py`) invoked via `.ai-rules.yml` `participants`
- [x] Background participant support (`background: true`) so researcher/visualizer can run asynchronously without casting votes
- [x] Action item tracking from conversational text
- [x] Decision tracking with context understanding
- [x] @mention tracking
- [x] Timeline generation
- [x] Summary file updates
- [x] Simple fallback for explicit line-start markers (DECISION:, QUESTION:, ACTION:)
- [x] Gate creation: `design.discussion.md` when status = READY_FOR_DESIGN
- [x] Tested: `tests/test_workflow.py` covers moderator invocation, idempotency, and visualizer diagram generation plus production validation
### Stage 3: Design Discussion (100% ✅)
- [x] Template: `assets/templates/design.discussion.md`
- [x] **Enhanced template:** `assets/templates/design_doc.md` with comprehensive ADR structure
- [x] Rules: `design_gate_writer`, `design_discussion_writer`
- [x] Automation:
- [x] Gate creation when feature status = READY_FOR_DESIGN
- [x] Discussion file generation
- [x] Design document template ready for AI generation
- [x] **End-to-end tests:** 4 comprehensive tests created (2 passing, 1 skipped, 1 specification)
- [x] Tested: Promotion logic validated via unit tests
**Completed (2025-11-02):**
1. ✅ Enhanced `design_doc.md` template with 14-section ADR structure
2. ✅ Created end-to-end test suite (test_stage_promotion.py)
3. ✅ Validated vote counting and threshold logic
4. ✅ Documented AI vote exclusion behavior
**Remaining Work (5% - Polish):**
- [x] Integrate status update into runner/workflow (currently manual)
- [x] Test design document AI generation in production
### Stage 4: Implementation Discussion (75% 🚧)
- [x] Template: `implementation.discussion.md`
- [x] Template: `implementation/plan.md`
- [x] Template: `implementation/tasks.md`
- [x] Rules: `implementation_gate_writer`
- [x] Rule: `implementation_discussion_writer`
- [ ] Automation:
- [x] Gate creation when design status = READY_FOR_IMPLEMENTATION
- [x] Task checkbox tracking (parse `- [ ]` / `- [x]`)
- [ ] PR/commit linking
- [ ] Progress tracking roll-up
- [x] Human gate: Require ≥1 human READY vote + all tasks complete
- [x] Tests: Workflow + promotion coverage for implementation stage
**Next Steps:**
1. Track PR/commit references and surface them in summaries
2. Feed implementation progress into higher-level reporting (burn-down, dashboards)
### Stage 5: Testing Discussion (0% ❌)
- [ ] Template: `testing.discussion.md` **← MISSING**
- [ ] Template: `testing/testplan.md` **← MISSING**
- [ ] Template: `testing/checklist.md` **← MISSING**
- [ ] Rules: `testing_gate_writer` **← MISSING**
- [ ] Rule: `testing_discussion_writer` **← MISSING**
- [ ] Automation:
- [ ] Gate creation when implementation complete
- [ ] Test result tracking ([RESULT] PASS/FAIL)
- [ ] Checklist progress
- [ ] Bug linking
- [ ] Tests: None
**Next Steps:**
1. Create testing templates
2. Design test result format
3. Implement testing rules
4. Add result parser to workflow.py
### Stage 6: Review Discussion (0% ❌)
- [ ] Template: `review.discussion.md` **← MISSING**
- [ ] Template: `review/findings.md` **← MISSING**
- [ ] Rules: `review_gate_writer` **← MISSING**
- [ ] Rule: `review_discussion_writer` **← MISSING**
- [ ] Automation:
- [ ] Gate creation when testing complete
- [ ] Finding tracking
- [ ] Approval tracking
- [ ] Human gate: Require ≥1 human READY vote
- [ ] Tests: None
**Next Steps:**
1. Create review templates
2. Design review findings format
3. Implement review rules
4. Test human gate enforcement
### Stage 7: Release (0% ❌)
- [ ] Template: Changelog generation **← MISSING**
- [ ] Template: Rollback notes **← MISSING**
- [ ] Rules: `release_writer` **← MISSING**
- [ ] Automation:
- [ ] Changelog from commits
- [ ] Version tagging
- [ ] Release notes
- [ ] Human gate: Require maintainer approval
- [ ] Tests: None
**Next Steps:**
1. Design release automation
2. Create changelog generator
3. Implement version tagging
4. Add maintainer role checking
### AI_Moderator Protocol (0% ❌)
- [ ] Nudge system for inactive discussions
- [ ] Escalation paths for blocked features
- [ ] Conversation guidance
- [ ] Question tracking and follow-up
- [ ] Vote reminder system
- [ ] Timeout detection
**Implementation Location:** `automation/moderator.py` (does not exist)
**Next Steps:**
1. Create moderator.py module
2. Implement nudge timing logic
3. Add escalation rules to policies.yml
4. Integrate with workflow.py
### Bug Sub-Cycles (0% ❌)
- [ ] BUG_YYYYMMDD_slug folder structure
- [ ] Bug-specific templates
- [ ] Bug tracking rules
- [ ] Integration with testing stage
- [ ] Bug lifecycle automation
**Next Steps:**
1. Design bug folder structure
2. Create bug templates
3. Add bug rules to features.ai-rules.yml
4. Link bugs to parent features
### Discussion Summaries (90% ✅)
- [x] Marker-based updates (SUMMARY:* blocks)
- [x] Vote tallies
- [x] Question tracking
- [x] Action items
- [x] Decisions
- [x] @mentions
- [x] Timeline
- [ ] Links (PR/commit auto-detection) - partially implemented
- [ ] Snapshots for large discussions
---
## ❌ Milestone M3: Gitea Integration (0%)
**Goal:** Integrate with Gitea for PR/issue automation
### Gitea Adapter
- [ ] `automation/adapters/gitea_adapter.py` **← MISSING**
- [ ] Gitea API client integration
- [ ] Authentication setup
### PR Automation
- [ ] Auto-create PRs from implementation stage
- [ ] Link PRs to feature discussions
- [ ] Update PR descriptions with status
- [ ] Auto-label PRs based on stage
- [ ] Comment with review findings
### Issue Tracking
- [ ] Create issues from action items
- [ ] Link issues to discussions
- [ ] Update issue status from discussions
- [ ] Close issues when tasks complete
### Status Reporting
- [ ] Post stage status to PR comments
- [ ] Update PR labels on stage changes
- [ ] Link to discussion summaries
- [ ] Report blocker status
**Next Steps:**
1. Research Gitea API capabilities
2. Design adapter interface
3. Implement basic PR creation
4. Test with local Gitea instance
---
## ✅ Milestone M4: Bash to Python Migration (100%)
**Goal:** Migrate from bash-heavy hook to Python-powered automation
### Architecture
- [x] Core rule resolution in Python (automation/config.py)
- [x] Patch generation in Python (automation/patcher.py)
- [x] Bash hook as thin wrapper (192 lines)
- [x] Python modules handle complex logic (2,469 lines)
### Bash Hook Responsibilities (Minimal)
- [x] Secret detection (regex patterns)
- [x] Append-only validation
- [x] Summary file template creation
- [x] Python module orchestration
### Python Module Responsibilities (Core)
- [x] Rule loading and cascading
- [x] AI prompt generation
- [x] Patch generation and application
- [x] Vote tracking and summary updates
- [x] Provider integration and fallback
### Error Handling
- [x] Python exception handling
- [x] Graceful degradation
- [x] Debug logging (.git/ai-rules-debug/)
- [x] Clear error messages
**Status:** Complete. The automation was built Python-first from the start, so no migration work remained.
---
## 📈 Overall Progress Summary
### By Component Type
**Templates:** 6/13 (46%)
- ✅ Feature request, discussions (feature, design)
- ❌ Implementation, testing, review templates missing
**Rules:** 10/16 (63%)
- ✅ Request, feature, design rules complete
- 🚧 Implementation gate defined but untested
- ❌ Testing, review, release rules missing
**Automation Modules:** 7/9 (78%)
- ✅ Core modules complete (config, runner, patcher, workflow, summary, agents, ai_config)
- ❌ Moderator, Gitea adapter missing
**Testing:** 18/30+ (60%)
- ✅ 18 tests passing (core automation)
- ❌ Stage promotion tests missing
- ❌ Integration tests needed
**Documentation:** 7/7 (100%)
- ✅ All major docs complete and up-to-date
---
## 🎯 Recommended Next Steps
### Short Term (1-2 weeks)
1. **Complete Stage 3:**
- [ ] Enhance design_doc.md template
- [ ] Add end-to-end design stage test
- [ ] Document design stage workflow
2. **Start Stage 4 (Implementation):**
- [ ] Create implementation templates (discussion, plan, tasks)
- [ ] Implement task checkbox parser
- [ ] Add human gate enforcement
- [ ] Test implementation stage promotion
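The task checkbox parser could be as small as this sketch (assuming the standard markdown `- [ ]` / `- [x]` task syntax used throughout these documents):

```python
import re

# Matches markdown task items such as "- [x] done thing" or "  - [ ] open thing"
CHECKBOX_RE = re.compile(r"^\s*-\s+\[(?P<mark>[ xX])\]\s+.+$")

def task_progress(markdown: str) -> tuple:
    """Return (completed, total) checkbox counts for a tasks document."""
    done = total = 0
    for line in markdown.splitlines():
        match = CHECKBOX_RE.match(line)
        if match:
            total += 1
            if match.group("mark").lower() == "x":
                done += 1
    return done, total
```

A gate could then require `done == total` plus a human READY vote before promoting to READY_FOR_TESTING.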
### Medium Term (3-4 weeks)
3. **Add Stage 5 (Testing):**
- [ ] Create testing templates
- [ ] Implement test result tracking
- [ ] Add checklist automation
4. **Add Stage 6 (Review):**
- [ ] Create review templates
- [ ] Implement findings tracking
- [ ] Add human gate enforcement
### Long Term (5-8 weeks)
5. **Complete Stage 7 (Release):**
- [ ] Design release automation
- [ ] Implement changelog generation
- [ ] Add version tagging
6. **Implement AI_Moderator:**
- [ ] Create moderator.py module
- [ ] Add nudge system
- [ ] Implement escalation paths
7. **Add Bug Sub-Cycles:**
- [ ] Design bug workflow
- [ ] Create bug templates
- [ ] Integrate with testing stage
### Optional (Future)
8. **Gitea Integration (M3):**
- [ ] Research Gitea API
- [ ] Implement PR automation
- [ ] Add issue tracking
---
## 📝 How to Update This Document
When completing items:
1. Change `[ ]` to `[x]` for completed checkboxes
2. Update completion percentages in section headers
3. Update "Last Updated" timestamp at top
4. Update "Overall Completion" percentage
5. Update "Current Focus" line
6. Move items from "Next Steps" to checkboxes as work progresses
7. Commit changes: `git add docs/PROGRESS.md && git commit -m "docs: update progress tracking"`
---
## 🔗 Related Documents
- **DESIGN.md** - Full system architecture and design rationale
- **AUTOMATION.md** - User-facing automation guide
- **CLAUDE.md** - AI assistant context and guidance
- **AGENTS.md** - Developer guidelines and conventions
- **README.md** - Project overview and quick start

@@ -1,163 +0,0 @@
@startuml ai-provider-fallback
!theme plain
title AI Provider Fallback Chain with Model Hints
start
:Automation needs AI generation\n(from patcher.py or runner.py);
:Read config/ai.yml;
if (Rule has model_hint?) then (yes)
if (model_hint == "fast"?) then (yes)
:Use **command_chain_fast**:
- claude -p (→ Haiku subagent)
- codex --model gpt-5-mini
- gemini --model gemini-2.5-flash;
elseif (model_hint == "quality"?) then (yes)
:Use **command_chain_quality**:
- claude -p (→ Sonnet subagent)
- codex --model o3
- gemini --model gemini-2.5-pro;
else (unknown hint)
:Fall back to default chain;
endif
else (no hint)
:Use **command_chain** (default):
- claude -p (→ auto-select subagent)
- codex --model gpt-5
- gemini --model gemini-2.5-flash;
endif
partition "Provider Loop" {
:Get next provider from chain;
if (Provider == "claude"?) then (yes)
:Execute: **claude -p**;
note right
Claude CLI uses TASK COMPLEXITY hint
from prompt to select subagent:
- FAST → cdev-patch (Haiku)
- QUALITY → cdev-patch-quality (Sonnet)
- Default → auto-select
end note
if (Returned output?) then (yes)
if (Contains diff markers?) then (yes)
:✓ Success! Extract diff;
stop
else (no - non-diff response)
:Log: "Claude non-diff output";
:Try next provider;
endif
else (command failed)
:Log: "Claude command failed";
:Try next provider;
endif
elseif (Provider == "codex"?) then (yes)
:Execute: **codex exec --model X --json -**;
note right
Codex requires special handling:
- Add "exec" subcommand
- Add "--json" flag
- Add "--color=never"
- Add "-" to read from stdin
- Parse JSON output for agent_message
end note
if (Exit code == 0?) then (yes)
:Parse JSON lines;
:Extract agent_message text;
if (Contains diff?) then (yes)
:✓ Success! Extract diff;
stop
else (no diff)
:Log: "Codex no diff output";
:Try next provider;
endif
else (exit code 1)
:Log: "Codex exited with 1";
:Try next provider;
endif
elseif (Provider == "gemini"?) then (yes)
:Execute: **gemini --model X**;
note right
Gemini is the most reliable fallback:
- Accepts plain text input
- Returns consistent output
- Supports sentinel token
end note
if (Returned output?) then (yes)
if (Output == sentinel token?) then (yes)
:Log: "No changes needed";
:Return empty (intentional);
stop
elseif (Contains diff?) then (yes)
:✓ Success! Extract diff;
stop
else (no diff)
:Log: "Gemini no diff output";
:Try next provider;
endif
else (command failed)
:Log: "Gemini command failed";
:Try next provider;
endif
endif
if (More providers in chain?) then (yes)
:Continue loop;
else (no)
:✗ All providers failed;
:Raise PatchGenerationError;
stop
endif
}
stop
legend bottom
**Configuration Example (config/ai.yml):**
runner:
command_chain:
- "claude -p"
- "codex --model gpt-5"
- "gemini --model gemini-2.5-flash"
command_chain_fast:
- "claude -p"
- "codex --model gpt-5-mini"
- "gemini --model gemini-2.5-flash"
command_chain_quality:
- "claude -p"
- "codex --model o3"
- "gemini --model gemini-2.5-pro"
sentinel: "CASCADINGDEV_NO_CHANGES"
**Environment Override:**
export CDEV_AI_COMMAND="claude -p || gemini --model gemini-2.5-pro"
(Overrides config.yml for this commit only)
endlegend
note right
**Why Fallback Chain?**
1. **Redundancy**: Rate limits, API outages
2. **Model specialization**: Different models excel at different tasks
3. **Cost optimization**: Try cheaper models first
4. **Quality assurance**: Fast models for simple tasks, quality for complex
**Observed Behavior:**
- Claude occasionally returns non-diff output
- Codex consistently exits with code 1 (auth issues?)
- Gemini is the most reliable fallback
end note
@enduml

@@ -1,66 +0,0 @@
@startuml architecture-overview
!theme plain
title CascadingDev Architecture Overview
package "User Project" {
folder "Docs/features/" {
file "request.md" as request
folder "discussions/" {
file "feature.discussion.md" as discussion
file "feature.discussion.sum.md" as summary
}
}
folder ".git/hooks/" {
file "pre-commit" as hook
}
folder "automation/" {
file "runner.py" as runner
file "config.py" as config
file "patcher.py" as patcher
file "workflow.py" as workflow
file "agents.py" as agents
file "summary.py" as summarymod
}
folder "Docs/features/.ai-rules.yml" as rules
}
cloud "AI Provider" {
component "Claude API" as claude
component "Claude CLI" as cli
}
actor Developer
Developer --> request: 1. Creates/edits
Developer --> hook: 2. git commit
hook --> runner: 3. Invokes
runner --> config: 4. Loads rules
config --> rules: 5. Reads
runner --> patcher: 6. Generate outputs
patcher --> claude: 7. AI request
claude --> patcher: 8. Returns patch
patcher --> discussion: 9. Applies patch
hook --> workflow: 10. Process votes
workflow --> agents: 11. Extract data
workflow --> summarymod: 12. Update summary
hook --> Developer: 13. Commit succeeds
note right of runner
Orchestrates the AI
automation pipeline
end note
note right of patcher
Generates and applies
AI-created patches
end note
note right of workflow
Tracks votes and
updates summaries
end note
@enduml

@@ -1,76 +0,0 @@
@startuml cascading-rules
!theme plain
title Cascading Rules Configuration System
package "Repository Root" {
file ".ai-rules.yml" as root_rules #LightBlue
package "Docs/features/" {
file ".ai-rules.yml" as features_rules #LightGreen
package "FR_123/" {
file ".ai-rules.yml" as feature_rules #LightYellow
file "request.md" as request
folder "discussions/" {
file "feature.discussion.md" as discussion
}
}
}
}
component "config.py" as config
request --> config: Process file
config --> feature_rules: 1. Load (nearest)
config --> features_rules: 2. Load (parent)
config --> root_rules: 3. Load (root)
config --> config: 4. Deep merge\n(nearest wins)
note right of config
**Cascading Precedence:**
1. Nearest directory rules
2. Parent directory rules
3. Root rules
**Merge Strategy:**
- Nested dictionaries are merged recursively
- Arrays are replaced (not merged)
- Nearest values win on conflicts
end note
note top of feature_rules
**FR_123/.ai-rules.yml**
rules:
feature_request:
outputs:
feature_discussion:
# Override specific output config
instruction_append: |
Additional context for FR_123
end note
note top of features_rules
**Docs/features/.ai-rules.yml**
file_associations:
"request.md": "feature_request"
"feature.discussion.md": "feature_discussion_update"
rules:
feature_request:
outputs:
feature_discussion:
path: "Docs/features/{feature_id}/discussions/feature.discussion.md"
output_type: "feature_discussion_writer"
end note
note top of root_rules
**Root .ai-rules.yml**
Global rules that apply
to the entire repository
end note
@enduml

@@ -1,148 +0,0 @@
@startuml commit-workflow
!theme plain
title Git Commit Workflow with Automation
actor Developer
participant "git commit" as git
participant "pre-commit hook" as hook
participant "runner.py" as runner
participant "config.py" as config
participant "patcher.py" as patcher
participant "AI Providers" as ai
participant "Claude CLI" as claude
participant "Codex CLI" as codex
participant "Gemini CLI" as gemini
participant "workflow.py" as workflow
database ".git index" as index
Developer -> git: commit staged files
activate git
git -> hook: trigger pre-commit
activate hook
hook -> runner: python3 -m automation.runner
activate runner
runner -> config: Load .ai-rules.yml
activate config
config -> config: Find cascading rules
config -> runner: Return merged config
deactivate config
loop For each staged file
runner -> config: Get rule for file
config -> runner: Return rule & outputs
runner -> patcher: generate_output(source, target, instruction)
activate patcher
patcher -> patcher: Build prompt with\nsource diff + context
patcher -> ai: Try provider 1/3
activate ai
ai -> claude: Send prompt
activate claude
alt Claude produces diff
claude -> ai: Return unified diff\n(wrapped in markers)
ai -> patcher: Success with diff
deactivate claude
deactivate ai
patcher -> patcher: Extract & sanitize patch
patcher -> patcher: git apply --3way
patcher -> index: Stage generated file
patcher -> runner: Success
else Claude non-diff output
claude -> ai: Non-diff response
deactivate claude
ai -> codex: Try provider 2/3
activate codex
alt Codex succeeds
codex -> ai: Return diff (JSON parsed)
deactivate codex
deactivate ai
patcher -> patcher: Extract & sanitize
patcher -> index: Stage file
patcher -> runner: Success
else Codex fails (exit 1)
codex -> ai: Exit code 1
deactivate codex
ai -> gemini: Try provider 3/3
activate gemini
alt Gemini succeeds
gemini -> ai: Return diff
deactivate gemini
deactivate ai
patcher -> patcher: Extract & sanitize
patcher -> index: Stage file
patcher -> runner: Success
else All providers failed
gemini -> ai: Error/no diff
deactivate gemini
deactivate ai
patcher -> patcher: Log error to stderr
patcher -> runner: Skip this file
end
end
end
deactivate patcher
end
runner -> hook: Exit 0
deactivate runner
hook -> workflow: python3 -m automation.workflow --status
activate workflow
workflow -> workflow: Parse VOTE: lines\nfrom discussions
workflow -> workflow: Update .sum.md files
workflow -> index: Stage updated summaries
workflow -> hook: Exit 0
deactivate workflow
hook -> git: Exit 0 (continue commit)
deactivate hook
git -> index: Create commit
git -> Developer: Commit successful
deactivate git
note right of ai
**Multi-Provider Fallback Chain**
Configured in config/ai.yml:
1. claude -p (Claude CLI with subagent)
2. codex exec --model gpt-5 --json
3. gemini --model gemini-2.5-flash
Each provider tried until one succeeds.
Provides redundancy against:
- Rate limits
- API outages
- Non-diff responses
end note
note right of patcher
Saves debug artifacts to
.git/ai-rules-debug/
for troubleshooting:
- *.raw.out
- *.clean.diff
- *.sanitized.diff
- *.final.diff
end note
note right of workflow
Always exits 0
(non-blocking)
Extracts structured markers:
- **DECISION**: text
- **QUESTION**: text
- **ACTION**: @assignee text
end note
@enduml

@@ -1,275 +0,0 @@
# CascadingDev Architecture Diagrams
This directory contains PlantUML diagrams documenting the CascadingDev automation system.
## Viewing the Diagrams
### Option 1: VS Code (Recommended)
1. Install the [PlantUML extension](https://marketplace.visualstudio.com/items?itemName=jebbs.plantuml)
2. Open any `.puml` file
3. Press `Alt+D` to preview
### Option 2: Online Viewer
Visit [PlantUML Web Server](http://www.plantuml.com/plantuml/uml/) and paste the diagram content.
### Option 3: Command Line
```bash
# Install PlantUML
sudo apt install plantuml # or brew install plantuml
# Generate PNG
plantuml docs/architecture-overview.puml
# Generate SVG (better quality)
plantuml -tsvg docs/*.puml
```
## Diagram Index
### 1. **architecture-overview.puml**
**High-level system architecture** showing how all components interact.
**Shows:**
- User project structure
- Automation modules (runner, config, patcher, workflow)
- AI provider integration
- Data flow from developer to commit
**Best for:** Understanding the big picture and component relationships.
---
### 2. **commit-workflow.puml** ⭐ UPDATED
**Sequence diagram** of what happens during `git commit` with multi-provider fallback.
**Shows:**
- Pre-commit hook execution
- runner.py orchestration
- Multi-provider AI fallback (claude → codex → gemini)
- Patch generation and application
- Vote processing with marker extraction
- File staging
**Best for:** Understanding the complete commit-time automation flow with provider redundancy.
---
### 3. **cascading-rules.puml**
**Configuration system** showing how `.ai-rules.yml` files cascade and merge.
**Shows:**
- Rule file locations (root, features/, FR_*/)
- Merge precedence (nearest wins)
- File associations
- Rule definitions
**Best for:** Understanding how to configure automation rules.
---
### 4. **patcher-pipeline.puml** ⭐ UPDATED
**Detailed flowchart** of AI patch generation and application with provider fallback.
**Shows:**
- Prompt building with model hints
- Multi-provider fallback logic (claude → codex → gemini)
- Codex JSON parsing
- Patch extraction (with markers)
- Patch sanitization
- git apply strategies (strict → 3-way → fallback)
- Debug artifact saving
- Error handling
**Best for:** Debugging patch application issues or understanding the AI provider chain.
---
### 5. **voting-system.puml** ⭐ UPDATED
**Voting and promotion logic** for multi-stage feature discussions.
**Shows:**
- Vote parsing from discussion files
- Eligible voter calculation
- Stage-specific promotion thresholds (READY_FOR_DESIGN, READY_FOR_IMPLEMENTATION, etc.)
- Rejection logic per stage
- Summary file updates
**Best for:** Understanding how features move through multi-stage approval (feature → design → implementation → review).
---
### 6. **file-lifecycle.puml**
**Activity diagram** showing the complete lifecycle of a feature request.
**Shows:**
- From `request.md` creation to implementation
- AI-generated file creation
- Vote-driven status transitions
- Implementation gate triggering
- Which files to edit vs. auto-generated
**Best for:** Understanding the developer workflow and when automation triggers.
---
### 7. **directory-structure.puml**
**Component diagram** showing the project structure.
**Shows:**
- CascadingDev repository layout
- Install bundle structure
- User project structure after setup
- Debug artifact locations
- File relationships
**Best for:** Navigating the codebase and understanding where files live.
---
### 8. **ai-provider-fallback.puml** 🆕
**Detailed flowchart** of the multi-provider AI fallback chain with model hints.
**Shows:**
- Command chain selection (default, fast, quality)
- Model hint propagation (TASK COMPLEXITY)
- Provider-specific execution (Claude CLI, Codex JSON, Gemini)
- Fallback logic (claude → codex → gemini)
- Sentinel token handling
- Error handling per provider
**Best for:** Understanding how AI provider redundancy works and debugging provider failures.
---
### 9. **discussion-stages.puml** 🆕
**State machine diagram** showing the complete feature discussion lifecycle through all stages.
**Shows:**
- Feature stage (OPEN → READY_FOR_DESIGN)
- Design stage (OPEN → READY_FOR_IMPLEMENTATION)
- Implementation stage (IN_PROGRESS → READY_FOR_REVIEW)
- Review stage (UNDER_REVIEW → APPROVED)
- Status transitions and promotion conditions
- Auto-generated files per stage
**Best for:** Understanding the full feature approval workflow from request to merge.
---
### 10. **workflow-marker-extraction.puml** 🆕
**Detailed flowchart** of AI-powered marker extraction with simple fallback parsing.
**Shows:**
- Comment parsing from discussion files
- AI normalization (agents.py) for natural conversation
- Simple line-start fallback for explicit markers (DECISION:, QUESTION:, ACTION:)
- Structured data extraction from AI-generated JSON
- Summary section generation
- Marker block updates in .sum.md files
**Best for:** Understanding the two-tier extraction system - AI for natural conversation, simple parsing for strict format fallback.
---
## Key Concepts Illustrated
### Automation Pipeline
```
Developer commits → Pre-commit hook → runner.py → patcher.py → Multi-Provider AI
                                                               (claude → codex → gemini)
                                    → config.py → .ai-rules.yml
                                    → workflow.py → summary.py
                                                    (regex marker extraction)
```
### File Processing (Multi-Stage)
```
request.md (edited) → AI generates → feature.discussion.md (created/updated)
                                   → feature.discussion.sum.md (updated)
        ↓ (votes → READY_FOR_DESIGN)
                                   → design.discussion.md (created)
                                   → design.discussion.sum.md (updated)
        ↓ (votes → READY_FOR_IMPLEMENTATION)
                                   → implementation.discussion.md (created)
        ↓ (votes → READY_FOR_REVIEW)
                                   → review.discussion.md (created)
```
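Each promotion above gates creation of the next discussion file. Reduced to a lookup, the mapping is (illustrative only; in the real system this routing lives in the `.ai-rules.yml` rules, not a hardcoded table):

```python
# Illustrative mapping from promotion status to the file a gate writer creates.
NEXT_STAGE_OUTPUT = {
    "READY_FOR_DESIGN": "design.discussion.md",
    "READY_FOR_IMPLEMENTATION": "implementation.discussion.md",
    "READY_FOR_REVIEW": "review.discussion.md",
}

def next_output(status: str):
    """Return the discussion file a gate writer should ensure exists, if any."""
    return NEXT_STAGE_OUTPUT.get(status)
```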
### Voting Flow
```
Participant comments → VOTE: lines parsed → Count eligible votes
                                          → Check thresholds
                                          → Update status
                                          → Generate implementation (if ready)
```
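The first two steps can be sketched in a few lines; this mirrors the documented rules (latest vote per participant wins, case-insensitive `VOTE:` token) but is an illustration, not the `workflow.py` source:

```python
import re

# Participant bullet with a trailing vote token, e.g. "- Alice: Looks good. VOTE: READY"
VOTE_RE = re.compile(
    r"^- (?P<name>[^:]+):.*\bVOTE:\s*(?P<vote>READY|CHANGES|REJECT)\b",
    re.IGNORECASE,
)

def tally_votes(discussion: str) -> dict:
    """Latest vote per participant; later lines overwrite earlier ones."""
    votes = {}
    for line in discussion.splitlines():
        match = VOTE_RE.match(line)
        if match:
            votes[match.group("name").strip()] = match.group("vote").upper()
    return votes
```

The promotion thresholds are then checked against these tallies.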
### Error Handling
```
API overload → Log error → Continue with other files → Commit succeeds
Patch fails → Save debug artifacts → Log error → Continue
```
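That continue-on-failure behavior can be sketched as follows (a simplified stand-in; the real handling lives in `patcher.py` and `runner.py`, which also save the debug artifacts):

```python
def process_files(paths, generate):
    """Run generation per file; record failures and keep going so the commit succeeds."""
    failures = []
    for path in paths:
        try:
            generate(path)
        except Exception as exc:  # e.g. API overload or patch application failure
            failures.append((path, str(exc)))
    return failures
```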
---
## Common Workflows
### Adding a New Rule
1. See **cascading-rules.puml** for rule structure
2. Edit `Docs/features/.ai-rules.yml`
3. Add `file_associations` and `rules` sections
4. Commit and test
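The cascade those rules participate in (nearest wins, nested dicts merged recursively, arrays replaced) can be sketched as:

```python
def deep_merge(nearest: dict, parent: dict) -> dict:
    """Merge rule dicts: nested dicts recurse; any other value, including
    lists, is taken wholesale from `nearest` when both sides define the key."""
    merged = dict(parent)
    for key, value in nearest.items():
        if isinstance(value, dict) and isinstance(parent.get(key), dict):
            merged[key] = deep_merge(value, parent[key])
        else:
            merged[key] = value
    return merged
```

Replacing arrays instead of concatenating them keeps overrides predictable: a directory-level rule fully owns any list it redefines.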
### Debugging Automation
1. Check `.git/ai-rules-debug/*.raw.out` for AI responses
2. See **patcher-pipeline.puml** for patch processing steps
3. Review **commit-workflow.puml** for execution order
### Understanding Status Transitions
1. See **discussion-stages.puml** for complete multi-stage flow
2. See **voting-system.puml** for promotion logic per stage
3. Check **file-lifecycle.puml** for developer workflow
4. Review discussion file YAML headers for thresholds
### Understanding AI Provider System
1. See **ai-provider-fallback.puml** for provider chain logic
2. Check **patcher-pipeline.puml** for integration details
3. Review config/ai.yml for command configuration
4. Check .git/ai-rules-debug/ for provider outputs
### Understanding Marker Extraction
1. See **workflow-marker-extraction.puml** for AI normalization flow
2. Review automation/agents.py for AI-powered extraction
3. Review automation/workflow.py for simple fallback implementation
4. Test with natural conversation - AI extracts markers automatically
5. Fallback: Use explicit line-start markers (DECISION:, QUESTION:, ACTION:)
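The fallback in step 5 amounts to a plain prefix scan over newly added lines; a sketch (illustrative, not the `workflow.py` source):

```python
MARKERS = ("DECISION:", "QUESTION:", "ACTION:")

def extract_markers(added_lines):
    """Collect explicit line-start markers from newly added discussion lines."""
    found = {marker.rstrip(":"): [] for marker in MARKERS}
    for line in added_lines:
        stripped = line.strip()
        for marker in MARKERS:
            if stripped.startswith(marker):
                found[marker.rstrip(":")].append(stripped[len(marker):].strip())
    return found
```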
---
## Implementation Status
| Feature | Status | Diagram |
|---------|--------|---------|
| Cascading Rules | ✅ Complete | cascading-rules.puml |
| AI Patch Generation | ✅ Complete | patcher-pipeline.puml |
| Multi-Provider Fallback | ✅ Complete | ai-provider-fallback.puml |
| Model Hints (fast/quality) | ✅ Complete | ai-provider-fallback.puml |
| Vote Tracking | ✅ Complete | voting-system.puml |
| Multi-Stage Promotion | ✅ Complete | discussion-stages.puml |
| AI Marker Normalization | ✅ Complete | workflow-marker-extraction.puml |
| Structured Summaries | ✅ Complete | workflow-marker-extraction.puml |
| Implementation Gate | ✅ Complete | file-lifecycle.puml |
| Error Handling | ✅ Complete | commit-workflow.puml |
---
## Related Documentation
- **[DESIGN.md](DESIGN.md)** - Complete system design document
- **[AUTOMATION.md](AUTOMATION.md)** - Automation system details
- **[automation/README.md](../automation/README.md)** - Quick reference guide
- **[CLAUDE.md](../CLAUDE.md)** - AI assistant guide for this repo
---
**Created:** 2025-10-31
**Last Updated:** 2025-10-31
**Diagrams:** 10 total (PlantUML format)

@@ -1,123 +0,0 @@
@startuml directory-structure
!theme plain
title CascadingDev Project Directory Structure
folder "CascadingDev Repository" {
folder "automation/" as repo_auto #LightBlue {
file "runner.py" as runner #SkyBlue
file "config.py" as config #SkyBlue
file "patcher.py" as patcher #SkyBlue
file "workflow.py" as workflow #SkyBlue
file "agents.py" as agents #SkyBlue
file "summary.py" as summary #SkyBlue
file "README.md" as auto_readme
}
folder "assets/" #LightGreen {
folder "hooks/" {
file "pre-commit" #LightCoral
}
folder "templates/" {
file "feature_request.md"
file "feature.discussion.md"
file "feature.discussion.sum.md"
folder "rules/" {
file "root.ai-rules.yml"
file "features.ai-rules.yml"
}
}
}
folder "tests/" #LightYellow {
file "test_workflow.py"
file "test_config.py"
file "test_patcher.py"
file "test_runner.py"
}
folder "tools/" {
file "build_installer.py"
file "mock_ai.sh"
}
folder "docs/" #Lavender {
file "DESIGN.md"
file "AUTOMATION.md"
file "architecture-overview.puml" #Pink
file "commit-workflow.puml" #Pink
file "cascading-rules.puml" #Pink
file "patcher-pipeline.puml" #Pink
file "voting-system.puml" #Pink
file "file-lifecycle.puml" #Pink
}
file ".ai-rules.yml" #Orange
file "pyproject.toml"
}
folder "Install Bundle\n(Built by build_installer.py)" #LightGray {
folder "automation/" as install_auto #LightBlue
folder "assets/" as install_assets #LightGreen
folder "process/templates/" #Wheat
file "setup_cascadingdev.py" #Coral
}
folder "User Project\n(After setup)" #Wheat {
folder "Docs/features/" {
file ".ai-rules.yml" #Orange
folder "FR_2025-10-31_feature-name/" {
file "request.md" #LightGreen
folder "discussions/" {
file "feature.discussion.md" #SkyBlue
file "feature.discussion.sum.md" #LightBlue
file "implementation.discussion.md" #SkyBlue
file "implementation.discussion.sum.md" #LightBlue
}
}
}
folder "automation/" as user_auto #LightBlue {
note as auto_note
Copied from install bundle
Runs during git commits
end note
}
folder ".git/hooks/" {
file "pre-commit" #LightCoral
}
folder ".git/ai-rules-debug/" #Pink {
file "*.raw.out"
file "*.clean.diff"
file "*.sanitized.diff"
file "*.final.diff"
note as debug_note
Debug artifacts saved here
when automation runs
end note
}
}
note top of runner
**Entrypoint for AI automation**
Called by pre-commit hook
Processes staged files
according to .ai-rules.yml
end note
note top of patcher
**AI patch generation**
Calls Claude API
Applies patches with git
end note
note top of workflow
**Vote tracking & summaries**
Parses VOTE: lines
Updates .sum.md files
Always runs (no AI needed)
end note
@enduml

@@ -1,85 +0,0 @@
@startuml discussion-processing
!theme plain
title Discussion Automation Pipeline (current behaviour)
start
:Developer stages discussion file;
:Pre-commit hook invokes\nautomation/workflow.py:_run_status();
partition "Vote Handling" {
:Parse staged discussion snapshot\n(parse_votes);
:Print vote summary to console;
if (Promotion rule present?) then (yes)
:count_eligible_votes();
:check_promotion_threshold();
if (Threshold met?) then (yes)
:update_discussion_status();
:git add updated discussion;
endif
endif
}
partition "Incremental Extraction" {
:get_discussion_changes()\n(staged diff → added lines only);
:Call extract_structured_basic()\n(simple marker parsing);
partition "AI Normalizer" {
:process_discussion_with_ai();
if (Providers available?) then (yes)
repeat
:Call normalize_discussion()\n(claude → codex → gemini);
if (Diff/JSON valid?) then (yes)
stop
else (no / malformed)
:Raise PatchGenerationError;
:Fallback to next provider;
endif
repeat while (more providers)
endif
if (AI returned data?) then (yes)
:Merge AI JSON onto structured result;
else (no)
:Keep regex-only extraction;
endif
}
}
partition "Summary Update" {
:Load companion summary file (if any);
:load_summary_state()\n(read <!-- SUMMARY:STATE --> JSON);
:merge_* helpers update\n questions / actions / decisions / mentions;
:save_summary_state();
:format_votes_section();
:format_questions_section();
:format_action_items_section();
:format_decisions_section();
:format_awaiting_section();
:append_timeline_entry();
:Write summary file and git add;
}
partition "Design / Implementation Outputs" {
:RulesConfig routes staged file;
if (feature discussion promoted to READY_FOR_DESIGN?) then (yes)
:design_gate_writer ensures design discussion exists;
endif
if (design discussion promoted to READY_FOR_IMPLEMENTATION?) then (yes)
:implementation_gate_writer creates\nimplementation discussion;
endif
if (design discussion updated?) then (yes)
:design_discussion_update rule triggers;
:design_discussion_writer appends AI comment;
:design_doc_writer (new rule)\nupdates Docs/.../design/design.md\nvia provider chain;
endif
}
stop
legend bottom
**Key Notes**
- AI providers are attempted in order (claude → codex → gemini); malformed diffs raise PatchGenerationError so the next provider runs.
- Summary state persists in <!-- SUMMARY:STATE {...} --> and is rewritten every commit.
- Vote thresholds drive status changes which gate creation of downstream discussions/docs.
endlegend
@enduml

@@ -1,153 +0,0 @@
@startuml discussion-stages
!theme plain
title Feature Discussion Stage Progression with Status Transitions
state "Feature Stage" as feature {
[*] --> OPEN_F : feature.discussion.md created
OPEN_F : status: OPEN
OPEN_F : Participants discuss scope,\nplatform, requirements
OPEN_F --> READY_FOR_DESIGN : ≥2 READY votes\n(human only if\nallow_agent_votes: false)
READY_FOR_DESIGN : status: READY_FOR_DESIGN
READY_FOR_DESIGN : Scope approved,\nready for technical design
READY_FOR_DESIGN --> FEATURE_REJECTED : Majority REJECT votes
FEATURE_REJECTED : status: FEATURE_REJECTED
FEATURE_REJECTED : Feature blocked
note right of READY_FOR_DESIGN
**AI Auto-generates:**
design.discussion.md
(Initial design proposal)
end note
}
state "Design Stage" as design {
[*] --> OPEN_D : design.discussion.md created
OPEN_D : status: OPEN
OPEN_D : Discuss architecture,\ntech stack, data models
OPEN_D --> READY_FOR_IMPLEMENTATION : ≥2 READY votes
READY_FOR_IMPLEMENTATION : status: READY_FOR_IMPLEMENTATION
READY_FOR_IMPLEMENTATION : Design approved,\nready to implement
OPEN_D --> DESIGN_REJECTED : Majority REJECT votes
DESIGN_REJECTED : status: DESIGN_REJECTED
DESIGN_REJECTED : Design needs rework
note right of READY_FOR_IMPLEMENTATION
**AI Auto-generates:**
implementation.discussion.md
(Implementation tracking)
end note
}
state "Implementation Stage" as impl {
[*] --> OPEN_I : implementation.discussion.md created
OPEN_I : status: OPEN
OPEN_I : Track engineering tasks,\nprogress, and blockers
OPEN_I --> READY_FOR_TESTING : All checkboxes complete\nAND ≥1 human READY vote
READY_FOR_TESTING : status: READY_FOR_TESTING
READY_FOR_TESTING : Implementation complete
note right of READY_FOR_TESTING
**Automation syncs:**
implementation/tasks.md
(checkbox mirror + summary)
end note
}
state "Review Stage" as review {
[*] --> UNDER_REVIEW : review.discussion.md created
UNDER_REVIEW : status: UNDER_REVIEW
UNDER_REVIEW : Code review,\ntesting, QA
UNDER_REVIEW --> APPROVED : ≥2 READY votes
UNDER_REVIEW --> NEEDS_CHANGES : CHANGES votes
APPROVED : status: APPROVED
APPROVED : Ready to merge
NEEDS_CHANGES --> UNDER_REVIEW : Changes addressed
}
[*] --> feature
READY_FOR_DESIGN --> design
READY_FOR_IMPLEMENTATION --> impl
READY_FOR_TESTING --> review
APPROVED --> [*]
legend bottom
**Vote Counting Rules (Configured per stage):**
promotion_rule:
allow_agent_votes: false # Only count human votes
ready_min_eligible_votes: 2 # Need 2 READY to promote
reject_min_eligible_votes: 1 # Need 1 REJECT to block
**Vote Format in Discussion Files:**
Name: ParticipantName
Comment text.
VOTE: READY|CHANGES|REJECT
Name: AI_BotName
Comment text.
VOTE: CHANGES (excluded if allow_agent_votes: false)
**Status Transitions:**
- Automatic when vote thresholds met
- AI updates YAML header: status: field
- AI appends consensus comment
- Triggers next stage discussion creation
**Example Progression:**
1. Feature discussion: Define WHAT we're building
2. Design discussion: Define HOW we'll build it
3. Implementation discussion: Track building progress
4. Review discussion: Verify quality before merge
endlegend
note right of feature
**Participants Use Structured Markers:**
**DECISION**: Web platform, React frontend
**QUESTION**: Mobile support in MVP?
**ACTION**: @Alice research auth options
These are extracted to .sum.md files
for easy reference
end note
note right of design
**Design Decisions Tracked:**
- Architecture choices
- Technology stack
- Data models
- API contracts
- Risk trade-offs
end note
note right of impl
**Implementation Tracking:**
- Task breakdowns
- Blocking issues
- Progress updates
- Code references
end note
note right of review
**Review Checklist:**
- Code quality
- Test coverage
- Documentation
- Performance
- Security
end note
@enduml
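The vote-counting rules in the legend above can be sketched as a small Python check. This is a sketch under assumptions — the function and field names here are illustrative, not the actual `automation/workflow.py` API; only the rule fields (`allow_agent_votes`, `ready_min_eligible_votes`, `reject_min_eligible_votes`) and the `AI_` name prefix come from the diagrams.

```python
def eligible_votes(latest_votes, allow_agent_votes):
    """latest_votes: {participant: vote}; drops AI_* voters when configured."""
    return {
        name: vote
        for name, vote in latest_votes.items()
        if allow_agent_votes or not name.startswith("AI_")
    }

def threshold_met(count, total, threshold):
    """threshold is an int or the string 'all' (unanimous)."""
    if threshold == "all":
        return total > 0 and count == total
    return count >= int(threshold)

def promotion_status(latest_votes, rule):
    """Mirror the promotion branches: promote only if READY met and REJECT not met."""
    votes = eligible_votes(latest_votes, rule.get("allow_agent_votes", False))
    ready = sum(1 for v in votes.values() if v == "READY")
    reject = sum(1 for v in votes.values() if v == "REJECT")
    total = len(votes)
    ready_ok = threshold_met(ready, total, rule.get("ready_min_eligible_votes", 2))
    reject_ok = threshold_met(reject, total, rule.get("reject_min_eligible_votes", 1))
    if ready_ok and not reject_ok:
        return "PROMOTE"
    if reject_ok and not ready_ok:
        return "REJECTED"
    return "OPEN"
```

Note the symmetry with the voting-system flowchart: when both thresholds are met simultaneously, the status stays put rather than flapping.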


@@ -1,120 +0,0 @@
@startuml file-lifecycle
!theme plain
title Feature File Lifecycle and Automation Triggers
|Developer|
start
:Create feature request;
:Edit **request.md**;
|Git|
:git add request.md;
:git commit;
|Pre-commit Hook|
:runner.py processes\nrequest.md;
|AI (runner + patcher)|
if (feature.discussion.md exists?) then (no)
:Generate new\n**feature.discussion.md**:
- YAML header with promotion rules
- Summary from request
- Initial AI comment + vote;
else (yes)
:Append AI comment to\nexisting discussion;
endif
:Stage feature.discussion.md;
if (feature.discussion.sum.md exists?) then (no)
:Create from template;
else (yes)
:Update summary sections;
endif
:Stage feature.discussion.sum.md;
|Git|
:Commit completes with\nauto-generated files;
|Developer|
:Review generated discussion;
if (Make changes?) then (yes)
:Edit **feature.discussion.md**;
:Add your comment + VOTE;
|Git|
:git add feature.discussion.md;
:git commit;
|Pre-commit Hook|
:workflow.py parses votes;
:runner.py appends AI response;
|AI|
:Read all comments + votes;
:Calculate promotion status;
if (READY votes >= threshold?) then (yes)
:Update status to\n**READY_FOR_IMPLEMENTATION**;
if (implementation_gate enabled?) then (yes)
:Generate\n**implementation.discussion.md**;
:Stage implementation file;
endif
endif
:Append new AI comment\nwith vote analysis;
:Stage updated discussion;
|Git|
:Commit with updates;
endif
|Developer|
if (Status == READY_FOR_IMPLEMENTATION?) then (yes)
:Begin implementation;
:Edit **implementation.discussion.md**;
:Track tasks and progress;
|Pre-commit Hook|
:AI adds planning updates;
:Updates task checklists;
|Git|
:Commit implementation progress;
endif
stop
note right
**Files Auto-Generated:**
1. feature.discussion.md
2. feature.discussion.sum.md
3. implementation.discussion.md (gated)
**Never Edit These Manually:**
- .sum.md files (always auto-generated)
**Edit These Freely:**
- request.md (your feature spec)
- *.discussion.md (add your comments)
end note
note right
**Two Automation Phases:**
**Phase 1 - Vote Tracking:**
- Always runs (no AI needed)
- Parses VOTE: lines
- Updates .sum.md sections
**Phase 2 - AI Enhancement:**
- Requires Claude API/CLI
- Generates intelligent comments
- Tracks quorum and status
- Creates gated files
end note
@enduml
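The hook's first step in the lifecycle above — finding which discussion files are staged, then re-staging regenerated outputs — might look roughly like this. A sketch only: the helper names are assumptions, and the real orchestration lives in `automation/workflow.py` (invoked by the pre-commit hook).

```python
import subprocess

def staged_paths():
    """Paths currently staged for commit (requires a git checkout)."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()

def discussion_files(paths):
    """Keep only *.discussion.md inputs; *.sum.md summaries are outputs."""
    return [p for p in paths if p.endswith(".discussion.md")]

def restage(path):
    """Auto-stage a regenerated file so it rides along in the same commit."""
    subprocess.run(["git", "add", path], check=True)
```

Because `git add` runs inside the pre-commit hook, the auto-generated `.sum.md` updates land in the very commit that triggered them — which is why the flowchart ends with "Commit completes with auto-generated files".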


@@ -1,149 +0,0 @@
@startuml patcher-pipeline
!theme plain
title AI Patch Generation and Application Pipeline
start
:Receive source file + target file + instruction;
:Build prompt with:
- Source file diff (staged changes)
- Source file full content
- Target file current content
- Generation instructions from rules
- Model hint (fast/quality) if specified;
partition "Multi-Provider Fallback" {
:Try Provider 1: Claude CLI;
if (Claude returned output?) then (yes)
if (Output contains\n"API Error: Overloaded"?) then (yes)
:Raise PatchGenerationError\n"Claude API is overloaded";
stop
endif
if (Output contains diff markers?) then (yes)
:Success! Continue to extraction;
else (no - non-diff response)
:Log: "Claude non-diff output";
:Try Provider 2: Codex CLI;
if (Codex returned output?) then (yes)
:Parse JSON response\nextract agent_message;
if (Parsed text contains diff?) then (yes)
:Success! Continue to extraction;
else (no - exit code 1)
:Log: "Codex exited with 1";
:Try Provider 3: Gemini CLI;
if (Gemini returned output?) then (yes)
if (Gemini returned sentinel?) then (yes)
:Log: "No changes needed";
stop
else (has diff)
:Success! Continue to extraction;
endif
else (no)
:Raise "All providers failed";
stop
endif
endif
else (no)
:Raise "All providers failed";
stop
endif
endif
else (no - command failed)
:Raise "Provider 1 command failed";
stop
endif
}
:Save raw output to\n.git/ai-rules-debug/*.raw.out;
if (Output contains\n<<<AI_DIFF_START>>>?) then (yes)
:Extract content between\nSTART and END markers;
else (no)
if (Output contains\n"diff --git"?) then (yes)
:Extract from\n"diff --git" onward;
else (no)
:Raise "AI output did not contain a diff";
stop
endif
endif
:Save to *.clean.diff;
:Sanitize patch:
- Remove "index ..." lines
- Remove "similarity index" lines
- Keep only diff content;
:Save to *.sanitized.diff;
if (New file and missing\n"new file mode"?) then (yes)
:Add "new file mode 100644" header;
endif
:Save to *.final.diff;
if (Patch is empty?) then (yes)
:Raise "AI returned empty patch";
stop
endif
:Try git apply -p1 --index --check;
if (Check succeeded?) then (yes)
:git apply -p1 --index;
:Success!;
stop
endif
:Try git apply -p1 --index --3way\n--recount --whitespace=nowarn;
if (3-way succeeded?) then (yes)
:Applied with 3-way merge;
:Success!;
stop
endif
if (Is new file?) then (yes)
:Try git apply -p1 (without --index);
if (Succeeded?) then (yes)
:git add target file;
:Success!;
stop
endif
endif
:Raise "Failed to apply patch\n(strict and 3-way both failed)";
stop
note right
**AI Provider Configuration:**
config/ai.yml defines fallback chains:
- command_chain (default)
- command_chain_fast (model_hint: fast)
- command_chain_quality (model_hint: quality)
**Provider Details:**
1. Claude: claude -p (auto-selects subagent)
2. Codex: codex exec --model gpt-5 --json
3. Gemini: gemini --model gemini-2.5-flash
**Debug Artifacts Location:**
.git/ai-rules-debug/
Files saved:
- *.raw.out (full AI response)
- *.clean.diff (extracted patch)
- *.sanitized.diff (cleaned patch)
- *.final.diff (applied patch)
Filename format:
{output_path_with_underscores}-{pid}.{ext}
end note
@enduml
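The extraction and sanitize stages of the pipeline above can be sketched as follows. The sentinel strings and the "index …"/"similarity index" filtering come straight from the diagram; the function names are assumptions, not the actual `automation/patcher.py` API.

```python
START, END = "<<<AI_DIFF_START>>>", "<<<AI_DIFF_END>>>"

def extract_diff(raw):
    """Prefer the explicit sentinels; fall back to the first 'diff --git'."""
    if START in raw:
        body = raw.split(START, 1)[1]
        body = body.split(END, 1)[0]
        return body.strip("\n")
    idx = raw.find("diff --git")
    if idx != -1:
        return raw[idx:].rstrip("\n")
    raise ValueError("AI output did not contain a diff")

def sanitize(patch):
    """Drop blob-hash lines that make git apply brittle; keep diff content."""
    kept = [
        line for line in patch.splitlines()
        if not line.startswith("index ")
        and not line.startswith("similarity index")
    ]
    return "\n".join(kept) + "\n"
```

Stripping `index …` lines is what lets the later `git apply --3way --recount` step succeed even when the AI's base blob hashes don't match the working tree.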


@@ -1,124 +0,0 @@
@startuml voting-system
!theme plain
title Feature Discussion Voting and Promotion System
start
:Discussion file updated;
partition "Vote Parsing (workflow.py)" {
:Read staged discussion.md content;
:Parse all lines matching:
**- ParticipantName: ... VOTE: VALUE**;
:Track latest vote per participant\n(most recent wins);
:Count eligible voters based on\n**allow_agent_votes** rule;
note right
**Vote Format:**
Name: ParticipantName
Comment text.
VOTE: READY
Name: AI_BotName
Comment.
VOTE: CHANGES
**Valid Values:**
- READY (approve)
- CHANGES (needs work)
- REJECT (block)
end note
}
partition "Promotion Logic (AI-powered)" {
:AI reads promotion_rule from header:
- allow_agent_votes: true/false
- ready_min_eligible_votes: N or "all"
- reject_min_eligible_votes: N or "all";
if (allow_agent_votes == false?) then (yes)
:Exclude voters with\nnames starting with "AI_";
endif
:Count eligible READY votes;
:Count eligible REJECT votes;
:Count CHANGES votes (neutral);
if (READY threshold met AND\nREJECT threshold NOT met?) then (yes)
if (Current stage == feature?) then (yes)
:Update status to\n**READY_FOR_DESIGN**;
:AI generates\ndesign.discussion.md;
else if (Current stage == design?) then (yes)
:Update status to\n**READY_FOR_IMPLEMENTATION**;
:AI generates\nimplementation.discussion.md;
else if (Current stage == implementation?) then (yes)
:Verify all checkboxes checked\nAND ≥1 human READY vote;
if (Requirements met?) then (yes)
:Update status to\n**READY_FOR_TESTING**;
:AI generates\ntesting discussion artifacts;
else
:Keep status **OPEN** (await more progress);
endif
else (review stage)
:Update status to\n**APPROVED**;
endif
else if (REJECT threshold met AND\nREADY threshold NOT met?) then (no)
if (Current stage == feature?) then (yes)
:Update status to\n**FEATURE_REJECTED**;
else if (Current stage == design?) then (yes)
:Update status to\n**DESIGN_REJECTED**;
else (other stages)
:Update status to\n**NEEDS_CHANGES**;
endif
else (no)
:Keep status as **OPEN** or **UNDER_REVIEW**;
endif
}
partition "Summary Update (summary.py)" {
:Update VOTES section in .sum.md;
note right
Example block written into the summary file:
<!-- SUMMARY:VOTES START -->
## Votes (latest per participant)
READY: X • CHANGES: Y • REJECT: Z
- Alice: READY
- Bob: CHANGES
<!-- SUMMARY:VOTES END -->
end note
:Auto-stage updated .sum.md file;
}
:Include in commit;
stop
legend bottom
Example Promotion Rules:
Simple Majority (2 approvals):
ready_min_eligible_votes: 2
reject_min_eligible_votes: 1
allow_agent_votes: false
Unanimous (everyone must approve):
ready_min_eligible_votes: "all"
reject_min_eligible_votes: 1
allow_agent_votes: false
Include AI votes:
ready_min_eligible_votes: 3
allow_agent_votes: true
Implementation human gate:
ready_min_eligible_votes: 1
allow_agent_votes: true
# workflow enforces ≥1 human READY
# and completion of all tasks
endlegend
@enduml
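The Phase 1 parse step above — match `- ParticipantName: ... VOTE: VALUE`, latest vote wins — might look roughly like this in Python. A sketch, not the actual `workflow.py` implementation.

```python
import re

# Case-insensitive vote token; the three valid values per the diagram.
VOTE_RE = re.compile(
    r"^-\s*(?P<name>[^:]+):.*\bvote:\s*(?P<vote>READY|CHANGES|REJECT)\b",
    re.IGNORECASE,
)

def latest_votes(discussion_text):
    """Return {participant: vote}; later lines overwrite earlier ones."""
    votes = {}
    for line in discussion_text.splitlines():
        m = VOTE_RE.match(line.strip())
        if m:
            votes[m.group("name").strip()] = m.group("vote").upper()
    return votes
```

Because the dict is keyed by participant and written in file order, "most recent wins" falls out of plain overwriting — no timestamps needed.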


@@ -1,141 +0,0 @@
@startuml workflow-marker-extraction
!theme plain
title Workflow Marker Extraction with AI Normalization
start
:Discussion file staged\n(feature.discussion.md,\ndesign.discussion.md, etc);
:workflow.py reads file content;
partition "Two-Tier Extraction" {
:Call extract_structured_basic()\nSimple fallback parsing;
note right
**Fallback: Simple Line-Start Matching**
Only matches explicit markers at line start:
- DECISION: text
- QUESTION: text
- Q: text
- ACTION: text
- TODO: text
- ASSIGNED: text
- DONE: text
Uses case-insensitive startswith() matching.
Handles strictly-formatted discussions.
end note
:Store fallback results\n(decisions, questions, actions, mentions);
:Call agents.normalize_discussion()\nAI-powered extraction;
partition "AI Normalization (agents.py)" {
:Build prompt for AI model;
note right
**AI Prompt:**
"Extract structured information from discussion.
Return JSON with: votes, questions, decisions,
action_items, mentions"
Supports natural conversation like:
"I'm making a decision here - we'll use X"
"Does anyone know if we need Y?"
"@Sarah can you check Z?"
end note
:Execute command chain\n(claude → codex → gemini);
if (AI returned valid JSON?) then (yes)
:Parse JSON response;
:Extract structured data:\n- votes\n- questions\n- decisions\n- action_items\n- mentions;
:Override fallback results\nwith AI results;
note right
**AI advantages:**
- Handles embedded markers
- Understands context
- Extracts from natural language
- No strict formatting required
end note
else (no - AI failed or unavailable)
:Use fallback results only;
note right
**Fallback activated when:**
- All providers fail
- Invalid JSON response
- agents.py import fails
- API rate limits hit
end note
endif
}
}
partition "Generate Summary Sections" {
:Format Decisions section:\n- Group by participant\n- Number sequentially\n- Include rationale if present;
:Format Open Questions section:\n- List unanswered questions\n- Track by participant\n- Mark status (OPEN/PARTIAL);
:Format Action Items section:\n- Group by status (TODO/ASSIGNED/DONE)\n- Show assignees\n- Link to requesters;
:Format Awaiting Replies section:\n- Group by @mentioned person\n- Show context of request\n- Track unresolved mentions;
:Format Votes section:\n- Count by value (READY/CHANGES/REJECT)\n- List latest vote per participant\n- Exclude AI votes if configured;
:Format Timeline section:\n- Chronological order (newest first)\n- Include status changes\n- Summarize key events;
}
:Update marker blocks in .sum.md;
note right
<!-- SUMMARY:DECISIONS START -->
...
<!-- SUMMARY:DECISIONS END -->
end note
:Stage updated .sum.md file;
stop
legend bottom
**Example Input (natural conversation):**
Name: Rob
I've been thinking about the timeline. I'm making a decision here -
we'll build the upload system first. Does anyone know if we need real-time
preview? @Sarah can you research Unity Asset Store API?
VOTE: READY
**AI Normalization Output (JSON):**
{
"votes": [{"participant": "Rob", "vote": "READY"}],
"decisions": [{"participant": "Rob",
"decision": "build the upload system first"}],
"questions": [{"participant": "Rob",
"question": "if we need real-time preview"}],
"action_items": [{"participant": "Rob", "action": "research Unity API",
"assignee": "Sarah"}],
"mentions": [{"from": "Rob", "to": "Sarah"}]
}
**Fallback Only Matches:**
DECISION: We'll build upload first
QUESTION: Do we need real-time preview?
ACTION: @Sarah research Unity API
endlegend
note right
**Architecture Benefits:**
✓ Participants write naturally
✓ No strict formatting rules
✓ AI handles understanding
✓ Simple code for fallback
✓ Resilient (multi-provider chain)
✓ Cost-effective (fast models)
**Files:**
- automation/agents.py (AI normalization)
- automation/workflow.py (fallback + orchestration)
- automation/patcher.py (provider chain execution)
end note
@enduml
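The simple line-start fallback described in the diagram could be sketched as below. The marker set matches the note above; the exact signature of `extract_structured_basic()` is an assumption.

```python
# Prefix -> bucket; checked case-insensitively at the start of each line.
MARKERS = {
    "decisions": ("decision:",),
    "questions": ("question:", "q:"),
    "actions": ("action:", "todo:", "assigned:", "done:"),
}

def extract_structured_basic(text):
    """Fallback parser: only explicit markers at line start, no NLP."""
    result = {key: [] for key in MARKERS}
    for raw in text.splitlines():
        line = raw.strip()
        lowered = line.lower()
        for key, prefixes in MARKERS.items():
            for prefix in prefixes:
                if lowered.startswith(prefix):
                    result[key].append(line[len(prefix):].strip())
                    break
    return result
```

This is deliberately dumb: the natural-language example in the legend ("I'm making a decision here - we'll use X") yields nothing here, which is exactly why the AI normalization tier exists.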


@@ -1,151 +0,0 @@
[workflow-marker-extraction.svg — deleted PlantUML render of the .puml source above; the rendered banner reads "Syntax Error?". The SVG markup, its embedded verbatim copy of the source, and the PlantUML/JVM version footer are omitted here.]
