Add parallel orchestration documentation to web UI

New docs page at /docs/parallel-orchestration showing:
- Why parallel execution (3x speedup example)
- Basic ThreadPoolExecutor pattern
- Complete multi-perspective analysis example
- Progress feedback implementation
- Error handling best practices
- Reference to orchestrated-discussions project

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
rob 2026-01-01 03:21:33 -04:00
parent cdb17db43a
commit 940f91d445
1 changed files with 231 additions and 0 deletions


@@ -377,6 +377,236 @@ providers:
            ("provider-selection", "Choosing a Provider"),
        ],
    },
"parallel-orchestration": {
"title": "Parallel Orchestration",
"description": "Run multiple SmartTools concurrently for faster workflows",
"content": """
<p class="lead">SmartTools executes steps sequentially within a tool, but you can run
<strong>multiple tools in parallel</strong> using Python's ThreadPoolExecutor. This pattern
is ideal for multi-agent workflows, parallel analysis, or any task where you need responses
from multiple AI providers simultaneously.</p>
<h2 id="why-parallel">Why Parallel Execution?</h2>
<p>Consider a code review workflow that needs input from multiple perspectives:</p>
<ul>
<li><strong>Sequential</strong>: Security → Performance → Style = 45 seconds</li>
<li><strong>Parallel</strong>: All three at once = 15 seconds</li>
</ul>
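<p>The speedup can be sketched with stand-in calls — this is an illustration, not SmartTools itself; <code>time.sleep</code> stands in for a real tool invocation and the review names are hypothetical:</p>

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_review(name: str) -> str:
    """Stand-in for a SmartTool call that takes ~0.3s."""
    time.sleep(0.3)
    return name

names = ["security", "performance", "style"]

# Sequential: total time is the sum of the individual calls
start = time.perf_counter()
for n in names:
    fake_review(n)
sequential = time.perf_counter() - start

# Parallel: total time is roughly the slowest single call
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(names)) as executor:
    list(executor.map(fake_review, names))
parallel = time.perf_counter() - start

print(f"sequential: {sequential:.1f}s, parallel: {parallel:.1f}s")
```

Because the stand-in calls mostly wait (like real network-bound tool calls), threads overlap almost perfectly and the parallel run finishes in roughly one call's time.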
<h2 id="basic-pattern">Basic Pattern</h2>
<p>Use Python's <code>concurrent.futures</code> to run multiple SmartTools in parallel:</p>
<pre><code class="language-python">import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed
def run_tool(tool_name: str, input_text: str) -> dict:
\"\"\"Run a SmartTool and return its output.\"\"\"
result = subprocess.run(
[tool_name],
input=input_text,
capture_output=True,
text=True
)
return {
"tool": tool_name,
"output": result.stdout,
"success": result.returncode == 0
}
def run_parallel(tools: list[str], input_text: str) -> list[dict]:
\"\"\"Run multiple tools in parallel on the same input.\"\"\"
results = []
with ThreadPoolExecutor(max_workers=len(tools)) as executor:
# Submit all tools
futures = {
executor.submit(run_tool, tool, input_text): tool
for tool in tools
}
# Collect results as they complete
for future in as_completed(futures):
results.append(future.result())
return results
# Example usage
tools = ["security-review", "performance-review", "style-review"]
code = open("main.py").read()
reviews = run_parallel(tools, code)
for review in reviews:
print(f"=== {review['tool']} ===")
print(review['output'])
</code></pre>
<h2 id="real-world-example">Real-World Example: Multi-Perspective Analysis</h2>
<p>Here's a complete script that gets multiple AI perspectives on a topic:</p>
<pre><code class="language-python">#!/usr/bin/env python3
\"\"\"Get multiple AI perspectives on a topic in parallel.\"\"\"
import subprocess
import json
from concurrent.futures import ThreadPoolExecutor, as_completed
# Define your perspective tools (each is a SmartTool)
PERSPECTIVES = [
"perspective-optimist", # Focuses on opportunities
"perspective-critic", # Identifies problems
"perspective-pragmatist", # Focuses on actionability
]
def get_perspective(tool: str, topic: str) -> dict:
\"\"\"Get one perspective on a topic.\"\"\"
result = subprocess.run(
[tool],
input=topic,
capture_output=True,
text=True,
timeout=60 # Timeout after 60 seconds
)
return {
"perspective": tool.replace("perspective-", ""),
"response": result.stdout.strip(),
"success": result.returncode == 0
}
def analyze_topic(topic: str) -> list[dict]:
\"\"\"Get all perspectives in parallel.\"\"\"
with ThreadPoolExecutor(max_workers=len(PERSPECTIVES)) as executor:
futures = {
executor.submit(get_perspective, tool, topic): tool
for tool in PERSPECTIVES
}
results = []
for future in as_completed(futures):
try:
results.append(future.result())
except Exception as e:
tool = futures[future]
results.append({
"perspective": tool,
"response": f"Error: {e}",
"success": False
})
return results
if __name__ == "__main__":
import sys
topic = sys.stdin.read() if not sys.stdin.isatty() else input("Topic: ")
print("Gathering perspectives...\\n")
perspectives = analyze_topic(topic)
for p in perspectives:
status = "" if p["success"] else ""
print(f"[{status}] {p['perspective'].upper()}")
print("-" * 40)
print(p["response"])
print()
</code></pre>
<h2 id="with-progress">Adding Progress Feedback</h2>
<p>For long-running parallel tasks, show progress as tools complete:</p>
<pre><code class="language-python">import sys
from concurrent.futures import ThreadPoolExecutor, as_completed
def run_with_progress(tools: list[str], input_text: str):
\"\"\"Run tools in parallel with progress updates.\"\"\"
total = len(tools)
completed = 0
with ThreadPoolExecutor(max_workers=total) as executor:
futures = {
executor.submit(run_tool, tool, input_text): tool
for tool in tools
}
results = []
for future in as_completed(futures):
completed += 1
tool = futures[future]
result = future.result()
results.append(result)
# Progress update
status = "" if result["success"] else ""
print(f"[{completed}/{total}] {status} {tool}", file=sys.stderr)
return results
</code></pre>
<h2 id="error-handling">Error Handling</h2>
<p>Handle failures gracefully so one tool doesn't break the entire workflow:</p>
<pre><code class="language-python">def run_tool_safe(tool_name: str, input_text: str, timeout: int = 120) -> dict:
\"\"\"Run a tool with timeout and error handling.\"\"\"
try:
result = subprocess.run(
[tool_name],
input=input_text,
capture_output=True,
text=True,
timeout=timeout
)
return {
"tool": tool_name,
"output": result.stdout,
"error": result.stderr if result.returncode != 0 else None,
"success": result.returncode == 0
}
except subprocess.TimeoutExpired:
return {
"tool": tool_name,
"output": "",
"error": f"Timeout after {timeout}s",
"success": False
}
except FileNotFoundError:
return {
"tool": tool_name,
"output": "",
"error": f"Tool '{tool_name}' not found",
"success": False
}
</code></pre>
<h2 id="best-practices">Best Practices</h2>
<ul>
<li><strong>Set timeouts</strong> - Prevent hanging on slow providers</li>
<li><strong>Handle errors per-tool</strong> - Don't let one failure break everything</li>
<li><strong>Limit concurrency</strong> - Match <code>max_workers</code> to your use case</li>
<li><strong>Use stderr for progress</strong> - Keep stdout clean for piping</li>
<li><strong>Consider rate limits</strong> - Some providers limit concurrent requests</li>
</ul>
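<p>Limiting concurrency is just a matter of capping <code>max_workers</code> below the number of tools — the executor queues the rest. A minimal sketch (the tool names and the <code>run_tool</code> stub here are hypothetical stand-ins for the subprocess helper shown earlier):</p>

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical list of many tools backed by a rate-limited provider
tools = [f"review-{i}" for i in range(10)]

def run_tool(tool: str) -> str:
    # Stand-in for the real subprocess-based helper
    return f"{tool}: ok"

MAX_CONCURRENT = 3  # match the provider's limit, not len(tools)

results = []
with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as executor:
    futures = {executor.submit(run_tool, t): t for t in tools}
    for future in as_completed(futures):
        results.append(future.result())

print(f"{len(results)} results with at most {MAX_CONCURRENT} in flight")
```

All ten tools still run, but never more than three at a time, which keeps a rate-limited provider happy without changing the rest of the pattern.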
<h2 id="example-project">Full Example: orchestrated-discussions</h2>
<p>For a complete implementation of parallel SmartTools orchestration, see the
<a href="https://gitea.brrd.tech/rob/orchestrated-discussions" target="_blank">orchestrated-discussions</a>
project. It implements:</p>
<ul>
<li>Multiple AI "participants" as SmartTools</li>
<li>Parallel execution with live progress logging</li>
<li>Shared log files for real-time monitoring</li>
<li>Discussion workflows with voting and consensus</li>
</ul>
""",
"headings": [
("why-parallel", "Why Parallel Execution?"),
("basic-pattern", "Basic Pattern"),
("real-world-example", "Real-World Example"),
("with-progress", "Adding Progress Feedback"),
("error-handling", "Error Handling"),
("best-practices", "Best Practices"),
("example-project", "Full Example Project"),
],
},
}
@@ -397,4 +627,5 @@ def get_toc():
        ]),
        SimpleNamespace(slug="publishing", title="Publishing", children=[]),
        SimpleNamespace(slug="providers", title="Providers", children=[]),
        SimpleNamespace(slug="parallel-orchestration", title="Parallel Orchestration", children=[]),
    ]