"""Documentation content for SmartTools web UI. This module contains the actual documentation text that gets rendered on the /docs pages. Content is stored as markdown-ish HTML for simplicity. """ DOCS = { "getting-started": { "title": "Getting Started", "description": "Learn how to install SmartTools and create your first AI-powered CLI tool", "content": """

SmartTools lets you build custom AI-powered CLI commands using simple YAML configuration. Create tools that work with any AI provider and compose them like Unix pipes.
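
For example, once the summarize tool from the Quick Start below is installed, it chains with ordinary shell commands:

# Summarize an article, then count the words in the summary
cat article.txt | summarize | wc -w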

What is SmartTools?

SmartTools is a lightweight personal tool builder: you define a command in a single YAML file, point it at any AI provider that has a CLI, and compose the results like Unix pipes.

Quick Start

Get up and running in under a minute:

# Install SmartTools
pip install smarttools

# Create your first tool interactively
smarttools create

# Or install a tool from the registry
smarttools registry install official/summarize

# Use it!
cat article.txt | summarize

How It Works

Each tool is a YAML file that defines:

  1. Arguments - Custom flags your tool accepts
  2. Steps - Prompts to send to AI or Python code to run
  3. Output - How to format the final result

Here's a simple example:

name: summarize
version: "1.0.0"
description: Summarize text using AI

arguments:
  - flag: --max-length
    variable: max_length
    default: "200"
    description: Maximum summary length in words

steps:
  - type: prompt
    provider: claude
    prompt: |
      Summarize the following text in {max_length} words or less:

      {input}
    output_var: summary

output: "{summary}"

Next Steps

""", "headings": [ ("what-is-smarttools", "What is SmartTools?"), ("quick-start", "Quick Start"), ("how-it-works", "How It Works"), ("next-steps", "Next Steps"), ], }, "installation": { "title": "Installation", "description": "How to install SmartTools on your system", "parent": "getting-started", "content": """

SmartTools requires Python 3.8+ and works on Linux, macOS, and Windows.

Install with pip

The simplest way to install SmartTools:

pip install smarttools

Or with pipx for isolated installation:

pipx install smarttools

Verify Installation

smarttools --version
smarttools --help

Configure a Provider

SmartTools needs at least one AI provider configured. The easiest is Claude CLI:

# Install Claude CLI (if you have an Anthropic API key)
pip install claude-cli

# Or use OpenAI
pip install openai

# Configure your provider
smarttools config
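
Provider definitions live in ~/.smarttools/providers.yaml (covered in detail on the AI Providers page); a minimal file looks like this:

providers:
  - name: claude
    command: "claude -p"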

Wrapper Scripts Location

SmartTools installs wrapper scripts to ~/.local/bin/. Make sure this is in your PATH:

# Add to ~/.bashrc or ~/.zshrc
export PATH="$HOME/.local/bin:$PATH"
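
Once the directory is on your PATH, installed tool wrappers resolve like any other command (summarize here is the tool from the Quick Start):

# Should print something like ~/.local/bin/summarize
which summarize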

Development Installation

To contribute or modify SmartTools:

git clone https://gitea.brrd.tech/rob/SmartTools.git
cd SmartTools
pip install -e ".[dev]"
""", "headings": [ ("pip-install", "Install with pip"), ("verify", "Verify Installation"), ("configure-provider", "Configure a Provider"), ("wrapper-scripts", "Wrapper Scripts Location"), ("development-install", "Development Installation"), ], }, "first-tool": { "title": "Your First Tool", "description": "Create your first SmartTools command step by step", "parent": "getting-started", "content": """

Let's create a simple tool that explains code. You'll learn the basics of tool configuration.

Create the Tool

Run the interactive creator:

smarttools create

Or create the file manually at ~/.smarttools/explain/config.yaml:

name: explain
version: "1.0.0"
description: Explain code or concepts in simple terms
category: code

arguments:
  - flag: --level
    variable: level
    default: "beginner"
    description: "Explanation level: beginner, intermediate, or expert"

steps:
  - type: prompt
    provider: claude
    prompt: |
      Explain the following in simple terms suitable for a {level}:

      {input}

      Be concise but thorough. Use examples where helpful.
    output_var: explanation

output: "{explanation}"

Test Your Tool

# Explain some code
echo "def fib(n): return n if n < 2 else fib(n-1) + fib(n-2)" | explain

# Explain for an expert
cat complex_algorithm.py | explain --level expert

Understanding the Config

Arguments

Each argument becomes a CLI flag. The variable name is used in templates:

arguments:
  - flag: --level        # CLI flag: --level beginner
    variable: level      # Use as {level} in prompts
    default: "beginner"  # Default if not specified

Steps

Steps run in order. Each step can be a prompt or Python code:

steps:
  - type: prompt
    provider: claude     # Which AI to use
    prompt: "..."        # The prompt template
    output_var: result   # Store response in {result}

Output

The output template formats the final result:

output: "{explanation}"  # Print the explanation variable

Next Steps

""", "headings": [ ("create-tool", "Create the Tool"), ("test-it", "Test Your Tool"), ("understanding-config", "Understanding the Config"), ("next", "Next Steps"), ], }, "publishing": { "title": "Publishing Tools", "description": "Share your tools with the SmartTools community", "content": """

Share your tools with the community by publishing to the SmartTools Registry.

Before Publishing

Make sure your tool has a clear name, a quoted semver version string, a helpful description, and that it passes a dry-run publish (see below).

Create an Account

Register at the registry to get your publisher namespace.

Get an API Token

  1. Go to your Dashboard → Tokens
  2. Click "Create New Token"
  3. Copy the token (shown only once!)

Publish Your Tool

# Navigate to your tool directory
cd ~/.smarttools/my-tool/

# First time: enter your token when prompted
smarttools registry publish

# Dry run to validate without publishing
smarttools registry publish --dry-run

Versioning

Published versions are immutable. To update a tool:

  1. Make your changes
  2. Bump the version in config.yaml
  3. Run smarttools registry publish
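
For example (version numbers illustrative):

# In config.yaml, bump the version
version: "1.1.0"    # previously "1.0.0"

# Then publish the new release
smarttools registry publish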

Best Practices

""", "headings": [ ("before-publishing", "Before Publishing"), ("create-account", "Create an Account"), ("get-token", "Get an API Token"), ("publish", "Publish Your Tool"), ("versioning", "Versioning"), ("best-practices", "Best Practices"), ], }, "providers": { "title": "AI Providers", "description": "Configure different AI providers for your tools", "content": """

SmartTools works with any AI provider that has a CLI interface. Configure providers in ~/.smarttools/providers.yaml.

Provider Configuration

Create or edit ~/.smarttools/providers.yaml:

providers:
  - name: claude
    command: "claude -p"

  - name: openai
    command: "openai-cli"

  - name: ollama
    command: "ollama run llama2"

  - name: mock
    command: "echo '[MOCK RESPONSE]'"

Using Providers in Tools

Specify the provider in your step:

steps:
  - type: prompt
    provider: claude  # Uses the "claude" provider from config
    prompt: "..."
    output_var: response

Popular Providers

Claude (Anthropic)

# Install
pip install claude-cli

# Configure with your API key
export ANTHROPIC_API_KEY="sk-ant-..."
# providers.yaml
providers:
  - name: claude
    command: "claude -p"

OpenAI

# Install
pip install openai-cli

# Configure
export OPENAI_API_KEY="sk-..."

Ollama (Local)

# Install Ollama from ollama.ai
# Pull a model
ollama pull llama2
# providers.yaml
providers:
  - name: ollama
    command: "ollama run llama2"

Testing with Mock Provider

Use the mock provider to test tools without API calls:

providers:
  - name: mock
    command: "echo 'This is a mock response for testing'"

Choosing a Provider

Provider   Best For                          Cost
Claude     Complex reasoning, long context   Pay per token
OpenAI     General purpose, fast             Pay per token
Ollama     Privacy, offline use              Free (local)
""", "headings": [ ("provider-config", "Provider Configuration"), ("using-providers", "Using Providers in Tools"), ("popular-providers", "Popular Providers"), ("testing", "Testing with Mock Provider"), ("provider-selection", "Choosing a Provider"), ], }, "parallel-orchestration": { "title": "Parallel Orchestration", "description": "Run multiple SmartTools concurrently for faster workflows", "content": """

SmartTools executes steps sequentially within a tool, but you can run multiple tools in parallel using Python's ThreadPoolExecutor. This pattern is ideal for multi-agent workflows, parallel analysis, or any task where you need responses from multiple AI providers simultaneously.

Why Parallel Execution?

Consider a code review workflow that needs input from multiple perspectives, such as security, performance, and style. Run sequentially, three AI calls take three times as long; run in parallel, the total wall-clock time is roughly that of the slowest single call.

Basic Pattern

Use Python's concurrent.futures to run multiple SmartTools in parallel:

import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_tool(tool_name: str, input_text: str) -> dict:
    \"\"\"Run a SmartTool and return its output.\"\"\"
    result = subprocess.run(
        [tool_name],
        input=input_text,
        capture_output=True,
        text=True
    )
    return {
        "tool": tool_name,
        "output": result.stdout,
        "success": result.returncode == 0
    }

def run_parallel(tools: list[str], input_text: str) -> list[dict]:
    \"\"\"Run multiple tools in parallel on the same input.\"\"\"
    results = []

    with ThreadPoolExecutor(max_workers=len(tools)) as executor:
        # Submit all tools
        futures = {
            executor.submit(run_tool, tool, input_text): tool
            for tool in tools
        }

        # Collect results as they complete
        for future in as_completed(futures):
            results.append(future.result())

    return results

# Example usage
tools = ["security-review", "performance-review", "style-review"]
code = open("main.py").read()

reviews = run_parallel(tools, code)
for review in reviews:
    print(f"=== {review['tool']} ===")
    print(review['output'])

Real-World Example: Multi-Perspective Analysis

Here's a complete script that gets multiple AI perspectives on a topic:

#!/usr/bin/env python3
\"\"\"Get multiple AI perspectives on a topic in parallel.\"\"\"

import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

# Define your perspective tools (each is a SmartTool)
PERSPECTIVES = [
    "perspective-optimist",    # Focuses on opportunities
    "perspective-critic",      # Identifies problems
    "perspective-pragmatist",  # Focuses on actionability
]

def get_perspective(tool: str, topic: str) -> dict:
    \"\"\"Get one perspective on a topic.\"\"\"
    result = subprocess.run(
        [tool],
        input=topic,
        capture_output=True,
        text=True,
        timeout=60  # Timeout after 60 seconds
    )

    return {
        "perspective": tool.replace("perspective-", ""),
        "response": result.stdout.strip(),
        "success": result.returncode == 0
    }

def analyze_topic(topic: str) -> list[dict]:
    \"\"\"Get all perspectives in parallel.\"\"\"
    with ThreadPoolExecutor(max_workers=len(PERSPECTIVES)) as executor:
        futures = {
            executor.submit(get_perspective, tool, topic): tool
            for tool in PERSPECTIVES
        }

        results = []
        for future in as_completed(futures):
            try:
                results.append(future.result())
            except Exception as e:
                tool = futures[future]
                results.append({
                    "perspective": tool,
                    "response": f"Error: {e}",
                    "success": False
                })

        return results

if __name__ == "__main__":
    import sys
    topic = sys.stdin.read() if not sys.stdin.isatty() else input("Topic: ")

    print("Gathering perspectives...\\n")
    perspectives = analyze_topic(topic)

    for p in perspectives:
        status = "✓" if p["success"] else "✗"
        print(f"[{status}] {p['perspective'].upper()}")
        print("-" * 40)
        print(p["response"])
        print()

Adding Progress Feedback

For long-running parallel tasks, show progress as tools complete:

import sys
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_with_progress(tools: list[str], input_text: str):
    \"\"\"Run tools in parallel with progress updates.\"\"\"
    total = len(tools)
    completed = 0

    with ThreadPoolExecutor(max_workers=total) as executor:
        futures = {
            executor.submit(run_tool, tool, input_text): tool
            for tool in tools
        }

        results = []
        for future in as_completed(futures):
            completed += 1
            tool = futures[future]
            result = future.result()
            results.append(result)

            # Progress update
            status = "✓" if result["success"] else "✗"
            print(f"[{completed}/{total}] {status} {tool}", file=sys.stderr)

        return results

Error Handling

Handle failures gracefully so one tool doesn't break the entire workflow:

def run_tool_safe(tool_name: str, input_text: str, timeout: int = 120) -> dict:
    \"\"\"Run a tool with timeout and error handling.\"\"\"
    try:
        result = subprocess.run(
            [tool_name],
            input=input_text,
            capture_output=True,
            text=True,
            timeout=timeout
        )
        return {
            "tool": tool_name,
            "output": result.stdout,
            "error": result.stderr if result.returncode != 0 else None,
            "success": result.returncode == 0
        }
    except subprocess.TimeoutExpired:
        return {
            "tool": tool_name,
            "output": "",
            "error": f"Timeout after {timeout}s",
            "success": False
        }
    except FileNotFoundError:
        return {
            "tool": tool_name,
            "output": "",
            "error": f"Tool '{tool_name}' not found",
            "success": False
        }

Best Practices

Full Example: orchestrated-discussions

For a complete implementation of parallel SmartTools orchestration, see the orchestrated-discussions project, which puts the patterns above (parallel execution, progress feedback, and per-tool error handling) into practice.

""", "headings": [ ("why-parallel", "Why Parallel Execution?"), ("basic-pattern", "Basic Pattern"), ("real-world-example", "Real-World Example"), ("with-progress", "Adding Progress Feedback"), ("error-handling", "Error Handling"), ("best-practices", "Best Practices"), ("example-project", "Full Example Project"), ], }, "yaml-config": { "title": "Understanding YAML Config", "description": "Learn the structure of SmartTools configuration files", "content": """

Every SmartTool is defined by a YAML configuration file. This guide covers the complete structure and all available options.

File Location

Tool configs are stored in ~/.smarttools/<tool-name>/config.yaml.
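
With a couple of tools installed, the directory looks roughly like this (tool names are just examples):

~/.smarttools/
    summarize/
        config.yaml
    explain/
        config.yaml
    providers.yaml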

Complete Structure

# Required fields
name: my-tool              # Tool name (lowercase, hyphens)
version: "1.0.0"           # Semver version string

# Recommended fields
description: "What this tool does"
category: text-processing  # For registry organization
tags:                      # Searchable tags
  - text
  - formatting

# Optional metadata
author: your-name
license: MIT
homepage: https://github.com/you/my-tool

# Arguments (custom CLI flags)
arguments:
  - flag: --format
    variable: format
    default: "markdown"
    description: Output format

# Processing steps
steps:
  - type: prompt
    provider: claude
    prompt: |
      Process this: {input}
    output_var: result

# Final output template
output: "{result}"

Required Fields

name

The tool's identifier. Must be lowercase with hyphens only:

name: my-cool-tool    # Good
name: MyCoolTool      # Bad - no uppercase
name: my_cool_tool    # Bad - no underscores

version

Semantic version string. Always quote it to prevent YAML parsing issues:

version: "1.0.0"      # Good
version: 1.0          # Bad - YAML parses as float

Variable Substitution

Use {variable} syntax in prompts and output:
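
For example, {input} (the tool's stdin) and an argument variable used together; argument names here are illustrative:

arguments:
  - flag: --style
    variable: style
    default: "concise"

steps:
  - type: prompt
    provider: claude
    prompt: "Rewrite this in a {style} style: {input}"
    output_var: rewritten

output: "{rewritten}"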

To include literal braces, double them:

prompt: |
  Format as JSON: {{\"key\": \"value\"}}
  Input: {input}

Categories

Standard categories for the registry:

Validation

Test your config without running:

# Validate syntax
smarttools test my-tool --dry-run

# Check for common issues
smarttools registry publish --dry-run
""", "headings": [ ("file-location", "File Location"), ("complete-structure", "Complete Structure"), ("required-fields", "Required Fields"), ("variable-substitution", "Variable Substitution"), ("categories", "Categories"), ("validation", "Validation"), ], }, "arguments": { "title": "Custom Arguments", "description": "Add flags and options to make your tools flexible", "content": """

Arguments let users customize tool behavior with CLI flags like --format json or --verbose.

Basic Syntax

arguments:
  - flag: --format        # The CLI flag
    variable: format      # Variable name in templates
    default: "text"       # Default value if not provided
    description: "Output format (text, json, markdown)"

Using Arguments

Reference arguments in prompts using {variable_name}:

arguments:
  - flag: --tone
    variable: tone
    default: "professional"

steps:
  - type: prompt
    provider: claude
    prompt: |
      Rewrite this text with a {tone} tone:

      {input}
    output_var: result

Users can then run:

echo "Hey, fix this bug ASAP!" | tone-shift --tone friendly

Multiple Arguments

arguments:
  - flag: --lang
    variable: language
    default: "English"
    description: "Target language"

  - flag: --formality
    variable: formality
    default: "neutral"
    description: "Formality level (casual, neutral, formal)"

  - flag: --max-length
    variable: max_length
    default: "500"
    description: "Maximum output length in words"

Argument Patterns

Choice Arguments

Document valid choices in the description:

- flag: --style
  variable: style
  default: "concise"
  description: "Writing style: concise, detailed, or academic"

Numeric Arguments

Always quote defaults to avoid YAML issues:

- flag: --max-tokens
  variable: max_tokens
  default: "1000"  # Quoted string, not integer

Boolean-like Arguments

Use string values for conditional prompts:

- flag: --verbose
  variable: verbose
  default: "no"
  description: "Include detailed explanations (yes/no)"

Using in Prompts

Combine multiple arguments in your prompt template:

steps:
  - type: prompt
    provider: claude
    prompt: |
      Translate the following text to {language}.
      Use a {formality} register.
      Keep the response under {max_length} words.

      Text to translate:
      {input}
    output_var: translation

Best Practices

""", "headings": [ ("basic-syntax", "Basic Syntax"), ("using-arguments", "Using Arguments"), ("multiple-arguments", "Multiple Arguments"), ("argument-types", "Argument Patterns"), ("in-prompts", "Using in Prompts"), ("best-practices", "Best Practices"), ], }, "multi-step": { "title": "Multi-Step Workflows", "description": "Chain prompts and code steps together", "content": """

Complex tools can chain multiple steps together. Each step's output becomes available to subsequent steps.

How Steps Flow

steps:
  # Step 1: Extract key points
  - type: prompt
    provider: claude
    prompt: "Extract 5 key points from: {input}"
    output_var: key_points

  # Step 2: Use step 1's output
  - type: prompt
    provider: claude
    prompt: |
      Create a summary from these points:
      {key_points}
    output_var: summary

output: "{summary}"

Variables flow through the pipeline: {input} feeds step 1, step 1's {key_points} feeds step 2, and step 2's {summary} feeds the output template.

Mixing Prompt and Code Steps

Combine AI calls with Python processing:

steps:
  # Step 1: AI extracts data
  - type: prompt
    provider: claude
    prompt: |
      Extract all email addresses from this text as a comma-separated list:
      {input}
    output_var: emails_raw

  # Step 2: Python cleans the data
  - type: code
    code: |
      emails = [e.strip() for e in emails_raw.split(',')]
      emails = [e for e in emails if '@' in e]
      email_count = len(emails)
      cleaned_emails = '\\n'.join(sorted(set(emails)))
    output_var: cleaned_emails, email_count

  # Step 3: AI formats output
  - type: prompt
    provider: claude
    prompt: |
      Format these {email_count} emails as a nice list:
      {cleaned_emails}
    output_var: formatted

output: "{formatted}"

Step Dependencies

If any step fails, execution stops. Design steps to handle edge cases:

steps:
  - type: code
    code: |
      # Handle empty input gracefully
      if not input.strip():
          result = "No input provided"
          skip_ai = "yes"
      else:
          result = input
          skip_ai = "no"
    output_var: result, skip_ai

  - type: prompt
    provider: claude
    prompt: |
      {result}
      # AI prompt only runs if skip_ai is "no"
    output_var: ai_response

Common Patterns

Extract → Transform → Format

steps:
  - type: prompt    # Extract structured data
  - type: code      # Transform/filter
  - type: prompt    # Format for output

Analyze → Synthesize

steps:
  - type: prompt    # Break down into parts
  - type: prompt    # Combine insights

Validate → Process

steps:
  - type: code      # Validate input format
  - type: prompt    # Process if valid

Debugging Multi-Step Tools

# Show prompts without running
cat test.txt | my-tool --dry-run

# See verbose output
cat test.txt | my-tool --verbose
""", "headings": [ ("step-flow", "How Steps Flow"), ("mixed-steps", "Mixing Prompt and Code Steps"), ("error-handling", "Step Dependencies"), ("common-patterns", "Common Patterns"), ("debugging", "Debugging Multi-Step Tools"), ], }, "code-steps": { "title": "Code Steps", "description": "Add Python code processing between AI calls", "content": """

Code steps let you run Python code to process data, validate input, or transform AI outputs between prompts.

Basic Syntax

steps:
  - type: code
    code: |
      # Python code here
      result = input.upper()
    output_var: result

Available Variables

Code steps have access to the tool's {input}, every argument variable, and the output variables from earlier steps:

arguments:
  - flag: --max
    variable: max_items
    default: "10"

steps:
  - type: prompt
    prompt: "List items from: {input}"
    output_var: items_raw

  - type: code
    code: |
      # Access argument and previous step output
      items = items_raw.strip().split('\\n')
      limited = items[:int(max_items)]
      result = '\\n'.join(limited)
    output_var: result

Multiple Output Variables

Return multiple values with comma-separated output_var:

- type: code
  code: |
    lines = input.strip().split('\\n')
    line_count = len(lines)
    word_count = len(input.split())
    char_count = len(input)
  output_var: line_count, word_count, char_count
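
Later steps can then reference each variable, for example in a follow-up prompt:

- type: prompt
  provider: claude
  prompt: |
    The text has {line_count} lines, {word_count} words, and {char_count} characters.
    Comment on whether it is unusually long or short.
  output_var: commentary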

Common Operations

Text Processing

- type: code
  code: |
    # Remove empty lines
    lines = [l for l in input.split('\\n') if l.strip()]
    cleaned = '\\n'.join(lines)
  output_var: cleaned

JSON Parsing

- type: code
  code: |
    import json
    data = json.loads(ai_response)
    formatted = json.dumps(data, indent=2)
  output_var: formatted

Data Validation

- type: code
  code: |
    import re
    emails = re.findall(r'[\\w.-]+@[\\w.-]+', input)
    valid_emails = '\\n'.join(emails) if emails else "No emails found"
  output_var: valid_emails

File Operations

- type: code
  code: |
    from pathlib import Path
    # Write to temp file
    output_path = Path('/tmp/output.txt')
    output_path.write_text(processed_text)
    result = f"Saved to {output_path}"
  output_var: result

Using Imports

Standard library imports work in code steps:

- type: code
  code: |
    import json
    import re
    from datetime import datetime
    from pathlib import Path

    timestamp = datetime.now().isoformat()
    result = f"Processed at {timestamp}"
  output_var: result

Error Handling

Handle exceptions to prevent tool failures:

- type: code
  code: |
    import json
    try:
        data = json.loads(ai_response)
        result = data.get('summary', 'No summary found')
    except json.JSONDecodeError:
        result = ai_response  # Fall back to raw response
  output_var: result

Security Notes

""", "headings": [ ("basic-syntax", "Basic Syntax"), ("available-variables", "Available Variables"), ("multiple-outputs", "Multiple Output Variables"), ("common-operations", "Common Operations"), ("using-imports", "Using Imports"), ("error-handling", "Error Handling"), ("security", "Security Notes"), ], }, "advanced-workflows": { "title": "Advanced Workflows", "description": "Complex multi-provider and advanced tool patterns", "content": """

Take your tools to the next level with advanced patterns like multi-provider workflows, dynamic prompts, and complex data pipelines.

Multi-Provider Workflows

Use different AI providers for different tasks:

steps:
  # Fast model for extraction
  - type: prompt
    provider: opencode-grok
    prompt: "Extract key facts from: {input}"
    output_var: facts

  # Powerful model for synthesis
  - type: prompt
    provider: claude-opus
    prompt: |
      Create a comprehensive analysis from these facts:
      {facts}
    output_var: analysis

Conditional Logic with Code

Use code steps to implement branching:

steps:
  # Analyze input type
  - type: code
    code: |
      if input.strip().startswith('{'):
          input_type = "json"
          processed = input
      elif ',' in input and '\\n' in input:
          input_type = "csv"
          processed = input
      else:
          input_type = "text"
          processed = input
    output_var: input_type, processed

  # Different prompt based on type
  - type: prompt
    provider: claude
    prompt: |
      This is {input_type} data. Analyze it appropriately:
      {processed}
    output_var: result

Iterative Refinement

Multiple passes for quality improvement:

steps:
  # First draft
  - type: prompt
    provider: opencode-deepseek
    prompt: "Write a summary of: {input}"
    output_var: draft

  # Critique
  - type: prompt
    provider: claude-haiku
    prompt: |
      Review this summary for accuracy and clarity.
      List specific improvements needed:
      {draft}
    output_var: critique

  # Final version
  - type: prompt
    provider: claude-sonnet
    prompt: |
      Improve this summary based on the feedback:

      Original: {draft}

      Feedback: {critique}
    output_var: final

Data Processing Pipelines

name: csv-analyzer
steps:
  # Parse CSV
  - type: code
    code: |
      import csv
      from io import StringIO
      reader = csv.DictReader(StringIO(input))
      rows = list(reader)
      headers = list(rows[0].keys()) if rows else []
      row_count = len(rows)
      sample = rows[:5]
    output_var: headers, row_count, sample

  # AI analysis
  - type: prompt
    provider: claude
    prompt: |
      Analyze this CSV data:
      - Columns: {headers}
      - Row count: {row_count}
      - Sample rows: {sample}

      Provide insights about the data structure and patterns.
    output_var: analysis

  # Generate code
  - type: prompt
    provider: claude
    prompt: |
      Based on this analysis: {analysis}

      Write Python code to process this CSV and extract key metrics.
    output_var: code

Template Composition

Build prompts dynamically:

arguments:
  - flag: --task
    variable: task
    default: "summarize"

steps:
  - type: code
    code: |
      templates = {
          "summarize": "Summarize this concisely:",
          "explain": "Explain this for a beginner:",
          "critique": "Provide constructive criticism of:",
          "expand": "Expand on this with more detail:"
      }
      instruction = templates.get(task, templates["summarize"])
    output_var: instruction

  - type: prompt
    provider: claude
    prompt: |
      {instruction}

      {input}
    output_var: result
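
Users then pick the behavior at run time (the tool name here is hypothetical):

cat draft.md | reword --task critique
cat draft.md | reword --task expand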

Integrating External Tools

steps:
  # Use code to call external commands
  - type: code
    code: |
      import subprocess
      import tempfile
      # Write the input to a temp file so the linter can read it
      # (pylint does not accept source on stdin via '-')
      with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
          f.write(input)
      # Run the linter on the temp file
      result = subprocess.run(
          ['pylint', '--output-format=json', f.name],
          capture_output=True,
          text=True
      )
      lint_output = result.stdout
    output_var: lint_output

  # AI interprets results
  - type: prompt
    provider: claude
    prompt: |
      Explain these linting results in plain English
      and suggest fixes:

      {lint_output}
    output_var: explanation

Performance Tips

""", "headings": [ ("multi-provider", "Multi-Provider Workflows"), ("conditional-logic", "Conditional Logic with Code"), ("iterative-refinement", "Iterative Refinement"), ("data-pipelines", "Data Processing Pipelines"), ("template-composition", "Template Composition"), ("external-tools", "Integrating External Tools"), ("performance-tips", "Performance Tips"), ], }, } def get_doc(path: str) -> dict: """Get documentation content by path.""" # Normalize path path = path.strip("/").replace("docs/", "") or "getting-started" return DOCS.get(path, None) def get_toc(): """Get table of contents structure.""" from types import SimpleNamespace return [ SimpleNamespace(slug="getting-started", title="Getting Started", children=[ SimpleNamespace(slug="installation", title="Installation"), SimpleNamespace(slug="first-tool", title="Your First Tool"), SimpleNamespace(slug="yaml-config", title="YAML Config"), ]), SimpleNamespace(slug="arguments", title="Custom Arguments", children=[]), SimpleNamespace(slug="multi-step", title="Multi-Step Workflows", children=[ SimpleNamespace(slug="code-steps", title="Code Steps"), ]), SimpleNamespace(slug="providers", title="Providers", children=[]), SimpleNamespace(slug="publishing", title="Publishing", children=[]), SimpleNamespace(slug="advanced-workflows", title="Advanced Workflows", children=[ SimpleNamespace(slug="parallel-orchestration", title="Parallel Orchestration"), ]), ]