"""Documentation content for SmartTools web UI. This module contains the actual documentation text that gets rendered on the /docs pages. Content is stored as markdown-ish HTML for simplicity. """ DOCS = { "getting-started": { "title": "Getting Started", "description": "Learn how to install SmartTools and create your first AI-powered CLI tool", "content": """
SmartTools lets you build custom AI-powered CLI commands using simple YAML configuration. Create tools that work with any AI provider and compose them like Unix pipes.
SmartTools is a lightweight personal tool builder that lets you:
- Define CLI tools in simple YAML files
- Use any AI provider that has a CLI interface
- Compose tools together like Unix pipes
Get up and running in under a minute:
# Install SmartTools
pip install smarttools
# Create your first tool interactively
smarttools create
# Or install a tool from the registry
smarttools registry install official/summarize
# Use it!
cat article.txt | summarize
Each tool is a YAML file that defines:
- Metadata (name, version, description)
- Arguments (custom CLI flags)
- Steps (AI prompts and Python code)
- An output template
Here's a simple example:
name: summarize
version: "1.0.0"
description: Summarize text using AI
arguments:
- flag: --max-length
variable: max_length
default: "200"
description: Maximum summary length in words
steps:
- type: prompt
provider: claude
prompt: |
Summarize the following text in {max_length} words or less:
{input}
output_var: summary
output: "{summary}"
SmartTools requires Python 3.8+ and works on Linux, macOS, and Windows.
The simplest way to install SmartTools:
pip install smarttools
Or with pipx for isolated installation:
pipx install smarttools
smarttools --version
smarttools --help
SmartTools needs at least one AI provider configured. The easiest is Claude CLI:
# Install Claude CLI (if you have an Anthropic API key)
pip install claude-cli
# Or use OpenAI
pip install openai
# Configure your provider
smarttools config
SmartTools installs wrapper scripts to ~/.local/bin/. Make sure this is in your PATH:
# Add to ~/.bashrc or ~/.zshrc
export PATH="$HOME/.local/bin:$PATH"
To contribute or modify SmartTools:
git clone https://gitea.brrd.tech/rob/SmartTools.git
cd SmartTools
pip install -e ".[dev]"
""",
"headings": [
("pip-install", "Install with pip"),
("verify", "Verify Installation"),
("configure-provider", "Configure a Provider"),
("wrapper-scripts", "Wrapper Scripts Location"),
("development-install", "Development Installation"),
],
},
"first-tool": {
"title": "Your First Tool",
"description": "Create your first SmartTools command step by step",
"parent": "getting-started",
"content": """
Let's create a simple tool that explains code. You'll learn the basics of tool configuration.
Run the interactive creator:
smarttools create
Or create the file manually at ~/.smarttools/explain/config.yaml:
name: explain
version: "1.0.0"
description: Explain code or concepts in simple terms
category: code
arguments:
- flag: --level
variable: level
default: "beginner"
description: "Explanation level: beginner, intermediate, or expert"
steps:
- type: prompt
provider: claude
prompt: |
Explain the following in simple terms suitable for a {level}:
{input}
Be concise but thorough. Use examples where helpful.
output_var: explanation
output: "{explanation}"
# Explain some code
echo "def fib(n): return n if n < 2 else fib(n-1) + fib(n-2)" | explain
# Explain for an expert
cat complex_algorithm.py | explain --level expert
Each argument becomes a CLI flag. The variable name is used in templates:
arguments:
- flag: --level # CLI flag: --level beginner
variable: level # Use as {level} in prompts
default: "beginner" # Default if not specified
Steps run in order. Each step can be a prompt or Python code:
steps:
- type: prompt
provider: claude # Which AI to use
prompt: "..." # The prompt template
output_var: result # Store response in {result}
The output template formats the final result:
output: "{explanation}" # Print the explanation variable
Share your tools with the community by publishing to the SmartTools Registry.
Make sure your tool has:
- A name and description
- A version (semver format: 1.0.0)
- A README.md file with usage examples

Register at the registry to get your publisher namespace.
# Navigate to your tool directory
cd ~/.smarttools/my-tool/
# First time: enter your token when prompted
smarttools registry publish
# Dry run to validate without publishing
smarttools registry publish --dry-run
Published versions are immutable. To update a tool:
1. Bump the version in config.yaml
2. Run smarttools registry publish again

SmartTools works with any AI provider that has a CLI interface. Configure providers in
~/.smarttools/providers.yaml.
Create or edit ~/.smarttools/providers.yaml:
providers:
- name: claude
command: "claude -p"
- name: openai
command: "openai-cli"
- name: ollama
command: "ollama run llama2"
- name: mock
command: "echo '[MOCK RESPONSE]'"
Specify the provider in your step:
steps:
- type: prompt
provider: claude # Uses the "claude" provider from config
prompt: "..."
output_var: response
# Install
pip install claude-cli
# Configure with your API key
export ANTHROPIC_API_KEY="sk-ant-..."
# providers.yaml
providers:
- name: claude
command: "claude -p"
# Install
pip install openai-cli
# Configure
export OPENAI_API_KEY="sk-..."
# Install Ollama from ollama.ai
# Pull a model
ollama pull llama2
# providers.yaml
providers:
- name: ollama
command: "ollama run llama2"
Use the mock provider to test tools without API calls:
providers:
- name: mock
command: "echo 'This is a mock response for testing'"
| Provider | Best For | Cost |
|---|---|---|
| Claude | Complex reasoning, long context | Pay per token |
| OpenAI | General purpose, fast | Pay per token |
| Ollama | Privacy, offline use | Free (local) |
SmartTools executes steps sequentially within a tool, but you can run multiple tools in parallel using Python's ThreadPoolExecutor. This pattern is ideal for multi-agent workflows, parallel analysis, or any task where you need responses from multiple AI providers simultaneously.
Consider a code review workflow that needs input from multiple perspectives: a security review, a performance review, and a style review.
Use Python's concurrent.futures to run multiple SmartTools in parallel:
from __future__ import annotations  # allow list[str] annotations on Python 3.8

import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed
def run_tool(tool_name: str, input_text: str) -> dict:
\"\"\"Run a SmartTool and return its output.\"\"\"
result = subprocess.run(
[tool_name],
input=input_text,
capture_output=True,
text=True
)
return {
"tool": tool_name,
"output": result.stdout,
"success": result.returncode == 0
}
def run_parallel(tools: list[str], input_text: str) -> list[dict]:
\"\"\"Run multiple tools in parallel on the same input.\"\"\"
results = []
with ThreadPoolExecutor(max_workers=len(tools)) as executor:
# Submit all tools
futures = {
executor.submit(run_tool, tool, input_text): tool
for tool in tools
}
# Collect results as they complete
for future in as_completed(futures):
results.append(future.result())
return results
# Example usage
tools = ["security-review", "performance-review", "style-review"]
code = open("main.py").read()
reviews = run_parallel(tools, code)
for review in reviews:
print(f"=== {review['tool']} ===")
print(review['output'])
Here's a complete script that gets multiple AI perspectives on a topic:
#!/usr/bin/env python3
\"\"\"Get multiple AI perspectives on a topic in parallel.\"\"\"
from __future__ import annotations  # allow list[dict] annotations on Python 3.8

import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed
# Define your perspective tools (each is a SmartTool)
PERSPECTIVES = [
"perspective-optimist", # Focuses on opportunities
"perspective-critic", # Identifies problems
"perspective-pragmatist", # Focuses on actionability
]
def get_perspective(tool: str, topic: str) -> dict:
\"\"\"Get one perspective on a topic.\"\"\"
result = subprocess.run(
[tool],
input=topic,
capture_output=True,
text=True,
timeout=60 # Timeout after 60 seconds
)
return {
"perspective": tool.replace("perspective-", ""),
"response": result.stdout.strip(),
"success": result.returncode == 0
}
def analyze_topic(topic: str) -> list[dict]:
\"\"\"Get all perspectives in parallel.\"\"\"
with ThreadPoolExecutor(max_workers=len(PERSPECTIVES)) as executor:
futures = {
executor.submit(get_perspective, tool, topic): tool
for tool in PERSPECTIVES
}
results = []
for future in as_completed(futures):
try:
results.append(future.result())
except Exception as e:
tool = futures[future]
results.append({
"perspective": tool,
"response": f"Error: {e}",
"success": False
})
return results
if __name__ == "__main__":
import sys
topic = sys.stdin.read() if not sys.stdin.isatty() else input("Topic: ")
print("Gathering perspectives...\\n")
perspectives = analyze_topic(topic)
for p in perspectives:
status = "✓" if p["success"] else "✗"
print(f"[{status}] {p['perspective'].upper()}")
print("-" * 40)
print(p["response"])
print()
For long-running parallel tasks, show progress as tools complete:
from __future__ import annotations  # allow list[str] annotations on Python 3.8

import sys
from concurrent.futures import ThreadPoolExecutor, as_completed

# run_tool is defined in the first example above
def run_with_progress(tools: list[str], input_text: str):
\"\"\"Run tools in parallel with progress updates.\"\"\"
total = len(tools)
completed = 0
with ThreadPoolExecutor(max_workers=total) as executor:
futures = {
executor.submit(run_tool, tool, input_text): tool
for tool in tools
}
results = []
for future in as_completed(futures):
completed += 1
tool = futures[future]
result = future.result()
results.append(result)
# Progress update
status = "✓" if result["success"] else "✗"
print(f"[{completed}/{total}] {status} {tool}", file=sys.stderr)
return results
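Usage is the same as run_parallel; progress lines go to stderr, so stdout stays clean for the results:
reviews = run_with_progress(
    ["security-review", "performance-review", "style-review"],
    open("main.py").read(),
)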
Handle failures gracefully so one tool doesn't break the entire workflow:
def run_tool_safe(tool_name: str, input_text: str, timeout: int = 120) -> dict:
\"\"\"Run a tool with timeout and error handling.\"\"\"
try:
result = subprocess.run(
[tool_name],
input=input_text,
capture_output=True,
text=True,
timeout=timeout
)
return {
"tool": tool_name,
"output": result.stdout,
"error": result.stderr if result.returncode != 0 else None,
"success": result.returncode == 0
}
except subprocess.TimeoutExpired:
return {
"tool": tool_name,
"output": "",
"error": f"Timeout after {timeout}s",
"success": False
}
except FileNotFoundError:
return {
"tool": tool_name,
"output": "",
"error": f"Tool '{tool_name}' not found",
"success": False
}
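A minimal sketch that swaps run_tool_safe into the earlier parallel runner (the names are reused from the examples above):
def run_parallel_safe(tools: list, input_text: str, timeout: int = 120) -> list:
    # Same fan-out as run_parallel, but failures come back as
    # result dicts instead of raising
    with ThreadPoolExecutor(max_workers=len(tools)) as executor:
        futures = {
            executor.submit(run_tool_safe, tool, input_text, timeout): tool
            for tool in tools
        }
        return [future.result() for future in as_completed(futures)]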
- Tune max_workers to your use case

For a complete implementation of parallel SmartTools orchestration, see the orchestrated-discussions project, which implements these patterns end to end.
Every SmartTool is defined by a YAML configuration file. This guide covers the complete structure and all available options.
Tool configs are stored in ~/.smarttools/<tool-name>/config.yaml.
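A quick way to see which tools are installed, assuming that layout:
from pathlib import Path

# List every tool that has a config.yaml under ~/.smarttools/
for config in sorted(Path.home().glob(".smarttools/*/config.yaml")):
    print(config.parent.name)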
# Required fields
name: my-tool # Tool name (lowercase, hyphens)
version: "1.0.0" # Semver version string
# Recommended fields
description: "What this tool does"
category: text-processing # For registry organization
tags: # Searchable tags
- text
- formatting
# Optional metadata
author: your-name
license: MIT
homepage: https://github.com/you/my-tool
# Arguments (custom CLI flags)
arguments:
- flag: --format
variable: format
default: "markdown"
description: Output format
# Processing steps
steps:
- type: prompt
provider: claude
prompt: |
Process this: {input}
output_var: result
# Final output template
output: "{result}"
The tool's identifier. Must be lowercase with hyphens only:
name: my-cool-tool # Good
name: MyCoolTool # Bad - no uppercase
name: my_cool_tool # Bad - no underscores
Semantic version string. Always quote it to prevent YAML parsing issues:
version: "1.0.0" # Good
version: 1.0 # Bad - YAML parses as float
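You can see the difference yourself with PyYAML (assuming it's installed; SmartTools may parse configs differently):
import yaml

print(yaml.safe_load('version: "1.0.0"'))  # {'version': '1.0.0'} - a string
print(yaml.safe_load("version: 1.0"))      # {'version': 1.0} - a float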
Use {variable} syntax in prompts and output:
- {input} - Content piped to the tool
- {variable_name} - From arguments or previous steps

To include literal braces, double them:
prompt: |
Format as JSON: {{\"key\": \"value\"}}
Input: {input}
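The {variable} syntax looks like Python's str.format; if the substitution works the same way (an assumption, not something these docs confirm), you can preview the escaping yourself:
# Doubled braces survive formatting as literal braces
template = 'Format as JSON: {{"key": "value"}}\\nInput: {input}'
print(template.format(input="hello"))
# -> Format as JSON: {"key": "value"}
#    Input: hello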
Standard categories for the registry:
- text-processing - Summarize, translate, format
- code-analysis - Review, explain, generate
- data-extraction - Parse, extract, convert
- content-creation - Write, expand, draft
- productivity - Automate, organize
- education - Explain, teach, simplify

Test your config without running:
# Validate syntax
smarttools test my-tool --dry-run
# Check for common issues
smarttools registry publish --dry-run
""",
"headings": [
("file-location", "File Location"),
("complete-structure", "Complete Structure"),
("required-fields", "Required Fields"),
("variable-substitution", "Variable Substitution"),
("categories", "Categories"),
("validation", "Validation"),
],
},
"arguments": {
"title": "Custom Arguments",
"description": "Add flags and options to make your tools flexible",
"content": """
Arguments let users customize tool behavior with CLI flags like --format json
or --verbose.
arguments:
- flag: --format # The CLI flag
variable: format # Variable name in templates
default: "text" # Default value if not provided
description: "Output format (text, json, markdown)"
Reference arguments in prompts using {variable_name}:
arguments:
- flag: --tone
variable: tone
default: "professional"
steps:
- type: prompt
provider: claude
prompt: |
Rewrite this text with a {tone} tone:
{input}
output_var: result
Users can then run:
echo "Hey, fix this bug ASAP!" | tone-shift --tone friendly
arguments:
- flag: --lang
variable: language
default: "English"
description: "Target language"
- flag: --formality
variable: formality
default: "neutral"
description: "Formality level (casual, neutral, formal)"
- flag: --max-length
variable: max_length
default: "500"
description: "Maximum output length in words"
Document valid choices in the description:
- flag: --style
variable: style
default: "concise"
description: "Writing style: concise, detailed, or academic"
Always quote defaults to avoid YAML issues:
- flag: --max-tokens
variable: max_tokens
default: "1000" # Quoted string, not integer
Use string values for conditional prompts:
- flag: --verbose
variable: verbose
default: "no"
description: "Include detailed explanations (yes/no)"
Combine multiple arguments in your prompt template:
steps:
- type: prompt
provider: claude
prompt: |
Translate the following text to {language}.
Use a {formality} register.
Keep the response under {max_length} words.
Text to translate:
{input}
output_var: translation
- Use descriptive variable names: target_language, not tl
- Keep flags short: --lang, not --target-language

Complex tools can chain multiple steps together. Each step's output becomes available to subsequent steps.
steps:
# Step 1: Extract key points
- type: prompt
provider: claude
prompt: "Extract 5 key points from: {input}"
output_var: key_points
# Step 2: Use step 1's output
- type: prompt
provider: claude
prompt: |
Create a summary from these points:
{key_points}
output_var: summary
output: "{summary}"
Variables flow through the pipeline:
- {input} → available in all steps
- {key_points} → available after step 1
- {summary} → available after step 2

Combine AI calls with Python processing:
steps:
# Step 1: AI extracts data
- type: prompt
provider: claude
prompt: |
Extract all email addresses from this text as a comma-separated list:
{input}
output_var: emails_raw
# Step 2: Python cleans the data
- type: code
code: |
emails = [e.strip() for e in emails_raw.split(',')]
emails = [e for e in emails if '@' in e]
email_count = len(emails)
cleaned_emails = '\\n'.join(sorted(set(emails)))
output_var: cleaned_emails, email_count
# Step 3: AI formats output
- type: prompt
provider: claude
prompt: |
Format these {email_count} emails as a nice list:
{cleaned_emails}
output_var: formatted
output: "{formatted}"
If any step fails, execution stops. Design steps to handle edge cases:
steps:
- type: code
code: |
# Handle empty input gracefully
if not input.strip():
result = "No input provided"
skip_ai = "yes"
else:
result = input
skip_ai = "no"
output_var: result, skip_ai
- type: prompt
provider: claude
prompt: |
{result}
# If input was empty, the AI just sees the fallback message in {result}
output_var: ai_response
steps:
- type: prompt # Extract structured data
- type: code # Transform/filter
- type: prompt # Format for output
steps:
- type: prompt # Break down into parts
- type: prompt # Combine insights
steps:
- type: code # Validate input format
- type: prompt # Process if valid
# Show prompts without running
cat test.txt | my-tool --dry-run
# See verbose output
cat test.txt | my-tool --verbose
""",
"headings": [
("step-flow", "How Steps Flow"),
("mixed-steps", "Mixing Prompt and Code Steps"),
("error-handling", "Step Dependencies"),
("common-patterns", "Common Patterns"),
("debugging", "Debugging Multi-Step Tools"),
],
},
"code-steps": {
"title": "Code Steps",
"description": "Add Python code processing between AI calls",
"content": """
Code steps let you run Python code to process data, validate input, or transform AI outputs between prompts.
steps:
- type: code
code: |
# Python code here
result = input.upper()
output_var: result
Code steps have access to:
- input - The original input text
- Variables defined by arguments
- Output variables from previous steps

arguments:
- flag: --max
variable: max_items
default: "10"
steps:
- type: prompt
prompt: "List items from: {input}"
output_var: items_raw
- type: code
code: |
# Access argument and previous step output
items = items_raw.strip().split('\\n')
limited = items[:int(max_items)]
result = '\\n'.join(limited)
output_var: result
Return multiple values with comma-separated output_var:
- type: code
code: |
lines = input.strip().split('\\n')
line_count = len(lines)
word_count = len(input.split())
char_count = len(input)
output_var: line_count, word_count, char_count
- type: code
code: |
# Remove empty lines
lines = [l for l in input.split('\\n') if l.strip()]
cleaned = '\\n'.join(lines)
output_var: cleaned
- type: code
code: |
import json
data = json.loads(ai_response)
formatted = json.dumps(data, indent=2)
output_var: formatted
- type: code
code: |
import re
emails = re.findall(r'[\\w.-]+@[\\w.-]+', input)
valid_emails = '\\n'.join(emails) if emails else "No emails found"
output_var: valid_emails
- type: code
code: |
from pathlib import Path
# Write to temp file
output_path = Path('/tmp/output.txt')
output_path.write_text(processed_text)
result = f"Saved to {output_path}"
output_var: result
Standard library imports work in code steps:
- type: code
code: |
import json
import re
from datetime import datetime
from pathlib import Path
timestamp = datetime.now().isoformat()
result = f"Processed at {timestamp}"
output_var: result
Handle exceptions to prevent tool failures:
- type: code
code: |
import json
try:
data = json.loads(ai_response)
result = data.get('summary', 'No summary found')
except json.JSONDecodeError:
result = ai_response # Fall back to raw response
output_var: result
- Never call eval() on untrusted input

Take your tools to the next level with advanced patterns like multi-provider workflows, dynamic prompts, and complex data pipelines.
Use different AI providers for different tasks:
steps:
# Fast model for extraction
- type: prompt
provider: opencode-grok
prompt: "Extract key facts from: {input}"
output_var: facts
# Powerful model for synthesis
- type: prompt
provider: claude-opus
prompt: |
Create a comprehensive analysis from these facts:
{facts}
output_var: analysis
Use code steps to implement branching:
steps:
# Analyze input type
- type: code
code: |
if input.strip().startswith('{'):
input_type = "json"
processed = input
elif ',' in input and '\\n' in input:
input_type = "csv"
processed = input
else:
input_type = "text"
processed = input
output_var: input_type, processed
# Different prompt based on type
- type: prompt
provider: claude
prompt: |
This is {input_type} data. Analyze it appropriately:
{processed}
output_var: result
Multiple passes for quality improvement:
steps:
# First draft
- type: prompt
provider: opencode-deepseek
prompt: "Write a summary of: {input}"
output_var: draft
# Critique
- type: prompt
provider: claude-haiku
prompt: |
Review this summary for accuracy and clarity.
List specific improvements needed:
{draft}
output_var: critique
# Final version
- type: prompt
provider: claude-sonnet
prompt: |
Improve this summary based on the feedback:
Original: {draft}
Feedback: {critique}
output_var: final
name: csv-analyzer
steps:
# Parse CSV
- type: code
code: |
import csv
from io import StringIO
reader = csv.DictReader(StringIO(input))
rows = list(reader)
headers = list(rows[0].keys()) if rows else []
row_count = len(rows)
sample = rows[:5]
output_var: headers, row_count, sample
# AI analysis
- type: prompt
provider: claude
prompt: |
Analyze this CSV data:
- Columns: {headers}
- Row count: {row_count}
- Sample rows: {sample}
Provide insights about the data structure and patterns.
output_var: analysis
# Generate code
- type: prompt
provider: claude
prompt: |
Based on this analysis: {analysis}
Write Python code to process this CSV and extract key metrics.
output_var: code
Build prompts dynamically:
arguments:
- flag: --task
variable: task
default: "summarize"
steps:
- type: code
code: |
templates = {
"summarize": "Summarize this concisely:",
"explain": "Explain this for a beginner:",
"critique": "Provide constructive criticism of:",
"expand": "Expand on this with more detail:"
}
instruction = templates.get(task, templates["summarize"])
output_var: instruction
- type: prompt
provider: claude
prompt: |
{instruction}
{input}
output_var: result
steps:
# Use code to call external commands
- type: code
code: |
import subprocess
import tempfile
# pylint doesn't accept '-' for stdin, so write the input to a temp file
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
    f.write(input)
    tmp_path = f.name
result = subprocess.run(
    ['pylint', '--output-format=json', tmp_path],
    capture_output=True,
    text=True
)
lint_output = result.stdout
output_var: lint_output
# AI interprets results
- type: prompt
provider: claude
prompt: |
Explain these linting results in plain English
and suggest fixes:
{lint_output}
output_var: explanation