# discussion-pragmatist - Shipping-focused pragmatist participant for discussions
# Usage: cat discussion.md | discussion-pragmatist --callout "Is this MVP-ready?"

name: discussion-pragmatist
description: Shipping-focused pragmatist participant for discussions
category: Discussion

meta:
  display_name: AI-Pragmatist
  alias: pragmatist
  type: voting
  expertise:
    - MVP scoping
    - Shipping velocity
    - Trade-off analysis
    - Iterative development
    - Technical debt management
  concerns:
    - "Can we ship this incrementally?"
    - "Are we over-engineering this?"
    - "What's the simplest thing that could work?"
    - "Is this scope creep?"

provider: opencode-deepseek

arguments:
  - flag: --callout
    variable: callout
    default: ""
    description: Specific question or @mention context
  - flag: --templates-dir
    variable: templates_dir
    default: "templates"
    description: Path to templates directory
  - flag: --diagrams-dir
    variable: diagrams_dir
    default: "diagrams"
    description: Path to save diagrams
  - flag: --log-file
    variable: log_file
    default: ""
    description: Path to log file for progress updates

steps:
  # Step 1: Extract phase context from template
  - type: code
    code: |
      import re
      import os

      phase_match = re.search(r'', input, re.IGNORECASE)
      template_match = re.search(r'', input, re.IGNORECASE)
      current_phase = phase_match.group(1) if phase_match else "initial_feedback"
      template_name = template_match.group(1) if template_match else "feature"

      template_path = os.path.join(templates_dir, template_name + ".yaml")
      phase_goal = "Provide practical feedback"
      phase_instructions = "Review the proposal for complexity and shipping readiness."

      if os.path.exists(template_path):
          import yaml
          with open(template_path, 'r') as f:
              template = yaml.safe_load(f)
          phases = template.get("phases", {})
          phase_info = phases.get(current_phase, {})
          phase_goal = phase_info.get("goal", phase_goal)
          phase_instructions = phase_info.get("instructions", phase_instructions)

      phase_context = "Current Phase: " + current_phase + "\n"
      phase_context += "Phase Goal: " + phase_goal + "\n"
      phase_context += "Phase Instructions:\n" + phase_instructions
    output_var: phase_context, current_phase

  # Step 2: Prepare diagram path (pragmatist uses diagrams sparingly)
  - type: code
    code: |
      import re
      import os

      title_match = re.search(r'', input)
      discussion_name = "discussion"
      if title_match:
          discussion_name = title_match.group(1).strip().lower()
          discussion_name = re.sub(r'[^a-z0-9]+', '-', discussion_name)

      os.makedirs(diagrams_dir, exist_ok=True)

      existing = []
      if os.path.exists(diagrams_dir):
          for f in os.listdir(diagrams_dir):
              if f.startswith(discussion_name):
                  existing.append(f)
      next_num = len(existing) + 1
      diagram_path = diagrams_dir + "/" + discussion_name + "_mvp_" + str(next_num) + ".puml"
    output_var: diagram_path

  # Step 3: Log progress before AI call
  - type: code
    code: |
      import sys
      import datetime as dt

      timestamp = dt.datetime.now().strftime("%H:%M:%S")
      for msg in [f"Phase: {current_phase}", "Calling AI provider..."]:
          line = f"[{timestamp}] [pragmatist] {msg}"
          print(line, file=sys.stderr)
          sys.stderr.flush()
          if log_file:
              with open(log_file, 'a') as f:
                  f.write(line + "\n")
                  f.flush()
    output_var: _progress1

  # Step 4: Generate response
  - type: prompt
    prompt: |
      You are AI-Pragmatist (also known as Maya), a shipping-focused engineer who advocates for practical solutions and incremental delivery.

      ## FIRST: Understand the Goals
      Before critiquing, understand what the project is trying to achieve:
      - What problem is being solved?
      - Who is the target user?
      - What's the actual scope and ambition?
      Don't assume every project should be an MVP. Some projects have legitimate complexity. Your job is to identify unnecessary complexity, not to minimize every project to its smallest possible form.

      ## Your Role
      - Advocate for simpler solutions where appropriate
      - Identify genuine over-engineering and scope creep
      - Suggest pragmatic approaches that match the project's goals
      - Balance quality with delivery speed
      - Challenge unnecessary complexity, but accept necessary complexity
      - Engage with all aspects of the discussion, bringing a practical perspective

      ## Your Perspective
      - "Done is better than perfect when it's good enough"
      - Ship early and iterate, but understand the iteration plan
      - Complexity is sometimes necessary - distinguish essential from accidental
      - Technical debt is acceptable if managed consciously
      - Match the solution to the problem size

      ## Questions You Ask
      - Is this the simplest solution that achieves the actual goals?
      - Can we defer this complexity, or is it core to the value?
      - What's the minimum version that delivers real value?
      - Are we solving problems we don't have, or planning for known needs?
      - What are the trade-offs of cutting this?

      ## Phase Context
      {phase_context}

      ## Diagrams
      Only create a diagram if it helps show a simpler approach. Use simple flowcharts to contrast complex vs MVP solutions.

      Diagram path to use: {diagram_path}

      IMPORTANT: When you create a diagram, your comment MUST include:
      DIAGRAM: {diagram_path}

      This marker makes the diagram discoverable. Example comment structure:
      "Here's my MVP analysis...
      [Your comparison of complex vs simple approaches]
      DIAGRAM: {diagram_path}"

      ## Current Discussion
      {input}

      ## Your Task
      {callout}

      Follow the phase instructions. Analyze from a practical shipping perspective. Flag over-engineering with CONCERN: COMPLEXITY.

      ## Response Format
      Respond with valid JSON only. Use \n for newlines in strings (not literal newlines):

      {{
        "comment": "Line 1\nLine 2\nCONCERN: COMPLEXITY",
        "vote": "READY" or "CHANGES" or "REJECT" or null,
        "diagram": "@startuml\nrectangle MVP\n@enduml"
      }}

      Important: The diagram field must use \n for newlines, not actual line breaks.
      Vote meanings:
      - READY: Good enough to ship
      - CHANGES: Simpler approach possible (suggest what)
      - REJECT: Too complex, needs fundamental simplification
      - null: Comment only, no vote change

      If you have nothing meaningful to add, respond: {{"sentinel": "NO_RESPONSE"}}
    provider: opencode-deepseek
    output_var: response

  # Step 5: Log progress after AI call
  - type: code
    code: |
      import sys
      import datetime as dt

      timestamp = dt.datetime.now().strftime("%H:%M:%S")
      line = f"[{timestamp}] [pragmatist] AI response received"
      print(line, file=sys.stderr)
      sys.stderr.flush()
      if log_file:
          with open(log_file, 'a') as f:
              f.write(line + "\n")
              f.flush()
    output_var: _progress2

  # Step 6: Extract JSON from response (may be wrapped in a markdown code block)
  - type: code
    code: |
      import re

      json_text = response.strip()
      code_block = re.search(r'```(?:json)?\s*(.*?)```', json_text, re.DOTALL)
      if code_block:
          json_text = code_block.group(1).strip()
    output_var: json_text

  # Step 7: Parse JSON
  - type: code
    code: |
      import json

      try:
          parsed = json.loads(json_text)
      except json.JSONDecodeError:
          # The model often returns literal newlines in JSON strings - escape them and retry
          fixed = json_text.replace('\n', '\\n')
          try:
              parsed = json.loads(fixed)
          except json.JSONDecodeError:
              # Last resort: extract just the fields we need via regex
              import re
              comment_match = re.search(r'"comment"\s*:\s*"(.*?)"(?=\s*[,}])', json_text, re.DOTALL)
              vote_match = re.search(r'"vote"\s*:\s*("?\w+"?|null)', json_text)
              diagram_match = re.search(r'"diagram"\s*:\s*"(.*?)"(?=\s*[,}])', json_text, re.DOTALL)
              parsed = {
                  "comment": comment_match.group(1).replace('\n', ' ') if comment_match else "Parse error",
                  "vote": vote_match.group(1).strip('"') if vote_match else None,
                  "diagram": diagram_match.group(1) if diagram_match else None
              }
              if parsed["vote"] == "null":
                  parsed["vote"] = None

      comment = parsed.get("comment", "")
      vote = parsed.get("vote")
      diagram_content = parsed.get("diagram")
      has_diagram = "true" if diagram_content else "false"
    output_var: comment, vote, diagram_content, has_diagram

  # Step 8: Save diagram if present
  - type: code
    code: |
      if has_diagram == "true" and diagram_content:
          with open(diagram_path, 'w') as f:
              f.write(diagram_content)
          saved_diagram = diagram_path
      else:
          saved_diagram = ""
    output_var: saved_diagram

  # Step 9: Build final response
  - type: code
    code: |
      import json

      result = {"comment": comment, "vote": vote}
      if saved_diagram:
          result["diagram_file"] = saved_diagram
      final_response = json.dumps(result)
    output_var: final_response

output: "{final_response}"
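
# Illustrative final output (assumed values, not taken from a real run): Step 9 emits a JSON
# object with "comment" and "vote", plus "diagram_file" when a diagram was saved, e.g.:
#   {"comment": "Phase 1 can ship without the queue.\nCONCERN: COMPLEXITY",
#    "vote": "CHANGES",
#    "diagram_file": "diagrams/my-feature_mvp_1.puml"}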