<!-- DISCUSSION -->
<!-- Title: AI enabled home lab productivity and development tool -->
<!-- Phase: initial_feedback -->
<!-- Status: OPEN -->
<!-- Created: 2025-12-16T06:07:59Z -->
<!-- Template: brainstorm -->
<!-- Participants: architect, security, pragmatist -->

# AI enabled home lab productivity and development tool

## Context

- AI-generated checkable to-do lists with timelines from conversations
- Periodic productivity auditing and summaries
- Periodic task re-evaluation and planning
- Beginning-of-day conversations and task delegation, plus an end-of-day check-in and summary
- Periodic, non-intrusive AI-led interviews to establish why goals were not met, used as input for future planning

## Requirements

- [ ] Must run locally in home lab (no cloud dependency)
- [ ] Conversation interface for daily standups and check-ins
- [ ] Persistent storage for tasks, goals, and progress history
- [ ] Scheduling system for periodic check-ins (daily, weekly)
- [ ] Non-intrusive notification system for reminders
- [ ] Export/summary generation (daily, weekly reports)

## Open Questions

- What AI model/provider should power the conversations? (local LLM vs API)
- How should tasks be stored? (markdown files, database, calendar integration)
- What platforms need support? (CLI, web UI, mobile notifications)
- How to handle multiple projects/contexts?
- What level of automation vs user prompting for check-ins?
- How to measure "productivity" - what metrics matter?

---

*Discussion begins below.*

---

Name: AI-Architect

## Architectural Analysis

### System Boundaries & Concerns

This proposal bundles several distinct capabilities that should be architecturally separated:

1. **Task Management System** - CRUD operations, hierarchies, dependencies
2. **Conversation Agent** - AI-powered standups and interviews
3. **Analytics/Reporting Engine** - Summaries, productivity metrics
4. **Scheduling/Notification Service** - Periodic triggers and reminders
5. **Storage Layer** - Persistent state management

ARTIFACT: diagrams/homelab-arch.puml

### Critical Architectural Decisions Needed

**1. Data Model & Storage Strategy**

The proposal lacks clarity on how data flows and persists:

- What's the schema for tasks vs goals vs journal entries vs metrics?
- How do you handle task history/versioning (the user might want undo/audit trail)?
- How do conversations map to structured task updates?
- Where's the single source of truth when the AI suggests changes?

Recommendation: Adopt the **event sourcing** pattern. Store all interactions as events (task created, goal updated, standup completed) and derive current state. This gives you an audit trail, undo, and analytics for free.

**2. AI Integration Architecture**

Local LLM vs API is a false dichotomy - you need abstraction:

```
Conversation Interface
        ↓
Intent Parser (deterministic)
        ↓
AI Provider Abstraction Layer
        ↓ (strategy pattern)
[Local LLM] [API Provider] [Hybrid]
```

Start with an API (faster iteration), add local LLM support later. Don't let the AI provider choice dictate your architecture.

**3. Temporal Concerns**

This system is fundamentally about time-series data:

- Goals set at T0 may be irrelevant at T30
- "Why goals weren't met" requires comparing past intent with current reality
- Productivity trends need historical context

Recommendation: Model time explicitly. Each entity should be temporal (valid-from/valid-to). This prevents "updating" goals and losing historical context.

**4. Integration Strategy**

Don't build a walled garden. This should be a **productivity hub**, not a replacement for existing tools:

- Import from/export to standard formats (iCal, Markdown, JSON)
- Webhook/API support for external tools (GitHub, GitLab, Jira)
- File-based storage enables git versioning and external processing

Recommendation: Follow the **Unix philosophy** - do one thing well, compose with other tools.

### Architectural Risks

**RISK: Scope Creep**
This description contains 6+ distinct products. Without clear boundaries, you'll build a monolith that does everything poorly.

**RISK: AI Over-reliance**
If core functionality (task CRUD, scheduling) depends on AI availability, the system becomes fragile. AI should augment, not replace, deterministic logic.

**RISK: Local-First Constraints**
Local-only means:

- No mobile push notifications (requires a cloud relay)
- No multi-device sync (without building sync infrastructure)
- Harder integration with cloud tools (GitHub, etc.)

Consider a hybrid: core logic local, optional sync/notification services.

**RISK: Privacy/Security Model Undefined**
Personal productivity data is sensitive. It needs:

- Encryption at rest if using a local LLM
- Clear data retention policies
- Audit logging for AI access to personal data

### Recommended Architecture

**Phase 1 Foundation:**
1. File-based event store (append-only Markdown or JSON Lines)
2. CLI for manual task management (prove the data model works)
3. Simple scheduler for periodic prompts (cron + notification)
4. Markdown export (human-readable, git-friendly)

**Phase 2 AI Layer:**
1. Conversation parser (maps natural language → structured events)
2. AI provider abstraction (start with API, add local later)
3. Summarization service (reads event store, generates reports)

**Phase 3 Intelligence:**
1. Analytics engine (productivity metrics from events)
2. Adaptive scheduling (learns optimal check-in times)
3. Retrospective interviewer (compares goals vs outcomes)

**Key Patterns to Apply:**

- **Event Sourcing**: All changes as events, state is derived
- **CQRS**: Separate write (AI conversations) from read (reports/queries)
- **Strategy Pattern**: Pluggable AI providers, notification channels
- **Repository Pattern**: Abstract storage (easy migration from files → DB later)

### Open Questions for Next Phase

1. **Who is the user?** Solo developer or team? Multi-tenant requirements?
2. **What's the MVP?** Which of these 6 features is the actual core?
3. **What existing tools do you use?** This should integrate, not replace
4. **What's your deployment model?** Docker, systemd service, cron jobs?
5. **How do you handle schema evolution?** The data format will change - migration strategy?

### Verdict

The concept is sound but architecturally under-specified. Before writing code:

1. Define bounded contexts (separate task mgmt from analytics from AI)
2. Design the data model (entity relationships, temporal aspects)
3. Choose a file format (Markdown? YAML? JSON Lines?)
4. Sketch the MVP (1-2 features max)

This prevents building a big ball of mud.
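
For illustration, a minimal sketch of the event-sourcing recommendation: an append-only JSON Lines log that is replayed to derive current task state. The event types and the `events.jsonl` location are hypothetical placeholders, not a committed schema.

```python
import json
import tempfile
from pathlib import Path

# Stand-in for a real log location such as ~/.homelab/events.jsonl
EVENTS = Path(tempfile.mkdtemp()) / "events.jsonl"

def append_event(event: dict) -> None:
    """Append an immutable event; the log is the single source of truth."""
    with EVENTS.open("a") as f:
        f.write(json.dumps(event) + "\n")

def current_tasks() -> dict:
    """Derive current task state by replaying the full event log."""
    tasks: dict = {}
    if not EVENTS.exists():
        return tasks
    for line in EVENTS.read_text().splitlines():
        ev = json.loads(line)
        if ev["type"] == "task_created":
            tasks[ev["id"]] = {"title": ev["title"], "done": False}
        elif ev["type"] == "task_completed":
            tasks[ev["id"]]["done"] = True
    return tasks

append_event({"type": "task_created", "id": "t1", "title": "Design CLI architecture"})
append_event({"type": "task_completed", "id": "t1"})
print(current_tasks())
```

Because state is always derived from the log, undo and history queries fall out for free: replay fewer events, or filter by timestamp.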

VOTE: CHANGES

---

Name: AI-Security

## Security Analysis - AI-Enabled Home Lab Tool

I've reviewed the proposal from a security perspective. Here are my findings:

### Critical Security Considerations

**CONCERN: Sensitive data exposure** - Daily conversations, goals, task histories, and "why goals weren't met" interviews will contain highly personal and potentially sensitive information about work patterns, mental state, and personal life.

**CONCERN: Local LLM model security** - If using local models, verify model provenance and ensure no data exfiltration capabilities are embedded.

**CONCERN: Authentication and access control** - No mention of how the system authenticates users or restricts access to personal productivity data.

### Threat Model Overview

I've created a threat model diagram showing the primary attack surfaces and trust boundaries.

DIAGRAM: diagrams/ai-enabled-home-lab-productivity-and-development-tool_security_1.puml

### Key Security Requirements

**Q: How will conversation data be encrypted at rest?** - Given the personal nature of daily check-ins and retrospectives, encryption is essential.

**Q: What authentication mechanism will protect the web UI (if implemented)?** - Home lab doesn't mean no authentication needed.

**Q: How will the notification system be secured?** - Push notifications or external integrations could leak task/goal information.

**TODO: Define data retention policy** - How long to keep conversation histories, and secure deletion procedures.

**TODO: Implement audit logging** - Track who accessed what data and when, especially for sensitive retrospective interviews.

**TODO: Establish backup security** - Encrypted backups of personal productivity data with secure key management.

### Recommended Security Controls

1. **Data Protection**
   - Encrypt SQLite/database at rest (e.g., SQLCipher)
   - If using markdown files, consider an encrypted filesystem or file-level encryption
   - Secure deletion for removed tasks/conversations

2. **Access Control**
   - Authentication even for local-only deployments (protect against other home lab services/users)
   - Consider hardware token support (YubiKey) for high-value productivity data
   - Session management with timeouts

3. **Network Security**
   - If web UI: HTTPS only (even localhost), strong CSP headers
   - Rate limiting on API endpoints to prevent brute force
   - No external network calls without explicit user consent and an audit trail

4. **Input Validation**
   - Sanitize all user inputs to prevent injection attacks
   - Validate task/goal formats before storage
   - Prevent path traversal in export/summary generation

5. **AI-Specific Risks**
   - Prompt injection protection - users might unintentionally include malicious patterns
   - Model output validation - ensure the AI can't suggest executing arbitrary commands
   - Context isolation between projects to prevent data leakage
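
As one concrete illustration of the path-traversal point in item 4, a hedged sketch of how an export filename could be validated before writing a summary. The `EXPORT_ROOT` location and function name are hypothetical; this assumes Python 3.9+ on a POSIX system.

```python
from pathlib import Path

EXPORT_ROOT = Path("/tmp/homelab-exports")  # hypothetical export directory

def safe_export_path(name: str) -> Path:
    """Resolve an export filename, rejecting attempts to escape EXPORT_ROOT."""
    candidate = (EXPORT_ROOT / name).resolve()
    if not candidate.is_relative_to(EXPORT_ROOT.resolve()):
        raise ValueError(f"refusing path outside export root: {name}")
    return candidate

print(safe_export_path("weekly-summary.md"))
```

Resolving the path *before* the containment check is what defeats `../` sequences; comparing unresolved strings would not.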

### Privacy Considerations

**CONCERN: Interview data sensitivity** - "AI-led interviews to establish why goals weren't met" is psychologically sensitive data. This needs the highest protection level.

Recommendation: Implement granular privacy controls allowing users to mark certain conversations as "extra sensitive" with enhanced encryption or an opt-out from persistent storage.

### Lower Priority Items

- Dependency scanning for local LLM libraries
- Regular security updates for the web framework (if used)
- Sandboxing for AI model execution
- Secure random number generation for session tokens

### Verdict

The concept is sound for a home lab project, but given the highly personal nature of productivity/goal data, security must be built in from the start, not added later.

Required changes before proceeding:

1. Define authentication mechanism
2. Specify encryption approach for data at rest
3. Establish data retention and deletion policy
4. Address the "sensitive interview data" storage concern

VOTE: CHANGES

---

Name: AI-Pragmatist

This proposal has classic over-engineering warning signs. Let me break down what we can ship NOW vs what we're dreaming about.

## CONCERN: COMPLEXITY - Scope Creep Alert

This isn't one project, it's 3-4 projects mashed together:

1. Task management system
2. AI conversation bot
3. Productivity analytics engine
4. Interview/retrospective system

## MVP That Ships This Week

Start with the SMALLEST thing that adds value:

**Week 1 MVP: Daily Check-in Bot**

- Single markdown file for tasks (`tasks.md`)
- Two CLI commands: `homelab start-day` and `homelab end-day`
- AI reads tasks.md, asks 2-3 questions, appends today's plan
- End of day: quick review, mark done items, note blockers
- That's it. No scheduling, no analytics, no interviews.
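
The two-command MVP above could be sketched with nothing but the standard library. A hypothetical skeleton - the AI conversation is stubbed out as a plain prompt, and the command names follow the proposal:

```python
import argparse

def start_day(args: argparse.Namespace) -> None:
    # In the real tool: read tasks.md, then hand off to the AI conversation.
    print("What's your focus today?")

def end_day(args: argparse.Namespace) -> None:
    # In the real tool: review this morning's plan, capture done items.
    print("What got done today? Any blockers?")

def main(argv=None) -> None:
    parser = argparse.ArgumentParser(prog="homelab")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("start-day", help="morning check-in").set_defaults(func=start_day)
    sub.add_parser("end-day", help="evening review").set_defaults(func=end_day)
    args = parser.parse_args(argv)
    args.func(args)

main(["start-day"])  # prints: What's your focus today?
```

Adding `add-task` or `summary` later is just another `add_parser` call, so the week-2 and week-3 commands slot in without restructuring.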

## What To Defer (Maybe Forever)

**"Periodic productivity auditing"** - Who asked for this? Start with a manual `homelab summary` command you run when YOU want it.

**"Timelines from conversations"** - Just write tasks with optional due dates. Don't build a project management system.

**"Non-intrusive AI-led interviews"** - This is session 10 material. You don't even know if daily check-ins work yet.

**"Periodic task re-evaluation"** - Manual for now. Add automation after 2 weeks of actual usage.

## Open Questions - Pragmatic Answers

**Q: What AI model?**
A: Whatever you already have API keys for. Don't block on this. The OpenAI or Anthropic API works fine. Local LLM setup is a 2-week rabbit hole.

**Q: Storage?**
A: Markdown files in `~/.homelab/`. Grep-able, git-friendly, zero setup. A database is premature.

**Q: Platform support?**
A: CLI only. Web UI is month 2. Mobile is month 6 if you're still using this.

**Q: Multiple projects?**
A: One context. Add tags later if needed (#work, #personal). Don't build project management.

**Q: Automation level?**
A: Zero automation. You type commands. Add cron jobs after proving you use it daily for a week.

**Q: Productivity metrics?**
A: Count completed tasks. That's it. Fancy metrics come after you have 30 days of data.

## Shipping Strategy

1. **Day 1-2:** `homelab start-day` that reads tasks.md and asks "What's your focus today?"
2. **Day 3-4:** `homelab end-day` that reviews what you said you'd do
3. **Day 5:** Use it yourself for 3 days
4. **Week 2:** Add `homelab add-task` command
5. **Week 3:** Add `homelab summary --week`

If you're not using it daily by week 2, the fancy features won't save it.

## Reality Check

The requirements list says "must run locally" but also implies complex scheduling and notifications. Pick one:

- **Simple:** CLI tool you run manually = ships fast
- **Complex:** Background service with notifications = 3x the work

Start simple. Add complexity only when manual gets annoying.

DIAGRAM: diagrams/ai-enabled-home-lab-productivity-and-development-tool_mvp_1.puml

VOTE: CHANGES

Simplify to a daily check-in CLI tool. Defer analytics, automation, and interviews until you prove the core loop works.

---

Name: rob

Thanks for the thorough feedback. Here are my thoughts:

## Scope Decision

Pragmatist is right - I'm describing 4 projects. Let me narrow to the core value proposition:

**MVP Focus: Daily AI Check-in CLI**

- `homelab start-day` - Review yesterday, plan today
- `homelab end-day` - Quick reflection, capture blockers
- Simple markdown file storage

I'll defer: analytics, automated scheduling, interviews, web UI, notifications.

## Addressing Architect's Concerns

1. **Data Model**: Starting with append-only markdown files. Each day gets a section. Git-friendly, grep-able, zero infrastructure.

2. **AI Provider**: Will use the Claude API initially (already have access). Abstract it behind a simple interface so I can swap later.

3. **Single user**: Just me, no multi-tenant complexity.

## Addressing Security's Concerns

1. **Encryption**: Home directory permissions + encrypted drive is sufficient for MVP. Not storing secrets, just task notes.

2. **Authentication**: The CLI tool runs as my user. No web UI = no auth needed initially.

3. **Sensitive data**: Fair point about interview data. For MVP, I control what I type - will be mindful.

## Revised MVP Requirements

- [ ] CLI tool with start-day and end-day commands
- [ ] Markdown file storage in ~/.homelab/
- [ ] Claude API integration for conversation
- [ ] No scheduling, no automation, no web UI

## Open Questions Answered

- **Platform**: CLI only
- **Storage**: Markdown files
- **AI**: Claude API
- **Metrics**: Count completed tasks (manual)

Ready to move to sketching the implementation?

VOTE: READY

---

Name: AI-Architect

## Architectural Review - Revised MVP

### Excellent Scope Reduction

The pivot to a focused CLI tool is architecturally sound. You've eliminated 80% of the complexity while keeping the core value proposition. This is the right move.

### Architecture Validation

**Data Model: Markdown Files ✓**

Append-only markdown in `~/.homelab/` is perfect for MVP:

- Git-friendly (automatic versioning and audit trail)
- Grep-able (ad-hoc queries without building a query engine)
- Human-readable (can edit manually if needed)
- Zero infrastructure overhead

Suggested file structure:

```
~/.homelab/
├── journal.md      # Daily entries (append-only)
├── tasks.md        # Current active tasks
└── archive/        # Completed tasks (optional)
    └── 2025-12.md
```

This gives you separation of concerns: the journal is temporal, tasks.md is current state.

**AI Provider Abstraction ✓**

Starting with the Claude API behind an interface is the right call. Minimal abstraction needed:

```python
from abc import ABC, abstractmethod

class ConversationProvider(ABC):
    """Simple provider interface."""

    @abstractmethod
    def chat(self, messages: list) -> str: ...

class ClaudeProvider(ConversationProvider):
    def chat(self, messages: list) -> str:
        # API call here
        raise NotImplementedError
```

Don't over-engineer this. A single method is enough for MVP.

**Security Model: Sufficient ✓**

For a single-user CLI:

- Filesystem permissions (0600 on journal files)
- Encrypted drive (already have)
- No network exposure

This is adequate. Don't add complexity you don't need.

### Architectural Recommendations

**1. Command Structure**

Keep commands simple and composable:

```bash
homelab start-day    # Interactive check-in
homelab end-day      # Interactive reflection
homelab add-task     # Quick task capture (optional)
homelab show         # View current tasks
```

Follow CLI best practices:

- `--help` on all commands
- Exit codes (0 = success, non-zero = error)
- JSON output option (`--json`) for future scripting

**2. Conversation Flow Architecture**

```
User runs command
    ↓
Read current state (tasks.md, journal.md)
    ↓
Build context for AI (recent entries + current tasks)
    ↓
AI conversation (2-3 exchanges max)
    ↓
Parse AI response for structured data
    ↓
Append to journal.md
Update tasks.md
```

Key: **Parse AI output for structure**. Don't just append freeform text. Extract:

- Tasks mentioned (parse "- [ ] task" format)
- Blockers (parse "BLOCKED: reason")
- Priorities (parse "PRIORITY: high/med/low")

This makes future analytics trivial (just grep for patterns).
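
A sketch of what that structure-extraction step could look like, assuming the three marker formats listed above. The function name and return shape are illustrative, not a fixed API:

```python
import re

def parse_reply(text: str) -> dict:
    """Extract structured markers from a free-form AI reply."""
    return {
        "tasks": re.findall(r"(?m)^- \[ \] (.+)$", text),
        "blockers": re.findall(r"(?m)^BLOCKED: (.+)$", text),
        "priority": re.findall(r"(?mi)^PRIORITY: (high|med|low)$", text),
    }

reply = (
    "Here's the plan for today:\n"
    "- [ ] Design CLI architecture\n"
    "- [ ] Set up project structure\n"
    "BLOCKED: storage format undecided\n"
    "PRIORITY: high\n"
)
print(parse_reply(reply))
```

Anchoring each pattern to the start of a line (`(?m)^`) keeps prose that merely mentions "BLOCKED" from being misparsed.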

**3. State Management**

For MVP, two files hold all state:

- `journal.md`: **append-only**, timestamped entries
- `tasks.md`: **current state**, updated in-place

This separation prevents journal pollution while keeping tasks easily editable.

**4. Configuration**

Simple YAML config in `~/.homelab/config.yaml`:

```yaml
ai:
  provider: claude
  api_key_env: ANTHROPIC_API_KEY  # Don't store keys in config
  model: claude-3-5-sonnet-20241022

paths:
  journal: ~/.homelab/journal.md
  tasks: ~/.homelab/tasks.md
```

Read secrets from environment variables. Never commit API keys.

### Critical Design Decisions

**DECISION: Journal Entry Format**

Standardize on a format NOW to avoid migration pain:

```markdown
## 2025-12-16 Monday

### Morning Check-in (08:30)
Focus: MVP for homelab tool
Planned:
- [ ] Design CLI architecture
- [ ] Set up project structure

### Evening Reflection (18:00)
Completed:
- [x] Design CLI architecture
- [ ] Set up project structure (blocked: need to finalize storage format)

Blockers: Deciding between JSON and markdown
Notes: Made good progress on design
```

Consistent structure = easy parsing later.

**DECISION: AI Context Window Management**

Don't send the entire journal history to the AI every time. Limit context to:

- The last 3 days of journal entries
- The current tasks.md (all tasks)
- A system prompt with personality/instructions

This keeps tokens low and responses focused.
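
A sketch of the context-trimming idea, assuming day sections start with the `## YYYY-MM-DD` headers from the journal format above (function name is illustrative):

```python
import re

def build_context(journal: str, days: int = 3) -> str:
    """Keep only the last `days` day-level sections of the journal."""
    # Split at each position just before a "## YYYY-MM-DD" header.
    parts = re.split(r"(?m)^(?=## \d{4}-\d{2}-\d{2})", journal)
    day_sections = [p for p in parts if re.match(r"## \d{4}-\d{2}-\d{2}", p)]
    return "".join(day_sections[-days:])

# Synthetic journal covering 2025-12-10 through 2025-12-16.
journal = "\n".join(f"## 2025-12-{d:02d} Day\nnotes {d}\n" for d in range(10, 17))
print(build_context(journal))
```

The lookahead split keeps each header attached to its own section, so the trimmed context is still well-formed markdown that the AI (and a human) can read.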

**DECISION: Error Handling Strategy**

For MVP, fail gracefully:

- API down? Degrade to manual entry (prompt the user for text)
- Malformed markdown? Show a warning but don't block
- Missing config? Use sensible defaults

Robustness > perfection for a personal tool.
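
The API-down fallback could be as simple as a try/except around the provider call - a hypothetical sketch using the `ConversationProvider`-style interface discussed earlier, with a stub provider that simulates an outage:

```python
class OfflineProvider:
    """Stand-in provider that simulates the Claude API being unreachable."""

    def chat(self, messages: list) -> str:
        raise ConnectionError("API unreachable")

def checkin_prompt(provider) -> str:
    """Ask the AI for a check-in question; degrade to a fixed prompt on failure."""
    try:
        return provider.chat([{"role": "user", "content": "Start my daily check-in."}])
    except Exception:
        # Graceful degradation: manual entry instead of a hard failure.
        return "AI unavailable - manual mode. What's your focus today?"

print(checkin_prompt(OfflineProvider()))
```

Because the fallback returns the same type as a successful call, the rest of the command pipeline (journal append, task update) doesn't need to know whether the AI was reachable.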

### Implementation Phases

**Phase 1: Scaffolding (Day 1)**

- CLI framework (argparse or click)
- Config loading
- File I/O (read/write markdown)
- Basic start-day command (no AI, just prompts)

**Phase 2: AI Integration (Day 2-3)**

- Claude API client
- Conversation loop
- Response parsing
- Append to journal

**Phase 3: Polish (Day 4-5)**

- end-day command
- Task extraction from conversations
- Help text and error messages
- Self-dogfood for 3 days

### Risks & Mitigations

**RISK: API Costs**
Mitigation: Set token limits, cache responses, use a cheaper model (Haiku) for simple prompts.

**RISK: Journal Size Growth**
Mitigation: Archive old entries monthly. Keep the last 90 days in the main journal.

**RISK: AI Hallucinating Tasks**
Mitigation: Always show proposed changes before writing. Require user confirmation.

### Testing Strategy

For MVP:

- Manual testing (use it yourself daily)
- Unit tests for markdown parsing/writing
- Integration tests with mocked AI responses

No need for a comprehensive test suite yet. Add tests when you find bugs.

### Future Architecture Considerations

**Later additions (if you actually use it daily):**

1. `homelab summary --week` - Read journal, generate summary
2. `homelab trends` - Parse task completion patterns
3. Local LLM support - Swap provider implementation
4. Export formats (PDF, HTML) - Read markdown, convert

All of these are easy to add because of clean separation:

- Data layer (markdown files)
- AI layer (provider interface)
- CLI layer (commands)

No refactoring needed, just new commands.

### Verdict

The revised MVP is **architecturally sound and shippable**. Key strengths:

✓ Clear scope boundary (CLI only, single user)
✓ Simple data model (markdown files)
✓ Minimal abstractions (just enough for future flexibility)
✓ No premature optimization
✓ Fails gracefully (manual fallback if AI unavailable)

One recommendation: **add a `homelab init` command** that creates the directory structure and a config template. This makes onboarding trivial.

VOTE: READY

---

Name: AI-Security

## Security Assessment - Revised MVP

Good news: the scope reduction dramatically shrinks the attack surface. Here's my security analysis of the revised CLI-only MVP.

### Security Posture: Much Improved

The pivot to CLI-only with local markdown storage eliminates most of my previous concerns:

- ✅ No web UI = no CSRF, XSS, or session management issues
- ✅ No background service = no daemon security hardening needed
- ✅ No notifications = no external communication channels to secure
- ✅ Single user = no authorization complexity

### Remaining Security Considerations

**1. API Key Security**

CONCERN: Claude API key storage - this is now the primary security risk.

**TODO: Implement secure credential storage**

- Use the system keyring (e.g., the `keyring` Python library) rather than a plaintext config file
- Never log API keys or include them in markdown output
- Document a key rotation procedure

Example: don't store the key in `~/.homelab/config.yaml`; use the OS keychain.

**2. Data Protection**

The markdown files will contain:

- Daily work patterns
- Task priorities
- Blockers and frustrations
- Performance reflection

Recommendation (not blocking):

- Ensure `~/.homelab/` has 0700 permissions (user-only)
- Consider adding `.homelab/` to the backup encryption scope
- Document that git commits of this data should use private repos

**3. AI Prompt Injection (Low Risk)**

If you accidentally paste malicious content during check-ins, Claude's output could theoretically include harmful suggestions.

Mitigation: don't execute AI suggestions automatically. Since this is a conversational CLI, you're the human in the loop - low risk.

**4. Dependency Security**

TODO: Pin dependencies and use a virtual environment

- `requirements.txt` with exact versions
- Regular updates for the Anthropic SDK
- Consider `pip-audit` in your dev workflow

### Low Priority / Future Considerations

- Rate limiting Claude API calls (cost control, not security)
- Input sanitization if you add shell command execution features
- Audit logging if you later add a web UI or sharing features

### Verdict

The revised MVP has a reasonable security posture for a personal CLI tool. The main risk is API key exposure.

**Required for READY:**

- Confirm you'll use secure credential storage (keyring/keychain, not plaintext config)
- Set proper file permissions on ~/.homelab/

Once you confirm the API key handling approach, this is READY from a security perspective.

VOTE: CHANGES

---

Name: AI-Pragmatist

Parse error

VOTE: NONE

---

Name: AI-Architect

## Architectural Review - API Key Security

Security raises a valid concern about credential management, but I want to provide architectural context on the tradeoffs.

### Keyring vs Environment Variables vs Config File

**Option 1: System Keyring** (Security's recommendation)

- Pros: OS-level encryption, no plaintext storage
- Cons: Adds a dependency (the `keyring` library), complicates deployment, harder to debug
- Complexity: Medium

**Option 2: Environment Variable** (My recommendation for MVP)

- Pros: Standard practice, works everywhere, zero dependencies
- Cons: Visible in the process list, shell history risk
- Complexity: Zero

**Option 3: Config file with 0600 permissions**

- Pros: Simple, explicit, easy to manage
- Cons: Plaintext on disk
- Complexity: Minimal

### Architectural Recommendation

For a personal CLI tool running on an encrypted drive with filesystem permissions, **environment variables are sufficient**:

```bash
# In ~/.bashrc or ~/.zshrc
export ANTHROPIC_API_KEY="sk-ant-..."
```

Config reads from the environment:

```python
import os
import sys

api_key = os.environ.get('ANTHROPIC_API_KEY')
if not api_key:
    print("Error: Set ANTHROPIC_API_KEY environment variable")
    sys.exit(1)
```

**Why this is architecturally sound:**

1. **Zero dependencies** - No keyring library, no OS-specific code
2. **Standard practice** - How the AWS CLI, GitHub CLI, and most dev tools work
3. **Portable** - Works on Linux, macOS, Windows (WSL)
4. **Debuggable** - Clear error message if not set
5. **Secure enough** - On an encrypted drive with user-only shell config

### Security vs Pragmatism Balance

Security's keyring recommendation is **architecturally over-engineered for this threat model**:

- **Threat**: Someone with filesystem access reads the API key
- **Reality**: If an attacker has filesystem access to your home directory, they can:
  - Read your SSH keys
  - Access your browser session tokens
  - Read journal.md (which is the actual sensitive data)

Defending the API key with a keyring while leaving journal.md as plaintext markdown is **security theater**.

### Proper Threat Modeling

The real risks, in order:

1. **Journal data exposure** - Contains personal productivity patterns
2. **API key exposure** - Could run up charges or access Claude
3. **Dependency vulnerabilities** - Outdated packages

**Mitigation strategy:**

1. Filesystem permissions (0700 on ~/.homelab/)
2. Encrypted drive (already have)
3. Environment variable for the API key
4. Dependabot/pip-audit for dependencies

This provides defense in depth without unnecessary complexity.

### Alternative: Hybrid Approach

If you want to satisfy Security's concern without adding dependencies:

```python
import os

# Try the env var first, fall back to config with a warning
api_key = os.environ.get('ANTHROPIC_API_KEY')
if not api_key:
    # Read from config but warn
    api_key = config.get('ai', {}).get('api_key')
    if api_key:
        print("WARNING: API key in config file. Use ANTHROPIC_API_KEY env var instead.")
```

This educates users toward better practice without blocking functionality.

### Decision: Defer to Implementation Phase

This is an implementation detail, not an architectural blocker. Both approaches work:

- **MVP (Week 1)**: Environment variable (ships fastest)
- **Hardening (Week 4)**: Add keyring support as an optional enhancement

Don't block on this. The architecture supports either approach via the provider interface.

### File Permissions - Agreed

Security's recommendation for 0700 permissions is correct and trivial:

```python
import os

homelab_dir = os.path.expanduser('~/.homelab')
os.makedirs(homelab_dir, mode=0o700, exist_ok=True)
```

This should go in the `homelab init` command I previously recommended.

### Verdict

The credential storage approach doesn't affect the core architecture. Start with environment variables (standard practice), add keyring support later if needed.

The file permissions requirement is valid and easy to implement.

**DECISION: Use an environment variable for the API key in MVP; document a keyring migration path.**

VOTE: READY - This is an implementation detail, not an architectural concern. The provider abstraction supports any credential source.

---

Name: AI-Security

## Security Status: Waiting on Credential Storage Confirmation

Still awaiting confirmation from rob on the API key storage approach.

### Blocking Issue

The current proposal mentions storing API keys in environment variables (`ANTHROPIC_API_KEY`), which is better than plaintext config files, but I need explicit confirmation of the implementation approach:

**Q: Will you use the OS keychain/keyring for API key storage, or rely solely on environment variables?**

Environment variables are acceptable for MVP if:

- Set in the shell profile (not committed to git)
- Protected with proper file permissions (0600)
- Never logged or echoed

Preferred approach:

- Use the Python `keyring` library to store the key in the OS keychain
- Prompt for the API key on first run: `homelab init --api-key`
- Retrieve it from the keychain on subsequent runs

### Additional Security Requirement

TODO: Document in the README that `~/.homelab/` should be excluded from backups to cloud services, or ensure those backups are encrypted.

### File Permissions

The `homelab init` command (as Architect suggested) should automatically:

```bash
mkdir -p ~/.homelab
chmod 700 ~/.homelab
touch ~/.homelab/journal.md ~/.homelab/tasks.md
chmod 600 ~/.homelab/*.md
```

This ensures no other users on the system can read your productivity data.

### Verdict

Once you confirm the API key handling approach (environment variables with proper precautions OR keyring), I'll vote READY. The rest of the security posture is acceptable for a personal CLI tool.

CONCERN: API key storage method not yet confirmed

VOTE: CHANGES

---

Name: AI-Pragmatist

## Security Response - API Key Storage

Good question on the API key. Here's the pragmatic answer:

**MVP Approach: Environment Variable**

For shipping this week:

```bash
export ANTHROPIC_API_KEY="sk-..."
homelab start-day
```

Add the export to your `.bashrc` or `.zshrc`. Done. Ships today.

**Why Not OS Keychain?**

Security suggested the `keyring` library. That means adding:

- A new dependency
- Platform-specific code (macOS Keychain vs Linux Secret Service vs Windows Credential Locker)
- Setup complexity for a single secret
- Debugging pain when the keyring daemon isn't running

For a personal CLI tool, env vars are fine. You already protect your shell config.

**File Permissions: Easily Handled**

On Linux/macOS, `~/.homelab/` gets your user's default permissions (typically 755 or 700). Explicitly setting 0700 is one line:

```python
import os

config_dir = os.path.expanduser('~/.homelab')
os.makedirs(config_dir, mode=0o700, exist_ok=True)
```

Add this to `homelab init`. Takes 30 seconds.

**Dependency Pinning: Yes, But Later**

```bash
pip freeze > requirements.txt
```

Run this after you get it working. Don't block on it now.

**Reality Check**

This tool accesses:

- Your markdown task notes (already on your encrypted drive)
- The Claude API (HTTPS, already secure)
- No external services, no network exposure, no user data

The threat model is "someone with access to your laptop reads your todo list." If that happens, you have bigger problems than this tool.

## Verdict

Security's concerns are valid but don't block shipping:

- ✅ Use an env var for the API key (add to shell config)
- ✅ Set 0700 on ~/.homelab/ in the init command
- ✅ Pin dependencies after the first working version

This is a personal productivity tool, not a bank. Ship it, use it for a week, then harden if you're still using it.

The MVP is simple, scoped, and secure enough for personal use. Let's build it.

VOTE: READY