Fixes four issues identified in the community code review:
- Session persistence broken on Windows: session keys like
"telegram:123456" contain ':', which is illegal in Windows
filenames. filepath.Base() strips drive-letter prefixes on Windows,
causing Save() to silently fail. Added sanitizeFilename() to
replace invalid chars in the filename while keeping the original
key in the JSON payload.
- HTTP client with no timeout: HTTPProvider used Timeout: 0 (infinite
wait), which can hang the entire agent if an API endpoint becomes
unresponsive. Set a 120s safety timeout.
- Slack AllowFrom type mismatch: SlackConfig used plain []string
while every other channel uses FlexibleStringSlice, so numeric
user IDs in Slack config would fail to parse.
- Token estimation wrong for CJK: estimateTokens() divided byte
length by 4, but CJK characters are 3 bytes each, causing ~3x
overestimation and premature summarization. Switched to
utf8.RuneCountInString() / 3 for better cross-language accuracy.
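The rune-based estimator can be sketched as follows (a minimal sketch; the function name matches the description above, the clamping detail is an assumption):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// estimateTokens approximates the token count of s. Counting runes
// instead of bytes keeps CJK text (3 bytes per rune in UTF-8) from
// being overestimated ~3x; dividing by 3 is a rough cross-language
// average between ~4 chars/token English and ~1 token/char CJK.
func estimateTokens(s string) int {
	n := utf8.RuneCountInString(s) / 3
	if n < 1 && len(s) > 0 {
		return 1 // assumed: never report zero tokens for non-empty text
	}
	return n
}

func main() {
	fmt.Println(estimateTokens("hello world")) // 11 runes -> 3
	fmt.Println(estimateTokens("你好，世界"))   // 5 runes, 15 bytes -> 1 (byte/4 would say 3)
}
```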
Also added unit tests for the session filename sanitization.
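The sanitizer might look like the sketch below (assumed implementation: the set of replaced characters is the Windows-invalid set, and the replacement character is '_'):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeFilename replaces characters that are invalid in Windows
// filenames so session keys like "telegram:123456" map to a safe
// on-disk name. The original key is kept inside the JSON payload,
// so the mapping does not need to be reversible.
func sanitizeFilename(key string) string {
	return strings.Map(func(r rune) rune {
		switch r {
		case '<', '>', ':', '"', '/', '\\', '|', '?', '*':
			return '_'
		}
		if r < 0x20 { // control characters are also invalid on Windows
			return '_'
		}
		return r
	}, key)
}

func main() {
	fmt.Println(sanitizeFilename("telegram:123456")) // telegram_123456
}
```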
Ref #116
Add ClaudeCliProvider that executes the local CLI as a subprocess,
enabling PicoClaw to leverage advanced capabilities (MCP tools,
workspace awareness, session management) through any messaging channel.
- Implement LLMProvider interface via subprocess execution
- Support --system-prompt, --model, --output-format json flags
- Parse real v2.x JSON response format including usage tokens
- Handle error responses, stderr, context cancellation
- Register "claude-cli", "claude-code", "claudecode" aliases in CreateProvider
- 56 unit tests with mock scripts + 3 integration tests against real binary
- 100% coverage on all functions except stripToolCallsJSON (85.7%)
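The subprocess flow can be sketched as below. Names and the JSON field layout are illustrative assumptions, not the actual PicoClaw types or the exact v2.x CLI schema:

```go
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"os/exec"
)

// cliResponse is an assumed shape for the CLI's --output-format json
// output; field names here are illustrative, not the exact v2.x schema.
type cliResponse struct {
	Result string `json:"result"`
	Usage  struct {
		InputTokens  int `json:"input_tokens"`
		OutputTokens int `json:"output_tokens"`
	} `json:"usage"`
}

// parseCliResponse decodes the JSON payload printed on stdout.
func parseCliResponse(data []byte) (*cliResponse, error) {
	var resp cliResponse
	if err := json.Unmarshal(data, &resp); err != nil {
		return nil, fmt.Errorf("parse cli output: %w", err)
	}
	return &resp, nil
}

// runClaudeCli sketches the subprocess execution: the context bounds
// the call (cancellation kills the child), stderr is captured
// separately for error reporting, and stdout is decoded as JSON.
func runClaudeCli(ctx context.Context, bin, systemPrompt, model, prompt string) (*cliResponse, error) {
	args := []string{"--system-prompt", systemPrompt, "--model", model, "--output-format", "json", prompt}
	cmd := exec.CommandContext(ctx, bin, args...)
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		return nil, fmt.Errorf("claude cli: %w (stderr: %s)", err, stderr.String())
	}
	return parseCliResponse(stdout.Bytes())
}

func main() {
	_ = runClaudeCli // wiring shown only; running requires a real binary
}
```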
- Add Provider field to AgentDefaults struct
- Modify CreateProvider to use the explicit provider field first, falling back to model-name detection
- Allows using models without provider prefix (e.g., llama-3.1-8b-instant instead of groq/llama-3.1-8b-instant)
- Supports all providers: groq, openai, anthropic, openrouter, zhipu, gemini, vllm
- Backward compatible with existing configs
Fixes an issue where models without a provider prefix could not use configured API keys.
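The lookup order can be sketched as follows (function and return names are illustrative, not PicoClaw's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// resolveProvider applies the precedence described above: an explicit
// provider field wins; otherwise the provider is inferred from a
// "provider/model" prefix; otherwise it is left empty so the caller
// can fall back to model-name detection.
func resolveProvider(explicit, model string) (provider, bareModel string) {
	if explicit != "" {
		return explicit, model
	}
	if p, m, ok := strings.Cut(model, "/"); ok {
		return p, m
	}
	return "", model
}

func main() {
	fmt.Println(resolveProvider("groq", "llama-3.1-8b-instant")) // explicit field wins
	fmt.Println(resolveProvider("", "groq/llama-3.1-8b-instant")) // inferred from prefix
}
```

Keeping the prefix path intact is what makes the change backward compatible: old `provider/model` configs resolve exactly as before.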
Add ClaudeProvider (anthropic-sdk-go) and CodexProvider (openai-go) that
use the correct subscription endpoints and API formats:
- CodexProvider: chatgpt.com/backend-api/codex/responses (Responses API)
with OAuth Bearer auth and Chatgpt-Account-Id header
- ClaudeProvider: api.anthropic.com/v1/messages (Messages API) with
Authorization: Bearer token auth
Update CreateProvider() routing to use new SDK-based providers when
auth_method is "oauth" or "token", removing the stopgap that sent
subscription tokens to pay-per-token endpoints.
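The routing condition reduces to a small predicate (a sketch; the function name is illustrative, and only the two auth_method values named above select the SDK-based path):

```go
package main

import "fmt"

// usesSubscriptionEndpoint reports whether credentials should be
// routed to the SDK-based Claude/Codex providers that talk to the
// subscription endpoints; anything else keeps the pay-per-token path.
func usesSubscriptionEndpoint(authMethod string) bool {
	switch authMethod {
	case "oauth", "token":
		return true
	default:
		return false
	}
}

func main() {
	fmt.Println(usesSubscriptionEndpoint("oauth"))   // subscription path
	fmt.Println(usesSubscriptionEndpoint("api_key")) // pay-per-token path
}
```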
Closes #18
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>