# Sidecar Protocol
All communication between the Tauri Rust core and the Python sidecar is newline-delimited JSON (NDJSON) over the process's stdin/stdout pipes. Each message is a single JSON object on one line.
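The framing can be sketched from the sidecar's side in a few lines of Python. This is a minimal sketch, not the actual implementation: `run_sidecar` and its `handle` callback are illustrative names, and a real sidecar would dispatch on the message `type` rather than use a single callback.

```python
import json
from typing import Callable

def encode(msg: dict) -> str:
    """Serialize one protocol message as a single NDJSON line."""
    return json.dumps(msg) + "\n"

def run_sidecar(stdin, stdout, handle: Callable[[dict], list]) -> None:
    """Read Tauri -> sidecar messages line by line and stream replies back."""
    for line in stdin:
        line = line.strip()
        if not line:
            continue  # tolerate blank lines between messages
        for reply in handle(json.loads(line)):
            stdout.write(encode(reply))
            stdout.flush()  # flush per message so the Rust core sees it immediately
```

Flushing after every line matters: stdout is block-buffered when attached to a pipe, so without an explicit flush the Tauri side would see tokens arrive in bursts instead of a live stream.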
## Tauri → Sidecar (stdin)

### User input

```json
{"type": "input", "text": "refactor this function to use async/await"}
```

### Change working directory

```json
{"type": "cd", "path": "/Users/alice/projects/myapp"}
```

### Reset conversation

```json
{"type": "reset"}
```

### Interrupt running agent

```json
{"type": "interrupt"}
```

### Load a model

```json
{"type": "load_model", "model_path": "~/CyberPaw/models/gemma-4-E4B-it-Q4_K_M.gguf", "backend": "llamacpp"}
```

### Update config at runtime

```json
{"type": "config", "patch": {"permission_mode": "auto_read", "max_new_tokens": 2048}}
```

### Tool permission response

```json
{"type": "tool_ack", "id": "perm_a1b2c3d4", "decision": "allow"}
```

### Download a model

```json
{"type": "download_start", "model_id": "gemma-4-e2b-q4km", "dest_dir": "~/CyberPaw/models"}
```

### Request current model status

```json
{"type": "status_request"}
```
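Inbound messages are naturally routed by their `"type"` field. A minimal Python sketch of such a dispatcher follows; the handler table and its entries are illustrative assumptions, not the actual implementation:

```python
def dispatch(msg: dict, handlers: dict) -> list:
    """Route one Tauri -> sidecar message to its handler, or report an error."""
    handler = handlers.get(msg.get("type"))
    if handler is None:
        # Unknown types become error messages instead of crashing the sidecar.
        return [{"type": "error", "message": f"Unknown message type: {msg.get('type')!r}"}]
    return handler(msg)

# Hypothetical handler table; real handlers would touch the agent state.
handlers = {
    "cd": lambda m: [{"type": "status", "phase": "idle"}],
    "status_request": lambda m: [{"type": "model_status", "loaded": False}],
}
```

Returning a list of replies lets one inbound message fan out into several outbound ones, which matches how a single `input` turn produces many `token`, `tool_*`, and `status` messages.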
## Sidecar → Tauri (stdout)

### Streamed token

```json
{"type": "token", "text": "Here is the refactored function:"}
```

### Tool call started

```json
{"type": "tool_start", "id": "tu_abc123", "tool": "Read", "input": {"file_path": "src/main.py"}}
```

### Tool call completed

```json
{"type": "tool_end", "id": "tu_abc123", "tool": "Read", "summary": "Read 142 lines", "is_error": false}
```

### Agent phase change

```json
{"type": "status", "phase": "thinking"}
{"type": "status", "phase": "tool_running", "tool": "Bash"}
{"type": "status", "phase": "idle"}
```

### Model load progress

```json
{"type": "model_progress", "pct": 42}
```

### Model ready

```json
{"type": "model_status", "loaded": true, "backend": "llama.cpp", "context_size": 32768, "max_new_tokens": 4096}
```

### Memory stats (from status_request poll)

```json
{"type": "model_status", "backend": "llama.cpp", "loaded": true, "vram_used_mb": 4200, "model_size_mb": 3800, "kv_cache_mb": 400}
```

### Generation stats (after each turn)

```json
{"type": "generation_stats", "tokens": 312, "elapsed_ms": 8400, "tokens_per_sec": 37.1}
```

### Permission request (tool needs user approval)

```json
{"type": "tool_permission_request", "id": "perm_a1b2c3d4", "tool": "Bash", "input": {"command": "rm -rf dist/"}}
```

### Download progress

```json
{"type": "download_progress", "model_id": "gemma-4-e2b-q4km", "pct": 67, "downloaded_mb": 1940.2, "total_mb": 2900.0, "speed_mbps": 12.4}
```

### Download complete

```json
{"type": "download_done", "model_id": "gemma-4-e2b-q4km", "path": "/Users/alice/CyberPaw/models/gemma-4-E2B-it-Q4_K_M.gguf"}
```

### Error

```json
{"type": "error", "message": "Model not loaded yet."}
```
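The permission round trip pairs a `tool_permission_request` with the `tool_ack` carrying the same `id`. A hedged sketch of how the sidecar side might track pending requests (the `PermissionGate` class and its methods are assumptions for illustration, not the real API):

```python
import uuid

class PermissionGate:
    """Tracks outstanding tool_permission_request messages by id."""

    def __init__(self):
        self.pending = {}  # id -> (tool, input)

    def request(self, tool: str, tool_input: dict) -> dict:
        """Build a tool_permission_request message and remember its id."""
        perm_id = "perm_" + uuid.uuid4().hex[:8]
        self.pending[perm_id] = (tool, tool_input)
        return {"type": "tool_permission_request", "id": perm_id,
                "tool": tool, "input": tool_input}

    def ack(self, msg: dict) -> bool:
        """Consume a matching tool_ack; return True if the user allowed the call."""
        self.pending.pop(msg["id"], None)
        return msg.get("decision") == "allow"
```

Matching on `id` rather than on tool name means several permission prompts can be outstanding at once without the answers getting crossed.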
## Message ordering

A typical agent turn looks like:
```
← {"type": "status", "phase": "thinking"}
← {"type": "token", "text": "I'll read the file first.\n"}
← {"type": "tool_start", "id": "t1", "tool": "Read", ...}
← {"type": "status", "phase": "tool_running", "tool": "Read"}
← {"type": "tool_end", "id": "t1", ...}
← {"type": "status", "phase": "thinking"}
← {"type": "token", "text": "The file contains..."}
← {"type": "generation_stats", ...}
← {"type": "status", "phase": "idle"}
```
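On the consuming side, a turn can be folded up by reading messages until the phase returns to `idle`. The reduction below is only a Python model of that ordering (the real consumer is the Rust core); `collect_turn` is an illustrative name:

```python
def collect_turn(messages):
    """Fold one turn's message stream into assembled text and completed tool ids."""
    text, tools = [], []
    for msg in messages:
        if msg["type"] == "token":
            text.append(msg["text"])
        elif msg["type"] == "tool_end":
            tools.append(msg["id"])
        elif msg["type"] == "status" and msg["phase"] == "idle":
            break  # idle marks the end of the turn
    return "".join(text), tools
```

Note that `token` messages may arrive both before and after a tool call within the same turn, so the consumer concatenates them rather than treating the first `tool_start` as the end of the text.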