merged
EpicenterHQ/epicenter • 11:59 PM - Feb 20, 2026
braden-w

A new /ai/chat SSE endpoint lets the extension stream LLM responses via TanStack AI without ever embedding or storing API keys client-side. It supports OpenAI, Anthropic, Gemini, Grok, and local Ollama out of the box, with per-request x-provider-api-key passthrough, solid abort handling (no orphaned calls), and clear error codes (401 auth failure, 499 client abort, 502 upstream error).
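A rough sketch of how a client might consume this endpoint. Only the /ai/chat path, the x-provider-api-key header, and the error codes come from the announcement; the request body shape, the SSE event format, and all function names here are illustrative assumptions, not the actual epicenter API.

```typescript
// Minimal SSE frame parser (assumption: the stream is standard
// text/event-stream, with events separated by blank lines and
// payloads carried on "data:" lines).
function parseSSE(raw: string): string[] {
  return raw
    .split("\n\n")
    .map((frame) =>
      frame
        .split("\n")
        .filter((line) => line.startsWith("data:"))
        .map((line) => line.slice("data:".length).trim())
        .join("\n"),
    )
    .filter((data) => data.length > 0 && data !== "[DONE]");
}

// Hypothetical streaming call with abort support. Passing an
// AbortSignal and cancelling it is what the server-side abort
// handling (and the 499 code) would respond to.
async function streamChat(
  prompt: string,
  apiKey: string,
  signal: AbortSignal,
): Promise<void> {
  const res = await fetch("/ai/chat", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      // per-request passthrough; the key is never stored server-side
      "x-provider-api-key": apiKey,
    },
    body: JSON.stringify({ messages: [{ role: "user", content: prompt }] }),
    signal,
  });
  if (!res.ok) {
    // 401 = bad/missing key, 502 = upstream provider failure
    throw new Error(`chat failed: ${res.status}`);
  }
  // ...read res.body incrementally and feed chunks through parseSSE...
}
```

The parser strips the `[DONE]` sentinel some providers emit, so callers only ever see real content events.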

Server gets streaming AI chat - EpicenterHQ/epicenter