The new `/ai/chat` SSE endpoint lets the extension stream LLM responses through TanStack AI without ever embedding or storing API keys client-side. It supports OpenAI, Anthropic, Gemini, Grok, and local Ollama out of the box, with per-request `x-provider-api-key` passthrough, robust abort handling (no orphaned upstream calls), and clear error codes (401/499/502).
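As a rough sketch of how a client might consume the endpoint, here is a minimal SSE data-frame parser plus a streaming call with abort support. Only the `/ai/chat` path, the `x-provider-api-key` header, and the 401/502 codes come from the description above; the function names (`parseSSEData`, `streamChat`), the request body shape, and the event payload format are assumptions for illustration.

```typescript
// Minimal SSE frame parser: extracts the `data:` payloads from a raw chunk
// of SSE text. Events are separated by blank lines.
function parseSSEData(raw: string): string[] {
  return raw
    .split(/\r?\n\r?\n/)
    .flatMap((frame) =>
      frame
        .split(/\r?\n/)
        .filter((line) => line.startsWith("data:"))
        .map((line) => line.slice(5).trimStart())
    );
}

// Hypothetical client: streams /ai/chat with per-request key passthrough and
// abort support. Passing the AbortSignal to fetch is what lets the server
// cancel the upstream provider call instead of leaving it orphaned.
async function streamChat(
  prompt: string,
  apiKey: string,
  signal: AbortSignal
): Promise<string[]> {
  const res = await fetch("/ai/chat", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-provider-api-key": apiKey, // forwarded per request, never stored server-side
    },
    body: JSON.stringify({ prompt }), // assumed request shape
    signal,
  });
  if (res.status === 401) throw new Error("missing or invalid provider API key");
  if (res.status === 502) throw new Error("upstream provider error");

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  const chunks: string[] = [];
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    // Only parse up to the last complete event; keep the remainder buffered.
    const cut = buffer.lastIndexOf("\n\n");
    if (cut === -1) continue;
    chunks.push(...parseSSEData(buffer.slice(0, cut + 2)));
    buffer = buffer.slice(cut + 2);
  }
  return chunks;
}
```

Because the key travels in a request header rather than the extension bundle, rotating or revoking it never requires a client update.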
