
epicenter

0 subscribers
TypeScript · Svelte · Rust

Press shortcut → speak → get text. Free and open source. More local-first apps soon ❤️

Created Mar 2023

AGPL-3.0 license

Live activities

New /ai/chat SSE endpoint lets the extension stream LLM responses via TanStack AI without ever embedding or storing API keys client-side. Supports OpenAI, Anthropic, Gemini, Grok, and local Ollama out of the box, with per-request x-provider-api-key passthrough, solid abort handling (no orphaned calls), and clear error codes (401/499/502).
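A minimal client-side sketch of consuming the endpoint above. The `/ai/chat` path and `x-provider-api-key` header come from the summary; the request body shape, the `data: { text }` frame format, and the function names are assumptions for illustration:

```typescript
/** Parse one SSE frame ("data: {...}") into its JSON payload, or null. */
export function parseSseData(line: string): unknown {
  if (!line.startsWith("data:")) return null;
  const payload = line.slice(5).trim();
  if (payload === "" || payload === "[DONE]") return null;
  return JSON.parse(payload);
}

export async function streamChat(
  baseUrl: string,
  apiKey: string,
  messages: { role: string; content: string }[],
  onToken: (text: string) => void,
  signal?: AbortSignal, // aborting cancels the fetch, so no orphaned call
): Promise<void> {
  const res = await fetch(`${baseUrl}/ai/chat`, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-provider-api-key": apiKey, // per-request passthrough; never stored
    },
    body: JSON.stringify({ provider: "openai", messages }), // shape assumed
    signal,
  });
  if (!res.ok) throw new Error(`chat failed: ${res.status}`); // 401/499/502
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";
    for (const line of lines) {
      const data = parseSseData(line) as { text?: string } | null;
      if (data?.text) onToken(data.text);
    }
  }
}
```

Passing an `AbortSignal` straight into `fetch` is what makes the "no orphaned calls" behavior cheap: cancel in the UI and the proxied request dies with it.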

Big refactor: extension factories now return a single flat object ({...exports, whenReady?, destroy?}) and a new defineExtension() normalizer guarantees consistent whenReady + destroy handling. This improves extension ergonomics and fixes a real bug where sqlite cleanup was being silently dropped—users should see more reliable teardown and a cleaner API surface.
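A sketch of what the `defineExtension()` normalizer could look like, assuming the flat shape described above; the type names and exact semantics are guesses based on the summary, not the project's actual code:

```typescript
type Extension = Record<string, unknown> & {
  whenReady?: Promise<void>;
  destroy?: () => void | Promise<void>;
};

type NormalizedExtension = Record<string, unknown> & {
  whenReady: Promise<void>;            // always awaitable
  destroy: () => void | Promise<void>; // always callable
};

/** Normalize a flat extension object so callers never null-check lifecycle hooks. */
export function defineExtension(ext: Extension): NormalizedExtension {
  const { whenReady, destroy, ...exports } = ext;
  return {
    ...exports,
    whenReady: whenReady ?? Promise.resolve(),
    destroy: destroy ?? (() => {}),
  };
}
```

Because `destroy` is always callable after normalization, a teardown loop can invoke it unconditionally, which is exactly the kind of guarantee that prevents a cleanup hook (like sqlite's) from being silently dropped.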

Static bindings now return a DocumentHandle from open(), so you open once and do sync read()/write(), exports, and lifecycle ops off the handle—no more threading the same ID through every call. destroy is renamed to close (memory cleanup without nuking persisted data), and extensions get a typed extensions map + whenReady for better interop. Big ergonomics + clarity win for callers building document workflows.
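The handle-based flow above might look like this in use. `openDocument` here is an in-memory stand-in so the sketch is runnable; the real `open()` and the exact `DocumentHandle` surface belong to the project and are only paraphrased from the summary:

```typescript
interface DocumentHandle<T> {
  read(): T;            // sync read off the handle
  write(next: T): void; // sync write off the handle
  close(): void;        // frees memory without nuking persisted data
}

// Minimal in-memory stand-in so the flow is runnable (not the real binding).
function openDocument<T>(initial: T): DocumentHandle<T> {
  let state = initial;
  let open = true;
  const assertOpen = () => {
    if (!open) throw new Error("handle closed");
  };
  return {
    read: () => {
      assertOpen();
      return state;
    },
    write: (next) => {
      assertOpen();
      state = next;
    },
    close: () => {
      open = false;
    },
  };
}

// Open once; no more threading the same doc ID through every call.
const doc = openDocument({ title: "notes" });
doc.write({ title: "notes v2" });
doc.close(); // memory cleanup only; persisted data survives
```

The win is locality: every operation hangs off the handle you already hold, instead of each call re-resolving a document ID.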

Sync routes were refactored so rooms live at a single canonical path: /rooms/:id now handles WS upgrade + GET/POST doc state (replacing /sync + /doc). This makes client configs simpler and more REST-y—but it’s a breaking change for anyone still pointing at the old /rooms/{id}/sync or /rooms/{id}/doc URLs. ⚠️
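For anyone hit by the breaking change, a tiny migration helper like this could rewrite legacy URLs to the canonical path. The old and new paths come from the summary; the helper itself is illustrative:

```typescript
/** Canonical room path: one URL for WS upgrade and GET/POST doc state. */
export function canonicalRoomPath(roomId: string): string {
  return `/rooms/${encodeURIComponent(roomId)}`;
}

/** Rewrite a legacy /rooms/{id}/sync or /rooms/{id}/doc path to /rooms/{id}. */
export function migrateLegacyPath(path: string): string {
  const m = path.match(/^\/rooms\/([^/]+)\/(?:sync|doc)$/);
  return m ? `/rooms/${m[1]}` : path;
}
```

Dropping the `/sync` and `/doc` suffixes means client configs carry a single room URL, and the HTTP method (or WS upgrade) selects the behavior instead of the path.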

Big refactor: Extensions now return a flat object ({ ...exports, whenReady?, destroy? }) instead of nested { exports, lifecycle }, with an internal normalizer ensuring whenReady is always a Promise and destroy is always callable—so you can await per-extension readiness via extensions.X.whenReady. This also fixes a real bug where sqlite’s top-level destroy hook was being dropped, improving cleanup reliability across workspaces. 🚀

Server AI plugin got a solid TanStack AI–style refactor: stricter provider typing + isSupportedProvider() guard, adapters now return proper AnyTextAdapter, and /chat accepts modelOptions passthrough for provider-specific knobs (temp/thinking/etc.). Also improves streaming robustness by treating client disconnects as 499 Client Closed Request instead of a generic 502. 🚀
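A sketch of what an `isSupportedProvider()` guard plus `modelOptions` passthrough could look like. The provider list and the guard's name come from the summaries above; the validation function and request shape are assumptions:

```typescript
const SUPPORTED_PROVIDERS = [
  "openai",
  "anthropic",
  "gemini",
  "grok",
  "ollama",
] as const;
type Provider = (typeof SUPPORTED_PROVIDERS)[number];

/** Narrow an untrusted string to the known provider union. */
export function isSupportedProvider(value: string): value is Provider {
  return (SUPPORTED_PROVIDERS as readonly string[]).includes(value);
}

/** Hypothetical request validation: reject unknown providers up front,
 *  forward provider-specific knobs untouched via modelOptions. */
export function validateChatRequest(body: {
  provider: string;
  modelOptions?: Record<string, unknown>;
}): { provider: Provider; modelOptions: Record<string, unknown> } {
  if (!isSupportedProvider(body.provider)) {
    throw new Error(`unsupported provider: ${body.provider}`);
  }
  return { provider: body.provider, modelOptions: body.modelOptions ?? {} };
}
```

The type guard is what lets the rest of the handler work with a closed union (`Provider`) instead of a raw string, so adding a provider becomes a one-line change the compiler checks everywhere.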

New /ai/chat SSE endpoint is live via a fresh Elysia AI plugin, proxying streaming chat through TanStack AI to OpenAI/Anthropic/Gemini/Ollama/Grok (with per-request x-provider-api-key so keys never hit storage). Users can now build real-time chat UIs against the Epicenter server with better disconnect cleanup and clearer provider errors.

Big refactor to the static DocumentBinding API: open() now returns a per-doc DocumentHandle (with sync read()/write(), ydoc, and exports), and lifecycle methods are renamed destroy → close / destroyAll → closeAll. This makes document operations clearer and more ergonomic for apps (UI + content helpers updated) while preserving data and tightening types + tests across the workspace/filesystem stack.

Big refactor to make document extensions return a consistent { exports, lifecycle } shape (matching workspace extensions), unlocking a typed binding.getExports(docId) API and removing the old purge()/DocumentLifecycle patterns. Also swaps fsError() for an FS_ERRORS namespace so filesystem errors are instantly discoverable via IDE autocomplete—same error codes, nicer ergonomics. 🧩
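The `FS_ERRORS` namespace idea could be sketched like this, with error constructors keyed on one object so every case autocompletes. The namespace name comes from the summary; the specific codes and constructor shapes are illustrative:

```typescript
// One object of error constructors: typing `FS_ERRORS.` lists every case,
// instead of remembering fsError("FS_NOT_FOUND", ...) string arguments.
export const FS_ERRORS = {
  NotFound: (path: string) =>
    Object.assign(new Error(`not found: ${path}`), {
      code: "FS_NOT_FOUND" as const, // codes here are illustrative
    }),
  AlreadyExists: (path: string) =>
    Object.assign(new Error(`already exists: ${path}`), {
      code: "FS_ALREADY_EXISTS" as const,
    }),
  NotADirectory: (path: string) =>
    Object.assign(new Error(`not a directory: ${path}`), {
      code: "FS_NOT_A_DIRECTORY" as const,
    }),
} as const;
```

Same error codes on the wire, but the namespace turns "what errors can this throw?" into an IDE autocomplete question rather than a grep.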

Big server sync cleanup: WebSocket sync now lives at /rooms/{id}/sync (instead of under /workspaces), and you also get new REST endpoints to list active rooms and download a binary Yjs doc snapshot for debugging/monitoring. This makes “rooms” a first-class sync concept (not tied to workspaces) and updates clients/docs/tests to match, so integrations can target clearer URLs and introspect sync state more easily.
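A sketch of hitting the new introspection endpoints: list active rooms, then pull one room's binary Yjs snapshot. "List rooms" and "snapshot download" come from the summary; the exact paths and response shapes here are assumptions:

```typescript
/** Assumed snapshot path for debugging/monitoring. */
export const roomDocPath = (id: string): string =>
  `/rooms/${encodeURIComponent(id)}/doc`;

/** List active rooms (assumed: server returns a JSON array of room ids). */
export async function listRooms(baseUrl: string): Promise<string[]> {
  const res = await fetch(`${baseUrl}/rooms`);
  if (!res.ok) throw new Error(`list failed: ${res.status}`);
  return res.json();
}

/** Download a room's binary Yjs doc snapshot for offline inspection. */
export async function downloadSnapshot(
  baseUrl: string,
  roomId: string,
): Promise<Uint8Array> {
  const res = await fetch(baseUrl + roomDocPath(roomId));
  if (!res.ok) throw new Error(`snapshot failed: ${res.status}`);
  return new Uint8Array(await res.arrayBuffer());
}
```

The snapshot being raw Yjs binary means it can be fed straight into `Y.applyUpdate` in a local debugging session to reconstruct the room's document state.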
