lws803 / GigaChadGPT

Languages: TypeScript, JavaScript, CSS

Unleash Your Inner Alpha with Giga Chad!

Created Mar 2023

MIT license

Live activities

This change introduces a small AnnouncementBanner component and mounts it above the fixed chat input in ChatInterface. The banner links users to Beloga with lightweight theme-aware styling, making the promotion visible without changing chat behavior or message flow. In practice, users now see a clear in-product callout for the more research-friendly use case.

This change moves LogRocket setup out of the global app bootstrap and into a dedicated LogRocketAnalytics component that reads the client key from NEXT_PUBLIC_LOG_ROCKET_KEY instead of hardcoding it. It also identifies authenticated users with their UID and email, which should make debugging and session tracing much more useful once someone signs in. Alongside that, Firebase emulator config was added for the default project, making local auth development easier. Practical effect: observability is cleaner, safer to configure across environments, and more actionable during debugging.
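The env-key pattern described here can be sketched with a small dependency-free helper (the function name and the null-fallback behavior are illustrative, not from the repo; only the `NEXT_PUBLIC_LOG_ROCKET_KEY` variable name comes from the change itself):

```typescript
// Read the LogRocket client key from the environment instead of hardcoding it.
type Env = Record<string, string | undefined>;

// Returns the key, or null when unset, so initialization can be skipped
// gracefully in environments where the key isn't configured.
function getLogRocketKey(env: Env): string | null {
  return env.NEXT_PUBLIC_LOG_ROCKET_KEY ?? null;
}

// Once initialized and a user signs in, the session can be tied to them,
// roughly: LogRocket.identify(user.uid, { email: user.email })
```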

This change updates the shared API handler to import Sentry from @sentry/node instead of @sentry/nextjs, which is a better fit for server-side error capture in this code path. The previous import could lead to exceptions not being reported as expected from backend handlers, making production debugging harder. With this fix, errors raised through the shared handler should now reach Sentry more reliably, improving observability when API requests fail.

This change updates the shared API handler to import @sentry/nextjs as a namespace instead of a default export, which fixes exception reporting when errors are captured. It’s a small but important runtime fix in central error-handling code, so failed requests should now be logged to Sentry reliably again. The practical effect is better visibility into production API failures.

The shared API error handler now reports unexpected exceptions to Sentry before returning a generic internal server error response. Validation and auth-related cases still keep their existing client-facing behavior, but anything that falls through to the 500 path will now be captured for debugging and monitoring. The practical effect is better visibility into production backend failures with no change to the public API contract.

This change updates the chat interface to abort any in-flight request before sending a new prompt, instead of waiting for older responses to finish and potentially arrive out of order. The OpenAI action now accepts an abort signal, and the UI explicitly ignores cancellation errors while still reporting real failures to Sentry and showing user-facing notifications. In practice, users can keep typing and sending follow-up messages without the conversation getting blocked by stale requests.
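The cancellation pattern can be sketched as follows (the `sendPrompt`/`isAbortError` names are hypothetical; the real code lives in the chat interface and OpenAI action):

```typescript
let inFlight: AbortController | null = null;

// Abort whatever is still pending, then hand a fresh signal to the new request,
// so stale responses can never land out of order.
function sendPrompt(start: (signal: AbortSignal) => void): AbortController {
  inFlight?.abort();
  const controller = new AbortController();
  inFlight = controller;
  start(controller.signal);
  return controller;
}

// Cancellations are expected and ignored; only real failures should be
// reported to Sentry and surfaced to the user.
function isAbortError(err: unknown): boolean {
  return err instanceof Error && err.name === "AbortError";
}
```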

This commit adds a license file to the repository, clarifying how the code can be used, shared, and contributed to. While it doesn't change runtime behavior, it is an important project-level improvement for distribution and collaboration. The practical effect is that the repository now has explicit licensing in place.

This change expands runtime monitoring by adding a dedicated sentry.edge.config.js so Sentry can initialize for edge-handled requests, using the configured DSN and a 20% trace sample rate. It also wires LogRocket into _app.tsx, alongside existing Firebase analytics, so browser sessions can be replayed and debugged from the client side. Together these changes improve visibility across both edge execution and frontend user behavior, making production issues easier to diagnose.

This updates the persona switcher in src/components/shared/Nav.tsx so the menu now closes immediately after a new persona is selected. The change keeps the existing behavior of updating the route and clearing messages, but removes an extra UI step that could make the flow feel sticky or incomplete. Practically, switching personas now feels cleaner and more responsive.

This change drops the 30-second client-side timeout from the Axios call in src/modules/openai/actions.ts. That suggests some valid OpenAI requests were taking longer than the hard cutoff and getting terminated before the API could respond. The practical effect is that long-running generations should now complete instead of failing early, though request latency will no longer be capped on the client side.

This change rewrites /api/chat from a Node-based Next API route to an edge runtime handler that validates the request body, delegates Firebase token verification to a new /api/auth endpoint, and calls OpenAI directly via fetch. The split is necessary because the edge runtime can't use the existing server-side Firebase/OpenAI SDK setup, so auth stays in a standard API route while chat execution moves to the edge. On the client side, the chat action remains mostly unchanged, still posting messages and personas to /api/chat. The practical effect is a cleaner edge-compatible chat path with potentially lower latency for user conversations.
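The validation step of that edge handler might look something like this minimal sketch (the `ChatBody` shape and `parseChatBody` name are assumptions; the actual schema is in the repo):

```typescript
interface ChatBody {
  messages: { role: "user" | "assistant"; content: string }[];
  persona: string;
}

// Returns the parsed body, or null when the request is malformed, so the
// handler can reject it before touching auth or OpenAI.
function parseChatBody(raw: unknown): ChatBody | null {
  if (typeof raw !== "object" || raw === null) return null;
  const { messages, persona } = raw as Partial<ChatBody>;
  if (!Array.isArray(messages) || typeof persona !== "string") return null;
  for (const m of messages) {
    if (typeof m?.content !== "string") return null;
    if (m.role !== "user" && m.role !== "assistant") return null;
  }
  return { messages, persona };
}
```

Keeping the parser a pure function makes it trivially usable from both the edge handler and tests, with no Node-only dependencies.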

This change fixes error reporting in ChatInterface by removing Sentry.captureException from the success handler and calling it only when the mutation actually fails. Previously, normal assistant responses were being sent to Sentry as exceptions, which would create noisy telemetry and make real issues harder to spot. The practical effect is cleaner monitoring and more trustworthy error alerts when chat requests genuinely break.

This change wires @sentry/browser into the chat flow and captures the returned value in the mutation success path, alongside updating the error callback signature for better debugging context. The goal is to surface runtime issues from the chat interface that would otherwise be hard to reproduce from user reports alone. In practice, this should make production failures easier to investigate and shorten the path from bug report to fix.

This change updates the error message in ChatInterface to be more actionable when something goes wrong. Instead of a generic “try again later,” users are now prompted to refresh the page and try a different prompt, while also setting expectations that the issue is known and being worked on. It’s a small UX improvement, but it should reduce confusion and help users recover more quickly from transient failures.

This change adds @sentry/nextjs to the project dependencies, pulling in the client, server, tracing, and build-time tooling needed to instrument a Next.js app. There’s no app-level configuration in this commit yet, but it lays the groundwork for capturing production errors, stack traces, and source-mapped issues once setup files are added. Practical effect: the repo is now ready for Sentry integration, but observability won’t change until the package is actually configured.

This change wires @sentry/nextjs into the app configuration and adds dedicated client and server init files with tracing enabled, plus session replay on browser errors. It also introduces a custom _error.js page that reports unexpected runtime and rendering failures to Sentry while intentionally skipping 404s to reduce noise. The practical effect is much better visibility into production issues, with enough context to debug failures faster.
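A client init file of the kind described typically looks like this (the DSN variable name and sample rates here are illustrative, not taken from the repo):

```typescript
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  tracesSampleRate: 0.2,          // performance tracing enabled
  replaysOnErrorSampleRate: 1.0,  // record a session replay when an error occurs
});
```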

This change removes the Next.js Edge runtime config from src/pages/api/chat.ts, switching the chat endpoint back to the default API route environment. That likely addresses compatibility problems with existing request handling or upstream SDK behavior in the Edge runtime, while also cleaning up a stray debug log. Practical effect: the chat API should behave more predictably in production.

This change sets src/pages/api/chat.ts to run on Next.js Edge instead of the default Node runtime, which is a meaningful deployment/runtime shift for the app’s chat endpoint. The route still validates the Firebase auth token, builds the persona-specific prompt, and calls createChatCompletion, but it will now execute closer to users and may respond faster depending on the host setup. There’s also a response log added for debugging, though the main impact is the runtime move. In practice, chat requests should feel snappier if the surrounding dependencies remain Edge-compatible.
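In the Pages Router, that runtime switch boils down to a one-line route config export (standard Next.js convention, shown as a sketch):

```typescript
// In src/pages/api/chat.ts: opt this API route into the Edge runtime.
export const config = { runtime: "edge" };
```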

This change stops rewriting the user's message on the client and instead sends the selected persona explicitly to the backend. The API now injects persona instructions as a proper system message, while the request schema validates the persona and falls back to a default when needed. That lines up better with the chat-completions model, keeps user content clean, and should make persona behavior more reliable in practice.
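That server-side injection can be sketched as a small pure function (the persona table, instruction text, and `buildMessages` name are all illustrative; only the system-message-plus-fallback behavior comes from the change):

```typescript
type Role = "system" | "user" | "assistant";
interface Message { role: Role; content: string }

const PERSONAS: Record<string, string> = {
  giga: "You are Giga Chad. Answer with unshakeable confidence.",
};
const DEFAULT_PERSONA = "giga";

// Inject the persona as a proper system message; the user's text is
// forwarded untouched instead of being rewritten on the client.
function buildMessages(persona: string, userText: string): Message[] {
  const key = persona in PERSONAS ? persona : DEFAULT_PERSONA; // validate + fall back
  return [
    { role: "system", content: PERSONAS[key] },
    { role: "user", content: userText },
  ];
}
```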
