Replaced the deprecated @google/generative-ai client with Google's new unified @google/genai SDK across the application's chat, suggestions, and tracking endpoints. As part of the upgrade, embedding requests now set the outputDimensionality: 768 parameter in the semantic embedding configuration, so vectors are downscaled at the model layer rather than truncated client-side. This keeps the stored text embeddings dimensionally compatible with our Postgres HNSW index and preserves search performance.