April 6th, 2025
API · App · Models
Mistral 24B is now the default Venice model for all users, bringing multi-modal capabilities to everyone. Llama 70B has been updated to a Pro model.
Launched a preview of our new image inference infrastructure to our beta testers.
Image styles within the image settings are now searchable.
Added Conversation Forking to allow splitting conversations in new directions per this Featurebase request.
Added a “Jump to Bottom” mechanism per the Featurebase request.
Updated the app background to prevent a white flicker when loading the app.
Added support for pressing Enter to save the name when renaming a conversation.
Updated the token dashboard to show two digits of precision for staked VVV.
Updated authentication logic to programmatically reload authentication tokens if they time out due to the window being backgrounded.
Prevent the sidebar from automatically opening when the window is small.
Fixed a bug with text settings not properly saving.
Updated image variant generation to handle error cases more gracefully.
Launched a new backend service to support the growth of Venice, and migrated authentication, image generation, and upscaling to it. The new service should be more performant and give Venice a better platform on which to scale as user growth continues.
Added the nextEpochBegins key to the api_keys/rate_limits endpoint. Docs have been updated. Solves this Featurebase request.
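As a minimal sketch of how a client might use the new key: the snippet below parses nextEpochBegins from a hypothetical rate-limits response and computes how long to wait before the limit resets. The response shape beyond the nextEpochBegins key is an assumption, not taken from the Venice docs.

```python
from datetime import datetime, timezone

# Hypothetical sample of an api_keys/rate_limits response;
# only the nextEpochBegins key is taken from the changelog,
# the rest of the shape is illustrative.
sample = {
    "nextEpochBegins": "2025-04-07T00:00:00Z",
}

def seconds_until_next_epoch(payload, now=None):
    """Return seconds until the rate-limit epoch resets (0 if already past)."""
    # datetime.fromisoformat() before Python 3.11 rejects a trailing "Z",
    # so normalize it to an explicit UTC offset first.
    ts = datetime.fromisoformat(payload["nextEpochBegins"].replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return max(0.0, (ts - now).total_seconds())

# One minute before the epoch in the sample, the wait is 60 seconds.
wait = seconds_until_next_epoch(
    sample, now=datetime(2025, 4, 6, 23, 59, 0, tzinfo=timezone.utc)
)
```

A client could sleep for this duration before retrying once it receives a rate-limit error.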
Added response_format support to Qwen VL in the API.
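A minimal sketch of a chat-completions request body using response_format, in the OpenAI-compatible style. The model identifier and the JSON schema here are illustrative assumptions, not taken from the Venice docs.

```python
import json

payload = {
    "model": "qwen-2.5-vl",  # assumed identifier for Qwen VL; check the docs
    "messages": [
        {"role": "user", "content": "List the objects in this image as JSON."}
    ],
    # Ask the model to return output conforming to a JSON schema.
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "objects",
            "schema": {
                "type": "object",
                "properties": {
                    "objects": {"type": "array", "items": {"type": "string"}}
                },
                "required": ["objects"],
            },
        },
    },
}

# Serialized request body, ready to POST to the chat completions endpoint.
body = json.dumps(payload)
```

With a schema like this, the model's reply can be parsed with json.loads instead of scraped out of free-form text.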
Fixed a bug where messages to Mistral that included a null reasoning_content parameter would throw an error.
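Before this fix, one client-side workaround was to strip null fields from the message history before resending it. The helper below is a hypothetical sketch of that approach, not code from the Venice client.

```python
def strip_null_fields(messages):
    """Return copies of the messages with None-valued keys removed
    (e.g. a reasoning_content: null carried over from a prior reply)."""
    return [{k: v for k, v in m.items() if v is not None} for m in messages]

history = [
    {"role": "assistant", "content": "Hello!", "reasoning_content": None},
    {"role": "user", "content": "Hi"},
]

# The cleaned history no longer carries the null reasoning_content key.
clean = strip_null_fields(history)
```

With the fix in place, a null reasoning_content is accepted as-is and this workaround is no longer needed.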