OWhisper can be thought of as something like Ollama, but for Speech-to-Text (both real-time and batch).
This came from our experience building Hyprnote, where users consistently requested the ability to bring their own custom STT endpoint, just like plugging in an OpenAI-compatible LLM endpoint. OWhisper is intended for two use cases:
  1. Quickly serving a lightweight model locally for prototyping or personal use.
  2. Deploying larger models or connecting to other cloud-hosted models on your own infrastructure.

FAQ

Currently, OWhisper is part of the Hyprnote repo.
It is currently licensed under GPLv3, but we want to relicense it as MIT at some point. (It depends on some code from Hyprnote, which is GPL-licensed.)