# Local LLM
Run a local LLM inference server.
## Notes
- The server is intended to be OpenAI-compatible (see the sketch below).
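Because the endpoint is OpenAI-compatible, a standard chat-completions request should work against it. A minimal sketch, assuming the server is already running; the base URL, port, and model name are placeholders and are not documented on this page:

```ts
// Placeholder values — obtain the real base URL from the plugin
// (e.g. when starting the server) and use a model you have downloaded.
const BASE_URL = "http://localhost:8080/v1"; // assumed, not documented here

const res = await fetch(`${BASE_URL}/chat/completions`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "local-model", // placeholder model identifier
    messages: [{ role: "user", content: "Hello!" }],
  }),
});

const data = await res.json();
console.log(data.choices?.[0]?.message?.content);
```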
## Commands

```ts
import { commands } from "@hypr/plugin-local-llm";
```
- downloadModel
- isModelDownloaded
- isModelDownloading
- isServerRunning
- listOllamaModels
- startServer
- stopServer
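A usage sketch combining these commands, assuming the generated bindings expose them as async functions; the argument lists and return types shown here are assumptions, so check the generated TypeScript bindings for the exact signatures:

```ts
import { commands } from "@hypr/plugin-local-llm";

// Sketch only: command signatures (arguments, return values) are assumptions.
async function ensureLocalLlm(): Promise<void> {
  // Download the model once if it is not already on disk or in progress.
  if (
    !(await commands.isModelDownloaded()) &&
    !(await commands.isModelDownloading())
  ) {
    await commands.downloadModel();
  }

  // Start the inference server unless it is already running.
  if (!(await commands.isServerRunning())) {
    await commands.startServer();
  }
}

// Later, e.g. on app shutdown:
// await commands.stopServer();
```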