Run AI models locally on your machine with Node.js bindings for llama.cpp, and enforce a JSON schema on the model's output at the generation level.
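The core idea behind generation-level schema enforcement is that invalid candidate tokens are masked out at every sampling step, so the finished text is guaranteed to parse and validate. Below is a self-contained toy sketch of that idea, not node-llama-cpp's actual implementation (which conceptually compiles the schema into a sampling grammar): the "schema" here is a JSON object whose `sentiment` field is one of three enum values, so the set of valid outputs is finite and the constraint check reduces to a prefix test.

```typescript
// All outputs the toy "schema" permits: {"sentiment": "positive" | "negative" | "neutral"}.
const validOutputs = [
  '{"sentiment": "positive"}',
  '{"sentiment": "negative"}',
  '{"sentiment": "neutral"}',
];

// Keep only candidate tokens that leave the output a prefix of a valid string.
function allowedTokens(generated: string, candidates: string[]): string[] {
  return candidates.filter((token) =>
    validOutputs.some((v) => v.startsWith(generated + token))
  );
}

// Greedy decoding loop: `rank` stands in for the model and returns candidate
// tokens in order of preference; the constraint filters them before the top
// surviving candidate is appended.
function constrainedDecode(rank: (generated: string) => string[]): string {
  let out = "";
  while (!validOutputs.includes(out)) {
    const ok = allowedTokens(out, rank(out));
    if (ok.length === 0) throw new Error("no valid continuation");
    out += ok[0];
  }
  return out;
}
```

Even when the stand-in model prefers chatty filler like `"Sure! "`, the mask forces it onto a schema-conforming path, which is exactly why generation-level enforcement is stronger than validating (and retrying) after the fact.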
A GGUF parser that works on remotely hosted files.
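Parsing a remote GGUF file is practical because all of the metadata sits at the front of the file: a parser can fetch just the first bytes with an HTTP Range request instead of downloading a multi-gigabyte model. The sketch below is an illustration of the fixed GGUF header layout (magic, version, tensor count, metadata key/value count), not the API of any particular parser package; it operates on raw bytes, which could come from a Range request or a local read.

```typescript
// Parse the fixed-size GGUF header from the first 24 bytes of a file.
// Layout (all little-endian): 4-byte magic "GGUF", uint32 version,
// uint64 tensor count, uint64 metadata key/value count.
function parseGgufHeader(bytes: Uint8Array): {
  version: number;
  tensorCount: bigint;
  kvCount: bigint;
} {
  if (bytes.byteLength < 24) throw new Error("buffer too small for GGUF header");
  const magic = String.fromCharCode(bytes[0], bytes[1], bytes[2], bytes[3]);
  if (magic !== "GGUF") throw new Error("not a GGUF file");
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  return {
    version: view.getUint32(4, true),        // uint32, little-endian
    tensorCount: view.getBigUint64(8, true), // uint64, little-endian
    kvCount: view.getBigUint64(16, true),    // uint64, little-endian
  };
}
```

A remote-capable caller would obtain `bytes` with something like `fetch(url, { headers: { Range: "bytes=0-23" } })` before handing them to the parser; the metadata key/value entries that follow the header can be read incrementally the same way.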
Various utilities for maintaining Ollama compatibility with models on the Hugging Face Hub.