
node-llama-cpp

withcatai · 35.6k · MIT · 3.7.0 · TypeScript support: included

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model's output at the generation level.

Keywords: llama, llama-cpp, llama.cpp, bindings, ai, cmake, cmake-js, prebuilt-binaries, llm, gguf, metal, cuda, vulkan, grammar, embedding, rerank, reranking, json-grammar, json-schema-grammar, functions, function-calling, token-prediction, speculative-decoding, temperature, minP, topK, topP, seed, json-schema, raspberry-pi, self-hosted, local, catai, mistral, deepseek, qwen, qwq, typescript, lora, batching, gpu


node-llama-cpp

Run AI models locally on your machine

Pre-built bindings are provided, with a fallback to building from source with cmake


DeepSeek R1 is here!

Features

Documentation

Try It Without Installing

Chat with a model in your terminal using a single command:

npx -y node-llama-cpp chat

Installation

npm install node-llama-cpp

This package comes with pre-built binaries for macOS, Linux and Windows.

If binaries are not available for your platform, it will fall back to downloading a release of llama.cpp and building it from source with cmake. To disable this behavior, set the environment variable NODE_LLAMA_CPP_SKIP_DOWNLOAD to true.
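For example, to make the install fail rather than build from source when no pre-built binary matches (assuming a POSIX shell; on Windows, set the variable via `set` or `$env:` instead):

```shell
# Skip the build-from-source fallback during installation
NODE_LLAMA_CPP_SKIP_DOWNLOAD=true npm install node-llama-cpp
```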

Usage

import {fileURLToPath} from "url";
import path from "path";
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: path.join(__dirname, "models", "Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf")
});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});


const q1 = "Hi there, how are you?";
console.log("User: " + q1);

const a1 = await session.prompt(q1);
console.log("AI: " + a1);


const q2 = "Summarize what you said";
console.log("User: " + q2);

const a2 = await session.prompt(q2);
console.log("AI: " + a2);
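The package description also mentions enforcing a JSON schema on the model output at the generation level. A minimal sketch of that feature, assuming the v3 `createGrammarForJsonSchema` API and a hypothetical local model path (verify the method names and supported schema subset against the official docs):

```typescript
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
// Hypothetical model path, for illustration only
const model = await llama.loadModel({
    modelPath: "models/Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf"
});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

// Constrain generation so the output always conforms to this JSON schema
const grammar = await llama.createGrammarForJsonSchema({
    type: "object",
    properties: {
        summary: {type: "string"},
        sentiment: {type: "string", enum: ["positive", "negative", "neutral"]}
    }
});

const answer = await session.prompt("Review: the app is fast and reliable.", {
    grammar
});
const parsed = grammar.parse(answer); // object matching the schema
console.log(parsed);
```

Because the grammar constrains token selection during generation (rather than validating afterward), `grammar.parse` can safely parse the response into a typed object.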

For more examples, see the getting started guide.

Contributing

To contribute to node-llama-cpp, read the contribution guide.

Acknowledgements


Star please

If you like this repo, star it ✨