
⏳ tiktoken

tiktoken is a BPE tokeniser for use with OpenAI's models, forked from the original tiktoken library to provide JS/WASM bindings for Node.js and other JS runtimes.

This repository contains the following packages:

  • tiktoken (formerly hosted as @dqbd/tiktoken): WASM bindings for the original Python library, providing full 1-to-1 feature parity.
  • js-tiktoken: Pure JavaScript port of the original library with the core functionality, suitable for environments where WASM is not well supported or not desired (such as edge runtimes).

Documentation for js-tiktoken can be found here. Documentation for tiktoken follows below.

The WASM version of tiktoken can be installed from NPM:

npm install tiktoken

Usage

Basic usage is shown below; this entry point includes all the OpenAI encoders and ranks:

import assert from "node:assert";
import { get_encoding, encoding_for_model } from "tiktoken";

const enc = get_encoding("gpt2");
assert(
  new TextDecoder().decode(enc.decode(enc.encode("hello world"))) ===
    "hello world"
);

// To get the tokeniser corresponding to a specific model in the OpenAI API:
const modelEnc = encoding_for_model("text-davinci-003");

// Extend an existing encoding with custom special tokens:
const extendedEnc = encoding_for_model("gpt2", {
  "<|im_start|>": 100264,
  "<|im_end|>": 100265,
});

// Don't forget to free each encoder once it is no longer used:
enc.free();
modelEnc.free();
extendedEnc.free();

In constrained environments (e.g. Edge Runtime, Cloudflare Workers), where you don't want to load all the encoders at once, you can use the lightweight WASM binary via tiktoken/lite.

const { Tiktoken } = require("tiktoken/lite");
const cl100k_base = require("tiktoken/encoders/cl100k_base.json");

const encoding = new Tiktoken(
  cl100k_base.bpe_ranks,
  cl100k_base.special_tokens,
  cl100k_base.pat_str
);
const tokens = encoding.encode("hello world");
encoding.free();

If you want to fetch the latest ranks, use the load function:

const { Tiktoken } = require("tiktoken/lite");
const { load } = require("tiktoken/load");
const registry = require("tiktoken/registry.json");
const models = require("tiktoken/model_to_encoding.json");

async function main() {
  const model = await load(registry[models["gpt-3.5-turbo"]]);
  const encoder = new Tiktoken(
    model.bpe_ranks,
    model.special_tokens,
    model.pat_str
  );
  const tokens = encoder.encode("hello world");
  encoder.free();
}

main();

If desired, you can create a Tiktoken instance directly with custom ranks, special tokens and regex pattern:

import { Tiktoken } from "tiktoken";
import { readFileSync } from "node:fs";

const encoder = new Tiktoken(
  readFileSync("./ranks/gpt2.tiktoken").toString("utf-8"),
  { "<|endoftext|>": 50256, "<|im_start|>": 100264, "<|im_end|>": 100265 },
  "'s|'t|'re|'ve|'m|'ll|'d| ?\\p{L}+| ?\\p{N}+| ?[^\\s\\p{L}\\p{N}]+|\\s+(?!\\S)|\\s+"
);

Finally, you can use a custom init function to override the WASM initialization logic in non-Node environments. This is useful if you are using a bundler that does not support WASM ESM integration.

import { get_encoding, init } from "tiktoken/init";

async function main() {
  const wasm = "..."; // fetch the WASM binary somehow
  await init((imports) => WebAssembly.instantiate(wasm, imports));

  const encoding = get_encoding("cl100k_base");
  const tokens = encoding.encode("hello world");
  encoding.free();
}

main();

Compatibility

As this is a WASM library, there may be issues in specific runtimes. If you encounter any, please open an issue.

Runtime                        Status  Notes
Node.js                        ✅
Bun                            ✅
Vite                           ✅      See the Vite section below
Next.js                        ✅      See the Next.js section below
Create React App (via Craco)   ✅      See the Create React App section below
Vercel Edge Runtime            ✅      See the Vercel Edge Runtime section below
Cloudflare Workers             ✅      See the Cloudflare Workers section below
Electron                       ✅      See the Electron section below
Deno                           ❌      Currently unsupported (see dqbd/tiktoken#22)
Svelte + Cloudflare Workers    ❌      Currently unsupported (see dqbd/tiktoken#37)

For unsupported runtimes, consider using js-tiktoken, which is a pure JS implementation of the tokeniser.
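
For example, a minimal sketch with js-tiktoken (using its getEncoding export; there is no WASM binary to load and no free() to call):

import { getEncoding } from "js-tiktoken";

// Pure JS implementation: no WASM initialization, no manual memory management.
const enc = getEncoding("cl100k_base");
const tokens = enc.encode("hello world");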

Vite

If you are using Vite, you will need to add both the vite-plugin-wasm and vite-plugin-top-level-await plugins. Add the following to your vite.config.js:

import wasm from "vite-plugin-wasm";
import topLevelAwait from "vite-plugin-top-level-await";
import { defineConfig } from "vite";

export default defineConfig({
  plugins: [wasm(), topLevelAwait()],
});

Next.js

Both API routes and /pages are supported with the following next.config.js configuration.

// next.config.js
const config = {
  webpack(config, { isServer, dev }) {
    config.experiments = {
      asyncWebAssembly: true,
      layers: true,
    };

    return config;
  },
};

module.exports = config;

Usage in pages:

import { get_encoding } from "tiktoken";
import { useState } from "react";

// Create the encoder once at module scope and reuse it across renders.
const encoding = get_encoding("cl100k_base");

export default function Home() {
  const [input, setInput] = useState("hello world");
  const tokens = encoding.encode(input);

  return (
    <div>
      <input
        type="text"
        value={input}
        onChange={(e) => setInput(e.target.value)}
      />
      <div>{tokens.toString()}</div>
    </div>
  );
}

Usage in API routes:

import { get_encoding } from "tiktoken";
import { NextApiRequest, NextApiResponse } from "next";

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  const encoding = get_encoding("cl100k_base");
  const tokens = encoding.encode("hello world");
  encoding.free();
  return res.status(200).json({ tokens });
}

Create React App

By default, the Webpack configuration found in Create React App does not support WASM ESM modules. To add support, do the following:

  1. Swap react-scripts with craco, using the guide found here: https://craco.js.org/docs/getting-started/.
  2. Add the following to craco.config.js:
module.exports = {
  webpack: {
    configure: (config) => {
      config.experiments = {
        asyncWebAssembly: true,
        layers: true,
      };

      // turn off static file serving of WASM files
      // we need to let Webpack handle WASM import
      config.module.rules
        .find((i) => "oneOf" in i)
        .oneOf.find((i) => i.type === "asset/resource")
        .exclude.push(/\.wasm$/);

      return config;
    },
  },
};

Vercel Edge Runtime

Vercel Edge Runtime supports WASM modules via the ?module import suffix. Initialize the encoder with the following snippet:

// @ts-expect-error
import wasm from "tiktoken/lite/tiktoken_bg.wasm?module";
import model from "tiktoken/encoders/cl100k_base.json";
import { init, Tiktoken } from "tiktoken/lite/init";

export const config = { runtime: "edge" };

export default async function (req: Request) {
  await init((imports) => WebAssembly.instantiate(wasm, imports));

  const encoding = new Tiktoken(
    model.bpe_ranks,
    model.special_tokens,
    model.pat_str
  );

  const tokens = encoding.encode("hello world");
  encoding.free();

  return new Response(`${tokens}`);
}

Cloudflare Workers

Similar to Vercel Edge Runtime, Cloudflare Workers must import the WASM binary manually and should use the tiktoken/lite version to fit within the 1 MB bundle limit. However, you need to point directly at the WASM binary via a relative path (including ./node_modules/).

Add the following rule to your wrangler.toml to upload the WASM binary during build:

[[rules]]
globs = ["**/*.wasm"]
type = "CompiledWasm"

Initialize the encoder with the following snippet:

import { init, Tiktoken } from "tiktoken/lite/init";
import wasm from "./node_modules/tiktoken/lite/tiktoken_bg.wasm";
import model from "tiktoken/encoders/cl100k_base.json";

export default {
  async fetch() {
    await init((imports) => WebAssembly.instantiate(wasm, imports));
    const encoder = new Tiktoken(
      model.bpe_ranks,
      model.special_tokens,
      model.pat_str
    );
    const tokens = encoder.encode("test");
    encoder.free();
    return new Response(`${tokens}`);
  },
};

Electron

To use tiktoken in your Electron main process, you need to make sure the WASM binary gets copied into your application package.

Assuming a setup with Electron Forge and @electron-forge/plugin-webpack, add the following to your webpack.main.config.js:

const CopyPlugin = require("copy-webpack-plugin");

module.exports = {
  // ...
  plugins: [
    new CopyPlugin({
      patterns: [
        { from: "./node_modules/tiktoken/tiktoken_bg.wasm" },
      ],
    }),
  ],
};
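
With the binary copied next to the bundled main-process code, tiktoken can then be required as usual. A minimal sketch of main-process usage (the file path and entry point are assumptions based on a default Electron Forge setup):

// src/index.js (Electron main process; path is an assumption)
const { get_encoding } = require("tiktoken");

const enc = get_encoding("cl100k_base");
console.log(enc.encode("hello world"));
enc.free();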

Development

To build the tiktoken library, make sure to have:

  • Rust and wasm-pack installed.
  • Node.js 18+ installed, which is needed to build the JS bindings and to fetch the latest encoder ranks via fetch.

Install all dev dependencies with yarn install, then build both the WASM binary and the JS bindings with yarn build.
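
For reference, the full build sequence is just two commands (assuming Rust, wasm-pack, and Node.js 18+ are already installed):

yarn install
yarn build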

Acknowledgements

Changelog

This is the changelog for the open source version of tiktoken.

[v0.5.1]

  • Add encoding_name_for_model, undo some renames to variables that are implementation details

[v0.5.0]

  • Add tiktoken._educational submodule to better document how byte pair encoding works
  • Ensure encoding_for_model knows about several new models
  • Add decode_with_offsets
  • Better error for failures with the plugin mechanism
  • Make more tests public
  • Update versions of dependencies

[v0.4.0]

  • Add decode_batch and decode_bytes_batch
  • Improve error messages and handling

[v0.3.3]

  • tiktoken will now make a best effort attempt to replace surrogate pairs with the corresponding Unicode character and will replace lone surrogates with the Unicode replacement character.

[v0.3.2]

  • Add encoding for GPT-4

[v0.3.1]

  • Build aarch64 wheels
  • Make blobfile an optional dependency

Thank you to @messense for the environment variable that makes cargo not OOM under emulation!

[v0.3.0]

  • Improve performance by 5-20%; thank you to @nistath!
  • Add gpt-3.5-turbo models to encoding_for_model
  • Add prefix matching to encoding_for_model to better support future model versions
  • Fix a bug in the README instructions on extending tiktoken
  • Update the set of available encodings
  • Add packaging metadata

[v0.2.0]

  • Add tiktoken.encoding_for_model to get the encoding for a specific model
  • Improve portability of caching logic

Thank you to @fritzo, @arvid220u, @khanhvu207, @henriktorget for various small corrections

[v0.1.2]

  • Avoid use of blobfile for public files
  • Add support for Python 3.8
  • Add py.typed
  • Improve the public tests

[v0.1.1]

  • Initial release