Important: This documentation covers Yarn 1 (Classic).
For Yarn 2+ docs and migration guide, see yarnpkg.com.

Package detail

llamaindex

run-llama · 81.6k · MIT · 0.8.31 · TypeScript support: included


LlamaIndex.TS

Data framework for your LLM application.

llm, llama, openai, gpt, data science, prompt, prompt engineering, chatgpt, machine learning, ml, embedding, vectorstore, data framework, llamaindex

readme


LlamaIndex.TS

Data framework for your LLM application.


Use your own data with large language models (LLMs, OpenAI ChatGPT and others) in JS runtime environments with TypeScript support.

Documentation: https://ts.llamaindex.ai/

Try examples online:

Open in Stackblitz

What is LlamaIndex.TS?

LlamaIndex.TS aims to be a lightweight, easy-to-use set of libraries that helps you integrate large language models into your applications with your own data.

Compatibility

Multiple JS Environment Support

LlamaIndex.TS supports multiple JS environments, including:

  • Node.js >= 20 ✅
  • Deno ✅
  • Bun ✅
  • Nitro ✅
  • Vercel Edge Runtime ✅ (with some limitations)
  • Cloudflare Workers ✅ (with some limitations)

For now, browser support is limited due to the lack of support for AsyncLocalStorage-like APIs.

Supported LLMs:

  • OpenAI LLMs
  • Anthropic LLMs
  • Groq LLMs
  • Llama2, Llama3, Llama3.1 LLMs
  • MistralAI LLMs
  • Fireworks LLMs
  • DeepSeek LLMs
  • ReplicateAI LLMs
  • TogetherAI LLMs
  • HuggingFace LLMs
  • DeepInfra LLMs
  • Gemini LLMs

Getting started

npm install llamaindex
pnpm install llamaindex
yarn add llamaindex

Setup in Node.js, Deno, Bun, TypeScript...?

See our official document: https://ts.llamaindex.ai/docs/llamaindex/getting_started/

Tips when using in non-Node.js environments

When importing llamaindex in a non-Node.js environment (such as Vercel Edge or Cloudflare Workers), some classes are not exported from the top-level entry file.

The reason is that some classes are only compatible with the Node.js runtime (e.g. PDFReader), because they use Node.js-specific APIs (such as fs, child_process, and crypto).

If you need any of those classes, you have to import them directly through their file path in the package. Here's an example for importing the PineconeVectorStore class:

import { PineconeVectorStore } from "llamaindex/storage/vectorStore/PineconeVectorStore";

As the PDFReader does not work with the Edge runtime, here's how to use the SimpleDirectoryReader with the LlamaParseReader to load PDFs:

import { SimpleDirectoryReader } from "llamaindex/readers/SimpleDirectoryReader";
import { LlamaParseReader } from "llamaindex/readers/LlamaParseReader";

export const DATA_DIR = "./data";

export async function getDocuments() {
  const reader = new SimpleDirectoryReader();
  // Load PDFs using LlamaParseReader
  return await reader.loadData({
    directoryPath: DATA_DIR,
    fileExtToReader: {
      pdf: new LlamaParseReader({ resultType: "markdown" }),
    },
  });
}

Note: Reader classes have to be added explicitly to the fileExtToReader map in the Edge version of the SimpleDirectoryReader.

You'll find a complete example with LlamaIndexTS here: https://github.com/run-llama/create_llama_projects/tree/main/nextjs-edge-llamaparse

Playground

Check out our NextJS playground at https://llama-playground.vercel.app/. The source is available at https://github.com/run-llama/ts-playground

Core concepts for getting started:

  • Document: A document represents a text file, PDF file or other contiguous piece of data.

  • Node: The basic data building block. Most commonly, these are parts of the document split into manageable pieces that are small enough to be fed into an embedding model and LLM.

  • Embedding: Embeddings are sets of floating point numbers which represent the data in a Node. By comparing the similarity of embeddings, we can derive an understanding of the similarity of two pieces of data. One use case is to compare the embedding of a question with the embeddings of our Nodes to see which Nodes may contain the data needed to answer that question. Because the default service context is OpenAI, the default embedding is OpenAIEmbedding. If using different models, say through Ollama, use this Embedding (see all here).

  • Indices: Indices store the Nodes and the embeddings of those nodes. QueryEngines retrieve Nodes from these Indices using embedding similarity.

  • QueryEngine: Query engines are what take the query you put in and give you back the result. Query engines generally combine a pre-built prompt with selected Nodes from your Index to give the LLM the context it needs to answer your query. To build a query engine from your Index (recommended), use the asQueryEngine method on your Index. See all query engines here.

  • ChatEngine: A ChatEngine helps you build a chatbot that will interact with your Indices. See all chat engines here.

  • SimplePrompt: A simple standardized function call definition that takes in inputs and formats them in a template literal. SimplePrompts can be specialized using currying and combined with other SimplePrompt functions.
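The Embedding bullet above can be illustrated with a small similarity function. This is plain TypeScript for illustration only, not llamaindex's API; the library ships its own embedding and similarity utilities:

```typescript
// Cosine similarity between two embedding vectors, as used conceptually when
// comparing a query embedding against the embeddings of Nodes.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Vectors pointing the same way score 1, orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [2, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 3])); // 0
```

A retriever would conceptually compute this score between the query embedding and every Node embedding, then return the top-scoring Nodes.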
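The SimplePrompt idea above can be sketched in a few lines. The names here (contextPrompt, withContext) are hypothetical illustrations, not llamaindex exports:

```typescript
// A SimplePrompt-style function: takes named inputs, formats them in a
// template literal. Names here are illustrative, not the library's API.
type SimplePrompt = (input: Record<string, string>) => string;

const contextPrompt: SimplePrompt = ({ context, query }) =>
  `Context information is below.\n${context}\nAnswer the query: ${query}`;

// Specialization via currying: pre-fill the context, leave the query open.
const withContext =
  (context: string) =>
  ({ query }: { query: string }) =>
    contextPrompt({ context, query });

const askAboutDocs = withContext("LlamaIndex.TS is a data framework.");
console.log(askAboutDocs({ query: "What is LlamaIndex.TS?" }));
```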

Contributing:

Please see our contributing guide for more information. You are highly encouraged to contribute to LlamaIndex.TS!

Community

Please join our Discord! https://discord.com/invite/eN6D2HQ4aX

changelog

llamaindex

0.8.31

Patch Changes

0.8.30

Patch Changes

0.8.29

Patch Changes

  • dd596a0: Add Gemini 2.0 Flash Experimental

0.8.28

Patch Changes

0.8.27

Patch Changes

0.8.26

Patch Changes

0.8.25

Patch Changes

0.8.24

Patch Changes

  • 515f2c1: Add vector store for CosmosDB

0.8.23

Patch Changes

0.8.22

Patch Changes

0.8.21

Patch Changes

  • 83c3897: fix: pinecone vector store search
  • efa2211: feat: add Azure Cosmos DB mongo vCore DocumentStore, IndexStore, KVStore

0.8.20

Patch Changes

  • 02b22da: fix: supports Vercel bundling

0.8.19

Patch Changes

0.8.18

Patch Changes

0.8.17

Patch Changes

0.8.16

Patch Changes

0.8.15

Patch Changes

  • 3d503cb: Update azure cosmos db
  • 5dae534: fix: propagate queryStr to concrete vectorStore

0.8.14

Patch Changes

  • 630b425: feat: add Azure CosmosDB NoSql Chat store

0.8.13

Patch Changes

0.8.12

Patch Changes

0.8.11

Patch Changes

0.8.10

Patch Changes

0.8.9

Patch Changes

0.8.8

Patch Changes

0.8.7

Patch Changes

0.8.6

Patch Changes

0.8.5

Patch Changes

0.8.4

Patch Changes

  • 35430d3: Feature/ Add AzureCosmosDBNoSqlVectorStore and SimpleCosmosDBReader
  • Updated dependencies [35430d3]

0.8.3

Patch Changes

0.8.2

Patch Changes

  • c7a918c: fix: export postprocessors in core

0.8.1

Patch Changes

0.8.0

Minor Changes

  • 98ba1e7: feat: implement context-aware agent

Patch Changes

0.7.10

Patch Changes

0.7.9

Patch Changes

0.7.8

Patch Changes

0.7.7

Patch Changes

0.7.6

Patch Changes

  • 534d550: fix: replicate deps warning in nextjs

0.7.5

Patch Changes

  • e9a111d: fix: VectorIndexRetrieverOptions typing
  • 9f22aae: fix: unable to resolve unpdf in nextjs

0.7.4

Patch Changes

0.7.3

Patch Changes

0.7.2

Patch Changes

0.7.1

Patch Changes

0.7.0

Minor Changes

  • 1364e8e: update metadata extractors to use PromptTemplate
  • 96fc69c: Correct initialization of QuestionsAnsweredExtractor so that it uses the promptTemplate arg when passed in

Patch Changes

0.6.22

Patch Changes

  • 5729bd9: Fix LlamaCloud API calls for ensuring an index and for file uploads

0.6.21

Patch Changes

  • 6f75306: feat: support metadata filters for AstraDB
  • 94cb4ad: feat: Add metadata filters to ChromaDb and update to 1.9.2

0.6.20

Patch Changes

0.6.19

Patch Changes

  • 62cba52: Add ensureIndex function to LlamaCloudIndex
  • d265e96: fix: ignore resolving unpdf for nextjs
  • d30bbf7: Convert undefined values to null in LlamaCloud filters
  • 53fd00a: Fix getPipelineId in LlamaCloudIndex

0.6.18

Patch Changes

0.6.17

Patch Changes

0.6.16

Patch Changes

0.6.15

Patch Changes

0.6.14

Patch Changes

0.6.13

Patch Changes

0.6.12

Patch Changes

  • f7b4e94: feat: add filters for pinecone
  • 78037a6: fix: bypass service context embed model
  • 1d9e3b1: fix: export llama reader in non-nodejs runtime

0.6.11

Patch Changes

0.6.10

Patch Changes

0.6.9

Patch Changes

0.6.8

Patch Changes

0.6.7

Patch Changes

  • 23bcc37: fix: add serializer in doc store

    PostgresDocumentStore now will not use JSON.stringify for better performance

0.6.6

Patch Changes

0.6.5

Patch Changes

  • e9714db: feat: update PGVectorStore

    • move constructor parameter config.user | config.database | config.password | config.connectionString into config.clientConfig
    • if you pass pg.Client or pg.Pool instance to PGVectorStore, move it to config.client, setting config.shouldConnect to false if it's already connected
    • default value of PGVectorStore.collection is now "data" instead of "" (empty string)

0.6.4

Patch Changes

0.6.3

Patch Changes

  • 2cd1383: refactor: align response-synthesizers & chat-engine module

    • builtin event system
    • correct class extends
    • align APIs and naming with llama-index python
    • move stream out of the first parameter into the second parameter for better type checking
    • remove JSONQueryEngine in @llamaindex/experimental, as the code quality is not satisfactory; we will bring it back later
  • 5c4badb: Extend JinaAPIEmbedding parameters

  • Updated dependencies [fb36eff]
  • Updated dependencies [d24d3d1]
  • Updated dependencies [2cd1383]

0.6.2

Patch Changes

0.6.1

Patch Changes

  • fbd5e01: refactor: move groq as llm package
  • 6b70c54: feat: update JinaAIEmbedding, support embedding v3
  • 1a6137b: feat: experimental support for browser

    If you see a bundler issue in the Next.js edge runtime, please bump to the latest next@14 version.

  • 85c2e19: feat: @llamaindex/cloud package update

    • Bump to latest openapi schema
    • Move LlamaParse class from llamaindex, this will allow you use llamaparse in more non-node.js environment
  • Updated dependencies [ac07e3c]

  • Updated dependencies [fbd5e01]
  • Updated dependencies [70ccb4a]
  • Updated dependencies [1a6137b]
  • Updated dependencies [85c2e19]
  • Updated dependencies [ac07e3c]

0.6.0

Minor Changes

  • 11feef8: Add workflows

Patch Changes

0.5.27

Patch Changes

  • 7edeb1c: feat: decouple openai from llamaindex module

    This should be a non-breaking change; you can now install just @llamaindex/openai to reduce the bundle size in the future.

  • Updated dependencies [7edeb1c]

0.5.26

Patch Changes

  • ffe0cd1: feat: add openai o1 support
  • ffe0cd1: feat: add PostgreSQL storage

0.5.25

Patch Changes

  • 4810364: fix: handle RouterQueryEngine with string query
  • d3bc663: refactor: export vector store only in nodejs environment on top level

    If you see some missing modules error, please change vector store related imports to llamaindex/vector-store

  • Updated dependencies [4810364]

0.5.24

Patch Changes

0.5.23

Patch Changes

0.5.22

Patch Changes

0.5.21

Patch Changes

  • ae1149f: feat: add JSON streaming to JSONReader
  • 2411c9f: Auto-create index for MongoDB vector store (if not exists)
  • e8f229c: Remove logging from MongoDB Atlas Vector Store
  • 11b3856: implement filters for MongoDBAtlasVectorSearch
  • 83d7f41: Fix database insertion for PGVectorStore

    It will now:

    • throw an error if there is an insertion error.
    • Upsert documents with the same id.
    • add all documents to the database as a single INSERT call (inside a transaction).
  • 0148354: refactor: prompt system

    Add PromptTemplate module with strong type check.

  • 1711f6d: Export imageToDataUrl for using images in chat

  • Updated dependencies [0148354]

0.5.20

Patch Changes

  • d9d6c56: Add support for MetadataFilters for PostgreSQL
  • 22ff486: Add tiktoken WASM to withLlamaIndex
  • eed0b04: fix: use LLM metadata mode for generating context of ContextChatEngine

0.5.19

Patch Changes

  • fcbf183: implement llamacloud file service

0.5.18

Patch Changes

  • 8b66cf4: feat: support organization id in llamacloud index
  • Updated dependencies [e27e7dd]

0.5.17

Patch Changes

  • c654398: Implement Weaviate Vector Store in TS

0.5.16

Patch Changes

0.5.15

Patch Changes

  • 01c184c: Add is_empty operator for filtering vector store
  • 07a275f: chore: bump openai

0.5.14

Patch Changes

  • c825a2f: Add gpt-4o-mini to Azure. Add 2024-06-01 API version for Azure

0.5.13

Patch Changes

0.5.12

Patch Changes

  • 345300f: feat: add splitByPage mode to LlamaParseReader
  • da5cfc4: Add metadatafilter options to retriever constructors
  • da5cfc4: Fix system prompt not used in ContextChatEngine
  • Updated dependencies [0452af9]

0.5.11

Patch Changes

0.5.10

Patch Changes

  • 086b940: feat: add DeepSeek LLM
  • 5d5716b: feat: add a reader for JSON data
  • 91d02a4: feat: support transform component callable
  • fb6db45: feat: add pageSeparator params to LlamaParseReader
  • Updated dependencies [91d02a4]

0.5.9

Patch Changes

  • 15962b3: feat: node parser refactor

    Align the text splitter logic with Python; it now has almost the same logic as Python, plus Zod checks for inputs, better error messages, and an event system.

    This change will not be considered a breaking change since it doesn't have a significant output difference from the last version, but some edge cases will change, like the page separator and parameter for the constructor.

  • Updated dependencies [15962b3]

0.5.8

Patch Changes

  • 3d5ba08: fix: update user agent in AssemblyAI
  • d917cdc: Add azure interpreter tool to tool factory

0.5.7

Patch Changes

  • ec59acd: fix: bundling issue with pnpm

0.5.6

Patch Changes

  • 2562244: feat: add gpt4o-mini
  • 325aa51: Implement Jina embedding through Jina api
  • ab700ea: Add missing authentication to LlamaCloudIndex.fromDocuments
  • 92f0782: feat: use query bundle
  • 6cf6ae6: feat: abstract query type
  • b7cfe5b: fix: passing max_token option to replicate's api call
  • Updated dependencies [6cf6ae6]

0.5.5

Patch Changes

0.5.4

Patch Changes

  • 1a65ead: feat: add vendorMultimodal params to LlamaParseReader

0.5.3

Patch Changes

  • 9bbbc67: feat: add a reader for Discord messages
  • b3681bf: fix: DataCloneError when using FunctionTool
  • Updated dependencies [b3681bf]

0.5.2

Patch Changes

0.5.1

Patch Changes

0.5.0

Minor Changes

  • 16ef5dd: refactor: simplify callback manager

    Change event.detail.payload to event.detail

Patch Changes

  • 16ef5dd: refactor: move callback manager & llm to core module

    For people who import llamaindex/llms/base or llamaindex/llms/utils, use @llamaindex/core/llms and @llamaindex/core/utils instead.

  • 36ddec4: fix: typo in custom page separator parameter for LlamaParse

  • Updated dependencies [16ef5dd]
  • Updated dependencies [16ef5dd]
  • Updated dependencies [36ddec4]

0.4.14

Patch Changes

0.4.13

Patch Changes

  • e8f8bea: feat: add boundingBox and targetPages to LlamaParseReader
  • 304484b: feat: add ignoreErrors flag to LlamaParseReader

0.4.12

Patch Changes

0.4.11

Patch Changes

  • 8bf5b4a: fix: llama parse input spreadsheet

0.4.10

Patch Changes

  • 7dce3d2: fix: disable External Filters for Gemini

0.4.9

Patch Changes

  • 3a96a48: fix: anthropic image input

0.4.8

Patch Changes

  • 83ebdfb: fix: next.js build error

0.4.7

Patch Changes

0.4.6

Patch Changes

  • 1feb23b: feat: Gemini tool calling for agent support
  • 08c55ec: Add metadata to PDFs and use Uint8Array for readers content

0.4.5

Patch Changes

  • 6c3e5d0: fix: switch to correct reference for a static function

0.4.4

Patch Changes

  • 42eb73a: Fix IngestionPipeline not working without vectorStores

0.4.3

Patch Changes

  • 2ef62a9: feat: added support for embeddings via HuggingFace Inference API
  • Updated dependencies [d4e853c]
  • Updated dependencies [a94b8ec]

0.4.2

Patch Changes

  • a87a4d1: feat: added tool calling support for Bedrock's Claude and general LLM support for agents
  • 0730140: include node relationships when converting jsonToDoc
  • Updated dependencies [f3b34b4]

0.4.1

Patch Changes

  • 3c47910: fix: groq llm
  • ed467a9: Add model ids for Anthropic Claude 3.5 Sonnet model on Anthropic and Bedrock
  • cba5406: fix: every Llama Parse job being called "blob"
  • Updated dependencies [56fabbb]

0.4.0

Minor Changes

  • 436bc41: Unify chat engine response and agent response

Patch Changes

  • a44e54f: Truncate text to embed for OpenAI if it exceeds maxTokens
  • a51ed8d: feat: add support for managed identity for Azure OpenAI
  • d3b635b: fix: agents to use chat history

0.3.17

Patch Changes

  • 6bc5bdd: feat: add cache disabling, fast mode, do not unroll columns mode and custom page separator to LlamaParseReader
  • bf25ff6: fix: polyfill for cloudflare worker
  • e6d6576: chore: use unpdf

0.3.16

Patch Changes

  • 11ae926: feat: add numCandidates setting to MongoDBAtlasVectorStore for tuning queries
  • 631f000: feat: DeepInfra LLM implementation
  • 1378ec4: feat: set default model to gpt-4o
  • 6b1ded4: add gpt4o-mode, invalidate cache and skip diagonal text to LlamaParseReader
  • 4d4bd85: Show error message if agent tool is called with partial JSON
  • 24a9d1e: add json mode and image retrieval to LlamaParseReader
  • 45952de: add concurrency management for SimpleDirectoryReader
  • 54230f0: feat: Gemini GA release models
  • a29d835: setDocumentHash should be async
  • 73819bf: Unify metadata and ID handling of documents, allow files to be read by Buffer

0.3.15

Patch Changes

  • 6e156ed: Use images in context chat engine
  • 265976d: fix bug with node decorator
  • 8e26f75: Add retrieval for images using multi-modal messages

0.3.14

Patch Changes

  • 6ff7576: Added GPT-4o for Azure
  • 94543de: Added the latest preview gemini models and multi modal images taken into account

0.3.13

Patch Changes

  • 1b1081b: Add vectorStores to storage context to define vector store per modality
  • 37525df: Added support for accessing Gemini via Vertex AI
  • 660a2b3: Fix text before heading in markdown reader
  • a1f2475: Add system prompt to ContextChatEngine

0.3.12

Patch Changes

  • 34fb1d8: fix: cloudflare dev

0.3.11

Patch Changes

  • e072c45: fix: remove non-standard API pipeline
  • 9e133ac: refactor: remove defaultFS from parameters

    We no longer accept passing fs as a parameter, since it's unnecessary for a determined JS environment.

    This was a polyfill way for the non-Node.js environment, but now we use another way to polyfill APIs.

  • 447105a: Improve Gemini message and context preparation

  • 320be3f: Force ChromaDB version to 1.7.3 (to prevent NextJS issues)
  • Updated dependencies [e072c45]
  • Updated dependencies [9e133ac]

0.3.10

Patch Changes

  • 4aba02e: feat: support gpt4-o

0.3.9

Patch Changes

  • c3747d0: fix: import @xenova/transformers

    For now, if you use llamaindex in next.js, you need to add a plugin from llamaindex/next to ensure some module resolutions are correct.

0.3.8

Patch Changes

  • ce94780: Add page number to read PDFs and use generated IDs for PDF and markdown content

0.3.7

Patch Changes

  • b6a6606: feat: allow change host of ollama
  • b6a6606: chore: export ollama in default js runtime

0.3.6

Patch Changes

  • efa326a: chore: update package.json
  • Updated dependencies [efa326a]
  • Updated dependencies [efa326a]

0.3.5

Patch Changes

  • bc7a11c: fix: inline ollama build
  • 2fe2b81: fix: filter with multiple filters in ChromaDB
  • 5596e31: feat: improve @llamaindex/env
  • e74fe88: fix: change <-> to <=> in the SELECT query
  • be5df5b: fix: anthropic agent on multiple chat
  • Updated dependencies [5596e31]

0.3.4

Patch Changes

  • 1dce275: fix: export StorageContext on edge runtime
  • d10533e: feat: add hugging face llm
  • 2008efe: feat: add verbose mode to Agent
  • 5e61934: fix: remove clone object in CallbackManager.dispatchEvent
  • 9e74a43: feat: add top k to asQueryEngine
  • ee719a1: fix: streaming for ReAct Agent

0.3.3

Patch Changes

  • e8c41c5: fix: wrong gemini streaming chat response

0.3.2

Patch Changes

  • 61103b6: fix: streaming for Agent.createTask API

0.3.1

Patch Changes

  • 46227f2: fix: build error on next.js nodejs runtime

0.3.0

Minor Changes

  • 5016f21: feat: improve next.js/cloudflare/vite support

Patch Changes

0.2.13

Patch Changes

  • 6277105: fix: allow passing empty tools to llms

0.2.12

Patch Changes

  • d8d952d: feat: add gemini llm and embedding

0.2.11

Patch Changes

  • 87142b2: refactor: use ollama official sdk
  • 5a6cc0e: feat: support jina ai embedding and reranker
  • 87142b2: feat: support output to json format

0.2.10

Patch Changes

  • cf70edb: Llama 3 support

0.2.9

Patch Changes

  • 76c3fd6: Add score to source nodes response
  • 208282d: feat: init anthropic agent

    Removes the tool and function types from MessageType; use assistant instead. These two types were only available for OpenAI, and since OpenAI has deprecated the function type, we now support the Claude 3 tool call instead.

0.2.8

Patch Changes

  • Add ToolsFactory to generate agent tools

0.2.7

Patch Changes

0.2.6

Patch Changes

  • a3b4409: Fix agent streaming with new OpenAI models

0.2.5

Patch Changes

  • 7d56cdf: Allow OpenAIAgent to be called without tools

0.2.4

Patch Changes

  • 3bc77f7: gpt-4-turbo GA
  • 8d2b21e: Mistral 0.1.3

0.2.3

Patch Changes

  • f0704ec: Support streaming for OpenAI agent (and OpenAI tool calls)
  • Removed 'parentEvent' - Use 'event.reason?.computedCallers' instead
  • 3cbfa98: Added LlamaCloudIndex.fromDocuments

0.2.2

Patch Changes

  • 3f8407c: Add pipeline.register to create a managed index in LlamaCloud
  • 60a1603: fix: make edge run build after core
  • fececd8: feat: add tool factory
  • 1115f83: fix: throw error when no pipelines exist for the retriever
  • 7a23cc6: feat: improve CallbackManager
  • ea467fa: Update the list of supported Azure OpenAI API versions as of 2024-04-02.
  • 6d9e015: feat: use claude3 with react agent
  • 0b665bd: feat: add wikipedia tool
  • 24b4033: feat: add result type json
  • 8b28092: Add support for doc store strategies to VectorStoreIndex.fromDocuments
  • Updated dependencies [7a23cc6]

0.2.1

Patch Changes

  • 41210df: Add auto create milvus collection and add milvus node metadata
  • 137cf67: Use Pinecone namespaces for all operations
  • 259c842: Add support for edge runtime by using @llamaindex/edge

0.2.0

Minor Changes

  • bf583a7: Use parameter object for retrieve function of Retriever (to align usage with query function of QueryEngine)

Patch Changes

  • d2e8d0c: add support for Milvus vector store
  • aefc326: feat: experimental package + json query engine
  • 484a710: - Add missing exports:
    • IndexStructType,
    • IndexDict,
    • jsonToIndexStruct,
    • IndexList,
    • IndexStruct
    • Fix IndexDict.toJson() method
  • d766bd0: Add streaming to agents
  • dd95927: add Claude Haiku support and update anthropic SDK

0.1.21

Patch Changes

  • 552a61a: Add quantized parameter to HuggingFaceEmbedding
  • d824876: Add support for Claude 3

0.1.20

Patch Changes

  • 64683a5: fix: prefix messages always true
  • 698cd9c: fix: step wise agent + examples
  • 7257751: fixed removeRefDocNode and persist store on delete
  • 5116ad8: fix: compatibility issue with Deno
  • Updated dependencies [5116ad8]

0.1.19

Patch Changes

  • 026d068: feat: enhance pinecone usage

0.1.18

Patch Changes

  • 90027a7: Add splitLongSentences option to SimpleNodeParser
  • c57bd11: feat: update and refactor title extractor

0.1.17

Patch Changes

  • c8396c5: feat: add base evaluator and correctness evaluator
  • c8396c5: feat: add base evaluator and correctness evaluator
  • cf87f84: fix: type backward compatibility
  • 09bf27a: Add Groq LLM to LlamaIndex
  • Updated dependencies [cf87f84]

0.1.16

Patch Changes

0.1.15

Patch Changes

  • 3a6e287: build: improve tree-shake & reduce unused package import

0.1.14

Patch Changes

0.1.13

Patch Changes

  • b8be4c0: build: use ESM as default
  • 65d8346: feat: abstract @llamaindex/env package

0.1.12

Patch Changes

  • a5e4e6d: Add using a managed index from LlamaCloud
  • cfdd6db: fix: update pinecone vector store
  • 59f9fb6: Add Fireworks to LlamaIndex
  • 95add73: feat: multi-document agent

0.1.11

Patch Changes

  • 255ae7d: chore: update example (performs better with default model)
  • cf3b757: feat: add filtering of metadata to PGVectorStore
  • ee9f3f3: chore: refactor openai agent utils
  • e78e9f4: feat(reranker): cohere reranker
  • f205358: feat: markdown node parser
  • dd05413: feat: use batching in vector store index
  • 383933a: Add reader for LlamaParse

0.1.10

Patch Changes

  • b6c1500: feat(embedBatchSize): add batching for embeddings
  • 6cc3a36: fix: update VectorIndexRetriever constructor parameters' type.
  • cd82947: feat(queryEngineTool): add query engine tool to agents

0.1.9

Patch Changes

  • 09464e6: add OpenAIAgent (thanks @EmanuelCampos)

0.1.8

Patch Changes

  • d903da6: easier prompt customization for SimpleResponseBuilder
  • ab9d941: fix(cyclic): remove cyclic structures from transform hash
  • 177b446: chore: improve extractors prompt

0.1.7

Patch Changes

  • d687c11: feat(router): add router query engine

0.1.6

Patch Changes

  • cf44640: fix: instanceof issue

    This will fix QueryEngine cannot run.

  • 7231ddb: feat: allow SimpleDirectoryReader to get a string

0.1.5

Patch Changes

  • 8a9b78a: chore: split readers into different files

0.1.4

Patch Changes

  • 88696e1: refactor: use pdf2json instead of pdfjs-dist

    Please add pdf2json to serverComponentsExternalPackages if you have to parse pdf in runtime.

    // next.config.js
    /** @type {import('next').NextConfig} */
    const nextConfig = {
      experimental: {
        serverComponentsExternalPackages: ["pdf2json"],
      },
    };
    
    module.exports = nextConfig;

0.1.3

Patch Changes

  • 9ce7d3d: update dependencies
  • 7d50196: fix: output target causes not implemented error

0.1.2

  • e4b807a: fix: invalid package.json

0.1.1

No changes for this release.

0.1.0

Minor Changes

  • 3154f52: chore: add qdrant readme

Patch Changes

  • bb66cb7: add new OpenAI embeddings (with dimension reduction support)

0.0.51

Patch Changes

  • fda8024: revert: export conditions not working with moduleResolution node

0.0.50

Patch Changes

  • 8a729cd: fix bugs in Together.AI integration (thanks @Nutlope for reporting)

0.0.49

Patch Changes

  • eee3922: feat(qdrant): Add Qdrant Vector DB
  • e2790da: Preview: Add ingestion pipeline (incl. different strategies to handle doc store duplicates)
  • bff40f2: feat: use conditional exports

    The benefit of conditional exports is we split the llamaindex into different files. This will improve the tree shake if you are building web apps.

    This also requires node16 (see https://nodejs.org/api/packages.html#conditional-exports).

    If you are seeing TypeScript issue TS2724 ('llamaindex' has no exported member named XXX):

    1. update moduleResolution to bundler in tsconfig.json (best for web applications like Next.js and Vite, but this still works for ts-node or tsx).
    2. consider using ES modules in your project: add "type": "module" to package.json and update moduleResolution to node16 or nodenext in tsconfig.json.

    We still support both cjs and esm, but you should update tsconfig.json to keep TypeScript happy.
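For example, the first option above corresponds to a tsconfig.json fragment along these lines (a minimal sketch; merge it into your existing compiler options):

```json
{
  "compilerOptions": {
    "moduleResolution": "bundler"
  }
}
```

The second option instead adds "type": "module" to package.json and sets "moduleResolution" to "node16" or "nodenext" in tsconfig.json.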

  • 2d8845b: feat(extractors): add keyword extractor and base extractor

0.0.48

Patch Changes

  • 34a26e5: Remove HistoryChatEngine and use ChatHistory for all chat engines

0.0.47

Patch Changes

  • 844029d: Add streaming support for QueryEngine (and unify streaming interface with ChatEngine)
  • 844029d: Breaking: Use parameter object for query and chat methods of ChatEngine and QueryEngine

0.0.46

Patch Changes

  • 977f284: fixing import statement
  • 5d3bb66: fix: class SimpleKVStore might throw error in ES module
  • f18c9f6: refactor: Updated low-level streaming interface

0.0.45

Patch Changes

  • 2e6b36e: feat: support together AI

0.0.44

Patch Changes

  • 648482b: Feat: Add support for Chroma DB as a vector store

0.0.43

Patch Changes

  • Fix performance issue parsing nodes: use regex to split texts

0.0.42

Patch Changes

  • 16f04c7: Add local embeddings using hugging face
  • 16f04c7: Add sentence window retrieval

0.0.41

Patch Changes

  • c835f78: Use compromise as sentence tokenizer
  • c835f78: Removed pdf-parse, and directly use latest pdf.js
  • c835f78: Added pinecone vector DB
  • c835f78: Added support for Ollama

0.0.40

Patch Changes

  • e9f6de1: Added support for multi-modal RAG (retriever and query engine) incl. an example. Fixed persisting and loading image vector stores
  • 606ffa4: Updated Astra client and added associated type changes

0.0.39

Patch Changes

  • 21510bd: Added support for MistralAI (LLM and Embeddings)
  • 25141b8: Add support for AstraDB vector store

0.0.38

Patch Changes

  • 786c25d: Fixes to the PGVectorStore (thanks @mtutty)
  • bf9e263: Azure bugfix (thanks @parhammmm)
  • bf9e263: AssemblyAI updates (thanks @Swimburger)
  • 786c25d: Add GPT-4 Vision support (thanks @marcusschiesser)
  • bf9e263: Internationalization of docs (thanks @hexapode and @disiok)

0.0.37

Patch Changes

  • 3bab231: Fixed errors (#225 and #226). Thanks @marcusschiesser

0.0.36

Patch Changes

  • Support for Claude 2.1
  • Add AssemblyAI integration (thanks @Swimburger)
  • Use cryptoJS (thanks @marcusschiesser)
  • Add PGVectorStore (thanks @mtutty)
  • Add CLIP embeddings (thanks @marcusschiesser)
  • Add MongoDB support (thanks @marcusschiesser)

0.0.35

Patch Changes

  • 63f2108: Add multimodal support (thanks @marcusschiesser)

0.0.34

Patch Changes

  • 2a27e21: Add support for gpt-3.5-turbo-1106

0.0.33

Patch Changes

  • 5e2e92c: gpt-4-1106-preview and gpt-4-vision-preview from OpenAI dev day

0.0.32

Patch Changes

  • 90c0b83: Add HTMLReader (thanks @mtutty)
  • dfd22aa: Add observer/filter to the SimpleDirectoryReader (thanks @mtutty)

0.0.31

Patch Changes

  • 6c55b2d: Give HistoryChatEngine pluggable options (thanks @marcusschiesser)
  • 8aa8c65: Add SimilarityPostProcessor (thanks @TomPenguin)
  • 6c55b2d: Added LLMMetadata (thanks @marcusschiesser)

0.0.30

Patch Changes

  • 139abad: Streaming improvements including Anthropic (thanks @kkang2097)
  • 139abad: Portkey integration (Thank you @noble-varghese)
  • eb0e994: Add export for PromptHelper (thanks @zigamall)
  • eb0e994: Publish ESM module again
  • 139abad: Pinecone demo (thanks @Einsenhorn)

0.0.29

Patch Changes

  • a52143b: Added DocxReader for Word documents (thanks @jayantasamaddar)
  • 1b7fd95: Updated OpenAI streaming (thanks @kkang2097)
  • 0db3f41: Migrated to Tiktoken lite, which hopefully fixes the Windows issue

0.0.28

Patch Changes

  • 96bb657: Typesafe metadata (thanks @TomPenguin)
  • 96bb657: MongoReader (thanks @kkang2097)
  • 837854d: Make OutputParser less strict and add tests (Thanks @kkang2097)

0.0.27

Patch Changes

  • 4a5591b: Chat History summarization (thanks @marcusschiesser)
  • 4a5591b: Notion database support (thanks @TomPenguin)
  • 4a5591b: KeywordIndex (thanks @swk777)

0.0.26

Patch Changes

  • 5bb55bc: Add notion loader (thank you @TomPenguin!)

0.0.25

Patch Changes

  • e21eca2: OpenAI 4.3.1 and Anthropic 0.6.2
  • 40a8f07: Update READMEs (thanks @andfk)
  • 40a8f07: Bug: missing exports from storage (thanks @aashutoshrathi)

0.0.24

Patch Changes

  • e4af7b3: Renamed ListIndex to SummaryIndex to better indicate its use.
  • 259fe63: Strong types for prompts.

0.0.23

Patch Changes

  • Added MetadataMode to ResponseSynthesizer (thanks @TomPenguin)
  • 9d6b2ed: Added Markdown Reader (huge shoutout to @swk777)

0.0.22

Patch Changes

  • 454f3f8: CJK sentence splitting (thanks @TomPenguin)
  • 454f3f8: Export options for Windows formatted text files
  • 454f3f8: Disable long sentence splitting by default
  • 454f3f8: Make sentence splitter not split on decimals.
  • 99df58f: Anthropic 0.6.1 and OpenAI 4.2.0. Changed Anthropic timeout back to 60s

0.0.21

Patch Changes

  • f7a57ca: Fixed metadata deserialization (thanks @marcagve)
  • 0a09de2: Update to OpenAI 4.1.0
  • f7a57ca: ChatGPT optimized prompts (thanks @LoganMarkewich)

0.0.20

Patch Changes

  • b526a2d: added additionalSessionOptions and additionalChatOptions
  • b526a2d: OpenAI v4.0.1
  • b526a2d: OpenAI moved timeout back to 60 seconds

0.0.19

Patch Changes

  • a747f28: Add PapaCSVReader (thank you @swk777)
  • 355910b: OpenAI v4 (final), Anthropic 0.6, Replicate 0.16.1
  • 355910b: Breaking: Removed NodeWithEmbeddings (just use BaseNode)

0.0.18

Patch Changes

  • 824c13c: Breaking: allow documents to be reimported with hash checking.
  • 18b8915: Update storage exports (thanks @TomPenguin)
  • ade9d8f: Bug fix: use session in OpenAI Embeddings (thanks @swk777)
  • 824c13c: Breaking: removed nodeId and docId. Just use id_

0.0.17

Patch Changes

  • f80b062: Breaking: changed default temp to 0.1 matching new Python change by @logan-markewich
  • b3fec86: Add support for new Replicate 4 bit Llama2 models
  • b3fec86: Bug fixes for Llama2 Replicate

0.0.16

Patch Changes

  • ec12633: Breaking: make vector store abstraction async (thank you @tyre for the PR)
  • 9214b06: Fix persistence bug (thanks @HenryHengZJ)
  • 3e52972: Fix Node initializer bug (thank you @tyre for the PR)
  • 3316c6b: Add Azure OpenAI support
  • 3316c6b: OpenAI Node v4-beta.8

0.0.15

Patch Changes

  • b501eb5: Added Anthropic Claude support
  • f9d1a6e: Add Top P

0.0.14

Patch Changes

  • 4ef334a: JSDoc and Github Actions thanks to @kevinlu1248, @sweep-ai
  • 0af7773: Added Meta strategy for Llama2
  • bea4af9: Fixed sentence splitter overlap logic
  • 4ef334a: asQueryEngine bug fix from @ysak-y

0.0.13

Patch Changes

  • 4f6f245: Moved to OpenAI NPM v4

0.0.12

Patch Changes

  • 68bdaaa: Updated dependencies and README

0.0.11

Patch Changes

  • fb7fb76: Added back PDF loader

0.0.10

Patch Changes

  • 6f2cb31: Fixed tokenizer decoder

0.0.9

Patch Changes

  • 02d9bb0: Remove ESM export for now (causing issues with edge functions)

0.0.8

Patch Changes

  • ea5038e: Disabling PDF loader for now to fix module import

0.0.7

Patch Changes

  • 9fa6d4a: Make second argument of fromDocuments optional again

0.0.6

Patch Changes

  • Better persistence interface (thanks Logan)

0.0.5

Patch Changes

  • 5a765aa: Updated README

0.0.4

Patch Changes

  • c65d671: Added README and CONTRIBUTING

0.0.3

Patch Changes

  • ca9410f: Added more documentation

0.0.2

Patch Changes

  • Initial release