

@langchain/openai

langchain-ai · 6.5m downloads · MIT · 1.1.2 · TypeScript support: included

OpenAI integrations for LangChain.js

llm, ai, gpt3, chain, prompt, prompt engineering, chatgpt, machine learning, ml, openai, embeddings, vectorstores

readme

@langchain/openai

This package contains the LangChain.js integrations for OpenAI through their SDK.

Installation

```bash npm2yarn
npm install @langchain/openai @langchain/core
```
This package, along with the main LangChain package, depends on [`@langchain/core`](https://npmjs.com/package/@langchain/core/).
If you are using this package with other LangChain packages, you should make sure that all of the packages depend on the same instance of `@langchain/core`.
You can do so by adding appropriate fields to your project's `package.json` like this:

```json
{
  "name": "your-project",
  "version": "0.0.0",
  "dependencies": {
    "@langchain/core": "^0.3.0",
    "@langchain/openai": "^0.0.0"
  },
  "resolutions": {
    "@langchain/core": "^0.3.0"
  },
  "overrides": {
    "@langchain/core": "^0.3.0"
  },
  "pnpm": {
    "overrides": {
      "@langchain/core": "^0.3.0"
    }
  }
}
```
Which field you need depends on the package manager you're using, but we recommend adding fields for all of npm (`overrides`), yarn (`resolutions`), and pnpm (`pnpm.overrides`) to maximize compatibility.

Chat Models

This package contains the ChatOpenAI class, which is the recommended way to interface with the OpenAI series of models.

To use, install the requirements, and configure your environment.

```bash
export OPENAI_API_KEY=your-api-key
```

Then initialize the model:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4-1106-preview",
});
const response = await model.invoke([new HumanMessage("Hello world!")]);
```

Streaming

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4-1106-preview",
});
// stream() returns an async iterable of message chunks
const stream = await model.stream([new HumanMessage("Hello world!")]);
for await (const chunk of stream) {
  console.log(chunk.content);
}
```

Embeddings

This package also adds support for OpenAI's embeddings model.

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings({
  apiKey: process.env.OPENAI_API_KEY,
});
// Embed a single query string
const res = await embeddings.embedQuery("Hello world");
// Or embed multiple documents in one call
const vectors = await embeddings.embedDocuments(["Hello world", "Goodbye"]);
```

Development

To develop the OpenAI package, you'll need to follow these instructions:

Install dependencies

```bash
pnpm install
```

Build the package

```bash
pnpm build
```

Or from the repo root:

```bash
pnpm build --filter=@langchain/openai
```

Run tests

Test files should live within a `tests/` folder inside the `src/` folder. Unit tests should end in `.test.ts` and integration tests should end in `.int.test.ts`:

```bash
pnpm test
pnpm test:int
```

Lint & Format

Run the linter & formatter to ensure your code is up to standard:

```bash
pnpm lint && pnpm format
```

Adding new entrypoints

If you add a new file to be exported, either import & re-export it from `src/index.ts`, or add it to the `exports` field in `package.json` and run `pnpm build` to generate the new entrypoint.
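As a rough sketch of the second option, a new subpath entrypoint might be declared with conditional exports like this. The `./tools` name and `dist/` paths here are hypothetical — the actual paths depend on the package's build configuration:

```json
{
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.js",
      "require": "./dist/index.cjs"
    },
    "./tools": {
      "types": "./dist/tools.d.ts",
      "import": "./dist/tools.js",
      "require": "./dist/tools.cjs"
    }
  }
}
```

With an entry like this, consumers could write `import { ... } from "@langchain/openai/tools"` and Node would resolve the ESM, CJS, or type-declaration file as appropriate.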

changelog

@langchain/openai

1.1.2

Patch Changes

1.1.1

Patch Changes

1.1.0

Minor Changes

  • 8319201: hoist message/tool conversion utilities from classes

Patch Changes

  • 4906522: fix(openai): pair reasoning with function_call id

1.0.0

This release updates the package for compatibility with LangChain v1.0. See the v1.0 release notes for details on what's new.

0.6.16

Patch Changes

  • b8ffc1e: fix(openai): Remove raw OpenAI fields from token usage

0.6.15

Patch Changes

  • e63c7cc: fix(openai): Convert OpenAI responses API usage to tracing format

0.6.14

Patch Changes

  • d38e9d6: fix(openai): fix streaming in openai

0.6.12

Patch Changes

  • 41bd944: support base64 embeddings format
  • 707a768: handle undefined disableStreaming to restore streaming functionality

0.6.11

Patch Changes

  • 65459e3: use proper casing for reasoning effort param

0.6.10

Patch Changes

  • 4a3f5af: add verbosity to json schema response format (#8754)
  • 424360b: re-add reasoning_effort param