
@ai-sdk/openai

vercel · 1.9m downloads · Apache-2.0 · 1.1.0 · TypeScript support: included


readme

AI SDK - OpenAI Provider

The OpenAI provider for the AI SDK contains language model support for the OpenAI chat and completion APIs and embedding model support for the OpenAI embeddings API.

Setup

The OpenAI provider is available in the @ai-sdk/openai module. You can install it with:

npm i @ai-sdk/openai

Provider Instance

You can import the default provider instance openai from @ai-sdk/openai:

import { openai } from '@ai-sdk/openai';
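
If you need custom settings (for example a different API key source or an OpenAI-compatible baseURL), the package also exposes a createOpenAI factory. A minimal sketch, assuming environment-provided credentials; the values shown are illustrative:

import { createOpenAI } from '@ai-sdk/openai';

// Illustrative configuration; adjust to your environment.
const customOpenAI = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY, // defaults to the OPENAI_API_KEY env var
  baseURL: 'https://api.openai.com/v1', // e.g. an OpenAI-compatible endpoint
});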

Example

import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const { text } = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
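
The provider also covers the OpenAI embeddings API mentioned above. A minimal sketch using the embed helper from the ai package; the model id 'text-embedding-3-small' is used purely as an illustrative example:

import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';

// Embed a single value with an OpenAI embedding model (illustrative model id).
const { embedding } = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'sunny day at the beach',
});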

Documentation

Please check out the OpenAI provider documentation for more information.

changelog

@ai-sdk/openai

1.1.0

Minor Changes

  • 62ba5ad: release: AI SDK 4.1

Patch Changes

1.0.20

Patch Changes

1.0.19

Patch Changes

  • 218d001: feat (provider): Add maxImagesPerCall setting to all image providers.

1.0.18

Patch Changes

  • fe816e4: fix (provider/openai): streamObject with o1

1.0.17

Patch Changes

  • ba62cf2: feat (provider/openai): automatically map maxTokens to max_completion_tokens for reasoning models
  • 3c3fae8: fix (provider/openai): add o1-mini-2024-09-12 and o1-preview-2024-09-12 configurations

1.0.16

Patch Changes

1.0.15

Patch Changes

  • f8c6acb: feat (provider/openai): automatically simulate streaming for reasoning models
  • d0041f7: feat (provider/openai): improved system message support for reasoning models
  • 4d2f97b: feat (provider/openai): improve automatic setting removal for reasoning models

1.0.14

Patch Changes

  • 19a2ce7: feat (ai/core): add aspectRatio and seed options to generateImage
  • 6337688: feat: change image generation errors to warnings
  • Updated dependencies [19a2ce7]
  • Updated dependencies [19a2ce7]
  • Updated dependencies [6337688]

1.0.13

Patch Changes

  • b19aa82: feat (provider/openai): add predicted outputs token usage

1.0.12

Patch Changes

  • a4241ff: feat (provider/openai): add o3 reasoning model support

1.0.11

Patch Changes

1.0.10

Patch Changes

  • d4fad4e: fix (provider/openai): fix reasoning model detection

1.0.9

Patch Changes

  • 3fab0fb: feat (provider/openai): support reasoning_effort setting
  • e956eed: feat (provider/openai): update model list and add o1
  • 6faab13: feat (provider/openai): simulated streaming setting

1.0.8

Patch Changes

1.0.7

Patch Changes

1.0.6

Patch Changes

  • a9a19cb: fix (provider/openai,groq): prevent sending duplicate tool calls

1.0.5

Patch Changes

  • fc18132: feat (ai/core): experimental output for generateText

1.0.4

Patch Changes

1.0.3

Patch Changes

  • b748dfb: feat (providers): update model lists

1.0.2

Patch Changes

1.0.1

Patch Changes

  • 5e6419a: feat (provider/openai): support streaming for reasoning models

1.0.0

Major Changes

  • 66060f7: chore (release): bump major version to 4.0
  • 79644e9: chore (provider/openai): remove OpenAI facade
  • 0d3d3f5: chore (providers): remove baseUrl option

Patch Changes

  • Updated dependencies [b469a7e]
  • Updated dependencies [dce4158]
  • Updated dependencies [c0ddc24]
  • Updated dependencies [b1da952]
  • Updated dependencies [dce4158]
  • Updated dependencies [8426f55]
  • Updated dependencies [db46ce5]

1.0.0-canary.3

Patch Changes

1.0.0-canary.2

Patch Changes

1.0.0-canary.1

Major Changes

  • 79644e9: chore (provider/openai): remove OpenAI facade
  • 0d3d3f5: chore (providers): remove baseUrl option

Patch Changes

1.0.0-canary.0

Major Changes

  • 66060f7: chore (release): bump major version to 4.0

Patch Changes

0.0.72

Patch Changes

  • 0bc4115: feat (provider/openai): support predicted outputs

0.0.71

Patch Changes

  • 54a3a59: fix (provider/openai): support object-json mode without schema

0.0.70

Patch Changes

0.0.69

Patch Changes

0.0.68

Patch Changes

  • 741ca51: feat (provider/openai): support mp3 and wav audio inputs

0.0.67

Patch Changes

  • 39fccee: feat (provider/openai): provider name can be changed for 3rd party openai compatible providers

0.0.66

Patch Changes

  • 3f29c10: feat (provider/openai): support metadata field for distillation

0.0.65

Patch Changes

  • e8aed44: Add OpenAI cached prompt tokens to experimental_providerMetadata for generateText and streamText

0.0.64

Patch Changes

  • 5aa576d: feat (provider/openai): support store parameter for distillation

0.0.63

Patch Changes

0.0.62

Patch Changes

  • 7efa867: feat (provider/openai): simulated streaming for reasoning models

0.0.61

Patch Changes

  • 8132a60: feat (provider/openai): support reasoning token usage and max_completion_tokens

0.0.60

Patch Changes

0.0.59

Patch Changes

  • a0991ec: feat (provider/openai): add o1-preview and o1-mini models

0.0.58

Patch Changes

  • e0c36bd: feat (provider/openai): support image detail

0.0.57

Patch Changes

  • d1aaeae: feat (provider/openai): support ai sdk image download

0.0.56

Patch Changes

0.0.55

Patch Changes

  • 28cbf2e: fix (provider/openai): support tool call deltas when arguments are sent in the first chunk

0.0.54

Patch Changes

0.0.53

Patch Changes

0.0.52

Patch Changes

  • d5b6a15: feat (provider/openai): support partial usage information

0.0.51

Patch Changes

0.0.50

Patch Changes

0.0.49

Patch Changes

  • f42d9bd: fix (provider/openai): support OpenRouter streaming errors

0.0.48

Patch Changes

0.0.47

Patch Changes

  • 4ffbaee: fix (provider/openai): fix strict flag for structured outputs with tools
  • dd712ac: fix: use FetchFunction type to prevent self-reference
  • Updated dependencies [dd712ac]

0.0.46

Patch Changes

0.0.45

Patch Changes

0.0.44

Patch Changes

0.0.43

Patch Changes

0.0.42

Patch Changes

0.0.41

Patch Changes

  • 7a2eb27: feat (provider/openai): make role nullish to enhance provider support
  • Updated dependencies [9614584]
  • Updated dependencies [0762a22]

0.0.40

Patch Changes

0.0.39

Patch Changes

0.0.38

Patch Changes

  • 2b9da0f0: feat (core): support stopSequences setting.
  • 909b9d27: feat (ai/openai): Support legacy function calls
  • a5b58845: feat (core): support topK setting
  • 4aa8deb3: feat (provider): support responseFormat setting in provider api
  • 13b27ec6: chore (ai/core): remove grammar mode
  • Updated dependencies [2b9da0f0]
  • Updated dependencies [a5b58845]
  • Updated dependencies [4aa8deb3]
  • Updated dependencies [13b27ec6]

0.0.37

Patch Changes

  • 89947fc5: chore (provider/openai): update model list for type-ahead support

0.0.36

Patch Changes

0.0.35

Patch Changes

0.0.34

Patch Changes

0.0.33

Patch Changes

0.0.32

Patch Changes

  • 1b37b8b9: fix (@ai-sdk/openai): only send logprobs settings when logprobs are requested

0.0.31

Patch Changes

  • eba071dd: feat (@ai-sdk/azure): add azure openai completion support
  • 1ea890fe: feat (@ai-sdk/azure): add azure openai completion support

0.0.30

Patch Changes

0.0.29

Patch Changes

  • 4728c37f: feat (core): add text embedding model support to provider registry
  • 7910ae84: feat (providers): support custom fetch implementations
  • Updated dependencies [7910ae84]

0.0.28

Patch Changes

  • f9db8fd6: feat (@ai-sdk/openai): add parallelToolCalls setting

0.0.27

Patch Changes

  • fc9552ec: fix (@ai-sdk/azure): allow for nullish delta

0.0.26

Patch Changes

  • 7530f861: fix (@ai-sdk/openai): add internal dist to bundle

0.0.25

Patch Changes

  • 8b1362a7: chore (@ai-sdk/openai): expose models under /internal for reuse in other providers

0.0.24

Patch Changes

  • 0e78960c: fix (@ai-sdk/openai): make function name and arguments nullish

0.0.23

Patch Changes

  • a68fe74a: fix (@ai-sdk/openai): allow null tool_calls value.

0.0.22

Patch Changes

0.0.21

Patch Changes

0.0.20

Patch Changes

  • a1d08f3e: fix (provider/openai): handle error chunks when streaming

0.0.19

Patch Changes

  • beb8b739: fix (provider/openai): return unknown finish reasons as unknown

0.0.18

Patch Changes

  • fb42e760: feat (provider/openai): send user message content as text when possible

0.0.17

Patch Changes

0.0.16

Patch Changes

  • 2b18fa11: fix (provider/openai): remove object type validation

0.0.15

Patch Changes

0.0.14

Patch Changes

0.0.13

Patch Changes

  • 4e3c922: fix (provider/openai): introduce compatibility mode in which "stream_options" are not sent

0.0.12

Patch Changes

  • 6f48839: feat (provider/openai): add gpt-4o to the list of supported models
  • 1009594: feat (provider/openai): set stream_options/include_usage to true when streaming
  • 0f6bc4e: feat (ai/core): add embed function
  • Updated dependencies [0f6bc4e]

0.0.11

Patch Changes

0.0.10

Patch Changes

0.0.9

Patch Changes

0.0.8

Patch Changes

0.0.7

Patch Changes

  • 0833e19: Allow optional content to support Fireworks function calling.

0.0.6

Patch Changes

  • d6431ae: ai/core: add logprobs support (thanks @SamStenner for the contribution)
  • 25f3350: ai/core: add support for getting raw response headers.
  • Updated dependencies [d6431ae]
  • Updated dependencies [25f3350]

0.0.5

Patch Changes

  • eb150a6: ai/core: remove scaling of setting values (breaking change). If you were using the temperature, frequency penalty, or presence penalty settings, you need to update the providers and adjust the setting values.
  • Updated dependencies [eb150a6]

0.0.4

Patch Changes

  • c6fc35b: Add custom header and OpenAI project support.

0.0.3

Patch Changes

  • ab60b18: Simplified model construction by directly calling provider functions. Add create... functions to create provider instances.

0.0.2

Patch Changes

  • 2bff460: Fix build for release.

0.0.1

Patch Changes

  • 7b8791d: Support streams with 'chat.completion' objects.
  • 7b8791d: Rename baseUrl to baseURL. Automatically remove trailing slashes.
  • Updated dependencies [7b8791d]