
Commit dbd0f24

Merge branch 'master' into speech-voice-types

2 parents: 233830a + 0b33959


49 files changed: +5254 −462 lines

.release-please-manifest.json

+1 −1

```diff
@@ -1,3 +1,3 @@
 {
-  ".": "4.86.2"
+  ".": "4.87.2"
 }
```

.stats.yml

+2 −2

```diff
@@ -1,2 +1,2 @@
-configured_endpoints: 74
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-5d30684c3118d049682ea30cdb4dbef39b97d51667da484689193dc40162af32.yml
+configured_endpoints: 81
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-be834d63e326a82494e819085137f5eb15866f3fc787db1f3afe7168d419e18a.yml
```

CHANGELOG.md

+24 −0

```diff
@@ -1,5 +1,29 @@
 # Changelog
 
+## 4.87.2 (2025-03-11)
+
+Full Changelog: [v4.87.1...v4.87.2](https://github.com/openai/openai-node/compare/v4.87.1...v4.87.2)
+
+### Bug Fixes
+
+* **responses:** correctly add output_text ([4ceb5cc](https://github.com/openai/openai-node/commit/4ceb5cc516b8c75d46f0042534d7658796a8cd71))
+
+## 4.87.1 (2025-03-11)
+
+Full Changelog: [v4.87.0...v4.87.1](https://github.com/openai/openai-node/compare/v4.87.0...v4.87.1)
+
+### Bug Fixes
+
+* correct imports ([5cdf17c](https://github.com/openai/openai-node/commit/5cdf17cec33da7cf540b8bdbcfa30c0c52842dd1))
+
+## 4.87.0 (2025-03-11)
+
+Full Changelog: [v4.86.2...v4.87.0](https://github.com/openai/openai-node/compare/v4.86.2...v4.87.0)
+
+### Features
+
+* **api:** add /v1/responses and built-in tools ([119b584](https://github.com/openai/openai-node/commit/119b5843a18b8014167c8d2031d75c08dbf400a3))
+
 ## 4.86.2 (2025-03-05)
 
 Full Changelog: [v4.86.1...v4.86.2](https://github.com/openai/openai-node/compare/v4.86.1...v4.86.2)
```

README.md

+50 −101
````diff
@@ -1,9 +1,3 @@
-> [!IMPORTANT]
-> We're actively working on a new alpha version that migrates from `node-fetch` to builtin fetch.
->
-> Please try it out and let us know if you run into any issues!
-> https://community.openai.com/t/your-feedback-requested-node-js-sdk-5-0-0-alpha/1063774
-
 # OpenAI TypeScript and JavaScript API Library
 
 [![NPM version](https://img.shields.io/npm/v/openai.svg)](https://npmjs.org/package/openai) ![npm bundle size](https://img.shields.io/bundlephobia/minzip/openai) [![JSR Version](https://jsr.io/badges/@openai/openai)](https://jsr.io/@openai/openai)
````
````diff
@@ -27,120 +21,74 @@ deno add jsr:@openai/openai
 npx jsr add @openai/openai
 ```
 
-These commands will make the module importable from the `@openai/openai` scope:
-
-You can also [import directly from JSR](https://jsr.io/docs/using-packages#importing-with-jsr-specifiers) without an install step if you're using the Deno JavaScript runtime:
+These commands will make the module importable from the `@openai/openai` scope. You can also [import directly from JSR](https://jsr.io/docs/using-packages#importing-with-jsr-specifiers) without an install step if you're using the Deno JavaScript runtime:
 
 ```ts
 import OpenAI from 'jsr:@openai/openai';
 ```
 
 ## Usage
 
-The full API of this library can be found in [api.md file](api.md) along with many [code examples](https://github.com/openai/openai-node/tree/master/examples). The code below shows how to get started using the chat completions API.
+The full API of this library can be found in [api.md file](api.md) along with many [code examples](https://github.com/openai/openai-node/tree/master/examples).
+
+The primary API for interacting with OpenAI models is the [Responses API](https://platform.openai.com/docs/api-reference/responses). You can generate text from the model with the code below.
 
-<!-- prettier-ignore -->
 ```ts
 import OpenAI from 'openai';
 
 const client = new OpenAI({
   apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
 });
 
-async function main() {
-  const chatCompletion = await client.chat.completions.create({
-    messages: [{ role: 'user', content: 'Say this is a test' }],
-    model: 'gpt-4o',
-  });
-}
+const response = await client.responses.create({
+  model: 'gpt-4o',
+  instructions: 'You are a coding assistant that talks like a pirate',
+  input: 'Are semicolons optional in JavaScript?',
+});
 
-main();
+console.log(response.output_text);
 ```
 
-## Streaming responses
-
-We provide support for streaming responses using Server Sent Events (SSE).
+The previous standard (supported indefinitely) for generating text is the [Chat Completions API](https://platform.openai.com/docs/api-reference/chat). You can use that API to generate text from the model with the code below.
 
 ```ts
 import OpenAI from 'openai';
 
-const client = new OpenAI();
+const client = new OpenAI({
+  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
+});
 
-async function main() {
-  const stream = await client.chat.completions.create({
-    model: 'gpt-4o',
-    messages: [{ role: 'user', content: 'Say this is a test' }],
-    stream: true,
-  });
-  for await (const chunk of stream) {
-    process.stdout.write(chunk.choices[0]?.delta?.content || '');
-  }
-}
+const completion = await client.chat.completions.create({
+  model: 'gpt-4o',
+  messages: [
+    { role: 'developer', content: 'Talk like a pirate.' },
+    { role: 'user', content: 'Are semicolons optional in JavaScript?' },
+  ],
+});
 
-main();
+console.log(completion.choices[0].message.content);
 ```
 
-If you need to cancel a stream, you can `break` from the loop or call `stream.controller.abort()`.
-
-### Chat Completion streaming helpers
+## Streaming responses
 
-This library also provides several conveniences for streaming chat completions, for example:
+We provide support for streaming responses using Server Sent Events (SSE).
 
 ```ts
 import OpenAI from 'openai';
 
-const openai = new OpenAI();
-
-async function main() {
-  const stream = await openai.beta.chat.completions.stream({
-    model: 'gpt-4o',
-    messages: [{ role: 'user', content: 'Say this is a test' }],
-    stream: true,
-  });
-
-  stream.on('content', (delta, snapshot) => {
-    process.stdout.write(delta);
-  });
-
-  // or, equivalently:
-  for await (const chunk of stream) {
-    process.stdout.write(chunk.choices[0]?.delta?.content || '');
-  }
-
-  const chatCompletion = await stream.finalChatCompletion();
-  console.log(chatCompletion); // {id: "…", choices: […], …}
-}
-
-main();
-```
-
-See [helpers.md](helpers.md#chat-events) for more details.
-
-### Request & Response types
-
-This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:
-
-<!-- prettier-ignore -->
-```ts
-import OpenAI from 'openai';
+const client = new OpenAI();
 
-const client = new OpenAI({
-  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
+const stream = await client.responses.create({
+  model: 'gpt-4o',
+  input: 'Say "Sheep sleep deep" ten times fast!',
+  stream: true,
 });
 
-async function main() {
-  const params: OpenAI.Chat.ChatCompletionCreateParams = {
-    messages: [{ role: 'user', content: 'Say this is a test' }],
-    model: 'gpt-4o',
-  };
-  const chatCompletion: OpenAI.Chat.ChatCompletion = await client.chat.completions.create(params);
+for await (const event of stream) {
+  console.log(event);
 }
-
-main();
 ```
 
-Documentation for each method, request param, and response field are available in docstrings and will appear on hover in most modern editors.
-
 ## File uploads
 
 Request parameters that correspond to file uploads can be passed in many different forms:
````
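The streaming section in the README diff above rides on Server-Sent Events. As background, here is a minimal sketch of decoding the `data:` lines of one SSE frame into parsed events; `parseSseFrame` is a hypothetical helper for illustration, not part of the SDK:

```typescript
// Hypothetical helper (not the SDK's implementation): extract the `data:`
// payloads from a Server-Sent Events frame and JSON-parse each one.
// Chat Completions streams terminate with a `[DONE]` sentinel, which we skip.
function parseSseFrame(frame: string): unknown[] {
  return frame
    .split('\n')
    .filter((line) => line.startsWith('data: '))
    .map((line) => line.slice('data: '.length))
    .filter((payload) => payload !== '[DONE]')
    .map((payload) => JSON.parse(payload));
}

const events = parseSseFrame('data: {"type":"response.output_text.delta","delta":"Arr"}');
console.log(events);
```

The SDK's `for await` loop in the diff is yielding exactly such parsed event objects, one per `data:` payload.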
````diff
@@ -265,17 +213,17 @@ Note that requests which time out will be [retried twice by default](#retries).
 All object responses in the SDK provide a `_request_id` property which is added from the `x-request-id` response header so that you can quickly log failing requests and report them back to OpenAI.
 
 ```ts
-const completion = await client.chat.completions.create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-4o' });
-console.log(completion._request_id) // req_123
+const response = await client.responses.create({ model: 'gpt-4o', input: 'testing 123' });
+console.log(response._request_id) // req_123
 ```
 
 You can also access the Request ID using the `.withResponse()` method:
 
 ```ts
-const { data: stream, request_id } = await openai.chat.completions
+const { data: stream, request_id } = await openai.responses
   .create({
-    model: 'gpt-4',
-    messages: [{ role: 'user', content: 'Say this is a test' }],
+    model: 'gpt-4o',
+    input: 'Say this is a test',
     stream: true,
   })
   .withResponse();
````
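The README text above says `_request_id` is "added from the `x-request-id` response header". A minimal sketch of that pattern, with a hypothetical `attachRequestId` helper (the SDK's internals are not shown in this diff):

```typescript
// Hypothetical illustration of the pattern described in the README: copy the
// `x-request-id` response header onto the parsed body as `_request_id`.
function attachRequestId<T extends object>(
  body: T,
  headers: Headers,
): T & { _request_id: string | null } {
  return { ...body, _request_id: headers.get('x-request-id') };
}

const headers = new Headers({ 'x-request-id': 'req_123' });
const response = attachRequestId({ output_text: 'ok' }, headers);
console.log(response._request_id); // 'req_123'
```

`Headers.get` is case-insensitive, so the header is found regardless of how the server capitalizes it.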
````diff
@@ -355,12 +303,6 @@ console.log(result.choices[0]!.message?.content);
 
 For more information on support for the Azure API, see [azure.md](azure.md).
 
-## Automated function calls
-
-We provide the `openai.beta.chat.completions.runTools({…})` convenience helper for using function tool calls with the `/chat/completions` endpoint which automatically call the JavaScript functions you provide and sends their results back to the `/chat/completions` endpoint, looping as long as the model requests tool calls.
-
-For more information see [helpers.md](helpers.md#automated-function-calls).
-
 ## Advanced Usage
 
 ### Accessing raw Response data (e.g., headers)
````
````diff
@@ -373,17 +315,19 @@ You can also use the `.withResponse()` method to get the raw `Response` along wi
 ```ts
 const client = new OpenAI();
 
-const response = await client.chat.completions
-  .create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-4o' })
+const httpResponse = await client.responses
+  .create({ model: 'gpt-4o', input: 'say this is a test.' })
   .asResponse();
-console.log(response.headers.get('X-My-Header'));
-console.log(response.statusText); // access the underlying Response object
 
-const { data: chatCompletion, response: raw } = await client.chat.completions
-  .create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-4o' })
+// access the underlying web standard Response object
+console.log(httpResponse.headers.get('X-My-Header'));
+console.log(httpResponse.statusText);
+
+const { data: modelResponse, response: raw } = await client.responses
+  .create({ model: 'gpt-4o', input: 'say this is a test.' })
   .withResponse();
 console.log(raw.headers.get('X-My-Header'));
-console.log(chatCompletion);
+console.log(modelResponse);
 ```
 
 ### Making custom/undocumented requests
````
````diff
@@ -432,6 +376,11 @@ validate or strip extra properties from the response from the API.
 
 ### Customizing the fetch client
 
+> We're actively working on a new alpha version that migrates from `node-fetch` to builtin fetch.
+>
+> Please try it out and let us know if you run into any issues!
+> https://community.openai.com/t/your-feedback-requested-node-js-sdk-5-0-0-alpha/1063774
+
 By default, this library uses `node-fetch` in Node, and expects a global `fetch` function in other environments.
 
 If you would prefer to use a global, web-standards-compliant `fetch` function even in a Node environment,
````
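A fetch replacement like the one this section describes must keep the web-standard `fetch` call shape. A sketch of a logging wrapper under that assumption (`makeLoggingFetch` is a hypothetical name, not an SDK export):

```typescript
// Sketch: build a fetch-compatible function that logs each request before
// delegating to a base fetch implementation (e.g. the global fetch in Node 18+).
function makeLoggingFetch(baseFetch: typeof fetch): typeof fetch {
  return async (input, init) => {
    console.log('request:', String(input), init?.method ?? 'GET');
    return baseFetch(input, init);
  };
}

const loggingFetch = makeLoggingFetch(fetch);
```

Because the wrapper has the same signature as `fetch` itself, it can be supplied anywhere a client accepts a custom fetch function.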
