These commands will make the module importable from the `@openai/openai` scope. You can also [import directly from JSR](https://jsr.io/docs/using-packages#importing-with-jsr-specifiers) without an install step if you're using the Deno JavaScript runtime:

```ts
import OpenAI from 'jsr:@openai/openai';
```

## Usage

The full API of this library can be found in [api.md file](api.md) along with many [code examples](https://github.com/openai/openai-node/tree/master/examples).

The primary API for interacting with OpenAI models is the [Responses API](https://platform.openai.com/docs/api-reference/responses). You can generate text from the model with the code below.

<!-- prettier-ignore -->
```ts
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
});

const response = await client.responses.create({
  model: 'gpt-4o',
  instructions: 'You are a coding assistant that talks like a pirate',
  input: 'Are semicolons optional in JavaScript?',
});

console.log(response.output_text);
```

The previous standard (supported indefinitely) for generating text is the [Chat Completions API](https://platform.openai.com/docs/api-reference/chat). You can use that API to generate text from the model with the code below.

```ts
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
});

const completion = await client.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'gpt-4o',
});
```

Documentation for each method, request param, and response field is available in docstrings and will appear on hover in most modern editors.
## File uploads
Request parameters that correspond to file uploads can be passed in many different forms:
All object responses in the SDK provide a `_request_id` property which is added from the `x-request-id` response header so that you can quickly log failing requests and report them back to OpenAI.
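
The underlying mechanism is just an HTTP response header; a self-contained sketch with a simulated `Response` (no SDK, no network):

```ts
// Simulate a server response carrying the x-request-id header.
const simulated = new Response('{}', {
  headers: { 'x-request-id': 'req_123' },
});

// Header lookup is case-insensitive per the Fetch standard.
console.log(simulated.headers.get('x-request-id')); // req_123
```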
```ts
const completion = await client.chat.completions.create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-4o' });
console.log(completion._request_id);
```

For more information on support for the Azure API, see [azure.md](azure.md).

## Automated function calls

We provide the `openai.beta.chat.completions.runTools({…})` convenience helper for using function tool calls with the `/chat/completions` endpoint. It automatically calls the JavaScript functions you provide and sends their results back to the `/chat/completions` endpoint, looping as long as the model requests tool calls.

For more information see [helpers.md](helpers.md#automated-function-calls).

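The call–result–loop shape of that helper can be sketched without the SDK. Everything below (the `fakeModel` function, the tool name) is hypothetical scaffolding, not the library's API:

```ts
// Hedged sketch of the automated-function-call loop: run local tools the
// "model" requests, feed results back, and loop until it stops asking.
type ToolCall = { name: string; args: string };
type ModelReply = { toolCall?: ToolCall; text?: string };

// Local functions the loop is allowed to call on the model's behalf.
const tools: Record<string, (args: string) => string> = {
  getWeather: (city) => `Sunny in ${city}`,
};

// Stand-in model: requests a tool once, then answers from the tool result.
function fakeModel(history: string[]): ModelReply {
  const toolResult = history.find((m) => m.startsWith('tool:'));
  if (toolResult === undefined) {
    return { toolCall: { name: 'getWeather', args: 'Oakland' } };
  }
  return { text: toolResult.slice('tool:'.length) };
}

const history: string[] = ['user: weather in Oakland?'];
let reply = fakeModel(history);
while (reply.toolCall !== undefined) {
  const call = reply.toolCall;
  const result = tools[call.name](call.args); // run the local function
  history.push(`tool:${result}`); // send the result back to the "model"
  reply = fakeModel(history); // loop while the model keeps requesting tools
}
console.log(reply.text); // Sunny in Oakland
```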
## Advanced Usage
### Accessing raw Response data (e.g., headers)
You can also use the `.withResponse()` method to get the raw `Response` along with the parsed data.

```ts
const client = new OpenAI();

const httpResponse = await client.responses
  .create({ model: 'gpt-4o', input: 'say this is a test.' })
  .asResponse();

// access the underlying web standard Response object
console.log(httpResponse.headers.get('X-My-Header'));
console.log(httpResponse.statusText);

const { data: response, response: raw } = await client.responses
  .create({ model: 'gpt-4o', input: 'say this is a test.' })
  .withResponse();
console.log(raw.headers.get('X-My-Header'));
```