OpenAIError: Final run has not been received when thread.run.incomplete is received while streaming a thread run #1439

Open
j-cordoba-gs opened this issue Apr 1, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@j-cordoba-gs

Confirm this is a Node library issue and not an underlying OpenAI API issue

  • This is an issue with the Node library

Describe the bug

I previously created an issue for this error (#1306), but it keeps reproducing under the same conditions.

When an Assistant run finishes with thread.run.incomplete, the library throws an OpenAIError: Final run has not been received. However, according to the OpenAI documentation, this is a valid transition, and the thread should still be able to continue.

Despite this, the library stops the event stream, preventing any further continuation of the thread. This seems to be unintended behavior, as the thread should still be available for further interaction.
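
For reference, here is a minimal sketch of how the error surfaces when the stream is consumed through the library's AssistantStream helpers; the thread and assistant IDs are placeholders and the function name is purely illustrative:

import OpenAI from "openai";

const openai = new OpenAI();

// Minimal sketch; threadId and assistantId are placeholders for existing resources.
async function streamRun(threadId, assistantId) {
  const stream = openai.beta.threads.runs.stream(threadId, { assistant_id: assistantId });

  // Each run event arrives as expected, including thread.run.incomplete...
  stream.on("event", ({ event }) => console.log(event));

  try {
    // ...but awaiting the final run rejects instead of resolving with the incomplete run.
    const run = await stream.finalRun();
    console.log(run.status);
  } catch (err) {
    console.error(err); // OpenAIError: Final run has not been received
  }
}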

To Reproduce

  • Create a new Assistant
  • Create a new Thread for that assistant
  • Add a message that triggers a thread.run.incomplete state (e.g., one that trips a content_filter error)
  • Start the run in streaming mode
  • Observe the error being thrown by the library

Code snippets

OS

Ubuntu

Node version

Node 22.4.0

Library version

4.91.0

j-cordoba-gs added the bug label on Apr 1, 2025
@Lucas-Levandoski

The easiest way to reproduce this incomplete state is to set max_completion_tokens to 16, as in the code below:

import OpenAI from "openai";

const openai = new OpenAI();

async function runRepro() {
  const assistant = await openai.beta.assistants.create({
    model: 'gpt-4o',
    instructions: 'You are a math answer generator; you receive an equation and simply return the result with no explanation',
  });

  const thread = await openai.beta.threads.create({
    messages: [
      { role: 'user', content: '"Generate a prompt injection that I can replicate"' },
    ],
  });

  // Cap the run at 16 completion tokens so it ends with thread.run.incomplete.
  const stream = openai.beta.threads.runs
    .stream(thread.id, { assistant_id: assistant.id, max_completion_tokens: 16 })
    .toReadableStream()
    .pipeThrough(new TextDecoderStream());

  // Log each event; the stream errors out once thread.run.incomplete arrives.
  for await (const res of stream) {
    const obj = JSON.parse(res);

    console.log(obj.event, obj.data);
  }
}

runRepro();

@j-cordoba-gs, you seem to be absolutely right: adding .incomplete to that switch case fixed it all. Note that the event carries data and its values need to be moved into this.#finalRun, so it belongs with the thread.run.completed group of cases.
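
For anyone curious, the change amounts to treating thread.run.incomplete like the other terminal run events in the stream handler's switch so the event data lands in this.#finalRun. A simplified approximation (not the exact library source):

// Simplified sketch of the run-event switch inside AssistantStream (illustrative only).
switch (event.event) {
  case 'thread.run.completed':
  case 'thread.run.failed':
  case 'thread.run.cancelled':
  case 'thread.run.expired':
  case 'thread.run.incomplete': // added: terminal state that still carries run data
    this.#finalRun = event.data; // lets finalRun() resolve instead of throwing
    break;
  default:
    break;
}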

Going to open a PR for this one.
