We should add a `LanguageModelThinkingPart` to handle thinking tokens. This would stream out of `LanguageModelChat`, then stream out of chat participants, show up in the chat response history, and be included in the next `LanguageModelChat` request. Since thinking tokens could be displayed in the UI, they get their own part rather than building on `LanguageModelExtraDataPart`.
DeepSeek and possibly other models: reasoning tokens are streamed in a simple format and do not need to be included in the next request.
```ts
// Redacted thinking tokens in Anthropic could use LanguageModelExtraDataPart,
// since they won't be displayed.
// "thinking" or "reasoning"? Different models use one or the other.
export class LanguageModelThinkingPart {
	thinking: string;

	// For Anthropic, to track the thinking signature.
	metadata?: any;

	constructor(thinking: string, metadata?: any);
}
```
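A minimal sketch of how this part might be produced and consumed, standalone for illustration. The `LanguageModelTextPart` shape and the `collect` helper here are assumptions for the example, not part of the proposal; the real types would live in the vscode API.

```typescript
// Standalone sketch of the proposed part (assumption: the real declaration
// lives in vscode.d.ts and would not carry implementations inline).
class LanguageModelThinkingPart {
	thinking: string;
	// Opaque provider metadata, e.g. an Anthropic thinking signature (assumption).
	metadata?: any;

	constructor(thinking: string, metadata?: any) {
		this.thinking = thinking;
		this.metadata = metadata;
	}
}

// Simplified stand-in for the existing text part, for this example only.
class LanguageModelTextPart {
	constructor(public value: string) {}
}

type StreamPart = LanguageModelThinkingPart | LanguageModelTextPart;

// Hypothetical consumer: a UI could accumulate thinking deltas separately
// from the answer text, so thinking can be rendered in its own collapsible
// section of the chat response.
function collect(parts: StreamPart[]): { thinking: string; text: string } {
	let thinking = "";
	let text = "";
	for (const part of parts) {
		if (part instanceof LanguageModelThinkingPart) {
			thinking += part.thinking;
		} else {
			text += part.value;
		}
	}
	return { thinking, text };
}
```

The `instanceof` check mirrors how extensions already distinguish `LanguageModelTextPart` from tool-call parts in a response stream, which is why a dedicated class (rather than reusing `LanguageModelExtraDataPart`) keeps consumer code simple.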