interface WatsonxCallOptionsChat {
    callbacks?: Callbacks;
    configurable?: Record<string, any>;
    frequencyPenalty?: number;
    headers?: OutgoingHttpHeaders;
    logprobs?: boolean;
    maxConcurrency?: number;
    maxRetries?: number;
    maxTokens?: number;
    messages?: TextChatMessages[];
    metadata?: Record<string, unknown>;
    n?: number;
    presencePenalty?: number;
    projectId?: string;
    promptIndex?: number;
    recursionLimit?: number;
    responseFormat?: TextChatResponseFormat;
    runId?: string;
    runName?: string;
    signal?: AbortSignal;
    spaceId?: string;
    tags?: string[];
    temperature?: number;
    timeLimit?: number;
    timeout?: number;
    toolChoice?: TextChatToolChoiceTool;
    toolChoiceOption?: string;
    tool_choice?: TextChatToolChoiceTool;
    tools?: TextChatParameterTools[];
    topLogprobs?: number;
    topP?: number;
}
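
These are per-call options: they are supplied alongside an individual request rather than on the model instance. A minimal sketch of how they are typically passed, assuming the `ChatWatsonx` chat model from `@langchain/community/chat_models/ibm` (the import path, constructor fields, and model id here are assumptions, not part of this interface):

import { ChatWatsonx } from "@langchain/community/chat_models/ibm";

const chat = new ChatWatsonx({
    model: "ibm/granite-3-8b-instruct",              // assumed model id
    version: "2024-05-31",                           // assumed API version date
    serviceUrl: "https://us-south.ml.cloud.ibm.com",
    projectId: "<PROJECT_ID>",
});

// Call options such as maxTokens, temperature and timeLimit are passed as the
// second argument to invoke()/stream() and apply only to that call.
const response = await chat.invoke("Summarize nucleus sampling in one sentence.", {
    maxTokens: 200,
    temperature: 0.2,
    timeLimit: 10_000, // stop generation after 10 seconds
});
console.log(response.content);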

Properties

callbacks?: Callbacks

Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM). Tags are passed to all callbacks; metadata is passed to handle*Start callbacks.
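
For example, an inline handler object can be attached to a single call (a sketch reusing the `chat` instance from the example above; the handler names follow the standard LangChain callback interface):

const result = await chat.invoke("Hello", {
    callbacks: [
        {
            handleLLMStart: async () => console.log("LLM call started"),
            handleLLMEnd: async (output) => console.log("LLM call finished", output),
        },
    ],
    tags: ["example"],
    metadata: { requestSource: "docs-example" },
});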

configurable?: Record<string, any>

Runtime values for attributes previously made configurable on this Runnable, or sub-Runnables.

frequencyPenalty?: number

Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

headers?: OutgoingHttpHeaders
logprobs?: boolean

Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message.

maxConcurrency?: number

Maximum number of parallel calls to make.

maxRetries?: number
maxTokens?: number

The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.

messages?: TextChatMessages[]

The messages for this chat session.

metadata?: Record<string, unknown>

Metadata for this call and any sub-calls (e.g. a Chain calling an LLM). Keys should be strings, values should be JSON-serializable.

n?: number

How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.

presencePenalty?: number

Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

projectId?: string

The project that contains the resource. Either space_id or project_id has to be given.

promptIndex?: number
recursionLimit?: number

Maximum number of times a call can recurse. If not provided, defaults to 25.

responseFormat?: TextChatResponseFormat

The chat response format parameters.
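
For example, JSON output can be requested per call (a sketch; the exact shape of `TextChatResponseFormat` is assumed here to be `{ type: "json_object" }`, matching the watsonx.ai text chat API):

const jsonReply = await chat.invoke(
    "Return a JSON object with keys `city` and `country` for Paris.",
    { responseFormat: { type: "json_object" } },
);
console.log(jsonReply.content);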

runId?: string

Unique identifier for the tracer run for this call. If not provided, a new UUID will be generated.

runName?: string

Name for the tracer run for this call. Defaults to the name of the class.

signal?: AbortSignal

Abort signal for this call. If provided, the call will be aborted when the signal is aborted.
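
A common pattern is to drive this from an AbortController, for example to enforce a client-side deadline (a sketch reusing the `chat` instance from above):

const controller = new AbortController();
const deadline = setTimeout(() => controller.abort(), 5_000); // abort after 5 seconds

try {
    const reply = await chat.invoke("Write a long story.", { signal: controller.signal });
    console.log(reply.content);
} finally {
    clearTimeout(deadline);
}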

spaceId?: string

The space that contains the resource. Either space_id or project_id has to be given.

tags?: string[]

Tags for this call and any sub-calls (e.g. a Chain calling an LLM). You can use these to filter calls.

temperature?: number

What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

We generally recommend altering this or top_p but not both.

timeLimit?: number

Time limit in milliseconds - if not completed within this time, generation will stop. The text generated so far will be returned along with the `TIME_LIMIT` stop reason. Depending on the user's plan, and on the model being used, there may be an enforced maximum time limit.

timeout?: number

Timeout for this call in milliseconds.

toolChoice?: TextChatToolChoiceTool

Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.

Only one of tool_choice_option or tool_choice must be present.

toolChoiceOption?: string

Using none means the model will not call any tool and instead generates a message. Using auto means the model can pick between generating a message or calling one or more tools. Using required means the model must call one or more tools.

Note that auto and required are not yet supported.

Only one of tool_choice_option or tool_choice must be present.

tool_choice?: TextChatToolChoiceTool
tools?: TextChatParameterTools[]

Tool functions that can be called with the response.
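
A sketch of passing tools and forcing a specific tool call, assuming `TextChatParameterTools` follows the watsonx.ai text chat schema of `{ type: "function", function: { name, description, parameters } }` (the tool itself is hypothetical):

const weatherTool = {
    type: "function",
    function: {
        name: "get_weather",
        description: "Get the current weather for a city.",
        parameters: {
            type: "object",
            properties: { city: { type: "string" } },
            required: ["city"],
        },
    },
};

const toolReply = await chat.invoke("What is the weather in Oslo?", {
    tools: [weatherTool],
    // Forces a call to get_weather; omit toolChoice to let the model decide,
    // subject to the toolChoiceOption notes above.
    toolChoice: { type: "function", function: { name: "get_weather" } },
});
console.log(toolReply.tool_calls);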

topLogprobs?: number

An integer specifying the number of most likely tokens to return at each token position, each with an associated log probability. The option logprobs must be set to true if this parameter is used.
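
For example (a sketch; where the returned log probabilities surface on the response message may vary by package version):

const scored = await chat.invoke("Name one prime number.", {
    logprobs: true,
    topLogprobs: 3, // only valid when logprobs is true
});
console.log(scored.response_metadata);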

topP?: number

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.