interface ChatOpenAICallOptions {
    audio?: ChatCompletionAudioParam;
    callbacks?: Callbacks;
    configurable?: Record<string, any>;
    function_call?: FunctionCallOption;
    functions?: FunctionDefinition[];
    maxConcurrency?: number;
    metadata?: Record<string, unknown>;
    modalities?: ChatCompletionModality[];
    options?: OpenAICoreRequestOptions<Record<string, unknown>>;
    parallel_tool_calls?: boolean;
    prediction?: ChatCompletionPredictionContent;
    promptIndex?: number;
    recursionLimit?: number;
    response_format?: ChatOpenAIResponseFormat;
    runId?: string;
    runName?: string;
    seed?: number;
    signal?: AbortSignal;
    stop?: string[];
    stream_options?: {
        include_usage: boolean;
    };
    strict?: boolean;
    tags?: string[];
    timeout?: number;
    tool_choice?: OpenAIToolChoice;
    tools?: ChatOpenAIToolType[];
}
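For orientation, these options are typically supplied per call as the second argument to a chat model invocation (e.g. `await model.invoke(messages, options)` in LangChain.js). A minimal sketch assembling a subset of them as a plain object literal (no library imports, values are illustrative):

```typescript
// A sketch of assembling ChatOpenAICallOptions for a single call.
// The object shape mirrors the interface above; actually passing it to a
// model (e.g. `await model.invoke(messages, options)`) assumes the
// @langchain/openai package.
const options = {
  runName: "support-triage", // tracer run name for this call
  tags: ["triage", "v2"],    // used to filter calls in tracing
  stop: ["\nObservation:"],  // stop generation at these tokens
  timeout: 30_000,           // milliseconds
  seed: 42,                  // best-effort deterministic sampling
};

console.log(Object.keys(options).length); // 5 option fields set
```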


Properties

audio?: ChatCompletionAudioParam

Parameters for audio output. Required when audio output is requested with modalities: ["audio"].

callbacks?: Callbacks

Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM). Tags are passed to all callbacks; metadata is passed to handle*Start callbacks.

configurable?: Record<string, any>

Runtime values for attributes previously made configurable on this Runnable, or sub-Runnables.

function_call?: FunctionCallOption
functions?: FunctionDefinition[]
maxConcurrency?: number

Maximum number of parallel calls to make.

metadata?: Record<string, unknown>

Metadata for this call and any sub-calls (e.g. a Chain calling an LLM). Keys should be strings; values should be JSON-serializable.

modalities?: ChatCompletionModality[]

Output types that you would like the model to generate for this request. Most models are capable of generating text, which is the default:

["text"]

The gpt-4o-audio-preview model can also be used to generate audio. To request that this model generate both text and audio responses, you can use:

["text", "audio"]

options?: OpenAICoreRequestOptions<Record<string, unknown>>

Additional options to pass to the underlying request made by the OpenAI client.

parallel_tool_calls?: boolean

Whether to allow the model to call multiple tools in a single response. Set to false to restrict the model to at most one tool call per response.

prediction?: ChatCompletionPredictionContent

Static predicted output content, such as the content of a text file that is being regenerated.

promptIndex?: number
recursionLimit?: number

Maximum number of times a call can recurse. If not provided, defaults to 25.

response_format?: ChatOpenAIResponseFormat
runId?: string

Unique identifier for the tracer run for this call. If not provided, a new UUID will be generated.

runName?: string

Name for the tracer run for this call. Defaults to the name of the class.

seed?: number
signal?: AbortSignal

Abort signal for this call. If provided, the call will be aborted when the signal is aborted.
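For example, cancellation can be wired up with a standard AbortController:

```typescript
// Abort the call after 10 seconds using a standard AbortController.
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 10_000);

const abortableOptions = {
  signal: controller.signal, // the in-flight call aborts when this fires
  timeout: 30_000,           // independent hard timeout in milliseconds
};

clearTimeout(timer); // tidy up in this standalone sketch
```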

stop?: string[]

Stop tokens to use for this call. If not provided, the default stop tokens for the model will be used.

stream_options?: {
    include_usage: boolean;
}

Additional options to pass to streamed completions. If provided, this takes precedence over the "streamUsage" option set at initialization time.

Type declaration

  • include_usage: boolean

    Whether or not to include token usage in the stream. If set to true, this will include an additional chunk at the end of the stream with the token usage.
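A sketch of enabling per-call usage reporting on a streamed completion; with include_usage set to true, the final stream chunk carries token counts:

```typescript
// Ask the API to append a final stream chunk containing token usage.
// This per-call setting overrides `streamUsage` from initialization.
const streamingOptions = {
  stream_options: { include_usage: true },
};
```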

strict?: boolean

If true, model output is guaranteed to exactly match the JSON Schema provided in the tool definition. If true, the input schema will also be validated according to https://platform.openai.com/docs/guides/structured-outputs/supported-schemas.

If false, input schema will not be validated and model output will not be validated.

If undefined, strict argument will not be passed to the model.

Since 0.2.6
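For illustration, a function tool defined with a JSON Schema and strict set to true (the tool name and schema here are hypothetical; structured outputs require every property to be listed in required and additionalProperties to be false):

```typescript
// With strict: true, the model's tool-call arguments are guaranteed to
// match this JSON Schema exactly, and the schema itself is validated
// against the structured-outputs supported subset.
const strictToolOptions = {
  strict: true,
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather", // hypothetical tool
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
          additionalProperties: false,
        },
      },
    },
  ],
};
```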

tags?: string[]

Tags for this call and any sub-calls (e.g. a Chain calling an LLM). You can use these to filter calls.

timeout?: number

Timeout for this call in milliseconds.

tool_choice?: OpenAIToolChoice
tools?: ChatOpenAIToolType[]
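As a final sketch, tool_choice accepts "auto", "none", "required", or an object naming one tool to force; forcing a specific (hypothetical) tool looks like:

```typescript
// Force the model to call one specific tool rather than letting it choose.
// "get_current_time" is a hypothetical tool name for illustration.
const forcedChoiceOptions = {
  tool_choice: {
    type: "function",
    function: { name: "get_current_time" },
  },
  parallel_tool_calls: false, // at most one tool call in the response
};
```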