Optional
audio
Parameters for audio output. Required when audio output is requested with modalities: ["audio"]. Learn more.
Optional
callbacks
Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM). Tags are passed to all callbacks, metadata is passed to handle*Start callbacks.
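A minimal sketch of passing per-call callbacks, assuming the @langchain/openai package and an OPENAI_API_KEY in the environment (the model name is illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Per-call callbacks fire for this invocation and any sub-calls.
const result = await model.invoke("Hello!", {
  callbacks: [
    {
      handleLLMStart: async () => console.log("LLM call started"),
      handleLLMEnd: async (output) =>
        console.log("generations:", output.generations.length),
    },
  ],
});
```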
Optional
configurable
Runtime values for attributes previously made configurable on this Runnable, or sub-Runnables.
Optional
include
Specify additional output data to include in the model response.
Optional
ls_structured_output_format
Describes the format of structured outputs. This should be provided if an output is considered to be structured.
kwargs
An object containing the method used for structured output (e.g., "jsonMode").
Optional
schema?: JsonSchema7Type
The JSON schema describing the expected output structure.
Optional
maxConcurrency
Maximum number of parallel calls to make.
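A sketch of capping parallelism when batching calls; with maxConcurrency set, at most that many requests run at once (prompts and model name are illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Process several prompts, making at most two requests at a time.
const answers = await model.batch(
  ["Summarize A", "Summarize B", "Summarize C", "Summarize D"],
  { maxConcurrency: 2 }
);
```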
Optional
metadata
Metadata for this call and any sub-calls (e.g. a Chain calling an LLM). Keys should be strings, values should be JSON-serializable.
Optional
modalities
Output types that you would like the model to generate for this request. Most models are capable of generating text, which is the default: ["text"]. The gpt-4o-audio-preview model can also be used to generate audio. To request that this model generate both text and audio responses, you can use: ["text", "audio"].
Optional
options
Additional options to pass to the underlying axios request.
Optional
parallel_tool_calls
The model may choose to call multiple functions in a single turn. You can set parallel_tool_calls to false to ensure that at most one tool is called. Learn more
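A sketch that binds two tools and disables parallel tool calls so the model invokes at most one tool per turn; the tool definitions and model name are illustrative:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getWeather = tool(async ({ city }) => `Sunny in ${city}`, {
  name: "get_weather",
  description: "Get the weather for a city",
  schema: z.object({ city: z.string() }),
});

const getTime = tool(async ({ city }) => `12:00 in ${city}`, {
  name: "get_time",
  description: "Get the local time for a city",
  schema: z.object({ city: z.string() }),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// With parallel_tool_calls: false, at most one tool call is returned per turn.
const msg = await model
  .bindTools([getWeather, getTime])
  .invoke("What's the weather and time in Paris?", { parallel_tool_calls: false });

console.log(msg.tool_calls);
```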
Optional
prediction
Static predicted output content, such as the content of a text file that is being regenerated. Learn more.
Optional
previous_response_id
The unique ID of the previous response to the model. Use this to create multi-turn conversations.
Optional
promptIndex
Adds a prompt index to prompts passed to the model to track what prompt is being used for a given generation.
Optional
reasoning
Options for reasoning models.
Note that some options, like reasoning summaries, are only available when using the responses API. If these options are set, the responses API will be used to fulfill the request.
These options will be ignored when not using a reasoning model.
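A sketch of setting reasoning options on a reasoning model; the { effort, summary } shape mirrors OpenAI's reasoning parameters and is an assumption, and requesting a summary routes the call through the responses API as noted above:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "o4-mini" });

// Reasoning options (shape assumed from OpenAI's reasoning parameters).
const response = await model.invoke("How many r's are in 'strawberry'?", {
  reasoning: { effort: "medium", summary: "auto" },
});

console.log(response.content);
```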
Optional
recursionLimit
Maximum number of times a call can recurse. If not provided, defaults to 25.
Optional
response_format
An object specifying the format that the model must output.
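A sketch of forcing a JSON object response with response_format (json_object mode shown; the prompt itself must mention JSON when using this mode):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Force the model to emit a syntactically valid JSON object.
const msg = await model.invoke(
  "List three primary colors as JSON with a `colors` array.",
  { response_format: { type: "json_object" } }
);

console.log(JSON.parse(msg.content as string));
```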
Optional
runId
Unique identifier for the tracer run for this call. If not provided, a new UUID will be generated.
Optional
runName
Name for the tracer run for this call. Defaults to the name of the class.
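A sketch of labelling a call for tracing: runName and runId identify the tracer run, while tags and metadata are attached to it and to any sub-calls. It assumes the uuid package is installed; the names and values are illustrative:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { v4 as uuidv4 } from "uuid";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Name and identify the tracer run, and attach tags/metadata for filtering.
const result = await model.invoke("Ping", {
  runName: "healthcheck",
  runId: uuidv4(),
  tags: ["smoke-test"],
  metadata: { environment: "staging" },
});
```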
Optional
seed
When provided, the completions API will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
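A sketch of requesting best-effort deterministic sampling with a fixed seed (determinism is not guaranteed, so the comparison below may still print false):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

// Repeated requests with the same seed and parameters should, on a best-effort
// basis, return the same completion.
const first = await model.invoke("Pick a random animal.", { seed: 42 });
const second = await model.invoke("Pick a random animal.", { seed: 42 });
console.log(first.content === second.content);
```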
Optional
service_tier
Service tier to use for this request. Can be "auto", "default", or "flex". Specifies the service tier used for prioritization and latency optimization.
Optional
signal
Abort signal for this call. If provided, the call will be aborted when the signal is aborted.
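A sketch of cancelling a call with an AbortSignal, here wired to a simple timeout (the five-second limit is illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Abort the request if it has not finished within five seconds.
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 5000);

try {
  const msg = await model.invoke("Write a very long story.", {
    signal: controller.signal,
  });
  console.log(msg.content);
} finally {
  clearTimeout(timer);
}
```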
Optional
stop
Stop tokens to use for this call. If not provided, the default stop tokens for the model will be used.
Optional
stream_options
Additional options to pass to streamed completions. If provided, this takes precedence over "streamUsage" set at initialization time.
Optional
strict
If true, model output is guaranteed to exactly match the JSON Schema provided in the tool definition. If true, the input schema will also be validated according to https://platform.openai.com/docs/guides/structured-outputs/supported-schemas.
If false, input schema will not be validated and model output will not be validated.
If undefined, strict argument will not be passed to the model.
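A sketch of binding a tool and setting strict: true so tool arguments must exactly match the declared JSON schema; the tool definition and model name are illustrative:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const addNumbers = tool(async ({ a, b }) => String(a + b), {
  name: "add_numbers",
  description: "Add two numbers",
  schema: z.object({ a: z.number(), b: z.number() }),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// strict: true enforces that generated tool arguments match the schema exactly.
const msg = await model
  .bindTools([addNumbers])
  .invoke("What is 2 + 2?", { strict: true });

console.log(msg.tool_calls);
```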
Optional
tags
Tags for this call and any sub-calls (e.g. a Chain calling an LLM). You can use these to filter calls.
Optional
text
Configuration options for a text response from the model. Can be plain text or structured JSON data.
Optional
timeout
Timeout for this call in milliseconds.
Optional
tool_choice
Specifies which tool the model should use to respond. Can be an OpenAIToolChoice or a ResponsesToolChoice. If not set, the model will decide which tool to use automatically.
Optional
tools
A list of tools that the model may use to generate responses. Each tool can be a function, a built-in tool, or a custom tool definition. If not provided, the model will not use any tools.
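A sketch of passing a tool list per call and forcing a specific tool via tool_choice; the tool definition is illustrative, and passing the tool name as a plain string assumes the OpenAIToolChoice type mentioned above accepts it:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const searchDocs = tool(async ({ query }) => `Results for ${query}`, {
  name: "search_docs",
  description: "Search the documentation",
  schema: z.object({ query: z.string() }),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Provide the tool list for this call only and force the model to use search_docs.
const msg = await model.invoke("How do I rotate API keys?", {
  tools: [searchDocs],
  tool_choice: "search_docs",
});

console.log(msg.tool_calls);
```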
Optional
truncation
The truncation strategy to use for the model response.