Optional audio?: undefined | ChatCompletionAudioParam
    Parameters for audio output. Required when audio output is requested with modalities: ["audio"].

Optional callbacks?: undefined | Callbacks
    Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM). Tags are passed to all callbacks; metadata is passed to handle*Start callbacks.
Optional function_call?
Optional functions?: undefined | FunctionDefinition[]
Optional ls_structured_output_format?
    Describes the format of structured outputs. This should be provided if an output is considered to be structured.
Optional maxConcurrency?
    Maximum number of parallel calls to make.
Optional metadata?: undefined | Record<string, unknown>
    Metadata for this call and any sub-calls (e.g. a Chain calling an LLM). Keys should be strings; values should be JSON-serializable.
Optional modalities?: undefined | ChatCompletionModality[]
    Output types that you would like the model to generate for this request. Most models are capable of generating text, which is the default:
    ["text"]
    The gpt-4o-audio-preview model can also be used to generate audio. To request that this model generate both text and audio responses, you can use:
    ["text", "audio"]
Optional options?: undefined | RequestOptions
    Additional options to pass to the underlying axios request.
Optional parallel_tool_calls?
    The model may choose to call multiple functions in a single turn. Set parallel_tool_calls to false to ensure that at most one tool is called.
Optional prediction?: undefined | ChatCompletionPredictionContent
    Static predicted output content, such as the content of a text file that is being regenerated.
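A sketch of a predicted-output value, assuming the `{ type: "content", content }` shape of OpenAI's predicted outputs feature; the file content is illustrative:

```typescript
// Sketch: predicted output for a file-regeneration request.
// The { type: "content", content } shape follows OpenAI's predicted
// outputs format; the content itself is an illustrative placeholder.
const prediction = {
  type: "content",
  content: "export const GREETING = 'hello';\n",
};

console.log(prediction.type); // content
```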
Optional promptCacheKey?
    Used by OpenAI to cache responses for similar requests and optimize your cache hit rates. Replaces the user field.
Optional promptIndex?
    Adds a prompt index to prompts passed to the model, to track which prompt is being used for a given generation.
Optional reasoning?: undefined | Reasoning
    Options for reasoning models.
    Note that some options, such as reasoning summaries, are only available when using the Responses API. If these options are set, the Responses API will be used to fulfill the request. These options are ignored when the model is not a reasoning model.
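A sketch of a reasoning options object, assuming the `effort` and `summary` fields of the OpenAI Responses API reasoning parameters; as noted above, summaries are only available via the Responses API:

```typescript
// Sketch: reasoning options for a reasoning-capable model.
// Field names assume the OpenAI Responses API reasoning parameters;
// ignored entirely when the model is not a reasoning model.
const reasoning = {
  effort: "medium", // e.g. "low" | "medium" | "high"
  summary: "auto",  // reasoning summaries require the Responses API
};

console.log(reasoning.effort); // medium
```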
Optional recursionLimit?
    Maximum number of times a call can recurse. Defaults to 25 if not provided.
Optional response_format?
    An object specifying the format that the model must output.
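A sketch of a response format that constrains output to a JSON schema, assuming OpenAI's `"json_schema"` structured-outputs shape; the schema itself is illustrative:

```typescript
// Sketch: a response_format constraining output to a JSON schema.
// The "json_schema" wrapper follows OpenAI's structured-outputs
// format; the example schema is illustrative.
const responseFormat = {
  type: "json_schema",
  json_schema: {
    name: "extraction",
    strict: true,
    schema: {
      type: "object",
      properties: { title: { type: "string" } },
      required: ["title"],
      additionalProperties: false, // required by structured outputs
    },
  },
};

console.log(responseFormat.type); // json_schema
```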
Optional runId?
    Unique identifier for the tracer run for this call. If not provided, a new UUID will be generated.
Optional runName?
    Name for the tracer run for this call. Defaults to the name of the class.
Optional seed?: undefined | number
    When provided, the completions API will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
Optional signal?: undefined | AbortSignal
    Abort signal for this call. If provided, the call will be aborted when the signal is aborted.
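A sketch of wiring cancellation through this option using the standard AbortController API; the invoke call is commented out because it assumes a configured model instance and messages:

```typescript
// Sketch: cancelling an in-flight call via AbortSignal.
// AbortController is a standard API; the model call below is an
// assumed usage and therefore commented out.
const controller = new AbortController();

// e.g. from a UI handler or a watchdog timer:
// await model.invoke(messages, { signal: controller.signal });

controller.abort();
console.log(controller.signal.aborted); // true
```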
Optional stop?: undefined | string[]
    Stop tokens to use for this call. If not provided, the default stop tokens for the model will be used.
Optional stream_options?
    Additional options to pass to streamed completions. If provided, this takes precedence over "streamUsage" set at initialization time.
Optional strict?: undefined | boolean
    If true, model output is guaranteed to exactly match the JSON Schema provided in the tool definition, and the input schema will also be validated according to https://platform.openai.com/docs/guides/structured-outputs/supported-schemas. If false, neither the input schema nor the model output will be validated. If undefined, the strict argument will not be passed to the model.
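A sketch of a function tool that would be used with strict validation: structured outputs require `additionalProperties: false` and every property listed in `required`; the tool itself is illustrative:

```typescript
// Sketch: a function tool defined for strict schema validation.
// The schema satisfies the structured-outputs constraints
// (additionalProperties: false, all properties required);
// the tool name and fields are illustrative.
const weatherTool = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Get the current weather for a city.",
    strict: true,
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
      additionalProperties: false,
    },
  },
};

console.log(weatherTool.function.strict); // true
```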
Optional tags?: undefined | string[]
    Tags for this call and any sub-calls (e.g. a Chain calling an LLM). You can use these to filter calls.
Optional timeout?: undefined | number
    Timeout for this call in milliseconds.
Optional tools?: undefined | ChatOpenAIToolType[]
    A list of tools that the model may use to generate responses. Each tool can be a function, a built-in tool, or a custom tool definition. If not provided, the model will not use any tools.
Optional verbosity?: undefined | OpenAIVerbosityParam
    The verbosity of the model's response.
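As a closing sketch, several of the per-call options above can be combined in one options object; all values here are illustrative:

```typescript
// Sketch: combining several per-call options from the list above.
// All values are illustrative placeholders.
const options = {
  runName: "support-triage",        // tracer run name
  tags: ["triage", "v2"],           // filterable in tracing
  metadata: { ticketId: "T-1234" }, // JSON-serializable values
  timeout: 30_000,                  // milliseconds
  stop: ["\nHuman:"],
  seed: 42,                         // best-effort determinism
};

console.log(options.tags.length); // 2
```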