Parameters that can be passed to the API at request time.

interface GoogleAIBaseLanguageModelCallOptions {
    allowed_function_names?: string[];
    callbacks?: Callbacks;
    configurable?: Record<string, any>;
    convertSystemMessageToHumanContent?: boolean;
    frequencyPenalty?: number;
    logprobs?: boolean;
    maxConcurrency?: number;
    maxOutputTokens?: number;
    metadata?: Record<string, unknown>;
    model?: string;
    modelName?: string;
    presencePenalty?: number;
    recursionLimit?: number;
    responseMimeType?: GoogleAIResponseMimeType;
    runId?: string;
    runName?: string;
    safetyHandler?: GoogleAISafetyHandler;
    safetySettings?: GoogleAISafetySetting[];
    signal?: AbortSignal;
    stop?: string[];
    stopSequences?: string[];
    streamUsage?: boolean;
    streaming?: boolean;
    tags?: string[];
    temperature?: number;
    timeout?: number;
    tool_choice?: ToolChoice;
    tools?: GoogleAIToolType[];
    topK?: number;
    topLogprobs?: number;
    topP?: number;
}
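
Call options like these are typically passed as the second argument to invoke. A minimal sketch, assuming ChatVertexAI from @langchain/google-vertexai as the concrete model class (any model whose call options extend this interface works the same way):

import { ChatVertexAI } from "@langchain/google-vertexai";

const model = new ChatVertexAI({ model: "gemini-1.5-pro" });

// Per-request overrides: these apply to this call only.
const response = await model.invoke("Summarize the solar system in one line.", {
  temperature: 0.2,
  maxOutputTokens: 256,
  stopSequences: ["\n\n"],
});
console.log(response.content);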

Properties

allowed_function_names?: string[]

Functions the model is allowed to call when the mode is "any". If empty, any one of the provided functions may be called; see the sketch under tool_choice below.

callbacks?: Callbacks

Callbacks for this call and any sub-calls (e.g. a Chain calling an LLM). Tags are passed to all callbacks; metadata is passed to handle*Start callbacks.
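
A sketch of attaching an inline handler to a single call; the handler-method names follow @langchain/core's callback-handler interface, and model is constructed as in the first sketch above:

await model.invoke("Hello", {
  tags: ["demo"],
  metadata: { requestId: "abc-123" }, // hypothetical metadata key
  callbacks: [
    {
      // Fires when the LLM call (or any sub-call) finishes.
      handleLLMEnd(output) {
        console.log("generations:", JSON.stringify(output.generations));
      },
    },
  ],
});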

configurable?: Record<string, any>

Runtime values for attributes previously made configurable on this Runnable, or sub-Runnables.

convertSystemMessageToHumanContent?: boolean
frequencyPenalty?: number

Frequency penalty applied to the next token's logprobs, multiplied by the number of times each token has been seen in the response so far. A positive penalty discourages tokens that have already been used, in proportion to how often each has appeared: the more a token is used, the harder it is for the model to use it again, which increases the vocabulary of the response. Caution: a negative penalty encourages the model to reuse tokens in proportion to how often each has been used. Small negative values will reduce the vocabulary of a response; larger negative values will cause the model to repeat a common token until it hits the maxOutputTokens limit.

logprobs?: boolean

Whether to return log probabilities of the output tokens. If true, the log probabilities of each output token are returned in the content of the message.

maxConcurrency?: number

Maximum number of parallel calls to make.

maxOutputTokens?: number

Maximum number of tokens to generate in the completion.

metadata?: Record<string, unknown>

Metadata for this call and any sub-calls (e.g. a Chain calling an LLM). Keys should be strings; values should be JSON-serializable.

model?: string

Model to use

modelName?: string

Alias for model.

presencePenalty?: number

Presence penalty applied to the next token's logprobs if the token has already been seen in the response. This penalty is binary on/off and not dependent on the number of times the token is used (after the first). Use frequencyPenalty for a penalty that increases with each use. A positive penalty discourages the use of tokens that have already appeared in the response, increasing the vocabulary. A negative penalty encourages reuse of tokens that have already appeared, decreasing the vocabulary.
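
A sketch contrasting the two penalties on one request; the values are illustrative only, and model is constructed as in the first sketch:

const res = await model.invoke("List some fruits.", {
  frequencyPenalty: 0.5, // grows with each repeat: often-used tokens get progressively less likely
  presencePenalty: 0.3,  // flat penalty applied once a token has appeared at all
});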

recursionLimit?: number

Maximum number of times a call can recurse. If not provided, defaults to 25.

responseMimeType?: GoogleAIResponseMimeType

Available for gemini-1.5-pro. The output format of the generated candidate text. Supported MIME types:

  • text/plain: Text output.
  • application/json: JSON response in the candidates.
"text/plain"
runId?: string

Unique identifier for the tracer run for this call. If not provided, a new UUID will be generated.

runName?: string

Name for the tracer run for this call. Defaults to the name of the class.

safetyHandler?: GoogleAISafetyHandler
safetySettings?: GoogleAISafetySetting[]
signal?: AbortSignal

Abort signal for this call. If provided, the call will be aborted when the signal is aborted.
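
A sketch of cancelling a call with a standard AbortController, here on a hypothetical 10-second deadline (model as in the first sketch):

const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 10_000);
try {
  const res = await model.invoke("Write a long essay.", { signal: controller.signal });
  console.log(res.content);
} finally {
  clearTimeout(timer);
}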

stop?: string[]

Stop tokens to use for this call. If not provided, the default stop tokens for the model will be used.

stopSequences?: string[]
streamUsage?: boolean

Whether or not to include usage data, like token counts, in the streamed response chunks.

Default: true

streaming?: boolean

Whether or not to stream.

Default: false
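
A sketch of token-level streaming with usage data included, reusing the model from the first sketch; usage_metadata appears on chunks that carry it:

const stream = await model.stream("Tell me a joke.", { streamUsage: true });
for await (const chunk of stream) {
  if (typeof chunk.content === "string") process.stdout.write(chunk.content);
  if (chunk.usage_metadata) {
    console.log("\ntotal tokens:", chunk.usage_metadata.total_tokens);
  }
}
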
tags?: string[]

Tags for this call and any sub-calls (e.g. a Chain calling an LLM). You can use these to filter calls.

temperature?: number

Sampling temperature to use

timeout?: number

Timeout for this call in milliseconds.

tool_choice?: ToolChoice

Specifies how the chat model should use tools.

Default: undefined

Possible values:
- "auto": The model may choose to use any of the provided tools, or none.
- "any": The model must use one of the provided tools.
- "none": The model must not use any tools.
- A string (not "auto", "any", or "none"): The name of a specific tool the model must use.
- An object: A custom schema specifying tool choice parameters. Specific to the provider.

Note: Not all providers support tool_choice. An error will be thrown if used with an unsupported model.
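
A sketch of forcing a specific function by combining tool_choice with allowed_function_names; the tool itself (get_weather) is hypothetical:

import { ChatVertexAI } from "@langchain/google-vertexai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getWeather = tool(async ({ city }) => `Sunny in ${city}`, {
  name: "get_weather",
  description: "Look up the current weather for a city",
  schema: z.object({ city: z.string() }),
});

const toolModel = new ChatVertexAI({ model: "gemini-1.5-pro" }).bindTools([getWeather]);

const res = await toolModel.invoke("What's the weather in Paris?", {
  tool_choice: "any",                      // the model must call one of the bound tools
  allowed_function_names: ["get_weather"], // ...and only this one is eligible
});
console.log(res.tool_calls);
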
topK?: number

Top-k changes how the model selects tokens for output.

A top-k of 1 means the selected token is the most probable among all tokens in the model’s vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature).

topLogprobs?: number

An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
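
A sketch of requesting per-token log probabilities, reusing the model from the first sketch; exactly where the data surfaces can vary, but response_metadata is the usual place to look:

const res = await model.invoke("The capital of France is", {
  logprobs: true,
  topLogprobs: 3, // requires logprobs: true
});
console.log(res.response_metadata);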

topP?: number

Top-p changes how the model selects tokens for output.

Tokens are selected from most probable to least until the sum of their probabilities equals the top-p value.

For example, if tokens A, B, and C have probabilities of .3, .2, and .1 and the top-p value is .5, then the model will select either A or B as the next token (using temperature).
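
A sketch combining the sampling knobs, reusing the model from the first sketch; the concrete values are illustrative only:

const res = await model.invoke("Name a color.", {
  temperature: 0.7, // reshapes the distribution before sampling
  topK: 40,         // consider only the 40 most probable tokens...
  topP: 0.95,       // ...then keep the smallest set whose probabilities sum to 0.95
});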