Input interface for ChatCohere

interface BaseChatCohereInput {
    apiKey?: string;
    cache?: boolean | BaseCache<Generation[]>;
    callbackManager?: CallbackManager;
    callbacks?: Callbacks;
    maxConcurrency?: number;
    maxRetries?: number;
    metadata?: Record<string, unknown>;
    model?: string;
    onFailedAttempt?: FailedAttemptHandler;
    streamUsage?: boolean;
    streaming?: boolean;
    tags?: string[];
    temperature?: number;
    verbose?: boolean;
}
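As a sketch, an options object using these fields might look like the following. The values shown are the defaults documented below; that such an object is passed to the `ChatCohere` constructor is assumed from the interface, not shown here:

```typescript
// A hypothetical options object satisfying BaseChatCohereInput; in practice
// it would be passed to the ChatCohere constructor, e.g. `new ChatCohere(options)`.
const options = {
  // apiKey omitted here: it falls back to the COHERE_API_KEY environment variable
  model: "command", // default model name
  temperature: 0.3, // default sampling temperature
  maxRetries: 6, // default retry count
  streaming: false, // request a single response rather than a stream
};
```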

Hierarchy

  • BaseChatModelParams
    • BaseChatCohereInput

Properties

apiKey?: string

The API key to use.

Default: process.env.COHERE_API_KEY
cache?: boolean | BaseCache<Generation[]>
callbackManager?: CallbackManager

Deprecated: use callbacks instead.

callbacks?: Callbacks
maxConcurrency?: number

The maximum number of concurrent calls that can be made. Defaults to Infinity, which means no limit.

maxRetries?: number

The maximum number of retries that can be made for a single call, with an exponential backoff between each attempt. Defaults to 6.
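The retry behavior described above can be sketched as follows. This is an illustrative implementation only, not the library's actual retry code; the parameter names and the 100 ms backoff base are assumptions:

```typescript
// Illustrative retry-with-exponential-backoff sketch (not LangChain's
// actual implementation).
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries = 6,
  baseDelayMs = 100
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      // Exponential backoff: 100 ms, 200 ms, 400 ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```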

metadata?: Record<string, unknown>
model?: string

The name of the model to use.

Default: "command"
onFailedAttempt?: FailedAttemptHandler

Custom handler to handle failed attempts. Takes the originally thrown error object as input, and should itself throw an error if the input error is not retryable.
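For illustration, a handler of this shape might rethrow on errors it considers non-retryable. The status-code check below is an assumption for the sketch; the exact FailedAttemptHandler signature is defined by LangChain:

```typescript
// Illustrative onFailedAttempt handler sketch: rethrow when the error is
// not worth retrying, otherwise return so the retry loop continues.
const onFailedAttempt = (error: any): void => {
  // Assumption: treat 4xx responses (e.g. an invalid API key) as non-retryable.
  const status = error?.status ?? error?.response?.status;
  if (status !== undefined && status >= 400 && status < 500) {
    throw error; // abort further retries
  }
  // Otherwise swallow the error and let the next attempt proceed.
};
```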

streamUsage?: boolean

Whether or not to include token usage when streaming. If enabled, an extra chunk is emitted at the end of the stream with eventType: "stream-end" and the token usage in usage_metadata.

Default: true
streaming?: boolean

Whether or not to stream the response.

Default: false
tags?: string[]
temperature?: number

What sampling temperature to use, between 0.0 and 2.0. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

Default: 0.3
verbose?: boolean