Input options for the ChatOllama chat model class.

interface ChatOllamaInput {
    baseUrl?: string;
    cache?: boolean | BaseCache<Generation[]>;
    callbackManager?: CallbackManager;
    callbacks?: Callbacks;
    checkOrPullModel?: boolean;
    embeddingOnly?: boolean;
    f16Kv?: boolean;
    format?: string;
    frequencyPenalty?: number;
    headers?: Headers;
    keepAlive?: string | number;
    logitsAll?: boolean;
    lowVram?: boolean;
    mainGpu?: number;
    maxConcurrency?: number;
    maxRetries?: number;
    metadata?: Record<string, unknown>;
    mirostat?: number;
    mirostatEta?: number;
    mirostatTau?: number;
    model?: string;
    numBatch?: number;
    numCtx?: number;
    numGpu?: number;
    numKeep?: number;
    numPredict?: number;
    numThread?: number;
    numa?: boolean;
    onFailedAttempt?: FailedAttemptHandler;
    penalizeNewline?: boolean;
    presencePenalty?: number;
    repeatLastN?: number;
    repeatPenalty?: number;
    seed?: number;
    stop?: string[];
    streaming?: boolean;
    tags?: string[];
    temperature?: number;
    tfsZ?: number;
    topK?: number;
    topP?: number;
    typicalP?: number;
    useMlock?: boolean;
    useMmap?: boolean;
    verbose?: boolean;
    vocabOnly?: boolean;
}
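
For orientation, here is a minimal usage sketch. It assumes the ChatOllama class from @langchain/ollama, which accepts this interface as its constructor options; the option values themselves are illustrative.

import { ChatOllama } from "@langchain/ollama";

// Construct the chat model with a subset of ChatOllamaInput fields.
const model = new ChatOllama({
  model: "llama3",                   // pulled on first use if checkOrPullModel is true
  baseUrl: "http://127.0.0.1:11434", // the default local Ollama server
  temperature: 0.7,
  topP: 0.9,
  numCtx: 4096,
});

const response = await model.invoke("Why is the sky blue?");
console.log(response.content);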

Properties

baseUrl?: string

The host URL of the Ollama server.

"http://127.0.0.1:11434"
cache?: boolean | BaseCache<Generation[]>
callbackManager?: CallbackManager

Deprecated: Use callbacks instead.

callbacks?: Callbacks
checkOrPullModel?: boolean

Whether or not to check that the model exists on the local machine before invoking it. If set to true, the model will be pulled if it does not exist.

Default: false
embeddingOnly?: boolean
f16Kv?: boolean
format?: string
frequencyPenalty?: number
headers?: Headers

Optional HTTP Headers to include in the request.
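
When the Ollama server sits behind a reverse proxy that requires authentication, this option can carry the extra request headers. A sketch, where the host and token are placeholders rather than values from this documentation:

import { ChatOllama } from "@langchain/ollama";

const model = new ChatOllama({
  model: "llama3",
  baseUrl: "https://ollama.example.com",                          // hypothetical proxied endpoint
  headers: new Headers({ Authorization: "Bearer <your-token>" }), // placeholder credential
});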

keepAlive?: string | number
"5m"
logitsAll?: boolean
lowVram?: boolean
mainGpu?: number
maxConcurrency?: number

The maximum number of concurrent calls that can be made. Defaults to Infinity, which means no limit.

maxRetries?: number

The maximum number of retries that can be made for a single call, with an exponential backoff between each attempt. Defaults to 6.
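
Both throttling options are plain constructor fields. A sketch that caps parallelism at two in-flight requests and retries each failed call at most three times (the values are illustrative):

import { ChatOllama } from "@langchain/ollama";

const model = new ChatOllama({
  model: "llama3",
  maxConcurrency: 2, // at most two concurrent requests to the Ollama server
  maxRetries: 3,     // per-call retry budget, with exponential backoff between attempts
});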

metadata?: Record<string, unknown>
mirostat?: number
mirostatEta?: number
mirostatTau?: number
model?: string

The model to invoke. If the model does not exist, it will be pulled.

"llama3"
numBatch?: number
numCtx?: number
numGpu?: number
numKeep?: number
numPredict?: number
numThread?: number
numa?: boolean
onFailedAttempt?: FailedAttemptHandler

Custom handler to handle failed attempts. Takes the originally thrown error object as input, and should itself throw an error if the input error is not retryable.
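
A sketch of such a handler; classifying only connection failures as retryable is an assumption made for this example, not behavior defined by the interface:

import { ChatOllama } from "@langchain/ollama";

const model = new ChatOllama({
  model: "llama3",
  maxRetries: 6,
  onFailedAttempt: (error) => {
    // Throwing here stops further retries; returning lets the retry loop continue.
    // Only network-style failures are treated as retryable in this sketch.
    if (!/ECONNREFUSED|fetch failed/i.test(String(error))) {
      throw error;
    }
  },
});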

penalizeNewline?: boolean
presencePenalty?: number
repeatLastN?: number
repeatPenalty?: number
seed?: number
stop?: string[]
streaming?: boolean
tags?: string[]
temperature?: number
tfsZ?: number
topK?: number
topP?: number
typicalP?: number
useMlock?: boolean
useMmap?: boolean
verbose?: boolean
vocabOnly?: boolean