baseUrl (Optional)
The host URL of the Ollama server.

cache (Optional)
callbackManager (Optional)
callbacks (Optional)

checkOrPullModel (Optional)
Whether or not to check that the model exists on the local machine before invoking it. If set to true, the model will be pulled if it does not exist.

embeddingOnly (Optional)
f16Kv (Optional)
format (Optional)
frequencyPenalty (Optional)

headers (Optional)
HTTP Headers to include in the request.

keepAlive (Optional)
logitsAll (Optional)
lowVram (Optional)
mainGpu (Optional)

maxConcurrency (Optional)
The maximum number of concurrent calls that can be made. Defaults to Infinity, which means no limit.

maxRetries (Optional)
The maximum number of retries that can be made for a single call, with an exponential backoff between each attempt. Defaults to 6.

metadata (Optional)
mirostat (Optional)
mirostatEta (Optional)
mirostatTau (Optional)

model (Optional)
The model to invoke. If the model does not exist, it will be pulled.

numBatch (Optional)
numCtx (Optional)
numGpu (Optional)
numKeep (Optional)
numPredict (Optional)
numThread (Optional)
numa (Optional)

onFailedAttempt (Optional)
Custom handler to handle failed attempts. Takes the originally thrown error object as input, and should itself throw an error if the input error is not retryable.

penalizeNewline (Optional)
presencePenalty (Optional)
repeatLastN (Optional)
repeatPenalty (Optional)
seed (Optional)
stop (Optional)
streaming (Optional)
tags (Optional)
temperature (Optional)
tfsZ (Optional)
topK (Optional)
topP (Optional)
typicalP (Optional)
useMlock (Optional)
useMmap (Optional)
verbose (Optional)
vocabOnly (Optional)

Together, these properties make up the input to the chat model class.
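
As an illustration, here is a minimal sketch that passes a few of these properties to the chat model constructor. It assumes the class in question is ChatOllama from the @langchain/ollama package, that an Ollama server is reachable at its default local address, and that the model name and header value are placeholders to replace with your own.

import { ChatOllama } from "@langchain/ollama";

// Sketch only: field names mirror the properties documented above.
const chat = new ChatOllama({
  model: "llama3.1",                    // pulled if missing when checkOrPullModel is true
  baseUrl: "http://127.0.0.1:11434",    // host URL of the Ollama server
  checkOrPullModel: true,               // verify the model exists locally before invoking
  headers: { Authorization: "Bearer <token>" }, // optional HTTP headers (placeholder value)
  temperature: 0.7,                     // sampling controls such as topK, topP, and seed are set the same way
  maxConcurrency: 5,                    // cap on concurrent calls (default Infinity)
  maxRetries: 2,                        // failed calls retried with exponential backoff (default 6)
});

// Assumes an async context, e.g. an ES module with top-level await.
const response = await chat.invoke("Why is the sky blue?");
console.log(response.content);

Any of the other properties above (mirostat, numCtx, repeatPenalty, and so on) can be supplied in the same constructor object.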