Input to chat model class.

Optional baseUrl: The host URL of the Ollama server.
Optional cache
Optional callbackManager
Optional callbacks
Optional checkOrPullModel: Whether or not to check that the model exists on the local machine before invoking it. If set to true, the model will be pulled if it does not exist.
Optional disableStreaming: Whether to disable streaming. If streaming is bypassed, then stream() will defer to invoke().
Optional embeddingOnly
Optional f16Kv
Optional fetch: The fetch function to use. Its signature takes a request input and an optional init: RequestInit.
Optional format
Optional frequencyPenalty
Optional headers: Optional HTTP headers to include in the request.
Optional keepAlive
Optional logitsAll
Optional lowVram
Optional mainGpu
Optional maxConcurrency: The maximum number of concurrent calls that can be made. Defaults to Infinity, which means no limit.
Optional maxRetries: The maximum number of retries that can be made for a single call, with exponential backoff between attempts. Defaults to 6.
Optional metadata
Optional mirostat
Optional mirostatEta
Optional mirostatTau
Optional model: The model to invoke. If the model does not exist, it will be pulled.
Optional numBatch
Optional numCtx
Optional numGpu
Optional numKeep
Optional numPredict
Optional numThread
Optional numa
Optional onFailedAttempt: Custom handler for failed attempts. Takes the originally thrown error object as input, and should itself throw an error if the input error is not retryable.
Optional penalizeNewline
Optional presencePenalty
Optional repeatLastN
Optional repeatPenalty
Optional seed
Optional stop
Optional streaming
Optional tags
Optional temperature
Optional tfsZ
Optional think
Optional topK
Optional topP
Optional typicalP
Optional useMlock
Optional useMmap
Optional verbose
Optional vocabOnly
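Since every field above is optional, callers typically pass a small configuration object to the chat model constructor. The sketch below is illustrative only: it declares a local stand-in interface with a handful of the fields listed above (the real library interface declares many more, including the sampling and GPU options), because the actual class and package names are not given in this reference.

```typescript
// Local stand-in for the input interface described above; field names
// are taken from the parameter list, but this is not the library type.
interface ChatModelInput {
  baseUrl?: string;          // host URL of the Ollama server
  model?: string;            // model to invoke; pulled if missing
  checkOrPullModel?: boolean;
  temperature?: number;
  maxRetries?: number;       // defaults to 6, with exponential backoff
  maxConcurrency?: number;   // defaults to Infinity (no limit)
  disableStreaming?: boolean;
  headers?: Record<string, string>;
}

// Example configuration: supply only the options you need.
const options: ChatModelInput = {
  baseUrl: "http://localhost:11434",
  model: "llama3.1",
  temperature: 0.2,
  maxRetries: 3,
  headers: { "x-api-key": "example" },
};

console.log(options.model);
```

Because the interface is all-optional, an empty object `{}` is also a valid input, relying entirely on the defaults noted above.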
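The fetch option accepts a custom function with the standard (input, init?: RequestInit) signature. One common reason to supply it is to inject headers into every outgoing request. A hedged sketch under that assumption (mergeInit and withExtraHeaders are invented names for illustration, not library API):

```typescript
type FetchLike = (input: RequestInfo | URL, init?: RequestInit) => Promise<Response>;

// Merge extra headers into a RequestInit without mutating the original.
// For simplicity this sketch assumes plain-object headers, not a
// Headers instance or an array of pairs.
function mergeInit(extra: Record<string, string>, init?: RequestInit): RequestInit {
  return {
    ...init,
    headers: { ...(init?.headers as Record<string, string> | undefined), ...extra },
  };
}

// A drop-in replacement for the fetch option that adds headers to every
// request before delegating to the platform fetch (or any base function).
function withExtraHeaders(extra: Record<string, string>, base: FetchLike = fetch): FetchLike {
  return (input, init) => base(input, mergeInit(extra, init));
}

const merged = mergeInit(
  { "x-api-key": "k" },
  { method: "POST", headers: { accept: "application/json" } },
);
console.log(merged);
```

Note that the standalone headers option above covers the simple case already; a custom fetch is only needed when headers must be computed per request or combined with other request rewriting.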