cache
Optional

callbackManager
Optional

callbacks
Optional

concurrency
Optional

endpoint
Optional
WatsonX AI Complete Endpoint. Can be used if you want a fully custom endpoint.

ibmCloudApiKey
Optional
WatsonX AI Key. Provide the API key if you do not wish to pull it automatically from the environment.

maxConcurrency
Optional
The maximum number of concurrent calls that can be made. Defaults to Infinity, which means no limit.

maxRetries
Optional
The maximum number of retries that can be made for a single call, with an exponential backoff between each attempt. Defaults to 6.

metadata
Optional

modelId
Optional
WatsonX AI Model ID.

modelParameters
Optional
Parameters accepted by the WatsonX AI Endpoint.

onFailedAttempt
Optional
Custom handler to handle failed attempts. Takes the originally thrown error object as input, and should itself throw an error if the input error is not retryable.

projectId
Optional
WatsonX AI Project ID. Provide the project ID if you do not wish to pull it automatically from the environment.

region
Optional
IBM Cloud Compute Region, e.g. us-south, us-east.

tags
Optional

verbose
Optional

version
Optional
WatsonX AI Version. Date representing the WatsonX AI Version, e.g. 2023-05-29.
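Taken together, these options configure the client at construction time. A minimal usage sketch is shown below; the WatsonxAI class name and the @langchain/community/llms/watsonx_ai import path are assumptions (they are not stated on this page), and all IDs, credentials, and parameter values are placeholders.

```typescript
import { WatsonxAI } from "@langchain/community/llms/watsonx_ai";

// Minimal sketch, assuming the WatsonxAI class exported from
// @langchain/community/llms/watsonx_ai; the model ID, credentials, and
// parameter values below are placeholders, not defaults.
const model = new WatsonxAI({
  modelId: "meta-llama/llama-2-70b-chat", // WatsonX AI Model ID
  ibmCloudApiKey: "<ibm-cloud-api-key>",  // omit to pull the key from env
  projectId: "<project-id>",
  region: "us-south",                     // IBM Cloud Compute Region
  version: "2023-05-29",                  // WatsonX AI Version date
  maxRetries: 3,                          // retries with exponential backoff
  modelParameters: {
    max_new_tokens: 100,                  // passed through to the WatsonX AI endpoint
  },
});

const res = await model.invoke("What is IBM watsonx.ai?");
console.log(res);
```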
Deprecated
Please use the newer implementation in @langchain/community/llms/ibm instead.
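For the replacement module referenced above, a minimal sketch might look as follows; the WatsonxLLM class name and the version, serviceUrl, projectId, and model parameter names are assumptions about @langchain/community/llms/ibm, not taken from this page.

```typescript
import { WatsonxLLM } from "@langchain/community/llms/ibm";

// Minimal sketch of the newer implementation; class and parameter names are
// assumptions about @langchain/community/llms/ibm, and all values are placeholders.
const llm = new WatsonxLLM({
  version: "2024-05-31",
  serviceUrl: "https://us-south.ml.cloud.ibm.com",
  projectId: "<project-id>",
  model: "<model-id>",
});
```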