Input to the ChatAnthropic class.

interface AnthropicInput {
    anthropicApiKey?: string;
    anthropicApiUrl?: string;
    apiKey?: string;
    clientOptions?: ClientOptions;
    createClient?: ((options: ClientOptions) => any);
    invocationKwargs?: Kwargs;
    maxTokens?: number;
    maxTokensToSample?: number;
    model?: string;
    modelName?: string;
    stopSequences?: string[];
    streamUsage?: boolean;
    streaming?: boolean;
    temperature?: number;
    topK?: number;
    topP?: number;
}
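
For illustration, a minimal sketch of constructing the ChatAnthropic class (which accepts this input) with a few of these options. The model name is a placeholder; substitute whichever Anthropic model you intend to call.

import { ChatAnthropic } from "@langchain/anthropic";

// Any AnthropicInput field omitted here falls back to its default.
const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620", // placeholder model name
  apiKey: process.env.ANTHROPIC_API_KEY, // optional; see apiKey below
  temperature: 0.2,
  maxTokens: 1024,
});

const response = await model.invoke("Why is the sky blue?");
console.log(response.content);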

Implemented by

ChatAnthropic

Properties

anthropicApiKey?: string

Anthropic API key

anthropicApiUrl?: string

Anthropic API URL

apiKey?: string

Anthropic API key. If not set, it is read from the ANTHROPIC_API_KEY environment variable.

clientOptions?: ClientOptions

Overridable Anthropic ClientOptions

createClient?: ((options: ClientOptions) => any)

Optional method that returns an initialized underlying Anthropic client. Useful for accessing Anthropic models hosted on other cloud services such as Google Vertex.
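
As a rough sketch of how createClient can swap in a custom SDK client, assuming the @anthropic-ai/sdk package; the baseURL is a placeholder for wherever the model is actually hosted.

import Anthropic, { type ClientOptions } from "@anthropic-ai/sdk";
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620", // placeholder model name
  // Return our own client, e.g. to route requests through a proxy
  // or an alternate hosting endpoint.
  createClient: (options: ClientOptions) =>
    new Anthropic({ ...options, baseURL: "https://example.com/anthropic" }), // placeholder URL
});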

invocationKwargs?: Kwargs

Holds any additional parameters that are valid to pass to anthropic.messages but are not explicitly specified on this class.
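
For example, extra fields accepted by the Messages API, such as its metadata object, can be forwarded this way (a sketch; the user_id value is a placeholder):

import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620", // placeholder model name
  // Passed through verbatim on each underlying anthropic.messages call.
  invocationKwargs: {
    metadata: { user_id: "user-1234" }, // placeholder value
  },
});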

maxTokens?: number

The maximum number of tokens to generate before stopping.

maxTokensToSample?: number

The maximum number of tokens to generate before stopping.

Deprecated: use maxTokens instead.

model?: string

Model name to use

modelName?: string

Deprecated: use model instead.

stopSequences?: string[]

A list of strings upon which to stop generating. You probably want ["\n\nHuman:"], as that's the cue for the next turn in the dialog agent.
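
A minimal sketch of setting a custom stop sequence:

import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620", // placeholder model name
  // Generation halts as soon as any of these strings is produced.
  stopSequences: ["\n\nHuman:"],
});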

streamUsage?: boolean

Whether or not to include token usage data in streamed chunks. Defaults to true.

streaming?: boolean

Whether to stream the results or not
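
A sketch of streaming with usage data, assuming the standard .stream() method that LangChain chat models expose:

import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620", // placeholder model name
  streamUsage: true, // the default, shown here for clarity
});

const stream = await model.stream("Write a haiku about rain.");
for await (const chunk of stream) {
  // usage_metadata may be undefined on intermediate chunks.
  console.log(chunk.content, chunk.usage_metadata);
}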

temperature?: number

Amount of randomness injected into the response. Ranges from 0 to 1. Use a temperature closer to 0 for analytical / multiple-choice tasks, and closer to 1 for creative and generative tasks.

topK?: number

Only sample from the top K options for each subsequent token. Used to remove "long tail" low probability responses. Defaults to -1, which disables it.

topP?: number

Performs nucleus sampling: the cumulative distribution over all options for each subsequent token is computed in decreasing probability order and cut off once it reaches the probability specified by top_p. Defaults to -1, which disables it. Note that you should alter either temperature or top_p, but not both.
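
Following the advice above, a sketch that tunes nucleus sampling instead of temperature:

import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620", // placeholder model name
  topP: 0.9, // keep only the smallest set of tokens whose probabilities sum to 0.9
  topK: 40,  // and never sample outside the 40 most likely tokens
  // temperature is left unset, since altering both it and topP is discouraged.
});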