```typescript
import { UpstashRedisCache } from "@langchain/community/caches/upstash_redis";
import { ChatOpenAI } from "@langchain/openai";

const cache = new UpstashRedisCache({
  config: {
    url: "UPSTASH_REDIS_REST_URL",
    token: "UPSTASH_REDIS_REST_TOKEN",
  },
  ttl: 3600, // Optional: cache entries will expire after 1 hour
});

// Initialize the OpenAI model with the Upstash Redis cache for caching responses
const model = new ChatOpenAI({
  cache,
});
await model.invoke("How are you today?");
const cachedValues = await cache.lookup("How are you today?", "llmKey");
```
lookup
lookup(prompt, llmKey): Promise<null | Generation[]>
Look up LLM generations in the cache by prompt and associated LLM key.
Parameters
prompt: string
llmKey: string
Returns Promise<null | Generation[]>
makeDefaultKeyEncoder
makeDefaultKeyEncoder(keyEncoderFn): void
Sets a custom key encoder function for the cache.
This function should take a prompt and an LLM key and return a string
that will be used as the cache key.
Parameters
keyEncoderFn: HashKeyEncoder
The custom key encoder function.
Returns void
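As a sketch of what a custom key encoder might look like: a HashKeyEncoder takes the prompt and LLM key and returns the string used as the cache key, so one reasonable choice is to hash both together, keeping Redis keys a fixed length regardless of prompt size. The `sha256KeyEncoder` name below is illustrative, not part of the library:

```typescript
import { createHash } from "node:crypto";

// Illustrative HashKeyEncoder: (prompt, llmKey) => string.
// Hashing keeps cache keys a fixed length regardless of prompt size.
const sha256KeyEncoder = (prompt: string, llmKey: string): string =>
  createHash("sha256").update(`${prompt}:${llmKey}`).digest("hex");

// Hypothetical wiring, assuming `cache` is an UpstashRedisCache instance:
// cache.makeDefaultKeyEncoder(sha256KeyEncoder);
```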
update
update(prompt, llmKey, value): Promise<void>
Update the cache with the given generations.
Note: this overwrites any existing generations for the given prompt and LLM key.
Parameters
prompt: string
llmKey: string
value: Generation[]
Returns Promise<void>
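The lookup/update contract described above can be sketched with an in-memory stand-in: lookup returns null on a miss, and update overwrites whatever was stored for the same prompt and LLM key. The `InMemoryCache` class and the simplified `Generation` type below are illustrative only, not part of the library:

```typescript
// Simplified stand-in for the Generation type from @langchain/core.
type Generation = { text: string };

// In-memory sketch of the cache contract: lookup returns null on a miss,
// and update overwrites any generations stored for the same prompt/llmKey.
class InMemoryCache {
  private store = new Map<string, Generation[]>();

  private key(prompt: string, llmKey: string): string {
    return `${prompt}:${llmKey}`;
  }

  async lookup(prompt: string, llmKey: string): Promise<Generation[] | null> {
    return this.store.get(this.key(prompt, llmKey)) ?? null;
  }

  async update(prompt: string, llmKey: string, value: Generation[]): Promise<void> {
    this.store.set(this.key(prompt, llmKey), value);
  }
}
```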
A cache that uses Upstash as the backing store. See https://docs.upstash.com/redis.