A cache that uses Vercel KV as the backing store.

Example

```typescript
const cache = new VercelKVCache({
  ttl: 3600, // Optional: Cache entries will expire after 1 hour
});

// Initialize the OpenAI model with Vercel KV cache for caching responses
const model = new ChatOpenAI({
  cache,
});

await model.invoke("How are you today?");
const cachedValues = await cache.lookup("How are you today?", "llmKey");
```
lookup
lookup(prompt, llmKey): Promise<null | Generation[]>
Look up LLM generations in the cache by prompt and associated LLM key.
Parameters
prompt: string
llmKey: string
Returns Promise<null | Generation[]>
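For illustration, lookup can be used to check for a cached response before invoking the model. This is a minimal sketch that reuses the cache instance from the example above; the literal "llmKey" string is a placeholder, since in practice the key is derived from the model's configuration:

```typescript
// Check the cache for a previously stored generation (sketch; reuses `cache`
// from the example above, with "llmKey" as a placeholder key).
const generations = await cache.lookup("How are you today?", "llmKey");

if (generations === null) {
  console.log("Cache miss: the model would need to be invoked.");
} else {
  // Each cached entry is a Generation carrying the generated text.
  console.log("Cache hit:", generations[0].text);
}
```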
makeDefaultKeyEncoder
makeDefaultKeyEncoder(keyEncoderFn): void
Sets a custom key encoder function for the cache.
This function should take a prompt and an LLM key and return a string
that will be used as the cache key.
Parameters
keyEncoderFn: HashKeyEncoder
The custom key encoder function.
Returns void
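As a rough sketch of how a custom encoder could be installed: per the description above, the encoder is assumed to receive the prompt and the LLM key and return the string used as the cache key; SHA-256 is only an illustrative choice, not a requirement.

```typescript
import { createHash } from "node:crypto";

// Replace the default key encoder with a SHA-256 based one (sketch).
// The encoder receives the prompt and the LLM key and returns the cache key.
cache.makeDefaultKeyEncoder((prompt, llmKey) =>
  createHash("sha256").update(`${prompt}:${llmKey}`).digest("hex")
);
```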
update
update(prompt, llmKey, value): Promise<void>
Update the cache with the given generations. Note that this overwrites any existing generations for the given prompt and LLM key.
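A minimal sketch of seeding the cache manually follows; the import path for Generation and the object shape are assumptions based on @langchain/core, and `cache` is the instance from the example above:

```typescript
import type { Generation } from "@langchain/core/outputs"; // assumed import path

// Manually store a generation for a prompt / LLM key pair (sketch).
// Any generations previously stored under this pair are overwritten.
const value: Generation[] = [{ text: "I'm doing well, thank you!" }];
await cache.update("How are you today?", "llmKey", value);
```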