Represents a semantic cache that uses an Azure CosmosDB MongoDB backend as the underlying storage system.
```typescript
import { MongoClient } from "mongodb";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { AzureCosmosDBMongoDBSemanticCache } from "@langchain/azure-cosmosdb";

const embeddings = new OpenAIEmbeddings();
const cache = new AzureCosmosDBMongoDBSemanticCache(embeddings, {
  // Optionally pass an existing MongoClient (env var name illustrative)
  client: new MongoClient(process.env.AZURE_COSMOSDB_CONNECTION_STRING ?? ""),
});
const model = new ChatOpenAI({ cache });

// Invoke the model to perform an action
const response = await model.invoke("Do something random!");
console.log(response);
```
Protected. Deletes the semantic cache entries for a given `llmKey`.
Retrieves data from the cache.
The prompt for lookup.
The LLM key used to construct the cache key.
Returns an array of `Generation` objects if found, or `null` otherwise.
Sets a custom key encoder function for the cache. This function should take a prompt and an LLM key and return a string that will be used as the cache key.
The custom key encoder function.
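As a sketch of what such a key encoder might look like, the function below hashes the prompt together with the LLM key so cache keys have a fixed length regardless of prompt size. The function name is illustrative; pass it to the cache's key-encoder setter described above.

```typescript
import { createHash } from "node:crypto";

// Illustrative custom key encoder: derives a fixed-length cache key
// from the prompt and the LLM key using SHA-256.
function hashKeyEncoder(prompt: string, llmKey: string): string {
  return createHash("sha256").update(`${prompt}:${llmKey}`).digest("hex");
}
```

A hashed key keeps the cache's key space uniform even for very long prompts, at the cost of keys no longer being human-readable.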
Updates the cache with new data.
The prompt for update.
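The lookup/update contract described above can be sketched with a minimal in-memory stand-in. This is not the real class: it matches keys exactly rather than by embedding similarity, and uses synchronous methods for brevity; the class name and `Generation` shape here are illustrative.

```typescript
// Minimal shape of a cached generation (illustrative).
interface Generation {
  text: string;
}

// In-memory stand-in for the cache's read/write contract:
// exact-match keys instead of semantic similarity.
class InMemoryCacheSketch {
  private store = new Map<string, Generation[]>();

  private key(prompt: string, llmKey: string): string {
    return `${llmKey}:${prompt}`;
  }

  // Retrieves data from the cache: Generations if found, null otherwise.
  lookup(prompt: string, llmKey: string): Generation[] | null {
    return this.store.get(this.key(prompt, llmKey)) ?? null;
  }

  // Updates the cache with new data for the given prompt and LLM key.
  update(prompt: string, llmKey: string, value: Generation[]): void {
    this.store.set(this.key(prompt, llmKey), value);
  }
}
```

A real semantic cache embeds the prompt and matches by vector similarity against stored embeddings, so a lookup can hit on a paraphrased prompt, not just a byte-identical one.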