Represents a Semantic Cache that uses CosmosDB NoSQL backend as the underlying storage system.
```typescript
const embeddings = new OpenAIEmbeddings();
const cache = new AzureCosmosDBNoSQLSemanticCache(embeddings, {
  databaseName: DATABASE_NAME,
  containerName: CONTAINER_NAME,
});
const model = new ChatOpenAI({ cache });
// Invoke the model to perform an action
const response = await model.invoke("Do something random!");
console.log(response);
```
Deletes the semantic cache entries for a given llmKey.
Retrieves data from the cache.
The prompt for lookup.
The LLM key used to construct the cache key.
An array of Generations if found, null otherwise.
Updates the cache with new data.
The prompt for update.
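To make the `lookup`/`update` contract concrete, here is a minimal, self-contained sketch of how a semantic cache behaves. It is not the Cosmos DB implementation: the real class delegates storage and vector search to the Cosmos DB NoSQL container, and `toyEmbed`, `InMemorySemanticCache`, and the similarity threshold below are all hypothetical stand-ins for illustration.

```typescript
type Generation = { text: string };

// toyEmbed is a stand-in for a real embeddings model: it folds character
// codes into a small fixed-size vector and normalizes it.
function toyEmbed(text: string): number[] {
  const vec = [0, 0, 0];
  for (let i = 0; i < text.length; i++) {
    vec[i % 3] += text.charCodeAt(i);
  }
  const norm = Math.hypot(...vec) || 1;
  return vec.map((v) => v / norm);
}

// Cosine similarity of two unit vectors is just their dot product.
function cosine(a: number[], b: number[]): number {
  return a.reduce((sum, v, i) => sum + v * b[i], 0);
}

class InMemorySemanticCache {
  private entries: { key: string; vector: number[]; value: Generation[] }[] = [];

  constructor(private threshold = 0.95) {}

  // lookup: embed the prompt and return cached generations whose stored
  // prompt embedding is similar enough, or null on a miss.
  async lookup(prompt: string, llmKey: string): Promise<Generation[] | null> {
    const vector = toyEmbed(prompt);
    for (const entry of this.entries) {
      if (entry.key === llmKey && cosine(entry.vector, vector) >= this.threshold) {
        return entry.value;
      }
    }
    return null;
  }

  // update: store the generations under the prompt's embedding and llmKey.
  async update(prompt: string, llmKey: string, value: Generation[]): Promise<void> {
    this.entries.push({ key: llmKey, vector: toyEmbed(prompt), value });
  }
}
```

Because matching is by embedding similarity rather than exact string equality, a near-duplicate prompt can hit the cache; a different `llmKey` always misses, since the key scopes entries to a particular model configuration.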