interface EvalCreateParams {
    data_source_config: Custom | OpenAIClient.Evals.EvalCreateParams.Logs | OpenAIClient.Evals.EvalCreateParams.StoredCompletions;
    metadata?: null | Metadata;
    name?: string;
    testing_criteria: (
        | StringCheckGrader
        | LabelModel
        | TextSimilarity
        | Python
        | ScoreModel)[];
}

Properties

data_source_config: Custom | OpenAIClient.Evals.EvalCreateParams.Logs | OpenAIClient.Evals.EvalCreateParams.StoredCompletions

The configuration for the data source used for the evaluation runs. Dictates the schema of the data used in the evaluation.
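
For example, a "custom" data source configuration describes the shape of each item in your dataset. A minimal sketch, assuming hypothetical item fields question and expected:

const dataSourceConfig = {
    type: "custom" as const,
    // item_schema describes one row of your dataset; these field names are illustrative.
    item_schema: {
        type: "object",
        properties: {
            question: { type: "string" },
            expected: { type: "string" },
        },
        required: ["question", "expected"],
    },
    // Also expose the sample namespace (model output) to graders.
    include_sample_schema: true,
};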

metadata?: null | Metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
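
A minimal sketch of a compliant metadata object (the key and value names here are illustrative):

const metadata = {
    // Keys: at most 64 characters; values: at most 512 characters; up to 16 pairs.
    project: "support-bot-evals",
    owner: "qa-team",
};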

name?: string

The name of the evaluation.

testing_criteria: (
    | StringCheckGrader
    | LabelModel
    | TextSimilarity
    | Python
    | ScoreModel)[]

A list of graders for all eval runs in this group. Graders can reference variables in the data source using double curly braces notation, like {{item.variable_name}}. To reference the model's output, use the sample namespace (i.e., {{sample.output_text}}).
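
Putting the pieces together, here is a sketch of creating an eval with a single string_check grader via the Node SDK; the eval name, grader name, and item fields are illustrative assumptions, so check the SDK reference for the exact grader shapes:

import OpenAI from "openai";

const client = new OpenAI();

const evaluation = await client.evals.create({
    name: "qa-accuracy",
    data_source_config: {
        type: "custom",
        item_schema: {
            type: "object",
            properties: { expected: { type: "string" } },
            required: ["expected"],
        },
        include_sample_schema: true,
    },
    testing_criteria: [
        {
            type: "string_check",
            name: "exact-match",
            // Compare the model's output against the expected answer from the data source.
            input: "{{ sample.output_text }}",
            reference: "{{ item.expected }}",
            operation: "eq",
        },
    ],
});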