Represents a chat completion response returned by the model, based on the provided input.

interface ChatCompletion {
    choices: OpenAIClient.Chat.Completions.ChatCompletion.Choice[];
    created: number;
    id: string;
    model: string;
    object: "chat.completion";
    service_tier?:
        | null
        | "auto"
        | "default"
        | "flex"
        | "scale"
        | "priority";
    system_fingerprint?: string;
    usage?: CompletionUsage;
}
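The interface above can be exercised with a hand-built object. This is a hedged sketch: the `Choice` and usage shapes below are simplified stand-ins for the SDK's real nested types, and the field values are illustrative, not from an actual API call.

```typescript
// Simplified local copies of the SDK types, for illustration only.
interface Choice {
    index: number;
    message: { role: "assistant"; content: string | null };
    finish_reason: string;
}

interface ChatCompletion {
    choices: Choice[];
    created: number;
    id: string;
    model: string;
    object: "chat.completion";
    service_tier?: null | "auto" | "default" | "flex" | "scale" | "priority";
    system_fingerprint?: string;
    usage?: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
}

// A hand-built response object conforming to the interface.
const completion: ChatCompletion = {
    id: "chatcmpl-example",
    object: "chat.completion",
    created: 1700000000,
    model: "gpt-4o-mini",
    choices: [
        { index: 0, message: { role: "assistant", content: "Hello!" }, finish_reason: "stop" },
    ],
    usage: { prompt_tokens: 5, completion_tokens: 2, total_tokens: 7 },
};

// Reading the first choice's text; optional chaining guards an empty list.
const text = completion.choices[0]?.message.content;
```

In real code the object would come back from `client.chat.completions.create(...)` rather than being constructed by hand.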

Properties

choices: OpenAIClient.Chat.Completions.ChatCompletion.Choice[]

A list of chat completion choices. Can be more than one if n is greater than 1.

created: number

The Unix timestamp (in seconds) of when the chat completion was created.

id: string

A unique identifier for the chat completion.

model: string

The model used for the chat completion.

object: "chat.completion"

The object type, which is always chat.completion.

service_tier?:
    | null
    | "auto"
    | "default"
    | "flex"
    | "scale"
    | "priority"

Specifies the processing type used for serving the request.

  • If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
  • If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
  • If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier. Contact sales to learn more about Priority processing.
  • When not set, the default behavior is 'auto'.

When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
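Because the tier actually used may differ from the tier requested, it can be useful to compare the two when logging responses. A minimal sketch, with a helper name (`describeTier`) that is purely illustrative:

```typescript
// The same union of tiers used by the service_tier property above.
type ServiceTier = null | "auto" | "default" | "flex" | "scale" | "priority";

// Summarizes how a request was served relative to what was requested.
// `served` comes from the response body; it may be absent entirely.
function describeTier(requested: ServiceTier, served: ServiceTier | undefined): string {
    if (served === undefined || served === null) return "tier not reported";
    return served === requested
        ? `served on requested tier "${served}"`
        : `requested "${requested}", served on "${served}"`;
}

// Example: a request made with 'auto' that was served on 'default'.
const summary = describeTier("auto", "default");
```

A response's `service_tier` field would be passed in as the second argument when inspecting a real `ChatCompletion`.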

system_fingerprint?: string

This fingerprint represents the backend configuration that the model runs with.

Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.
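One way to act on this is to compare fingerprints across responses that used the same seed: a change in fingerprint between two defined values indicates the backend configuration changed. A hedged sketch; the function name is illustrative, not part of the SDK:

```typescript
// Returns true only when two *defined* fingerprints differ.
// A missing fingerprint is treated as "unknown" rather than as a change,
// since system_fingerprint is optional on the response.
function fingerprintChanged(
    previous: string | undefined,
    current: string | undefined,
): boolean {
    return previous !== undefined && current !== undefined && previous !== current;
}
```

In practice, `previous` and `current` would be the `system_fingerprint` values of two responses generated with the same `seed` parameter.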

usage?: CompletionUsage

Usage statistics for the completion request.