choices
A list of chat completion choices. Can contain more than one element if n is greater than 1. Can also be empty for the last chunk if you set stream_options: {"include_usage": true}.
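
A consumer of the stream typically reads the first choice's delta from each chunk and skips chunks whose choices list is empty. A minimal sketch using the openai Python SDK (model name and messages are illustrative):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Request a streamed chat completion; iterating yields one chunk at a time.
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Say hello"}],
    stream=True,
)

for chunk in stream:
    # choices can be empty (for example, the final usage chunk), so guard before indexing.
    if chunk.choices:
        delta = chunk.choices[0].delta
        if delta.content:
            print(delta.content, end="")
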
created
The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.

id
A unique identifier for the chat completion. Each chunk has the same ID.

model
The model used to generate the completion.

object
The object type, which is always chat.completion.chunk.
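
These identity fields are repeated on every chunk, so a consumer can read them from any chunk of the stream. A small check, reusing the client from the sketch above (purely illustrative):

chunks = list(
    client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "Hi"}],
        stream=True,
    )
)

first = chunks[0]
# id, created, and model are identical across the chunks of one stream,
# and object is always "chat.completion.chunk".
assert all(c.id == first.id for c in chunks)
assert all(c.created == first.created for c in chunks)
assert all(c.model == first.model for c in chunks)
assert all(c.object == "chat.completion.chunk" for c in chunks)
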
service_tier
Optional
Specifies the processing type used for serving the request. When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
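
For example, a request that sets the parameter can compare it with the tier echoed back on the chunks. A sketch, assuming the openai Python SDK passes service_tier through (the tier name is illustrative):

stream = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Hi"}],
    service_tier="default",  # requested processing type
    stream=True,
)

first = next(iter(stream))
# The chunk reports the tier actually used, which may differ from the requested value.
print(first.service_tier)
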
system_fingerprint
Optional
This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.
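
One way to use it is to record the fingerprint alongside the seed you sent and treat a changed fingerprint as a signal that same-seed outputs may no longer match. A sketch (seed and model are illustrative):

def fingerprint_for(seed: int) -> str | None:
    stream = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "Hi"}],
        seed=seed,
        stream=True,
    )
    # Read the fingerprint from the first chunk of the stream.
    return next(iter(stream)).system_fingerprint

# A different fingerprint on a later run means the backend configuration changed.
print(fingerprint_for(42))
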
usage
Optional
An optional field that will only be present when you set stream_options: {"include_usage": true} in your request. When present, it contains a null value except for the last chunk which contains the token usage statistics for the entire request.
NOTE: If the stream is interrupted or cancelled, you may not receive the final usage chunk which contains the total token usage for the request.
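
The pattern below requests the usage chunk and picks it up at the end of the stream, skipping the null values carried by earlier chunks. A sketch with the openai Python SDK (names are illustrative):

stream = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Hi"}],
    stream=True,
    stream_options={"include_usage": True},
)

usage = None
text = []
for chunk in stream:
    if chunk.choices:  # content-bearing chunks
        delta = chunk.choices[0].delta
        if delta.content:
            text.append(delta.content)
    if chunk.usage is not None:  # only the final chunk carries usage
        usage = chunk.usage

# usage stays None if the stream was interrupted before the final chunk arrived.
if usage is not None:
    print(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)
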
Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.
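
An individual chunk, as delivered over the stream, looks roughly like the following (an illustrative payload only; the id, timestamp, and fingerprint values are made up):

example_chunk = {
    "id": "chatcmpl-abc123",                # same on every chunk (made-up value)
    "object": "chat.completion.chunk",
    "created": 1700000000,                  # same on every chunk (made-up value)
    "model": "gpt-4o-mini",                 # illustrative model name
    "service_tier": "default",
    "system_fingerprint": "fp_0123456789",  # made-up value
    "choices": [
        {
            "index": 0,
            "delta": {"role": "assistant", "content": "Hello"},
            "finish_reason": None,
        }
    ],
    "usage": None,  # non-null only on the final chunk when include_usage is set
}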