- Categories:
String & binary functions (AI Functions)
COMPLETE (SNOWFLAKE.CORTEX)¶
Notice
This page is provided for backward compatibility. For new use cases, start with AI_COMPLETE, which is the canonical surface going forward. This legacy function will be deprecated by the end of 2026.
Given a prompt, generates a response (completion) using your choice of supported language model.
Note
A variant of this function allows COMPLETE to produce responses to images, including:
- Comparing images
- Captioning images
- Classifying images
- Extracting entities from images
- Answering questions using data in graphs and charts
See COMPLETE (SNOWFLAKE.CORTEX) (multimodal) for more information.
Syntax¶
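The arguments documented below imply the following call shape. This is a sketch reconstructed from the argument list (required `model` and `prompt_or_history`, optional `options`), not an authoritative grammar:

```sql
SNOWFLAKE.CORTEX.COMPLETE(
    <model>, <prompt_or_history> [ , <options> ] )
```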
Arguments¶
Required:
`model`

A string specifying the model to be used. See supported models. Supported models might have different costs.
`prompt_or_history`

The prompt or conversation history to be used to generate a completion.

If `options` is not present, the prompt given must be a string.

If `options` is present, the argument must be an array of objects representing a conversation in chronological order. Each object must contain a `role` key and a `content` key. The `content` value is a prompt or a response, depending on the role. The role must be one of the following.

| `role` value | `content` value |
|---|---|
| `'system'` | An initial plain-English prompt to the language model that provides background information and instructions for a response style. For example, "Respond in the style of a pirate." The model does not generate a response to a system prompt. Only one system prompt may be provided, and if it is present, it must be the first in the array. |
| `'user'` | A prompt provided by the user. Must follow the system prompt (if there is one) or an assistant response. |
| `'assistant'` | A response previously provided by the language model. Must follow a user prompt. Past responses can be used to provide a stateful conversational experience; see Usage notes. |
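The array form above can be illustrated with a short call. The model name and prompt text here are illustrative assumptions, not recommendations; note the empty `options` object, which selects the array form:

```sql
-- Illustrative only: the model name and message contents are placeholders.
SELECT SNOWFLAKE.CORTEX.COMPLETE(
    'mistral-large',
    [
        {'role': 'system',    'content': 'Respond in the style of a pirate.'},
        {'role': 'user',      'content': 'What is a snowflake?'},
        {'role': 'assistant', 'content': 'Arr, a snowflake be a wee crystal of ice, matey.'},
        {'role': 'user',      'content': 'How does it get its shape?'}
    ],
    {});
```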
Optional:
`options`

An object containing zero or more of the following options that affect the model's hyperparameters. See LLM Settings.
- `temperature`: A value from 0 to 1 (inclusive) that controls the randomness of the output of the language model. A higher temperature (for example, 0.7) results in more diverse and random output, while a lower temperature (such as 0.2) makes the output more deterministic and focused.

  Default: 0
- `top_p`: A value from 0 to 1 (inclusive) that controls the randomness and diversity of the language model, generally used as an alternative to `temperature`. The difference is that `top_p` restricts the set of possible tokens that the model outputs, while `temperature` influences which tokens are chosen at each step.

  Default: 0
- `max_tokens`: Sets the maximum number of output tokens in the response. Small values can result in truncated responses.

  Default: 4096

  Maximum allowed value: 8192
- `guardrails`: Filters potentially unsafe and harmful responses from a language model using Cortex Guard. Either TRUE or FALSE.

  Default: FALSE
- `response_format`: A JSON schema that the response should follow. This is a SQL sub-object, not a string. If `response_format` is not specified, the response is a string containing either the response or a serialized JSON object containing the response and information about it. For more information, see AI_COMPLETE structured outputs.
Specifying the `options` argument, even if it is an empty object (`{}`), affects how the `prompt` argument is interpreted and how the response is formatted.
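A call that sets several of these options might look like the following sketch. The model name, prompt, and specific values are assumptions chosen for illustration:

```sql
-- Hypothetical values; tune temperature and max_tokens for your use case.
SELECT SNOWFLAKE.CORTEX.COMPLETE(
    'mistral-large',
    [{'role': 'user', 'content': 'Summarize the benefits of columnar storage.'}],
    {'temperature': 0.2, 'max_tokens': 100, 'guardrails': TRUE});
```

Because `options` is present, the prompt must be passed as a role/content array, and the response is returned as a JSON string rather than plain text (see Returns).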
Returns¶
When the `options` argument is not specified, returns a string containing the response.

When the `options` argument is given and contains the `response_format` key, returns a string representation of a JSON object adhering to the specified JSON schema.

When the `options` argument is given and does not contain the `response_format` key, returns a string representation of a JSON object containing the following keys.
- `"choices"`: An array of the model's responses. (Currently, only one response is provided.) Each response is an object containing a `"messages"` key whose value is the model's response to the latest prompt.
- `"created"`: UNIX timestamp (seconds since midnight, January 1, 1970) when the response was generated.
- `"model"`: The name of the model that created the response.
- `"usage"`: An object recording the number of tokens consumed and generated by this completion. Includes the following sub-keys:
  - `"completion_tokens"`: The number of tokens in the generated response.
  - `"prompt_tokens"`: The number of tokens in the prompt.
  - `"total_tokens"`: The total number of tokens consumed, which is the sum of the other two values.
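Since this return value is a string representation of JSON, it can be converted with PARSE_JSON and traversed with Snowflake's path syntax to extract individual fields. A sketch, with an assumed model name and prompt:

```sql
-- Parse the JSON string returned when options is supplied, then pull out
-- the completion text and the token count. Model and prompt are placeholders.
SELECT
    PARSE_JSON(response):choices[0]:messages::STRING AS completion,
    PARSE_JSON(response):usage:total_tokens::INT     AS total_tokens
FROM (
    SELECT SNOWFLAKE.CORTEX.COMPLETE(
        'mistral-large',
        [{'role': 'user', 'content': 'Name one use of VARIANT columns.'}],
        {}) AS response
);
```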
Access control requirements¶
Users must use a role that has been granted the SNOWFLAKE.CORTEX_USER database role. See Cortex LLM privileges for more information on this privilege.
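Granting the database role follows the usual Snowflake pattern. The account role name below is a placeholder; run this as a role with the privilege to grant database roles (for example, ACCOUNTADMIN):

```sql
-- my_analyst_role is a hypothetical account role.
GRANT DATABASE ROLE SNOWFLAKE.CORTEX_USER TO ROLE my_analyst_role;
```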
Usage notes¶
COMPLETE does not retain any state from one call to the next. To use the COMPLETE function to provide a stateful,
conversational experience, pass all previous user prompts and model responses in the conversation as part of the prompt_or_history
array (see Templates for Chat Models).
Keep in mind that the number of tokens processed increases for each “round,” and costs increase proportionally.
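The stateless round-trip pattern can be sketched as two successive calls; each new call resends the full history plus the latest user turn. Model name and message contents are illustrative assumptions:

```sql
-- Turn 1: a single user prompt.
SELECT SNOWFLAKE.CORTEX.COMPLETE('mistral-large',
    [{'role': 'user', 'content': 'Suggest a name for a pet fish.'}], {});

-- Turn 2: replay the history, including the model's turn-1 reply
-- as an 'assistant' entry, then append the new user prompt.
SELECT SNOWFLAKE.CORTEX.COMPLETE('mistral-large',
    [{'role': 'user',      'content': 'Suggest a name for a pet fish.'},
     {'role': 'assistant', 'content': 'How about Bubbles?'},
     {'role': 'user',      'content': 'Something more dignified, please.'}], {});
```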
Legal notices¶
The following notice applies to Cortex COMPLETE Structured Output functionality only:
Use of models provided on the Snowflake Model and Service Flow-Down Terms page is subject to the terms specified therein. The data classification of inputs and outputs is as set forth in the following table.
| Input data classification | Output data classification | Designation |
|---|---|---|
| Usage Data | Customer Data | Covered AI Feature |
For the rest of COMPLETE functionality, refer to Snowflake AI and ML for legal notices.