Prompty output format

Experimental feature

This is an experimental feature, and may change at any time. Learn more.

In this doc, you will learn:

  • How to handle the output format of Prompty, such as: text, json_object.

  • How to use the stream output of Prompty.

Format Prompty output

Text output

By default, Prompty returns the message of the first choice in the response. Below is an example of the Prompty format used for text output:

---
name: Text Format Prompt
description: A basic prompt that uses the GPT-3 chat API to answer questions
model:
  api: chat
  configuration:
    type: azure_openai
    connection: open_ai_connection
    azure_deployment: gpt-35-turbo-0125
  parameters:
    max_tokens: 128
    temperature: 0.2
inputs:
  first_name:
    type: string
  last_name:
    type: string
  question:
    type: string
sample:
  first_name: John
  last_name: Doe
  question: what is the meaning of life?
---
system:
You are an AI assistant who helps people find information.
As the assistant, you answer questions briefly, succinctly,
and in a personable manner using markdown and even add some personal flair with appropriate emojis.

# Safety
- You **should always** reference factual statements to search results based on [relevant documents]
- Search results based on [relevant documents] may be incomplete or irrelevant. You do not make assumptions
# Customer
You are helping {{first_name}} {{last_name}} to find answers to their questions.
Use their name to address them in your responses.

user:
{{question}}

The output of the Prompty is the string content of the first choice, as in the example below:

Ah, the age-old question about the meaning of life! 🌍🤔 The meaning of life is a deeply philosophical and subjective topic. Different people have different perspectives on it. Some believe that the meaning of life is to seek happiness and fulfillment, while others find meaning in personal relationships, accomplishments, or spiritual beliefs. Ultimately, it's up to each individual to explore and discover their own purpose and meaning in life. 🌟

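As a quick reference, the Prompty above can be executed as a function with the Promptflow SDK, in the same way as the streaming example at the end of this document. This is a minimal sketch; the file name text_format.prompty is only an assumption:

from promptflow.core import Prompty

# load the prompty as a flow (the file name here is an assumption)
prompty_func = Prompty.load("text_format.prompty")
# execute the flow as a function; the result is the message content of the first choice (a str)
result = prompty_func(first_name="John", last_name="Doe", question="what is the meaning of life?")
print(result)
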
Json object output

Prompty can return the content of the first choice as a dictionary object when the following conditions are met:

  • The response_format is defined as type: json_object in the parameters.

  • The template specifies the JSON format of the return value.

Note: The json_object response_format is compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. For more details, refer to this document.

Here is how to configure a Prompty for JSON object output:

---
name: Json Format Prompt
description: A basic prompt that uses the GPT-3 chat API to answer questions
model:
  api: chat
  configuration:
    type: azure_openai
    azure_deployment: gpt-35-turbo-0125
    connection: open_ai_connection
  parameters:
    max_tokens: 128
    temperature: 0.2
    response_format:
      type: json_object
inputs:
  first_name:
    type: string
  last_name:
    type: string
  question:
    type: string
sample:
  first_name: John
  last_name: Doe
  question: what is the meaning of life?
---
system:
You are an AI assistant who helps people find information.
As the assistant, you answer questions briefly and succinctly. Your structured response only accepts JSON format, like below:
{"name": customer_name, "answer": the answer content}

# Customer
You are helping {{first_name}} {{last_name}} to find answers to their questions.
Use their name to address them in your responses.

user:
{{question}}

The output of the Prompty is a JSON object containing the content of the first choice:

{
    "name": "John",
    "answer": "The meaning of life is a philosophical question that varies depending on individual beliefs and perspectives."
}

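Because the first choice is parsed as a dictionary, the fields described in the template can be read directly from the returned object. A minimal sketch, assuming the Prompty above is saved as json_format.prompty (the file name is an assumption):

from promptflow.core import Prompty

# load the json-format prompty (the file name here is an assumption)
prompty_func = Prompty.load("json_format.prompty")
# with response_format type: json_object, the result is a dict parsed from the first choice
result = prompty_func(first_name="John", last_name="Doe", question="what is the meaning of life?")
print(result["name"])    # e.g. "John"
print(result["answer"])  # the answer content
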
Users can also specify the fields to be returned by configuring the outputs section:

---
name: Json Format Prompt
description: A basic prompt that uses the GPT-3 chat API to answer questions
model:
  api: chat
  configuration:
    type: azure_openai
    azure_deployment: gpt-35-turbo-0125
    connection: open_ai_connection
  parameters:
    max_tokens: 128
    temperature: 0.2
    response_format:
      type: json_object
inputs:
  first_name:
    type: string
  last_name:
    type: string
  question:
    type: string
outputs:
  answer:
    type: string
sample:
  first_name: John
  last_name: Doe
  question: what is the meaning of life?
---
system:
You are an AI assistant who helps people find information.
As the assistant, you answer questions briefly and succinctly. Your structured response only accepts JSON format, like below:
{"name": customer_name, "answer": the answer content}

# Customer
You are helping {{first_name}} {{last_name}} to find answers to their questions.
Use their name to address them in your responses.

user:
{{question}}

Prompty will then return the output specified by the user:

{
  "answer": "The meaning of life is a philosophical question that varies depending on individual beliefs and perspectives."
}

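When the outputs section is declared, only the declared fields are present in the returned dict. A minimal sketch under the same assumptions as above, with json_format_outputs.prompty as a hypothetical file name:

from promptflow.core import Prompty

# load the prompty that declares an outputs section (the file name here is an assumption)
prompty_func = Prompty.load("json_format_outputs.prompty")
result = prompty_func(first_name="John", last_name="Doe", question="what is the meaning of life?")
# only the declared output field is returned
print(result["answer"])
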
All choices

In some scenarios, users may need access to the original response from the LLM for further processing. This can be achieved by setting response: all, which allows retrieval of the raw LLM response. For more details, refer to LLM response.

---
name: All Choices Text Format Prompt
description: A basic prompt that uses the GPT-3 chat API to answer questions
model:
  api: chat
  configuration:
    type: azure_openai
    connection: open_ai_connection
    azure_deployment: gpt-35-turbo-0125
  parameters:
    max_tokens: 128
    temperature: 0.2
    n: 3
  response: all
inputs:
  first_name:
    type: string
  last_name:
    type: string
  question:
    type: string
sample:
  first_name: John
  last_name: Doe
  question: what is the meaning of life?
---
system:
You are an AI assistant who helps people find information.
As the assistant, you answer questions briefly, succinctly,
and in a personable manner using markdown and even add some personal flair with appropriate emojis.

# Safety
- You **should always** reference factual statements to search results based on [relevant documents]
- Search results based on [relevant documents] may be incomplete or irrelevant. You do not make assumptions
# Customer
You are helping {{first_name}} {{last_name}} to find answers to their questions.
Use their name to address them in your responses.

user:
{{question}}

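With response: all, the return value is the raw LLM response rather than just the first choice's content, so all n choices can be inspected. The attribute access below assumes an OpenAI-style chat completion object and is a sketch, not a guaranteed shape:

from promptflow.core import Prompty

# load the prompty configured with `response: all` (the file name here is an assumption)
prompty_func = Prompty.load("all_choices.prompty")
# the result is the raw LLM response object instead of the first choice's content
response = prompty_func(first_name="John", last_name="Doe", question="what is the meaning of life?")
# assuming an OpenAI-style chat completion object, inspect each of the n=3 choices
for choice in response.choices:
    print(choice.message.content)
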
Streaming output

For Prompty configurations where the response_format is text, setting stream: true in the parameters will cause the Promptflow SDK to return a generator. Each item in the generator is the content of a chunk.

Here is how to configure a Prompty for streaming text output:

---
name: Stream Mode Text Format Prompt
description: A basic prompt that uses the GPT-3 chat API to answer questions
model:
  api: chat
  configuration:
    type: azure_openai
    connection: open_ai_connection
    azure_deployment: gpt-35-turbo-0125
  parameters:
    max_tokens: 512
    temperature: 0.2
    stream: true
inputs:
  first_name:
    type: string
  last_name:
    type: string
  question:
    type: string
sample:
  first_name: John
  last_name: Doe
  question: What's the steps to get rich?
---
system:
You are an AI assistant who helps people find information.
As the assistant, you answer questions briefly, succinctly,
and in a personable manner using markdown and even add some personal flair with appropriate emojis.

# Safety
- You **should always** reference factual statements to search results based on [relevant documents]
- Search results based on [relevant documents] may be incomplete or irrelevant. You do not make assumptions
# Customer
You are helping user to find answers to their questions.

user:
{{question}}

To retrieve elements from the generator result, use the following Python code:

from promptflow.core import Prompty

# load prompty as a flow
prompty_func = Prompty.load("stream_output.prompty")
# execute the flow as function
question = "What's the steps to get rich?"
result = prompty_func(first_name="John", last_name="Doh", question=question)
# Type of the result is generator
for item in result:
    print(item, end="")
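Because the generator is consumed as it is iterated, the streamed chunks can also be collected into the full response text while printing, for example:

# accumulate the streamed chunks into the full response text
chunks = []
for item in prompty_func(first_name="John", last_name="Doe", question=question):
    print(item, end="")
    chunks.append(item)
full_text = "".join(chunks)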