Prompty Output Format#
Learning objectives - upon completing this tutorial, you should be able to:
Understand how to handle the output formats of prompty, such as text and json_object.
Understand how to consume the stream output of prompty.
0. Install dependent packages#
%%capture --no-stderr
%pip install promptflow-devkit
1. Create necessary connections#
Connections help securely store and manage the secret keys or other sensitive credentials required for interacting with LLMs and other external tools, for example Azure Content Safety.
The prompty files in this example use the connection open_ai_connection inside; we need to set up this connection if we haven't added it before. Once created, it is stored in a local database and can be used in any flow.
Follow this instruction to prepare your Azure OpenAI resource, and get an api_key if you don't have one.
from promptflow.client import PFClient
from promptflow.connections import AzureOpenAIConnection, OpenAIConnection
# client can help manage your runs and connections.
pf = PFClient()
try:
    conn_name = "open_ai_connection"
    conn = pf.connections.get(name=conn_name)
    print("using existing connection")
except Exception:
    # Follow https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal to create an Azure OpenAI resource.
    connection = AzureOpenAIConnection(
        name=conn_name,
        api_key="<your_AOAI_key>",
        api_base="<your_AOAI_endpoint>",
        api_type="azure",
    )
    # use this if you have an existing OpenAI account
    # connection = OpenAIConnection(
    #     name=conn_name,
    #     api_key="<user-input>",
    # )
    conn = pf.connections.create_or_update(connection)
    print("successfully created connection")
print(conn)
2. Format prompty output#
Text output#
By default, prompty returns the message of the first choice.
with open("text_format.prompty") as fin:
    print(fin.read())
from promptflow.core import Prompty
# load prompty as a flow
f = Prompty.load("text_format.prompty")
# execute the flow as function
question = "What is the capital of France?"
result = f(first_name="John", last_name="Doe", question=question)
# note: the result is a string
result
JSON object output#
Prompty returns the content of the first choice as a dict when the user:
defines response_format as type: json_object in the parameters
specifies the expected JSON format to return in the template
Note: response_format is compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. For more details, refer to this document.
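Concretely, response_format lives in the parameters section of the prompty frontmatter. The sketch below is a hypothetical frontmatter illustrating where it goes (the deployment name and other field values are illustrative, not taken from json_format.prompty):

```yaml
---
name: Json Format Prompt
model:
  api: chat
  configuration:
    type: azure_openai
    connection: open_ai_connection
    azure_deployment: gpt-35-turbo-1106   # illustrative deployment name
  parameters:
    max_tokens: 128
    response_format:
      type: json_object                   # ask the model to emit valid JSON
---
```

The template body below the frontmatter would then spell out the exact JSON keys the model should return.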
with open("json_format.prompty") as fin:
    print(fin.read())
from promptflow.core import Prompty
# load prompty as a flow
f = Prompty.load("json_format.prompty")
# execute the flow as function
question = "What is the capital of France?"
result = f(first_name="John", last_name="Doe", question=question)
# note: the result is a dict
result
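Because the parsed result is a plain Python dict, its fields can be used with ordinary dict access. A minimal sketch with a hypothetical payload (the real keys depend on the JSON format your template specifies):

```python
# Hypothetical payload; the real keys come from the format spelled out in json_format.prompty.
result = {"name": "John Doe", "answer": "Paris"}

# Ordinary dict access -- no extra parsing step is needed.
print(result["answer"])             # Paris
# .get() is a safe lookup for keys the model may omit.
print(result.get("source", "n/a"))  # n/a
```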
All choices#
When the user configures response as all, prompty returns the raw LLM response, which contains all the choices.
with open("all_response.prompty") as fin:
    print(fin.read())
from promptflow.core import Prompty
# load prompty as a flow
f = Prompty.load("all_response.prompty")
# execute the flow as function
question = "What is the capital of France?"
result = f(first_name="John", last_name="Doe", question=question)
# note: the result is a ChatCompletion object
print(result.choices[0])
Stream output#
When stream=true is configured in the parameters of a prompty whose output format is text, the promptflow SDK returns a generator whose items are the content of each chunk.
with open("stream_output.prompty") as fin:
    print(fin.read())
from promptflow.core import Prompty
# load prompty as a flow
f = Prompty.load("stream_output.prompty")
# execute the flow as function
question = "What's the steps to get rich?"
result = f(question=question)
for item in result:
    print(item, end="")
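The generator can also be drained into a single string, for example for logging, though note it is exhausted after one pass. A sketch with a stub generator standing in for the prompty result:

```python
def fake_stream():
    # Stub for the generator returned when stream=true; yields text chunks.
    yield from ["Step 1: earn. ", "Step 2: save. ", "Step 3: invest."]

result = fake_stream()
full_text = "".join(result)  # consumes the generator; a second join would yield ""
print(full_text)             # Step 1: earn. Step 2: save. Step 3: invest.
```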
Note: when stream=True, if the response format is json_object or the response is all, the LLM response will be returned directly. For more details about how to handle a stream response, refer to this document.
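In that case, iterating the raw response is up to the caller. A sketch of the typical pattern, assuming an OpenAI-style streaming response where each chunk carries a delta fragment (stubbed out here with SimpleNamespace):

```python
from types import SimpleNamespace

def fake_raw_stream():
    # Stub chunks mimicking an OpenAI-style streaming response.
    for piece in ["Hello", ", ", "world"]:
        yield SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=piece))])

parts = []
for chunk in fake_raw_stream():
    content = chunk.choices[0].delta.content
    if content:  # in real responses the first/last delta content can be None
        parts.append(content)
print("".join(parts))  # Hello, world
```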
Batch run with text output#
from promptflow.client import PFClient
data = "./data.jsonl" # path to the data file
# create run with the flow and data
pf = PFClient()
base_run = pf.run(
    flow="text_format.prompty",
    data=data,
    column_mapping={
        "question": "${data.question}",
    },
    stream=True,
)
details = pf.get_details(base_run)
details.head(10)
Batch run with stream output#
from promptflow.client import PFClient
data = "./data.jsonl" # path to the data file
# create run with the flow and data
pf = PFClient()
base_run = pf.run(
    flow="stream_output.prompty",
    data=data,
    column_mapping={
        "question": "${data.question}",
    },
    stream=True,
)
details = pf.get_details(base_run)
details.head(10)