Tracing with AutoGen#

AutoGen offers conversable agents powered by LLMs, tools, or humans, which can perform tasks collectively via automated chat. The framework enables tool use and human participation through multi-agent conversation. Please refer to the AutoGen documentation for more details on this feature.

Learning Objectives - Upon completing this tutorial, you should be able to:

  • Trace LLM (OpenAI) calls and visualize the trace of your application.

Requirements#

AutoGen requires Python>=3.8. To run this notebook example, please install the required dependencies:

%%capture --no-stderr
%pip install -r ./requirements.txt
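
If requirements.txt is not available in your working directory, installing the packages this notebook actually imports should be enough. The exact package set is an assumption based on the imports used below (autogen, promptflow.tracing, and opentelemetry):

%pip install pyautogen promptflow opentelemetry-api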

Set your API Endpoint#

You can create a config file named OAI_CONFIG_LIST.json from the example file OAI_CONFIG_LIST.json.example.
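
A minimal sketch of what OAI_CONFIG_LIST.json might contain, following the format expected by config_list_from_json (the model name and key below are placeholders, not real values):

[
    {
        "model": "gpt-3.5-turbo",
        "api_key": "<your-openai-api-key>"
    }
]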

The following code uses the config_list_from_json function to load a list of configurations from an environment variable or a json file.

import autogen

# please ensure you have a json config file
env_or_file = "OAI_CONFIG_LIST.json"

# filter the configs by model (you can filter by other keys as well);
# only the models matching the filter condition below are kept in the list.

# gpt4
# config_list = autogen.config_list_from_json(
#     env_or_file,
#     filter_dict={
#         "model": ["gpt-4", "gpt-4-0314", "gpt4", "gpt-4-32k", "gpt-4-32k-0314", "gpt-4-32k-v0314"],
#     },
# )

# gpt35
config_list = autogen.config_list_from_json(
    env_or_file,
    filter_dict={
        "model": {
            "gpt-35-turbo",
            "gpt-3.5-turbo",
            "gpt-3.5-turbo-16k",
            "gpt-3.5-turbo-0301",
            "chatgpt-35-turbo-0301",
            "gpt-35-turbo-v0301",
        },
    },
)
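
As a quick sanity check (not part of the original sample), you can confirm that the filter matched at least one entry before building the agents:

# fail fast if no config matched the model filter above
assert len(config_list) > 0, "No matching model config found in OAI_CONFIG_LIST.json"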

Construct Agents#

import os

os.environ["AUTOGEN_USE_DOCKER"] = "False"

llm_config = {"config_list": config_list, "cache_seed": 42}
user_proxy = autogen.UserProxyAgent(
    name="User_proxy",
    system_message="A human admin.",
    code_execution_config={
        "last_n_messages": 2,
        "work_dir": "groupchat",
        "use_docker": False,
    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
    human_input_mode="TERMINATE",
)
coder = autogen.AssistantAgent(
    name="Coder",
    llm_config=llm_config,
)
pm = autogen.AssistantAgent(
    name="Product_manager",
    system_message="Creative in software product ideas.",
    llm_config=llm_config,
)
groupchat = autogen.GroupChat(agents=[user_proxy, coder, pm], messages=[], max_round=12)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)
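
As the comment in code_execution_config notes, running generated code inside Docker is safer than executing it directly on the host. If a Docker daemon is available, the only change needed is the use_docker flag (a sketch of the alternative config):

code_execution_config={
    "last_n_messages": 2,
    "work_dir": "groupchat",
    "use_docker": True,  # execute generated code in a container instead of on the host
},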

Start chat with promptflow trace#

from promptflow.tracing import start_trace

# start a trace session, and print a url for user to check trace
# traces will be collected into below collection name
start_trace(collection="autogen-groupchat")

Open the URL printed in the output of start_trace; when the code below runs, you will be able to see the new traces in the UI.

from opentelemetry import trace
import json


tracer = trace.get_tracer("my_tracer")
# Create a root span
with tracer.start_as_current_span("autogen") as span:
    message = "Find a latest paper about gpt-4 on arxiv and find its potential applications in software."
    user_proxy.initiate_chat(
        manager,
        message=message,
        clear_history=True,
    )
    span.set_attribute("custom", "custom attribute value")
    # recommend to store inputs and outputs as events
    span.add_event(
        "promptflow.function.inputs", {"payload": json.dumps(dict(message=message))}
    )
    span.add_event(
        "promptflow.function.output", {"payload": json.dumps(user_proxy.last_message())}
    )
# type exit to terminate the chat

Next steps#

By now, you have successfully traced the LLM calls of your application with prompt flow.

You can check out more examples:

  • Trace your flow: use promptflow @trace to trace your application in a structured way and evaluate it with batch run.
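
As a taste of that next step, here is a minimal sketch of the @trace decorator; my_llm_task and its body are hypothetical placeholders:

from promptflow.tracing import trace

@trace
def my_llm_task(text: str) -> str:
    # the decorator records the inputs and outputs of this call as a span
    return text.upper()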