Using a custom OpenTelemetry Collector


In some scenarios, you may want to use your own OpenTelemetry Collector and keep your dependencies minimal.

In this case, you can avoid the dependency on promptflow-devkit, which provides the default collector in promptflow, and depend only on promptflow-tracing.
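
For reference, a minimal ./requirements.txt for this setup might contain little more than the tracing package, the OTLP/HTTP exporter, and the OpenAI SDK; the list below is an assumption for illustration, not the file shipped with this tutorial.

promptflow-tracing
opentelemetry-exporter-otlp-proto-http
openai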

Learning objectives - Upon completing this tutorial, you should be able to:

  • Trace LLM (OpenAI) calls with a custom OpenTelemetry Collector.

0. Install dependent packages

%%capture --no-stderr
%pip install -r ./requirements.txt

1. Set up the OpenTelemetry collector

Implement a simple collector that prints received traces to stdout.

import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

from opentelemetry.proto.collector.trace.v1.trace_service_pb2 import (
    ExportTraceServiceRequest,
)


class OTLPCollector(BaseHTTPRequestHandler):
    def do_POST(self):
        # read the OTLP/HTTP request body
        content_length = int(self.headers["Content-Length"])
        post_data = self.rfile.read(content_length)

        # deserialize the protobuf-encoded trace export request
        traces_request = ExportTraceServiceRequest()
        traces_request.ParseFromString(post_data)

        print("Received a POST request with data:")
        print(traces_request)

        self.send_response(200, "Traces received")
        self.end_headers()
        self.wfile.write(b"Data received and printed to stdout.\n")


def run_server(port: int):
    server_address = ("", port)
    httpd = HTTPServer(server_address, OTLPCollector)
    httpd.serve_forever()


def start_server(port: int):
    server_thread = threading.Thread(target=run_server, args=(port,))
    server_thread.daemon = True
    server_thread.start()
    print(f"Server started on port {port}. Access https://:{port}/")
    return server_thread
# invoke the collector service, serving on OTLP port
start_server(port=4318)
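
Port 4318 is the default port for OTLP over HTTP, so an OTLP/HTTP exporter with default settings will reach this collector without any extra endpoint configuration.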

2. Trace the application with promptflow

Assume we already have a Python function that calls the OpenAI API.

from llm import my_llm_tool

deployment_name = "gpt-35-turbo-16k"

Call start_trace(), and configure the OTLP exporter to point at the collector above.

from promptflow.tracing import start_trace

start_trace()
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace.export import BatchSpanProcessor

tracer_provider = trace.get_tracer_provider()
# OTLPSpanExporter defaults to http://localhost:4318/v1/traces,
# which matches the collector started above
otlp_span_exporter = OTLPSpanExporter()
tracer_provider.add_span_processor(BatchSpanProcessor(otlp_span_exporter))

Visualize the traces in stdout.

result = my_llm_tool(
    prompt="Write a simple Hello, world! python program that displays the greeting message. Output code only.",
    deployment_name=deployment_name,
)
result
# view the traces under this cell
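
Once the BatchSpanProcessor flushes, the collector's do_POST handler prints the received ExportTraceServiceRequest to stdout, so the spans recorded for the my_llm_tool call appear beneath this cell.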