autogen_ext.models.openai#
- class OpenAIChatCompletionClient(**kwargs: Unpack)[source]#
Bases:
BaseOpenAIChatCompletionClient, Component[OpenAIClientConfigurationConfigModel]
Chat completion client for OpenAI hosted models.
To use this client, you must install the openai extra:
pip install "autogen-ext[openai]"
You can also use this client with OpenAI-compatible ChatCompletion endpoints. Using this client with non-OpenAI models is not tested or guaranteed.
For non-OpenAI models, please first take a look at our community extensions for additional model clients.
- Parameters:
model (str) – Which OpenAI model to use.
api_key (optional, str) – The API key to use. Required if 'OPENAI_API_KEY' is not found in the environment variables.
organization (optional, str) – The organization ID to use.
base_url (optional, str) – The base URL to use. Required if the model is not hosted on OpenAI.
timeout (optional, float) – The timeout for the request in seconds.
max_retries (optional, int) – The maximum number of retries to attempt.
model_info (optional, ModelInfo) – The capabilities of the model. Required if the model name is not a valid OpenAI model.
frequency_penalty (optional, float)
logit_bias (optional, dict[str, int])
max_tokens (optional, int)
n (optional, int)
presence_penalty (optional, float)
response_format (optional, Dict[str, Any]) –
The format of the response. Possible options are:
# Text response, this is the default.
{"type": "text"}

# JSON response, make sure to instruct the model to return JSON.
{"type": "json_object"}

# Structured output response, with a pre-defined JSON schema.
{
    "type": "json_schema",
    "json_schema": {
        "name": "name of the schema, must be an identifier.",
        "description": "description for the model.",
        # You can convert a Pydantic (v2) model to JSON schema
        # using the `model_json_schema()` method.
        "schema": "<the JSON schema itself>",
        # Whether to enable strict schema adherence when
        # generating the output. If set to true, the model will
        # always follow the exact schema defined in the
        # `schema` field. Only a subset of JSON Schema is
        # supported when `strict` is `true`.
        # To learn more, read
        # https://platform.openai.com/docs/guides/structured-outputs.
        "strict": False,  # or True
    },
}

It is recommended to use the json_output parameter of the create() or create_stream() methods instead of response_format for structured output. The json_output parameter is more flexible and lets you specify a Pydantic model class directly (see the sketch after this parameter list).
seed (optional, int)
temperature (optional, float)
top_p (optional, float)
parallel_tool_calls (optional, bool) – Whether to allow parallel tool calls. Defaults to the server's behavior when unset.
user (optional, str)
default_headers (optional, dict[str, str]) – Custom headers; useful for authentication or other custom requirements.
add_name_prefixes (optional, bool) – Whether to prepend the source value to each UserMessage content, e.g., "this is content" becomes "Reviewer said: this is content." This can be useful for models that do not support the name field in messages. Defaults to False.
include_name_in_message (optional, bool) – Whether to include the name field in user message parameters sent to the OpenAI API. Defaults to True. Set to False for model providers that do not support the name field (e.g., Groq).
stream_options (optional, dict) – Additional options for streaming. Currently only include_usage is supported.
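For example, here is a minimal sketch of the recommended json_output approach (a sketch only: it assumes an OPENAI_API_KEY environment variable and a model with structured output support, and CityAnswer is a hypothetical schema used purely for illustration):

from pydantic import BaseModel

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


class CityAnswer(BaseModel):
    # Hypothetical output schema, used only for this illustration.
    city: str
    country: str


async def ask() -> None:
    client = OpenAIChatCompletionClient(model="gpt-4o")
    result = await client.create(
        [UserMessage(content="What is the capital of France?", source="user")],
        json_output=CityAnswer,  # structured output via a Pydantic model class
    )
    print(result.content)  # a JSON string conforming to the CityAnswer schema
    await client.close()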
Examples:
The following code snippet shows how to use the client with an OpenAI model:
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_core.models import UserMessage

openai_client = OpenAIChatCompletionClient(
    model="gpt-4o-2024-08-06",
    # api_key="sk-...", # Optional if you have an OPENAI_API_KEY environment variable set.
)

result = await openai_client.create([UserMessage(content="What is the capital of France?", source="user")])  # type: ignore
print(result)

# Close the client when done.
# await openai_client.close()
To use the client with a non-OpenAI model, you need to provide the base URL of the model and the model info. For example, to use Ollama, you can use the following code snippet:
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_core.models import ModelFamily

custom_model_client = OpenAIChatCompletionClient(
    model="deepseek-r1:1.5b",
    base_url="http://localhost:11434/v1",
    api_key="placeholder",
    model_info={
        "vision": False,
        "function_calling": False,
        "json_output": False,
        "family": ModelFamily.R1,
        "structured_output": True,
    },
)

# Close the client when done.
# await custom_model_client.close()
To use streaming mode, you can use the following code snippet:
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # Similar for AzureOpenAIChatCompletionClient.
    model_client = OpenAIChatCompletionClient(model="gpt-4o")  # assuming OPENAI_API_KEY is set in the environment.

    messages = [UserMessage(content="Write a very short story about a dragon.", source="user")]

    # Create a stream.
    stream = model_client.create_stream(messages=messages)

    # Iterate over the stream and print the responses.
    print("Streamed responses:")
    async for response in stream:
        if isinstance(response, str):
            # A partial response is a string.
            print(response, flush=True, end="")
        else:
            # The last response is a CreateResult object with the complete message.
            print("\n\n------------\n")
            print("The complete response:", flush=True)
            print(response.content, flush=True)

    # Close the client when done.
    await model_client.close()


asyncio.run(main())
To use structured output as well as function calling, you can use the following code snippet:
import asyncio
from typing import Literal

from autogen_core.models import (
    AssistantMessage,
    FunctionExecutionResult,
    FunctionExecutionResultMessage,
    SystemMessage,
    UserMessage,
)
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient
from pydantic import BaseModel


# Define the structured output format.
class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]


# Define the function to be called as a tool.
def sentiment_analysis(text: str) -> str:
    """Given a text, return the sentiment."""
    return "happy" if "happy" in text else "sad" if "sad" in text else "neutral"


# Create a FunctionTool instance with `strict=True`,
# which is required for structured output mode.
tool = FunctionTool(sentiment_analysis, description="Sentiment Analysis", strict=True)


async def main() -> None:
    # Create an OpenAIChatCompletionClient instance.
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

    # Generate a response using the tool.
    response1 = await model_client.create(
        messages=[
            SystemMessage(content="Analyze input text sentiment using the tool provided."),
            UserMessage(content="I am happy.", source="user"),
        ],
        tools=[tool],
    )
    print(response1.content)
    # Should be a list of tool calls.
    # [FunctionCall(name="sentiment_analysis", arguments={"text": "I am happy."}, ...)]

    assert isinstance(response1.content, list)
    response2 = await model_client.create(
        messages=[
            SystemMessage(content="Analyze input text sentiment using the tool provided."),
            UserMessage(content="I am happy.", source="user"),
            AssistantMessage(content=response1.content, source="assistant"),
            FunctionExecutionResultMessage(
                content=[
                    FunctionExecutionResult(
                        content="happy",
                        call_id=response1.content[0].id,
                        is_error=False,
                        name="sentiment_analysis",
                    )
                ]
            ),
        ],
        # Use the structured output format.
        json_output=AgentResponse,
    )
    print(response2.content)
    # Should be a structured output.
    # {"thoughts": "The user is happy.", "response": "happy"}

    # Close the client when done.
    await model_client.close()


asyncio.run(main())
To load the client from a configuration, you can use the load_component method:
from autogen_core.models import ChatCompletionClient

config = {
    "provider": "OpenAIChatCompletionClient",
    "config": {"model": "gpt-4o", "api_key": "REPLACE_WITH_YOUR_API_KEY"},
}

client = ChatCompletionClient.load_component(config)
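An existing client instance can also be serialized back to a configuration for a round trip; the following is a minimal sketch, assuming the dump_component() serialization that autogen_core components provide:

from autogen_core.models import ChatCompletionClient
from autogen_ext.models.openai import OpenAIChatCompletionClient

client = OpenAIChatCompletionClient(model="gpt-4o", api_key="REPLACE_WITH_YOUR_API_KEY")

# Serialize the client to a component model, then recreate an equivalent client from it.
component_model = client.dump_component()
restored_client = ChatCompletionClient.load_component(component_model)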
For a full list of available configuration options, see the OpenAIClientConfigurationConfigModel class.
- component_type: ClassVar[ComponentType] = 'model'#
The logical type of the component.
- component_config_schema#
- component_provider_override: ClassVar[str | None] = 'autogen_ext.models.openai.OpenAIChatCompletionClient'#
Override the provider string for the component. This should be used to prevent internal module names from being part of the module name.
- _to_config() OpenAIClientConfigurationConfigModel[source]#
Dump the configuration that would be required to create a new instance of a component matching the configuration of this instance.
- Returns:
T – The configuration of the component.
- classmethod _from_config(config: OpenAIClientConfigurationConfigModel) Self[source]#
Create a new instance of the component from a configuration object.
- Parameters:
config (T) – The configuration object.
- Returns:
Self – A new instance of the component.
- class AzureOpenAIChatCompletionClient(**kwargs: Unpack)[source]#
Bases:
BaseOpenAIChatCompletionClient, Component[AzureOpenAIClientConfigurationConfigModel]
Chat completion client for Azure OpenAI hosted models.
To use this client, you must install the azure and openai extras:
pip install "autogen-ext[openai,azure]"
- Parameters:
model (str) – Which OpenAI model to use.
azure_endpoint (str) – The endpoint for the Azure model. Required for Azure models.
azure_deployment (str) – Deployment name for the Azure model. Required for Azure models.
api_version (str) – The API version to use. Required for Azure models.
azure_ad_token (str) – The Azure AD token to use. Provide this or azure_ad_token_provider for token-based authentication.
azure_ad_token_provider (optional, Callable[[], Awaitable[str]] | AzureTokenProvider) – The Azure AD token provider to use. Provide this or azure_ad_token for token-based authentication.
api_key (optional, str) – The API key to use; use this if you are using key-based authentication. It is optional if you are using Azure AD token-based authentication or the AZURE_OPENAI_API_KEY environment variable.
timeout (optional, float) – The timeout for the request in seconds.
max_retries (optional, int) – The maximum number of retries to attempt.
model_info (optional, ModelInfo) – The capabilities of the model. Required if the model name is not a valid OpenAI model.
frequency_penalty (optional, float)
logit_bias (optional, dict[str, int])
max_tokens (optional, int)
n (optional, int)
presence_penalty (optional, float)
response_format (optional, Dict[str, Any]) –
The format of the response. Possible options are:
# Text response, this is the default.
{"type": "text"}

# JSON response, make sure to instruct the model to return JSON.
{"type": "json_object"}

# Structured output response, with a pre-defined JSON schema.
{
    "type": "json_schema",
    "json_schema": {
        "name": "name of the schema, must be an identifier.",
        "description": "description for the model.",
        # You can convert a Pydantic (v2) model to JSON schema
        # using the `model_json_schema()` method.
        "schema": "<the JSON schema itself>",
        # Whether to enable strict schema adherence when
        # generating the output. If set to true, the model will
        # always follow the exact schema defined in the
        # `schema` field. Only a subset of JSON Schema is
        # supported when `strict` is `true`.
        # To learn more, read
        # https://platform.openai.com/docs/guides/structured-outputs.
        "strict": False,  # or True
    },
}

It is recommended to use the json_output parameter of the create() or create_stream() methods instead of response_format for structured output. The json_output parameter is more flexible and lets you specify a Pydantic model class directly.
seed (optional, int)
temperature (optional, float)
top_p (optional, float)
parallel_tool_calls (optional, bool) – Whether to allow parallel tool calls. Defaults to the server's behavior when unset.
user (optional, str)
default_headers (optional, dict[str, str]) – Custom headers; useful for authentication or other custom requirements.
add_name_prefixes (optional, bool) – Whether to prepend the source value to each UserMessage content, e.g., "this is content" becomes "Reviewer said: this is content." This can be useful for models that do not support the name field in messages. Defaults to False.
include_name_in_message (optional, bool) – Whether to include the name field in user message parameters sent to the OpenAI API. Defaults to True. Set to False for model providers that do not support the name field (e.g., Groq).
stream_options (optional, dict) – Additional options for streaming. Currently only include_usage is supported.
To use the client, you need to provide your deployment name, Azure Cognitive Services endpoint, and API version. For authentication, you can either provide an API key or an Azure Active Directory (AAD) token credential.
The following code snippet shows how to use AAD authentication. The identity used must be assigned the Cognitive Services OpenAI User role.
from autogen_ext.auth.azure import AzureTokenProvider
from autogen_ext.models.openai import AzureOpenAIChatCompletionClient
from azure.identity import DefaultAzureCredential

# Create the token provider
token_provider = AzureTokenProvider(
    DefaultAzureCredential(),
    "https://cognitiveservices.azure.com/.default",
)

az_model_client = AzureOpenAIChatCompletionClient(
    azure_deployment="{your-azure-deployment}",
    model="{model-name, such as gpt-4o}",
    api_version="2024-06-01",
    azure_endpoint="https://{your-custom-endpoint}.openai.azure.com/",
    azure_ad_token_provider=token_provider,  # Optional if you choose key-based authentication.
    # api_key="sk-...", # For key-based authentication.
)
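For key-based authentication, a minimal sketch (the endpoint, deployment, and key are placeholder values) replaces the token provider with an api_key:

from autogen_ext.models.openai import AzureOpenAIChatCompletionClient

az_model_client = AzureOpenAIChatCompletionClient(
    azure_deployment="{your-azure-deployment}",
    model="{model-name, such as gpt-4o}",
    api_version="2024-06-01",
    azure_endpoint="https://{your-custom-endpoint}.openai.azure.com/",
    api_key="REPLACE_WITH_YOUR_API_KEY",  # key-based authentication instead of a token provider
)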
See the OpenAIChatCompletionClient class for additional usage examples.
To load a client that uses identity-based authentication from a configuration, you can use the load_component method:
from autogen_core.models import ChatCompletionClient

config = {
    "provider": "AzureOpenAIChatCompletionClient",
    "config": {
        "model": "gpt-4o-2024-05-13",
        "azure_endpoint": "https://{your-custom-endpoint}.openai.azure.com/",
        "azure_deployment": "{your-azure-deployment}",
        "api_version": "2024-06-01",
        "azure_ad_token_provider": {
            "provider": "autogen_ext.auth.azure.AzureTokenProvider",
            "config": {
                "provider_kind": "DefaultAzureCredential",
                "scopes": ["https://cognitiveservices.azure.com/.default"],
            },
        },
    },
}

client = ChatCompletionClient.load_component(config)
For a full list of available configuration options, see the AzureOpenAIClientConfigurationConfigModel class.
Note
Right now only DefaultAzureCredential is supported, with no additional arguments passed to it.
Note
The Azure OpenAI client by default sets the User-Agent header to autogen-python/{version}. To override this, you can set the autogen_ext.models.openai.AZURE_OPENAI_USER_AGENT environment variable to an empty string.
See here for how to use the Azure client directly, or for more information.
- component_type: ClassVar[ComponentType] = 'model'#
The logical type of the component.
- component_config_schema#
- component_provider_override: ClassVar[str | None] = 'autogen_ext.models.openai.AzureOpenAIChatCompletionClient'#
Override the provider string for the component. This should be used to prevent internal module names from being part of the module name.
- _to_config() AzureOpenAIClientConfigurationConfigModel[source]#
Dump the configuration that would be required to create a new instance of a component matching the configuration of this instance.
- Returns:
T – The configuration of the component.
- classmethod _from_config(config: AzureOpenAIClientConfigurationConfigModel) Self[source]#
Create a new instance of the component from a configuration object.
- Parameters:
config (T) – The configuration object.
- Returns:
Self – A new instance of the component.
- class BaseOpenAIChatCompletionClient(client: AsyncOpenAI | AsyncAzureOpenAI, *, create_args: Dict[str, Any], model_capabilities: ModelCapabilities | None = None, model_info: ModelInfo | None = None, add_name_prefixes: bool = False, include_name_in_message: bool = True)[source]#
-
- async create(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], tool_choice: Tool | Literal['auto', 'required', 'none'] = 'auto', json_output: bool | type[BaseModel] | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None) CreateResult[source]#
Creates a single response from the model.
- Parameters:
messages (Sequence[LLMMessage]) – The messages to send to the model.
tools (Sequence[Tool | ToolSchema], optional) – The tools to use with the model. Defaults to [].
tool_choice (Tool | Literal["auto", "required", "none"], optional) – A single Tool object to force the model to use, "auto" to let the model choose any available tool, "required" to force tool usage, or "none" to disable tool usage. Defaults to "auto". See the sketch after this method entry for forcing a specific tool.
json_output (Optional[bool | type[BaseModel]], optional) – Whether to use JSON mode, structured output, or neither. Defaults to None. If set to a Pydantic BaseModel type, it will be used as the output type for structured output. If set to a boolean, it will be used to determine whether to use JSON mode or not; if set to True, make sure to instruct the model to produce JSON output in the instructions or prompt.
extra_create_args (Mapping[str, Any], optional) – Extra arguments to pass to the underlying client. Defaults to {}.
cancellation_token (Optional[CancellationToken], optional) – A token for cancellation. Defaults to None.
- Returns:
CreateResult – The result of the model call.
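The following is a hedged sketch of the tool_choice parameter (the model name and the get_weather tool are placeholders, and an OPENAI_API_KEY environment variable is assumed); it forces the model to call one specific tool:

from autogen_core.models import SystemMessage, UserMessage
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient


def get_weather(city: str) -> str:
    """Return a canned weather report for the given city."""
    return f"The weather in {city} is sunny."


weather_tool = FunctionTool(get_weather, description="Get the weather for a city")


async def run() -> None:
    client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    result = await client.create(
        messages=[
            SystemMessage(content="Answer using the weather tool."),
            UserMessage(content="What's the weather in Paris?", source="user"),
        ],
        tools=[weather_tool],
        tool_choice=weather_tool,  # force this tool; "auto", "required", or "none" are also accepted
    )
    print(result.content)  # expected to be a list of FunctionCall objects
    await client.close()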
- async create_stream(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], tool_choice: Tool | Literal['auto', 'required', 'none'] = 'auto', json_output: bool | type[BaseModel] | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None, max_consecutive_empty_chunk_tolerance: int = 0, include_usage: bool | None = None) AsyncGenerator[str | CreateResult, None][source]#
Creates a stream of string chunks from the model, ending with a CreateResult.
Extends autogen_core.models.ChatCompletionClient.create_stream() to support the OpenAI API. In streaming, the default behavior is not to return token usage counts. See: OpenAI API reference for possible arguments.
You can set the include_usage flag to True, or pass extra_create_args={"stream_options": {"include_usage": True}} (if supported by the accessed API), to return a final chunk whose usage is set to a RequestUsage object with prompt and completion token counts; all preceding chunks will have usage set to None. See: OpenAI API reference for stream options. If both the flag and stream_options are set but with different values, an exception will be raised. A usage sketch follows the parameter examples below.
- Examples of additional supported parameters that can be included in extra_create_args:
temperature (float): Controls the randomness of the output. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more focused and deterministic.
max_tokens (int): The maximum number of tokens to generate in the completion.
top_p (float): An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
frequency_penalty (float): A value between -2.0 and 2.0 that penalizes new tokens based on their frequency in the text so far, decreasing the likelihood of repeated phrases.
presence_penalty (float): A value between -2.0 and 2.0 that penalizes new tokens based on whether they appear in the text so far, encouraging the model to talk about new topics.
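A minimal sketch of requesting token usage while streaming (assuming an OPENAI_API_KEY environment variable; the prompt is arbitrary):

import asyncio

from autogen_core.models import CreateResult, UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    client = OpenAIChatCompletionClient(model="gpt-4o")
    stream = client.create_stream(
        messages=[UserMessage(content="Name three colors.", source="user")],
        include_usage=True,  # or extra_create_args={"stream_options": {"include_usage": True}}
    )
    async for chunk in stream:
        if isinstance(chunk, CreateResult):
            # The final chunk carries the complete message and a RequestUsage object.
            print(chunk.content)
            print("Prompt tokens:", chunk.usage.prompt_tokens)
            print("Completion tokens:", chunk.usage.completion_tokens)
    await client.close()


asyncio.run(main())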
- actual_usage() RequestUsage[source]#
- total_usage() RequestUsage[source]#
- count_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) int[source]#
- remaining_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) int[source]#
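A small usage sketch for the token-counting helpers (the model name is a placeholder and an OPENAI_API_KEY environment variable is assumed):

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

client = OpenAIChatCompletionClient(model="gpt-4o")
messages = [UserMessage(content="Summarize the plot of Hamlet.", source="user")]

used = client.count_tokens(messages)  # tokens the prompt will consume
left = client.remaining_tokens(messages)  # tokens left in the model's context window
print(f"Prompt tokens: {used}, remaining: {left}")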
- property capabilities: ModelCapabilities#
- pydantic model AzureOpenAIClientConfigurationConfigModel[source]#
Bases:
BaseOpenAIClientConfigurationConfigModel
Show JSON schema
{ "title": "AzureOpenAIClientConfigurationConfigModel", "type": "object", "properties": { "frequency_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Frequency Penalty" }, "logit_bias": { "anyOf": [ { "additionalProperties": { "type": "integer" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Logit Bias" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Tokens" }, "n": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "N" }, "presence_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Presence Penalty" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "seed": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Seed" }, "stop": { "anyOf": [ { "type": "string" }, { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "user": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "User" }, "stream_options": { "anyOf": [ { "$ref": "#/$defs/StreamOptions" }, { "type": "null" } ], "default": null }, "parallel_tool_calls": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Parallel Tool Calls" }, "reasoning_effort": { "anyOf": [ { "enum": [ "minimal", "low", "medium", "high" ], "type": "string" }, { "type": "null" } ], "default": null, "title": "Reasoning Effort" }, "model": { "title": "Model", "type": "string" }, "api_key": { "anyOf": [ { "format": "password", "type": "string", "writeOnly": true }, { "type": "null" } ], "default": null, "title": "Api Key" }, "timeout": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Timeout" }, "max_retries": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Retries" }, "model_capabilities": { "anyOf": [ { "$ref": "#/$defs/ModelCapabilities" }, { "type": "null" } ], "default": null }, "model_info": { "anyOf": [ { "$ref": "#/$defs/ModelInfo" }, { "type": "null" } ], "default": null }, "add_name_prefixes": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Add Name Prefixes" }, "include_name_in_message": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Include Name In Message" }, "default_headers": { "anyOf": [ { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Default Headers" }, "azure_endpoint": { "title": "Azure Endpoint", "type": "string" }, "azure_deployment": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Azure Deployment" }, "api_version": { "title": "Api Version", "type": "string" }, "azure_ad_token": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Azure Ad Token" }, "azure_ad_token_provider": { "anyOf": [ { "$ref": "#/$defs/ComponentModel" }, { "type": "null" } ], "default": null } }, "$defs": { "ComponentModel": { "description": "Model class for a component. 
Contains all information required to instantiate a component.", "properties": { "provider": { "title": "Provider", "type": "string" }, "component_type": { "anyOf": [ { "enum": [ "model", "agent", "tool", "termination", "token_provider", "workbench" ], "type": "string" }, { "type": "string" }, { "type": "null" } ], "default": null, "title": "Component Type" }, "version": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Version" }, "component_version": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Component Version" }, "description": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Description" }, "label": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Label" }, "config": { "title": "Config", "type": "object" } }, "required": [ "provider", "config" ], "title": "ComponentModel", "type": "object" }, "JSONSchema": { "properties": { "name": { "title": "Name", "type": "string" }, "description": { "title": "Description", "type": "string" }, "schema": { "title": "Schema", "type": "object" }, "strict": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Strict" } }, "required": [ "name" ], "title": "JSONSchema", "type": "object" }, "ModelCapabilities": { "deprecated": true, "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" } }, "required": [ "vision", "function_calling", "json_output" ], "title": "ModelCapabilities", "type": "object" }, "ModelInfo": { "description": "ModelInfo is a dictionary that contains information about a model's properties.\nIt is expected to be used in the model_info property of a model client.\n\nWe are expecting this to grow over time as we add more features.", "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" }, "family": { "anyOf": [ { "enum": [ "gpt-5", "gpt-41", "gpt-45", "gpt-4o", "o1", "o3", "o4", "gpt-4", "gpt-35", "r1", "gemini-1.5-flash", "gemini-1.5-pro", "gemini-2.0-flash", "gemini-2.5-pro", "gemini-2.5-flash", "claude-3-haiku", "claude-3-sonnet", "claude-3-opus", "claude-3-5-haiku", "claude-3-5-sonnet", "claude-3-7-sonnet", "claude-4-opus", "claude-4-sonnet", "llama-3.3-8b", "llama-3.3-70b", "llama-4-scout", "llama-4-maverick", "codestral", "open-codestral-mamba", "mistral", "ministral", "pixtral", "unknown" ], "type": "string" }, { "type": "string" } ], "title": "Family" }, "structured_output": { "title": "Structured Output", "type": "boolean" }, "multiple_system_messages": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Multiple System Messages" } }, "required": [ "vision", "function_calling", "json_output", "family", "structured_output" ], "title": "ModelInfo", "type": "object" }, "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object", "json_schema" ], "title": "Type", "type": "string" }, "json_schema": { "anyOf": [ { "$ref": "#/$defs/JSONSchema" }, { "type": "null" } ] } }, "required": [ "type", "json_schema" ], "title": "ResponseFormat", "type": "object" }, "StreamOptions": { "properties": { "include_usage": { "title": "Include Usage", "type": "boolean" } }, "required": [ "include_usage" ], "title": "StreamOptions", "type": "object" } }, "required": [ "model", 
"azure_endpoint", "api_version" ] }
- Fields:
- field azure_ad_token_provider: ComponentModel | None = None#
- pydantic model OpenAIClientConfigurationConfigModel[source]#
Bases:
BaseOpenAIClientConfigurationConfigModel
Show JSON schema
{ "title": "OpenAIClientConfigurationConfigModel", "type": "object", "properties": { "frequency_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Frequency Penalty" }, "logit_bias": { "anyOf": [ { "additionalProperties": { "type": "integer" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Logit Bias" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Tokens" }, "n": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "N" }, "presence_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Presence Penalty" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "seed": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Seed" }, "stop": { "anyOf": [ { "type": "string" }, { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "user": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "User" }, "stream_options": { "anyOf": [ { "$ref": "#/$defs/StreamOptions" }, { "type": "null" } ], "default": null }, "parallel_tool_calls": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Parallel Tool Calls" }, "reasoning_effort": { "anyOf": [ { "enum": [ "minimal", "low", "medium", "high" ], "type": "string" }, { "type": "null" } ], "default": null, "title": "Reasoning Effort" }, "model": { "title": "Model", "type": "string" }, "api_key": { "anyOf": [ { "format": "password", "type": "string", "writeOnly": true }, { "type": "null" } ], "default": null, "title": "Api Key" }, "timeout": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Timeout" }, "max_retries": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Retries" }, "model_capabilities": { "anyOf": [ { "$ref": "#/$defs/ModelCapabilities" }, { "type": "null" } ], "default": null }, "model_info": { "anyOf": [ { "$ref": "#/$defs/ModelInfo" }, { "type": "null" } ], "default": null }, "add_name_prefixes": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Add Name Prefixes" }, "include_name_in_message": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Include Name In Message" }, "default_headers": { "anyOf": [ { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Default Headers" }, "organization": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Organization" }, "base_url": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Base Url" } }, "$defs": { "JSONSchema": { "properties": { "name": { "title": "Name", "type": "string" }, "description": { "title": "Description", "type": "string" }, "schema": { "title": "Schema", "type": "object" }, "strict": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Strict" } }, "required": [ "name" ], "title": "JSONSchema", "type": "object" }, "ModelCapabilities": { "deprecated": true, "properties": { "vision": { "title": 
"Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" } }, "required": [ "vision", "function_calling", "json_output" ], "title": "ModelCapabilities", "type": "object" }, "ModelInfo": { "description": "ModelInfo is a dictionary that contains information about a model's properties.\nIt is expected to be used in the model_info property of a model client.\n\nWe are expecting this to grow over time as we add more features.", "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" }, "family": { "anyOf": [ { "enum": [ "gpt-5", "gpt-41", "gpt-45", "gpt-4o", "o1", "o3", "o4", "gpt-4", "gpt-35", "r1", "gemini-1.5-flash", "gemini-1.5-pro", "gemini-2.0-flash", "gemini-2.5-pro", "gemini-2.5-flash", "claude-3-haiku", "claude-3-sonnet", "claude-3-opus", "claude-3-5-haiku", "claude-3-5-sonnet", "claude-3-7-sonnet", "claude-4-opus", "claude-4-sonnet", "llama-3.3-8b", "llama-3.3-70b", "llama-4-scout", "llama-4-maverick", "codestral", "open-codestral-mamba", "mistral", "ministral", "pixtral", "unknown" ], "type": "string" }, { "type": "string" } ], "title": "Family" }, "structured_output": { "title": "Structured Output", "type": "boolean" }, "multiple_system_messages": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Multiple System Messages" } }, "required": [ "vision", "function_calling", "json_output", "family", "structured_output" ], "title": "ModelInfo", "type": "object" }, "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object", "json_schema" ], "title": "Type", "type": "string" }, "json_schema": { "anyOf": [ { "$ref": "#/$defs/JSONSchema" }, { "type": "null" } ] } }, "required": [ "type", "json_schema" ], "title": "ResponseFormat", "type": "object" }, "StreamOptions": { "properties": { "include_usage": { "title": "Include Usage", "type": "boolean" } }, "required": [ "include_usage" ], "title": "StreamOptions", "type": "object" } }, "required": [ "model" ] }
- pydantic model BaseOpenAIClientConfigurationConfigModel[source]#
-
Show JSON schema
{ "title": "BaseOpenAIClientConfigurationConfigModel", "type": "object", "properties": { "frequency_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Frequency Penalty" }, "logit_bias": { "anyOf": [ { "additionalProperties": { "type": "integer" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Logit Bias" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Tokens" }, "n": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "N" }, "presence_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Presence Penalty" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "seed": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Seed" }, "stop": { "anyOf": [ { "type": "string" }, { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "user": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "User" }, "stream_options": { "anyOf": [ { "$ref": "#/$defs/StreamOptions" }, { "type": "null" } ], "default": null }, "parallel_tool_calls": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Parallel Tool Calls" }, "reasoning_effort": { "anyOf": [ { "enum": [ "minimal", "low", "medium", "high" ], "type": "string" }, { "type": "null" } ], "default": null, "title": "Reasoning Effort" }, "model": { "title": "Model", "type": "string" }, "api_key": { "anyOf": [ { "format": "password", "type": "string", "writeOnly": true }, { "type": "null" } ], "default": null, "title": "Api Key" }, "timeout": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Timeout" }, "max_retries": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Retries" }, "model_capabilities": { "anyOf": [ { "$ref": "#/$defs/ModelCapabilities" }, { "type": "null" } ], "default": null }, "model_info": { "anyOf": [ { "$ref": "#/$defs/ModelInfo" }, { "type": "null" } ], "default": null }, "add_name_prefixes": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Add Name Prefixes" }, "include_name_in_message": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Include Name In Message" }, "default_headers": { "anyOf": [ { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Default Headers" } }, "$defs": { "JSONSchema": { "properties": { "name": { "title": "Name", "type": "string" }, "description": { "title": "Description", "type": "string" }, "schema": { "title": "Schema", "type": "object" }, "strict": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Strict" } }, "required": [ "name" ], "title": "JSONSchema", "type": "object" }, "ModelCapabilities": { "deprecated": true, "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" } }, "required": [ "vision", "function_calling", "json_output" ], 
"title": "ModelCapabilities", "type": "object" }, "ModelInfo": { "description": "ModelInfo is a dictionary that contains information about a model's properties.\nIt is expected to be used in the model_info property of a model client.\n\nWe are expecting this to grow over time as we add more features.", "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" }, "family": { "anyOf": [ { "enum": [ "gpt-5", "gpt-41", "gpt-45", "gpt-4o", "o1", "o3", "o4", "gpt-4", "gpt-35", "r1", "gemini-1.5-flash", "gemini-1.5-pro", "gemini-2.0-flash", "gemini-2.5-pro", "gemini-2.5-flash", "claude-3-haiku", "claude-3-sonnet", "claude-3-opus", "claude-3-5-haiku", "claude-3-5-sonnet", "claude-3-7-sonnet", "claude-4-opus", "claude-4-sonnet", "llama-3.3-8b", "llama-3.3-70b", "llama-4-scout", "llama-4-maverick", "codestral", "open-codestral-mamba", "mistral", "ministral", "pixtral", "unknown" ], "type": "string" }, { "type": "string" } ], "title": "Family" }, "structured_output": { "title": "Structured Output", "type": "boolean" }, "multiple_system_messages": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Multiple System Messages" } }, "required": [ "vision", "function_calling", "json_output", "family", "structured_output" ], "title": "ModelInfo", "type": "object" }, "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object", "json_schema" ], "title": "Type", "type": "string" }, "json_schema": { "anyOf": [ { "$ref": "#/$defs/JSONSchema" }, { "type": "null" } ] } }, "required": [ "type", "json_schema" ], "title": "ResponseFormat", "type": "object" }, "StreamOptions": { "properties": { "include_usage": { "title": "Include Usage", "type": "boolean" } }, "required": [ "include_usage" ], "title": "StreamOptions", "type": "object" } }, "required": [ "model" ] }
- Fields:
- field model_capabilities: ModelCapabilities | None = None#
- pydantic model CreateArgumentsConfigModel[source]#
Bases:
BaseModel
Show JSON schema
{ "title": "CreateArgumentsConfigModel", "type": "object", "properties": { "frequency_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Frequency Penalty" }, "logit_bias": { "anyOf": [ { "additionalProperties": { "type": "integer" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Logit Bias" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Tokens" }, "n": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "N" }, "presence_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Presence Penalty" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "seed": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Seed" }, "stop": { "anyOf": [ { "type": "string" }, { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "user": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "User" }, "stream_options": { "anyOf": [ { "$ref": "#/$defs/StreamOptions" }, { "type": "null" } ], "default": null }, "parallel_tool_calls": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Parallel Tool Calls" }, "reasoning_effort": { "anyOf": [ { "enum": [ "minimal", "low", "medium", "high" ], "type": "string" }, { "type": "null" } ], "default": null, "title": "Reasoning Effort" } }, "$defs": { "JSONSchema": { "properties": { "name": { "title": "Name", "type": "string" }, "description": { "title": "Description", "type": "string" }, "schema": { "title": "Schema", "type": "object" }, "strict": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Strict" } }, "required": [ "name" ], "title": "JSONSchema", "type": "object" }, "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object", "json_schema" ], "title": "Type", "type": "string" }, "json_schema": { "anyOf": [ { "$ref": "#/$defs/JSONSchema" }, { "type": "null" } ] } }, "required": [ "type", "json_schema" ], "title": "ResponseFormat", "type": "object" }, "StreamOptions": { "properties": { "include_usage": { "title": "Include Usage", "type": "boolean" } }, "required": [ "include_usage" ], "title": "StreamOptions", "type": "object" } } }
- Fields:
- field response_format: ResponseFormat | None = None#
- field stream_options: StreamOptions | None = None#