autogen_core.model_context#
- class ChatCompletionContext(initial_messages: List[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]] | None = None)[source]#
Bases:
ABC, ComponentBase[BaseModel]

An abstract base class defining the interface of a chat completion context. A chat completion context lets agents store and retrieve LLM messages. It can be implemented with different recall strategies.
- Parameters:
initial_messages (List[LLMMessage] | None) – The initial messages.
Example
Create a custom model context that filters out the thought field from AssistantMessage. This is useful for reasoning models like DeepSeek R1, which produce very long thoughts that are not needed for subsequent completions.
from typing import List

from autogen_core.model_context import UnboundedChatCompletionContext
from autogen_core.models import AssistantMessage, LLMMessage


class ReasoningModelContext(UnboundedChatCompletionContext):
    """A model context for reasoning models."""

    async def get_messages(self) -> List[LLMMessage]:
        messages = await super().get_messages()
        # Filter out thought field from AssistantMessage.
        messages_out: List[LLMMessage] = []
        for message in messages:
            if isinstance(message, AssistantMessage):
                message.thought = None
            messages_out.append(message)
        return messages_out
- component_type: ClassVar[ComponentType] = 'chat_completion_context'#
The logical type of the component.
- async add_message(message: Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]) None[source]#
Add a message to the context.
- abstract async get_messages() List[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]][source]#
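To make the interface concrete, the following is a minimal, illustrative pure-Python sketch of the add/get protocol described above. It is independent of autogen_core: the class and message type here are placeholders (the real context stores LLMMessage variants, not strings), and subclasses of the real ChatCompletionContext override get_messages() to apply a recall strategy.

from typing import List, Optional
import asyncio

# Stand-in message type; the real context stores autogen_core
# LLMMessage variants (SystemMessage, UserMessage, AssistantMessage, ...).
Message = str


class InMemoryContext:
    """Sketch of the context protocol: store messages via add_message(),
    then return a view of them via an async get_messages()."""

    def __init__(self, initial_messages: Optional[List[Message]] = None) -> None:
        self._messages: List[Message] = list(initial_messages or [])

    async def add_message(self, message: Message) -> None:
        self._messages.append(message)

    async def get_messages(self) -> List[Message]:
        # Subclasses would override this to apply a recall strategy.
        return list(self._messages)


async def main() -> List[Message]:
    ctx = InMemoryContext()
    await ctx.add_message("hello")
    await ctx.add_message("world")
    return await ctx.get_messages()


print(asyncio.run(main()))  # ['hello', 'world']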
- pydantic model ChatCompletionContextState[source]#
Bases:
BaseModel

Show JSON schema
{ "title": "ChatCompletionContextState", "type": "object", "properties": { "messages": { "items": { "discriminator": { "mapping": { "AssistantMessage": "#/$defs/AssistantMessage", "FunctionExecutionResultMessage": "#/$defs/FunctionExecutionResultMessage", "SystemMessage": "#/$defs/SystemMessage", "UserMessage": "#/$defs/UserMessage" }, "propertyName": "type" }, "oneOf": [ { "$ref": "#/$defs/SystemMessage" }, { "$ref": "#/$defs/UserMessage" }, { "$ref": "#/$defs/AssistantMessage" }, { "$ref": "#/$defs/FunctionExecutionResultMessage" } ] }, "title": "Messages", "type": "array" } }, "$defs": { "AssistantMessage": { "description": "Assistant message are sampled from the language model.", "properties": { "content": { "anyOf": [ { "type": "string" }, { "items": { "$ref": "#/$defs/FunctionCall" }, "type": "array" } ], "title": "Content" }, "thought": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Thought" }, "source": { "title": "Source", "type": "string" }, "type": { "const": "AssistantMessage", "default": "AssistantMessage", "title": "Type", "type": "string" } }, "required": [ "content", "source" ], "title": "AssistantMessage", "type": "object" }, "FunctionCall": { "properties": { "id": { "title": "Id", "type": "string" }, "arguments": { "title": "Arguments", "type": "string" }, "name": { "title": "Name", "type": "string" } }, "required": [ "id", "arguments", "name" ], "title": "FunctionCall", "type": "object" }, "FunctionExecutionResult": { "description": "Function execution result contains the output of a function call.", "properties": { "content": { "title": "Content", "type": "string" }, "name": { "title": "Name", "type": "string" }, "call_id": { "title": "Call Id", "type": "string" }, "is_error": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Is Error" } }, "required": [ "content", "name", "call_id" ], "title": "FunctionExecutionResult", "type": "object" }, "FunctionExecutionResultMessage": 
{ "description": "Function execution result message contains the output of multiple function calls.", "properties": { "content": { "items": { "$ref": "#/$defs/FunctionExecutionResult" }, "title": "Content", "type": "array" }, "type": { "const": "FunctionExecutionResultMessage", "default": "FunctionExecutionResultMessage", "title": "Type", "type": "string" } }, "required": [ "content" ], "title": "FunctionExecutionResultMessage", "type": "object" }, "SystemMessage": { "description": "System message contains instructions for the model coming from the developer.\n\n.. note::\n\n Open AI is moving away from using 'system' role in favor of 'developer' role.\n See `Model Spec <https://cdn.openai.com/spec/model-spec-2024-05-08.html#definitions>`_ for more details.\n However, the 'system' role is still allowed in their API and will be automatically converted to 'developer' role\n on the server side.\n So, you can use `SystemMessage` for developer messages.", "properties": { "content": { "title": "Content", "type": "string" }, "type": { "const": "SystemMessage", "default": "SystemMessage", "title": "Type", "type": "string" } }, "required": [ "content" ], "title": "SystemMessage", "type": "object" }, "UserMessage": { "description": "User message contains input from end users, or a catch-all for data provided to the model.", "properties": { "content": { "anyOf": [ { "type": "string" }, { "items": { "anyOf": [ { "type": "string" }, {} ] }, "type": "array" } ], "title": "Content" }, "source": { "title": "Source", "type": "string" }, "type": { "const": "UserMessage", "default": "UserMessage", "title": "Type", "type": "string" } }, "required": [ "content", "source" ], "title": "UserMessage", "type": "object" } } }
- Fields:
messages (List[autogen_core.models._types.SystemMessage | autogen_core.models._types.UserMessage | autogen_core.models._types.AssistantMessage | autogen_core.models._types.FunctionExecutionResultMessage])
- field messages: List[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]] [Optional]#
- class UnboundedChatCompletionContext(initial_messages: List[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]] | None = None)[source]#
Bases:
ChatCompletionContext, Component[UnboundedChatCompletionContextConfig]

An unbounded chat completion context that keeps a view of all the messages.
- component_config_schema#
alias of
UnboundedChatCompletionContextConfig
- component_provider_override: ClassVar[str | None] = 'autogen_core.model_context.UnboundedChatCompletionContext'#
Override the provider string for the component. This should be used to prevent internal module names from being a part of the module name.
- async get_messages() List[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]][source]#
Get all messages in the context.
- class BufferedChatCompletionContext(buffer_size: int, initial_messages: List[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]] | None = None)[source]#
Bases:
ChatCompletionContext, Component[BufferedChatCompletionContextConfig]

A buffered chat completion context that keeps a view of the last n messages, where n is the buffer size. The buffer size is set at initialization.
- Parameters:
buffer_size (int) – The size of the buffer.
initial_messages (List[LLMMessage] | None) – The initial messages.
- component_config_schema#
alias of
BufferedChatCompletionContextConfig
- component_provider_override: ClassVar[str | None] = 'autogen_core.model_context.BufferedChatCompletionContext'#
Override the provider string for the component. This should be used to prevent internal module names from being a part of the module name.
- async get_messages() List[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]][source]#
Get at most buffer_size recent messages.
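The buffered recall strategy amounts to keeping a sliding window over the end of the message list. A minimal, illustrative pure-Python sketch follows; the function name and string messages are placeholders, not the library's API, and the real implementation may additionally adjust the window edge (for example, around function-result messages):

from typing import List


def buffered_view(messages: List[str], buffer_size: int) -> List[str]:
    """Return at most the last `buffer_size` messages, oldest first."""
    if buffer_size <= 0:
        raise ValueError("buffer_size must be positive")
    return messages[-buffer_size:]


history = ["m1", "m2", "m3", "m4", "m5"]
print(buffered_view(history, 3))   # ['m3', 'm4', 'm5']
print(buffered_view(history, 10))  # all 5 messages fit in the buffer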
- class TokenLimitedChatCompletionContext(model_client: ChatCompletionClient, *, token_limit: int | None = None, tool_schema: List[ToolSchema] | None = None, initial_messages: List[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]] | None = None)[source]#
Bases:
ChatCompletionContext, Component[TokenLimitedChatCompletionContextConfig]

(Experimental) A token-based chat completion context that maintains a view of the context up to a token limit.
Note
Added in v0.4.10. This is an experimental component and may change in the future.
- Parameters:
model_client (ChatCompletionClient) – The model client to use for token counting. The model client must implement the count_tokens() and remaining_tokens() methods.
token_limit (int | None) – The maximum number of tokens to keep in the context, counted with the count_tokens() method. If None, the context is bounded by the model client using the remaining_tokens() method.
tool_schema (List[ToolSchema] | None) – A list of tool schemas to use in the context.
initial_messages (List[LLMMessage] | None) – A list of initial messages to include in the context.
- component_config_schema#
alias of
TokenLimitedChatCompletionContextConfig
- component_provider_override: ClassVar[str | None] = 'autogen_core.model_context.TokenLimitedChatCompletionContext'#
Override the provider string for the component. This should be used to prevent internal module names from being a part of the module name.
- async get_messages() List[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]][source]#
Get at most token_limit tokens of the most recent messages. If no token limit is provided, return as many recent messages as fit within the model client's remaining token count.
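The token-limited strategy can be thought of as evicting the oldest messages until the remainder fits within the budget. Below is an illustrative pure-Python sketch; the toy word counter stands in for the model client's count_tokens() method, and all names here are placeholders rather than the library's API:

from typing import Callable, List


def token_limited_view(
    messages: List[str],
    token_limit: int,
    count_tokens: Callable[[List[str]], int],
) -> List[str]:
    """Drop the oldest messages until the remaining ones fit in token_limit."""
    view = list(messages)
    while view and count_tokens(view) > token_limit:
        view.pop(0)  # evict the oldest message first
    return view


def word_count(msgs: List[str]) -> int:
    # Toy counter: one "token" per whitespace-separated word.
    return sum(len(m.split()) for m in msgs)


history = ["a b c", "d e", "f g h i"]  # 3 + 2 + 4 = 9 "tokens"
print(token_limited_view(history, 7, word_count))  # ['d e', 'f g h i'] (6 tokens)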
- class HeadAndTailChatCompletionContext(head_size: int, tail_size: int, initial_messages: List[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]] | None = None)[source]#
Bases:
ChatCompletionContext, Component[HeadAndTailChatCompletionContextConfig]

A chat completion context that keeps a view of the first n messages and the last m messages, where n is the head size and m is the tail size. The head and tail sizes are set at initialization.
- Parameters:
head_size (int) – The size of the head.
tail_size (int) – The size of the tail.
initial_messages (List[LLMMessage] | None) – The initial messages.
- component_config_schema#
alias of
HeadAndTailChatCompletionContextConfig
- component_provider_override: ClassVar[str | None] = 'autogen_core.model_context.HeadAndTailChatCompletionContext'#
Override the provider string for the component. This should be used to prevent internal module names from being a part of the module name.
- async get_messages() List[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]][source]#
Get at most head_size oldest messages and tail_size recent messages.
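The head-and-tail strategy keeps the two ends of the conversation and drops the middle. An illustrative pure-Python sketch of the selection logic, with placeholder names (the real implementation may insert a marker message for the skipped middle; that detail is omitted here):

from typing import List


def head_and_tail_view(messages: List[str], head_size: int, tail_size: int) -> List[str]:
    """Keep the first head_size and last tail_size messages.
    When everything fits, return all messages unchanged."""
    if len(messages) <= head_size + tail_size:
        return list(messages)
    head = messages[:head_size]
    tail = messages[-tail_size:]
    # A placeholder message could be inserted here to mark the skipped middle.
    return head + tail


history = [f"m{i}" for i in range(1, 8)]  # m1 .. m7
print(head_and_tail_view(history, 2, 2))  # ['m1', 'm2', 'm6', 'm7']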