Closed
Description
I used Claude's model following the CUSTOM_MODEL_PROVIDER approach, as shown below:
from openai import AsyncOpenAI
from agents import Model, ModelProvider, OpenAIChatCompletionsModel

client = AsyncOpenAI(
    base_url="https://api.anthropic.com/v1/",
    api_key="sk-xxxxxxxxxxxxxxx-AAA",
)

class CustomModelProvider(ModelProvider):
    def get_model(self, model_name: str | None) -> Model:
        return OpenAIChatCompletionsModel(model="claude-3-5-haiku-20241022", openai_client=client)
        # return OpenAIChatCompletionsModel(model="claude-3-7-sonnet-20250219", openai_client=client)

CUSTOM_MODEL_PROVIDER = CustomModelProvider()
tracking_task_agent = Agent(
    name="team_leader",
    instructions=TRACKING_TASK_TEMPLATE.format(work_id=work_id),
    handoffs=[project_manager, architect, engineer],
    tools=[execute_mysql_sql],
)

result = Runner.run_streamed(
    starting_agent=tracking_task_agent,
    input=first_input,
    context=context,
    max_turns=30,
    run_config=RunConfig(model_provider=CUSTOM_MODEL_PROVIDER),
)
When I use Runner.run, the program runs normally and completes my entire workflow. When I use Runner.run_streamed, as soon as it enters the handoff process, the API call returns the following error:
openai.BadRequestError: Error code: 400 - {'error': {'code': 'invalid_request_error', 'message': 'Failed to parse JSON: ', 'type': 'invalid_request_error', 'param': None}}
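The "Failed to parse JSON" wording suggests the request replayed to the API contains a tool-call `arguments` field that is an empty string rather than a JSON document (note `"arguments": ""` in the output below). This is an assumption about the cause, but the parse failure itself is easy to demonstrate: an empty string is not valid JSON, while an empty object `"{}"` is.

```python
import json

# An empty string is what the streamed handoff tool call carries in
# "arguments"; it is not parseable as JSON.
try:
    json.loads("")
except json.JSONDecodeError as exc:
    print(f"empty string fails to parse: {exc}")

# An empty JSON object, by contrast, parses cleanly.
print(json.loads("{}"))
```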
The LLM's final output is:
{
"role": "assistant",
"content": "\u6211\u5c06\u5e2e\u52a9\u60a8\u89c4\u5212\u8fd9\u4e2a\u7f51\u7ad9\u9879\u76ee\u3002\u9996\u5148\uff0c\u6211\u4f1a\u4f7f\u7528Eve\u4ee3\u7406\u6765\u8fdb\u884c\u9700\u6c42\u5206\u6790\u548c\u4ea7\u54c1\u8bbe\u8ba1\u3002",
"tool_calls": [
{
"id": "toolu_01SnwkRUjwXYHygLL2uSy1gZ",
"type": "function",
"function": {
"name": "transfer_to_eve",
"arguments": ""
}
}
]
},
{
"role": "tool",
"tool_call_id": "toolu_01SnwkRUjwXYHygLL2uSy1gZ",
"content": "{'assistant': 'Eve'}"
}
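If the empty `arguments` string is indeed what the endpoint rejects, one possible workaround is to normalize the message history before it is sent back, replacing `""` with `"{}"`. The helper below is hypothetical (the Agents SDK does not expose this hook under this name); it only sketches the transformation under that assumption.

```python
def normalize_tool_calls(messages: list[dict]) -> list[dict]:
    """Replace empty tool-call argument strings with an empty JSON object.

    Hypothetical helper, not part of the openai-agents SDK: it sketches
    the fix-up that would make a replayed history valid JSON throughout.
    """
    for message in messages:
        for tool_call in message.get("tool_calls") or []:
            function = tool_call.get("function", {})
            if function.get("arguments") == "":
                # "{}" is the valid JSON encoding of a no-argument call.
                function["arguments"] = "{}"
    return messages
```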