Added support for "return" handoffs (#1) #869
base: main
Conversation
Could also solve the problem specified in #858. @rm-openai - Would love your review
I thought about this for a while, and ultimately I think this doesn't belong in handoffs because the definition of a handoff is something that takes over control. Whether it chooses to return control should be up to the new agent.
That said, the problem identified is a real one. I think the right way to do it is via a FunctionTool that also receives the full conversation history, i.e. you'd do something like:
```
@function_tool
def my_function(context, history, ... other args):
```
That function could then use an agent or not, but either way it has access to the conversation history.
Thoughts?
EDIT: Also, this should be really easy now because of the `ToolContext` you added - can just add a `history` field with all the prev items in there.
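To make that concrete, here is a minimal sketch of what such a tool could look like. The `history` field on `ToolContext` is hypothetical (it doesn't exist yet), the agent and tool names are illustrative, and the exact `ToolContext` import path may differ by SDK version:

```python
from agents import Agent, Runner, function_tool
from agents.tool_context import ToolContext

# An agent the tool delegates to; it never takes over the conversation.
research_agent = Agent(
    name="Research agent",
    instructions="Answer the question using the provided conversation context.",
)

@function_tool
async def answer_with_history(ctx: ToolContext, question: str) -> str:
    """Answer a question with access to the full conversation history."""
    # `ctx.history` is the hypothetical field proposed above: all previous
    # run items from the current run, exposed on the tool's context.
    prior_items = list(getattr(ctx, "history", []))
    result = await Runner.run(
        research_agent,
        input=prior_items + [{"role": "user", "content": question}],
    )
    # Control stays with the calling agent; only the answer is returned.
    return result.final_output
```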
@rm-openai - Generally, I agree - this was actually my go-to as well. There are, however, two gaps with that implementation:
If you have any good solution for either of these with the function-calls / agent-as-tool approach, I'd love your input :)

EDIT: I also thought about using the existing handoffs mechanism and, as you suggested, giving the new agent the choice of whether to return control or not (sketched below for reference). But that seemed unnatural to me for two reasons:
EDIT 2: Another option is to support something parallel to Handoffs, like
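For reference, the "let the new agent decide whether to return control" alternative can already be approximated with the existing handoffs mechanism by giving the sub-agent a handoff back to the orchestrating agent. A rough sketch, with made-up agent names and instructions:

```python
from agents import Agent, Runner

# The orchestrator is declared first so the sub-agent can hand control back to it.
orchestrator = Agent(
    name="Orchestrator",
    instructions="Route the request, then assemble the final answer for the user.",
)

billing_agent = Agent(
    name="Billing agent",
    instructions=(
        "Handle the billing part of the request. When you are done, "
        "hand off back to the Orchestrator."
    ),
    handoffs=[orchestrator],  # the sub-agent may choose to return control
)

# Wire the forward direction once both agents exist.
orchestrator.handoffs = [billing_agent]

result = Runner.run_sync(orchestrator, "Why was I charged twice last month?")
print(result.final_output)
```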
@rm-openai @akhilsmokie - Bumping this :)
I assumed this was how it worked from the start and was surprised to find it didn't. This is a great addition. Once this is implemented, how do we set it up to make sure sub-agents hand control back to the overall agent? Also, does this slow things down, since presumably the top agent is making some AI API calls of its own? That would of course not be ideal.
Related to issue: #847