
Recursively rebuild models in openai.types #967


Closed · wants to merge 5 commits
26 changes: 26 additions & 0 deletions temporalio/contrib/openai_agents/__init__.py
@@ -8,6 +8,12 @@
Use with caution in production environments.
"""

import importlib
import inspect
import pkgutil

from pydantic import BaseModel

from temporalio.contrib.openai_agents._invoke_model_activity import ModelActivity
from temporalio.contrib.openai_agents._model_parameters import ModelActivityParameters
from temporalio.contrib.openai_agents._trace_interceptor import (
@@ -30,3 +36,23 @@
"TestModel",
"TestModelProvider",
]
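As background for the `_reload_models` helper defined below: `pkgutil.iter_modules` lists only a package's direct submodules, which is why the helper has to recurse into every subpackage it finds. A small stdlib-only illustration using the `json` package:

```python
import json
import pkgutil

# iter_modules yields only the direct children of a package's __path__;
# nested packages must be walked recursively (as _reload_models does).
names = sorted(name for _, name, _ in pkgutil.iter_modules(json.__path__))
print(names)
```

Each tuple yielded also carries an `ispkg` flag, which could be used to recurse only into actual packages rather than checking `__path__` after import.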


def _reload_models(module_name: str) -> None:
"""Recursively walk through modules and rebuild BaseModel classes."""
module = importlib.import_module(module_name)

# Process classes in the current module
for _, obj in inspect.getmembers(module, inspect.isclass):
if issubclass(obj, BaseModel) and obj is not BaseModel:
obj.model_rebuild()

# Recursively process submodules
if hasattr(module, "__path__"):
for _, submodule_name, _ in pkgutil.iter_modules(module.__path__):
full_submodule_name = f"{module_name}.{submodule_name}"
_reload_models(full_submodule_name)


# Recursively call model_rebuild() on all BaseModel classes in openai.types
_reload_models("openai.types")
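A minimal, self-contained sketch (model names are hypothetical, not from `openai.types`) of the failure mode this bulk rebuild works around: a pydantic model whose schema could not be completed at class-definition time, e.g. because of an unresolved forward reference, raises on first use until `model_rebuild()` is called with the referenced type in scope.

```python
from pydantic import BaseModel

class Outer(BaseModel):
    inner: "Inner"  # forward reference; Inner is not defined yet

try:
    Outer(inner={"x": 1})  # schema is incomplete, so pydantic raises here
except Exception as exc:
    print(type(exc).__name__)

class Inner(BaseModel):
    x: int

# With Inner now resolvable from this scope, the rebuild succeeds and the
# model becomes usable -- the same effect _reload_models applies in bulk.
Outer.model_rebuild()
print(Outer(inner={"x": 1}).inner.x)
```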
@@ -5,7 +5,6 @@

import enum
import json
from dataclasses import dataclass
from typing import Any, Optional, Union, cast

from agents import (
@@ -24,6 +23,7 @@
WebSearchTool,
)
from agents.models.multi_provider import MultiProvider
from pydantic.dataclasses import dataclass
from typing_extensions import Required, TypedDict

from temporalio import activity
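For context on the `dataclass` import swap above: unlike `dataclasses.dataclass`, `pydantic.dataclasses.dataclass` attaches a pydantic core schema to the class, so instances validate their fields and can be serialized through `TypeAdapter` — which pydantic-based data conversion relies on. A hedged sketch with a hypothetical input type (not the activity's real one):

```python
from pydantic import TypeAdapter
from pydantic.dataclasses import dataclass

@dataclass
class ActivityInput:  # hypothetical stand-in, not the real activity input
    prompt: str
    max_tokens: int

# Fields are validated (and coerced, in lax mode) at construction time;
# a plain dataclasses.dataclass would store the string "42" unchecked.
item = ActivityInput(prompt="hi", max_tokens="42")
assert item.max_tokens == 42

# The attached core schema lets TypeAdapter dump it to plain data.
assert TypeAdapter(ActivityInput).dump_python(item) == {"prompt": "hi", "max_tokens": 42}
```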
21 changes: 20 additions & 1 deletion tests/contrib/openai_agents/test_openai.py
@@ -45,7 +45,7 @@
)
from openai.types.responses.response_function_web_search import ActionSearch
from openai.types.responses.response_prompt_param import ResponsePromptParam
from pydantic import ConfigDict, Field
from pydantic import ConfigDict, Field, TypeAdapter

from temporalio import activity, workflow
from temporalio.client import Client, WorkflowFailureError, WorkflowHandle
@@ -1778,3 +1778,22 @@ async def test_workflow_method_tools(client: Client):
execution_timeout=timedelta(seconds=10),
)
await workflow_handle.result()


async def test_response_serialization():
[Review thread on this line]

Member: I assume this test fails until openai/openai-agents-python#1131 (or equivalent) is merged?

Contributor Author: So... it does work if we explicitly rebuild all the models, but I'm hoping they will fix their dataclasses so we don't have to do this at all.

Member: Can you give an idea of the cost of that full model rebuild? Just confirming it doesn't take, like, a whole second would be good. Do you think we should wait until they fix this issue, or should we rebuild the models now? Also, we can discuss off-PR the current status of that PR vs. their other possible solutions.

Contributor Author: Not exactly valid statistics, but I ran it a few times locally with a result of ~0.2 seconds.

I'm conflicted on fixing it at the moment. I think we need a resolution for public preview, and I don't know how likely it is that we get a fix from them soon. Their fix has to go through Stainless, and they were unable to share the PR with me.

Member: Similarly conflicted, as it is rough to continually work around broken dependencies, lest we make that our default posture. Up to you.

    import json

    from openai.types.responses.response_output_item import ImageGenerationCall

    data = json.loads(
        b'{"id": "msg_68757ec43348819d86709f0fcb70316301a1194a3e05b38c","type": "image_generation_call","status": "completed"}'
    )
    call = TypeAdapter(ImageGenerationCall).validate_python(data)
    model_response = ModelResponse(
        output=[call],
        usage=Usage(),
        response_id="",
    )
    encoded = await pydantic_data_converter.encode([model_response])
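The test above exercises the real `ImageGenerationCall` type together with Temporal's converter; the `TypeAdapter` validate/dump pattern it relies on can be shown self-contained with a hypothetical model (names assumed, not from the SDK):

```python
import json
from typing import Literal

from pydantic import BaseModel, TypeAdapter

class ImageCall(BaseModel):  # hypothetical stand-in for ImageGenerationCall
    id: str
    type: Literal["image_generation_call"]
    status: str

raw = json.loads(
    '{"id": "msg_1", "type": "image_generation_call", "status": "completed"}'
)
call = TypeAdapter(ImageCall).validate_python(raw)
assert call.status == "completed"

# Dump back to JSON-compatible plain data, as a payload converter would.
assert TypeAdapter(ImageCall).dump_python(call, mode="json") == raw
```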
6 changes: 3 additions & 3 deletions uv.lock
