class documentation

A Temporal-based LLM model that executes model invocations as activities.

Method __init__ Initialize the TemporalModel.
Async Method generate_content_async Generate content asynchronously by executing model invocation as a Temporal activity.
Instance Variable _activity_config Configuration options for the activity execution, as passed to __init__.
Instance Variable _model_name The name of the model to invoke.
Instance Variable _streaming_batch_interval Interval between automatic flushes for the stream publisher used by the streaming activity.
Instance Variable _streaming_topic Stream topic to publish raw LlmResponse chunks to when streaming, or None.
Instance Variable _summary_fn Optional callable that derives a summary string (or None) from the LlmRequest for the activity.
def __init__(self, model_name: str, activity_config: ActivityConfig | None = None, *, summary_fn: Callable[[LlmRequest], str | None] | None = None, streaming_topic: str | None = None, streaming_batch_interval: timedelta = timedelta(milliseconds=100)): (source)

Initialize the TemporalModel.

Streaming is selected by the caller via the ADK generate_content_async(stream=True) argument; no plugin-level flag is needed.

Parameters
model_name (str): The name of the model to use.
activity_config (ActivityConfig | None): Configuration options for the activity execution.
summary_fn (Callable[[LlmRequest], str | None] | None): Optional callable that receives the LlmRequest and returns a summary string (or None) for the activity. Must be deterministic, as it is called during workflow execution; a deterministic example is sketched below. If the callable raises, the exception propagates and fails the workflow task.
streaming_topic (str | None): Stream topic to publish raw LlmResponse chunks to when streaming. Required when callers invoke generate_content_async(stream=True); if None, the streaming call raises before scheduling an activity. The workflow must host a temporalio.contrib.workflow_streams.WorkflowStream to receive the publishes; otherwise the signals are unhandled and dropped. Streaming support is experimental and may change in future versions.
streaming_batch_interval (timedelta): Interval between automatic flushes for the stream publisher used by the streaming activity. Streaming support is experimental and may change in future versions.
Raises
ValueError: If both ActivityConfig["summary"] and summary_fn are set.
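
A minimal construction sketch, assuming ActivityConfig is temporalio.workflow.ActivityConfig (a TypedDict of activity options) and that TemporalModel is importable from your own package; the import path shown for it is hypothetical.

    from datetime import timedelta

    from temporalio.workflow import ActivityConfig  # assumption: the ActivityConfig referenced above
    from google.adk.models.llm_request import LlmRequest

    from my_package.temporal_model import TemporalModel  # hypothetical import path


    def summarize_request(request: LlmRequest) -> str | None:
        # Deterministic: the summary is derived only from the request itself,
        # so workflow replay produces the same value.
        return f"model call ({len(request.contents or [])} content item(s))"


    model = TemporalModel(
        model_name="gemini-2.0-flash",
        activity_config=ActivityConfig(start_to_close_timeout=timedelta(minutes=2)),
        summary_fn=summarize_request,
    )

Because summary_fn runs during workflow execution, avoid nondeterministic inputs such as wall-clock time, random values, or I/O when computing the summary.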
async def generate_content_async(self, llm_request: LlmRequest, stream: bool = False) -> AsyncGenerator[LlmResponse, None]: (source)

Generate content asynchronously by executing model invocation as a Temporal activity.

Parameters
llm_request (LlmRequest): The LLM request containing model parameters and content.
stream (bool): Whether to use the streaming activity. When True, each chunk is also published to streaming_topic (if set) for external consumers. Streaming support is experimental and may change in future versions.
Returns
AsyncGenerator[LlmResponse, None]: An async generator over the model's responses.
Yields
The responses from the model.
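
A hedged usage sketch from inside an async workflow method, assuming model was constructed as in the __init__ example above and llm_request is an LlmRequest built by the caller:

    # Non-streaming: each yielded item is an LlmResponse from the model activity.
    async for response in model.generate_content_async(llm_request):
        ...

    # Streaming: chunks are yielded here and also published to streaming_topic;
    # per the parameter docs, the call raises before scheduling an activity when
    # streaming_topic is None.
    async for chunk in model.generate_content_async(llm_request, stream=True):
        ...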
_activity_config = (source)

Configuration options for the activity execution, as passed to __init__.

_model_name = (source)

The name of the model to invoke, as passed to __init__.

_streaming_batch_interval = (source)

Interval between automatic flushes for the stream publisher used by the streaming activity.

_streaming_topic = (source)

Stream topic to publish raw LlmResponse chunks to when streaming, or None.

_summary_fn = (source)

Optional callable that derives a summary string (or None) from the LlmRequest for the activity.