class TemporalModel(BaseLlm):
Constructor: TemporalModel(model_name, activity_config, summary_fn, streaming_topic, ...)
A Temporal-based LLM model that executes model invocations as activities.
| Method | __init__ | Initialize the TemporalModel. |
| Async Method | generate_content_async | Generate content asynchronously by executing model invocation as a Temporal activity. |
| Instance Variable | _activity_config | Undocumented |
| Instance Variable | _model_name | Undocumented |
| Instance Variable | _streaming_batch_interval | Undocumented |
| Instance Variable | _streaming_topic | Undocumented |
| Instance Variable | _summary_fn | Undocumented |
def __init__(self, model_name: str, activity_config: ActivityConfig | None = None, *, summary_fn: Callable[[LlmRequest], str | None] | None = None, streaming_topic: str | None = None, streaming_batch_interval: timedelta = timedelta(milliseconds=100)):
Initialize the TemporalModel.
Streaming is selected by the caller via the ADK generate_content_async(stream=True) argument; no plugin-level flag is needed.
| Parameters | |
model_name: str | The name of the model to use. |
activity_config: ActivityConfig | None | Configuration options for the activity execution. |
summary_fn: Callable[[LlmRequest], str | None] | None | Optional callable that receives the LlmRequest and returns a summary string (or None) for the activity. Must be deterministic, as it is called during workflow execution. If the callable raises, the exception propagates and fails the workflow task. |
streaming_topic: str | None | Stream topic to publish raw LlmResponse chunks to when streaming. Required when callers invoke generate_content_async(stream=True); if None, the streaming call raises before scheduling an activity. The workflow must host a temporalio.contrib.workflow_streams.WorkflowStream to receive the publishes; otherwise the signals are unhandled and dropped. Streaming support is experimental and may change in future versions. |
streaming_batch_interval: timedelta | Interval between automatic flushes for the stream publisher used by the streaming activity. Streaming support is experimental and may change in future versions. |
| Raises | |
ValueError | If both ActivityConfig["summary"] and summary_fn are set. |
async def generate_content_async(self, llm_request: LlmRequest, stream: bool = False) -> AsyncGenerator[LlmResponse, None]:
Generate content asynchronously by executing model invocation as a Temporal activity.
| Parameters | |
llm_request: LlmRequest | The LLM request containing model parameters and content. |
stream: bool | Whether to use the streaming activity. When True, each chunk is also published to streaming_topic (if set) for external consumers. Streaming support is experimental and may change in future versions. |
| Returns | |
AsyncGenerator[LlmResponse, None] | Undocumented |
| Yields | |
| The responses from the model. |
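Since generate_content_async returns an async generator, callers consume it with `async for`. A runnable sketch of that consumption pattern, using a stub class with the same method shape (the stub and its canned string chunks are assumptions; the real method yields LlmResponse objects from a Temporal activity):

```python
import asyncio

class FakeTemporalModel:
    """Stand-in with the same generate_content_async shape as TemporalModel."""
    async def generate_content_async(self, llm_request, stream: bool = False):
        # The real method runs the model call as a Temporal activity and
        # yields LlmResponse chunks; this stub yields plain strings.
        chunks = ["Hel", "lo"] if stream else ["Hello"]
        for chunk in chunks:
            yield chunk

async def main() -> str:
    model = FakeTemporalModel()
    parts = []
    # stream=True selects the streaming activity per call; no
    # plugin-level flag is involved.
    async for chunk in model.generate_content_async({"prompt": "hi"}, stream=True):
        parts.append(chunk)
    return "".join(parts)

print(asyncio.run(main()))  # prints: Hello
```

With stream=True, each yielded chunk would additionally be published to streaming_topic if one was configured at construction time.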