module documentation


Class StreamingInvokeInput: Input for invoke_model_streaming.
Async Function invoke_model: Activity that invokes an LLM model.
Async Function invoke_model_streaming: Streaming-aware model activity.
async def invoke_model(llm_request: LlmRequest) -> list[LlmResponse]:

Activity that invokes an LLM model.

Parameters
    llm_request (LlmRequest): The LLM request containing model name and parameters.
Returns
    list[LlmResponse]: List of LLM responses from the model.
Raises
    ValueError: If model name is not provided or LLM creation fails.
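
A minimal usage sketch, assuming this activity is registered with a Temporal worker: the workflow class, task queue, timeout, activity import path, and the LlmRequest construction below are illustrative assumptions, not part of this module. It also assumes a data converter that can serialize the request and responses.

# Hypothetical usage sketch; names flagged in comments are assumptions.
from datetime import timedelta

from temporalio import workflow

with workflow.unsafe.imports_passed_through():
    from google.adk.models.llm_request import LlmRequest

    # Hypothetical import path for this module's activity; adjust to your project.
    from model_activities import invoke_model


@workflow.defn
class AgentWorkflow:
    """Illustrative workflow that delegates one LLM call to the activity."""

    @workflow.run
    async def run(self, model_name: str) -> str:
        # Build the request; only `model` is set here, other fields keep their
        # defaults (field availability depends on the google.adk version in use).
        llm_request = LlmRequest(model=model_name)

        # Schedule the activity with a bounded timeout; it returns list[LlmResponse].
        responses = await workflow.execute_activity(
            invoke_model,
            llm_request,
            start_to_close_timeout=timedelta(minutes=2),
        )

        # Collect any text parts from the responses (response structure assumed).
        texts: list[str] = []
        for r in responses:
            if r.content and r.content.parts:
                texts.extend(p.text for p in r.content.parts if p.text)
        return "".join(texts)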
async def invoke_model_streaming(input: StreamingInvokeInput) -> list[LlmResponse]:

Streaming-aware model activity.

Warning

Streaming support is experimental and may change in future versions.

Calls the LLM with stream=True and returns the collected list of raw LlmResponse chunks. The workflow's TemporalModel.generate_content_async yields these to the caller.

Each response is also published to the workflow's stream on streaming_topic so external consumers (UIs, tracing, etc.) can observe responses as they arrive.
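
A minimal sketch of the collect-and-publish pattern described above, under stated assumptions: the input field names, the LLMRegistry lookup, and the publish_to_stream helper are illustrative stand-ins for this module's actual implementation.

# Illustrative sketch of the collect-and-publish streaming pattern.
from dataclasses import dataclass

from temporalio import activity

from google.adk.models.llm_request import LlmRequest
from google.adk.models.llm_response import LlmResponse
from google.adk.models.registry import LLMRegistry


@dataclass
class StreamingInput:
    """Stand-in for StreamingInvokeInput; field names are assumed."""

    llm_request: LlmRequest
    streaming_topic: str


async def publish_to_stream(topic: str, response: LlmResponse) -> None:
    """Placeholder for the project-specific publish on streaming_topic."""
    # The real module pushes each chunk to the workflow's stream here.
    return None


@activity.defn
async def invoke_model_streaming_sketch(input: StreamingInput) -> list[LlmResponse]:
    if not input.llm_request.model:
        raise ValueError("Model name must be provided")

    # Resolve a BaseLlm for the requested model (registry lookup is an assumption).
    llm = LLMRegistry.new_llm(input.llm_request.model)

    chunks: list[LlmResponse] = []
    # Call the LLM with stream=True and collect the raw chunks. Activities must
    # return serializable values, so the chunks come back as a list that the
    # workflow-side TemporalModel.generate_content_async re-yields to its caller.
    async for chunk in llm.generate_content_async(input.llm_request, stream=True):
        chunks.append(chunk)
        await publish_to_stream(input.streaming_topic, chunk)
    return chunks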