Module documentation
| Kind | Name | Description |
| --- | --- | --- |
| Class | | Input for invoke_model_streaming. |
| Async Function | invoke | Activity that invokes an LLM model. |
| Async Function | invoke | Streaming-aware model activity. |
Activity that invokes an LLM model.
| Parameters | |
| --- | --- |
| llm (`LlmRequest`) | The LLM request containing the model name and parameters. |

| Returns | |
| --- | --- |
| `list[LlmResponse]` | List of LLM responses from the model. |

| Raises | |
| --- | --- |
| `ValueError` | If the model name is not provided or LLM creation fails. |
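The contract above can be illustrated with a minimal, self-contained sketch. The `LlmRequest`/`LlmResponse` dataclasses and the echo body below are stand-ins for the module's real types and model client, not its actual implementation; only the shape (validate the model name, return a list of responses, raise `ValueError` otherwise) follows the documented behavior.

```python
from dataclasses import dataclass, field


@dataclass
class LlmRequest:
    """Stand-in for the real request type: model name plus parameters."""
    model: str
    params: dict = field(default_factory=dict)


@dataclass
class LlmResponse:
    """Stand-in for the real response type."""
    text: str


async def invoke(llm_request: LlmRequest) -> list[LlmResponse]:
    """Invoke an LLM and return the list of responses.

    Raises ValueError if no model name is provided, mirroring the
    documented contract.
    """
    if not llm_request.model:
        raise ValueError("model name is required")
    # Stand-in for real LLM creation and invocation; creation failures
    # would also surface as ValueError per the docs above.
    return [LlmResponse(text=f"echo from {llm_request.model}")]
```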
Streaming-aware model activity.
Warning
Streaming support is experimental and may change in future versions.
Calls the LLM with `stream=True` and returns the collected list of raw `LlmResponse` chunks. The workflow's `TemporalModel.generate_content_async` yields these to the caller.

Each response is also published to the workflow's stream on `streaming_topic`, so external consumers (UIs, tracing, etc.) can observe responses as they arrive.
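The collect-and-publish pattern described above can be sketched as follows. Everything here is illustrative: `fake_stream` stands in for an LLM client called with `stream=True`, and `publish` stands in for publishing to the workflow's `streaming_topic`.

```python
from typing import AsyncIterator, Callable


async def fake_stream(model: str) -> AsyncIterator[str]:
    # Stand-in for an LLM client streaming raw response chunks.
    for chunk in ("Hel", "lo"):
        yield chunk


async def invoke_streaming(
    model: str,
    publish: Callable[[str], None],
) -> list[str]:
    """Collect streamed chunks, publishing each one as it arrives."""
    chunks: list[str] = []
    async for chunk in fake_stream(model):
        publish(chunk)        # external consumers observe chunks live
        chunks.append(chunk)  # the full list is returned to the workflow
    return chunks
```

The key design point is that the activity both forwards each chunk immediately (for observers) and accumulates the complete list (for the workflow's deterministic replay).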