Workflow Model Step Design Tradeoff
Source: examples/workflow/workflow_model_step_design_tradeoff.py
Introduction
FrugalGPT frames cost-aware model choice, HELM frames robust comparative evaluation, and Toward Engineering AGI frames engineering-task relevance of those choices. This example demonstrates model-step tradeoff handling inside a workflow graph with deterministic trace capture.
Technical Implementation
- Configure `Tracer` with JSONL + console output so each run emits machine-readable traces and lifecycle logs.
- Build the runtime surface (public APIs only) and execute `Workflow.run(...)` with a fixed `request_id`.
- Capture structured outputs from runtime execution and preserve termination metadata for analysis.
- Print a compact JSON payload including `trace_info` for deterministic tests and docs examples.
```mermaid
flowchart LR
    A["Input prompt or scenario"] --> B["main(): runtime wiring"]
    B --> C["Workflow.run(...)"]
    C --> D["WorkflowRuntime schedules step graph (LogicStep, ModelStep)"]
    C --> E["Tracer JSONL + console events"]
    D --> F["ExecutionResult/payload"]
    E --> F
    F --> G["Printed JSON output"]
```
```python
from __future__ import annotations

import json
from pathlib import Path

from design_research_agents import LlamaCppServerLLMClient, LogicStep, ModelStep, Tracer, Workflow
from design_research_agents.llm import LLMMessage, LLMRequest


def main() -> None:
    """Run the model-step workflow and print a compact design-tradeoff summary."""
    # A fixed request id keeps traces and docs output deterministic across runs.
    request_id = "example-workflow-model-step-design-001"
    tracer = Tracer(
        enabled=True,
        trace_dir=Path("artifacts/examples/traces"),
        enable_jsonl=True,
        enable_console=True,
    )
    # Run the model-step workflow using public runtime surfaces. The with
    # statement shuts down the managed client when the example finishes.
    with LlamaCppServerLLMClient() as llm_client:
        workflow = Workflow(
            tool_runtime=None,
            tracer=tracer,
            input_schema={"type": "object"},
            steps=[
                ModelStep(
                    step_id="design_tradeoff_model",
                    llm_client=llm_client,
                    request_builder=lambda context: LLMRequest(
                        messages=[
                            LLMMessage(
                                role="user",
                                content=(
                                    "Summarize one engineering tradeoff for this goal: "
                                    f"{context['inputs'].get('design_goal', '')}"
                                ),
                            )
                        ],
                        model=llm_client.default_model(),
                    ),
                    response_parser=lambda response, _context: {
                        "tradeoff_summary": response.text,
                        "model": response.model,
                    },
                ),
                LogicStep(
                    step_id="finalize",
                    dependencies=("design_tradeoff_model",),
                    handler=lambda context: {
                        "tradeoff": context["dependency_results"]["design_tradeoff_model"]["output"]["parsed"][
                            "tradeoff_summary"
                        ]
                    },
                ),
            ],
        )

        result = workflow.run(
            {"design_goal": "reduce repair time for edge-device battery modules"},
            request_id=request_id,
        )
        # Print the summary as deterministic, sorted JSON.
        summary = result.summary()
        print(json.dumps(summary, ensure_ascii=True, indent=2, sort_keys=True))


if __name__ == "__main__":
    main()
```
Expected Results
Run Command
```shell
PYTHONPATH=src python3 examples/workflow/workflow_model_step_design_tradeoff.py
```
Example output shape (values vary by run):
```json
{
  "success": true,
  "final_output": "<example-specific payload>",
  "terminated_reason": "<string-or-null>",
  "error": null,
  "trace": {
    "request_id": "<request-id>",
    "trace_dir": "artifacts/examples/traces",
    "trace_path": "artifacts/examples/traces/run_<timestamp>_<request_id>.jsonl"
  }
}
```
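The JSONL trace file written under `artifacts/examples/traces` can be inspected after a run. A hedged stdlib sketch, assuming only what JSONL guarantees, one self-contained JSON object per line (the event schema itself is library-specific); `load_trace_events` is a hypothetical helper, not part of the library:

```python
import json
from pathlib import Path


def load_trace_events(trace_path: Path) -> list[dict]:
    """Parse one JSON object per line, skipping blank lines."""
    events = []
    with trace_path.open(encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if line:
                events.append(json.loads(line))
    return events


# Usage: pass the trace_path reported in the printed summary
# (the filename varies by timestamp, so read it from the output).
```

Parsing line by line rather than with a single `json.load` is what makes JSONL traces appendable mid-run and robust to truncated final lines.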