# Workflow Model Step Design Tradeoff
Source: examples/workflow/workflow_model_step_design_tradeoff.py
## Introduction
FrugalGPT motivates cost-aware model choice, HELM motivates robust comparative evaluation, and Toward Engineering AGI frames the engineering-task relevance of those choices. This example demonstrates model-step tradeoff handling inside a workflow graph, with deterministic trace capture.
## Technical Implementation
- Configure `Tracer` with JSONL + console output so each run emits machine-readable traces and lifecycle logs.
- Build the runtime surface (public APIs only) and execute `Workflow.run(...)` with a fixed `request_id`.
- Capture structured outputs from runtime execution and preserve termination metadata for analysis.
- Print a compact JSON payload including `trace_info` for deterministic tests and docs examples.
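As a rough illustration of what the JSONL trace capture enables downstream, the sketch below reads a trace file back into structured records using only the standard library. It assumes only that each line of the file is a standalone JSON object; the specific field names inside each record are determined by the `Tracer` implementation and are not assumed here.

```python
import json
from pathlib import Path


def load_trace_records(trace_path: Path) -> list[dict]:
    """Parse a JSONL trace file into a list of dicts, skipping blank lines."""
    records = []
    with trace_path.open(encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records
```

A test can then assert on record counts or fields without depending on console formatting.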
The diagram below is generated from the example’s configured Workflow.
```mermaid
flowchart LR
    workflow_entry["Workflow Entrypoint"]
    step_1["design_tradeoff_model<br/>ModelStep"]
    step_2["finalize<br/>LogicStep"]
    workflow_entry --> step_1
    step_1 --> step_2
```
```python
from __future__ import annotations

import json
from pathlib import Path

import design_research_agents as drag

WORKFLOW_DIAGRAM_DIRECTION = "LR"


class _DocLLMClient:
    """Minimal LLM client stub used only for docs-diagram workflow construction."""

    def default_model(self) -> str:
        return "doc-model"


def build_example_workflow(
    *,
    tracer: drag.Tracer | None = None,
    llm_client: object | None = None,
) -> drag.Workflow:
    """Build the model-step workflow used for docs diagrams and runtime execution."""
    resolved_llm_client = llm_client or _DocLLMClient()
    return drag.Workflow(
        tool_runtime=None,
        tracer=tracer,
        input_schema={"type": "object"},
        steps=[
            drag.ModelStep(
                step_id="design_tradeoff_model",
                llm_client=resolved_llm_client,
                request_builder=lambda context: drag.LLMRequest(
                    messages=[
                        drag.LLMMessage(
                            role="user",
                            content=(
                                "Summarize one engineering tradeoff for this goal: "
                                f"{context['inputs'].get('design_goal', '')}"
                            ),
                        )
                    ],
                    model=resolved_llm_client.default_model(),
                ),
                response_parser=lambda response, _context: {
                    "tradeoff_summary": response.text,
                    "model": response.model,
                },
            ),
            drag.LogicStep(
                step_id="finalize",
                dependencies=("design_tradeoff_model",),
                handler=lambda context: {
                    "tradeoff": context["dependency_results"]["design_tradeoff_model"]["output"]["parsed"][
                        "tradeoff_summary"
                    ]
                },
            ),
        ],
    )


def main() -> None:
    """Run the model-step workflow and print a compact design-tradeoff summary."""
    # A fixed request id keeps traces and docs output deterministic across runs.
    request_id = "example-workflow-model-step-design-001"
    tracer = drag.Tracer(
        enabled=True,
        trace_dir=Path("artifacts/examples/traces"),
        enable_jsonl=True,
        enable_console=True,
    )
    # Run the model-step workflow using public runtime surfaces. The with
    # statement automatically shuts down the managed client when the example is done.
    with drag.LlamaCppServerLLMClient() as llm_client:
        workflow = build_example_workflow(tracer=tracer, llm_client=llm_client)

        result = workflow.run(
            {"design_goal": "reduce repair time for edge-device battery modules"},
            request_id=request_id,
        )
    # Print the result summary as deterministic, sorted JSON.
    summary = result.summary()
    print(json.dumps(summary, ensure_ascii=True, indent=2, sort_keys=True))


if __name__ == "__main__":
    main()
```
## Expected Results
Run Command
```shell
PYTHONPATH=src python3 examples/workflow/workflow_model_step_design_tradeoff.py
```
Example output shape (values vary by run):
```json
{
  "success": true,
  "final_output": "<example-specific payload>",
  "terminated_reason": "<string-or-null>",
  "error": null,
  "trace": {
    "request_id": "<request-id>",
    "trace_dir": "artifacts/examples/traces",
    "trace_path": "artifacts/examples/traces/run_<timestamp>_<request_id>.jsonl"
  }
}
```
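Because the model text varies per run, docs tests are easier to write against the payload's structure than its values. The sketch below is one way to do that, assuming only the top-level shape shown above; the helper name `check_summary_shape` is illustrative, not part of the library.

```python
import json

# Top-level keys from the documented summary shape (an assumption, not an API contract).
EXPECTED_TOP_LEVEL_KEYS = {"success", "final_output", "terminated_reason", "error", "trace"}


def check_summary_shape(payload: str) -> dict:
    """Validate that a printed summary JSON string matches the documented shape."""
    summary = json.loads(payload)
    missing = EXPECTED_TOP_LEVEL_KEYS - summary.keys()
    if missing:
        raise ValueError(f"summary missing keys: {sorted(missing)}")
    trace = summary["trace"]
    for key in ("request_id", "trace_dir", "trace_path"):
        if key not in trace:
            raise ValueError(f"trace info missing key: {key}")
    return summary
```

A test can capture the example's stdout and pass it through this check, keeping the assertion stable even as model outputs change between runs.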