Multi Step Direct LLM Agent
Source: examples/agents/multi_step_direct_llm_agent.py
Introduction
ReAct and Plan-and-Solve both motivate explicit multi-step reasoning loops instead of single-shot prompting, and Toward Engineering AGI highlights why that structure matters for measurable engineering outcomes. This example demonstrates a direct multi-step agent loop with traced iterations so design reasoning can be inspected rather than inferred.
Technical Implementation
- Configure `Tracer` with JSONL + console output so each run emits machine-readable traces and lifecycle logs.
- Build the runtime surface (public APIs only) and execute `MultiStepAgent.run(...)` with a fixed `request_id`.
- Capture structured outputs from runtime execution and preserve termination metadata for analysis.
- Print a compact JSON payload including `trace_info` for deterministic tests and docs examples.
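The JSONL traces mentioned above can be consumed with a few lines of standard-library Python. This is a minimal sketch that assumes only the one-JSON-object-per-line layout; the event schema itself is library-specific, and `load_trace_events` is a hypothetical helper, not part of the public API:

```python
import json
from pathlib import Path


def load_trace_events(trace_path: Path) -> list[dict]:
    """Parse a JSONL trace file: one JSON event object per non-empty line.

    Hypothetical helper for illustration; the real event schema is
    defined by the Tracer implementation, not by this sketch.
    """
    events = []
    for line in trace_path.read_text(encoding="utf-8").splitlines():
        if line.strip():  # skip blank lines defensively
            events.append(json.loads(line))
    return events
```

Reading the trace back this way is what makes the run inspectable after the fact: each loop iteration becomes a discrete, queryable record.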
```mermaid
flowchart LR
    A["Input prompt or scenario"] --> B["main(): runtime wiring"]
    B --> C["MultiStepAgent.run(...)"]
    C --> D["WorkflowRuntime loop enforces continuation and max-step policy"]
    C --> E["Tracer JSONL + console events"]
    D --> F["ExecutionResult/payload"]
    E --> F
    F --> G["Printed JSON output"]
```
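The continuation and max-step policy node in the flowchart can be sketched in isolation. This is a hypothetical `run_loop` helper for illustration only, not the library's `WorkflowRuntime` API: the loop keeps stepping while the step function asks to continue, and terminates with a distinct reason once the step budget is exhausted:

```python
from typing import Callable, Tuple


def run_loop(
    step_fn: Callable[[int], Tuple[str, bool]],
    max_steps: int,
) -> Tuple[str, str]:
    """Run step_fn up to max_steps times; stop early when it declines to continue.

    Hypothetical sketch of a continuation/max-step policy.
    Returns (last_output, terminated_reason).
    """
    output = ""
    for step in range(1, max_steps + 1):
        output, should_continue = step_fn(step)
        if not should_continue:
            return output, "completed"
    # Budget exhausted: surface that as explicit termination metadata.
    return output, "max_steps_reached"
```

Keeping the termination reason separate from the output is what lets the example below preserve `terminated_reason` for analysis instead of conflating "done" with "ran out of steps".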
```python
from __future__ import annotations

import json
from pathlib import Path

from design_research_agents import LlamaCppServerLLMClient, MultiStepAgent, Tracer


def main() -> None:
    """Execute one multi-step direct run and print a summary."""
    # Fixed request id keeps traces and docs output deterministic across runs.
    request_id = "example-multi-step-direct-design-001"
    tracer = Tracer(
        enabled=True,
        trace_dir=Path("artifacts/examples/traces"),
        enable_jsonl=True,
        enable_console=True,
    )
    # Run the direct multi-step example using the managed local client. The
    # with statement shuts the client down automatically when the example is done.
    with LlamaCppServerLLMClient() as llm_client:
        direct_agent = MultiStepAgent(
            mode="direct",
            llm_client=llm_client,
            max_steps=3,
            tracer=tracer,
        )
        result = direct_agent.run(
            prompt=(
                "Draft then finalize the title of a memo about reducing maintenance time in a modular lab rig. "
                "Return only the memo title."
            ),
            request_id=request_id,
        )

    # Print the results.
    summary = result.summary()
    print(json.dumps(summary, ensure_ascii=True, indent=2, sort_keys=True))


if __name__ == "__main__":
    main()
```
Expected Results
Run Command
```shell
PYTHONPATH=src python3 examples/agents/multi_step_direct_llm_agent.py
```
Example output shape (values vary by run):
```json
{
  "success": true,
  "final_output": "<example-specific payload>",
  "terminated_reason": "<string-or-null>",
  "error": null,
  "trace": {
    "request_id": "<request-id>",
    "trace_dir": "artifacts/examples/traces",
    "trace_path": "artifacts/examples/traces/run_<timestamp>_<request_id>.jsonl"
  }
}
```
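For deterministic tests, the printed payload can be validated against the shape above. A minimal sketch, assuming only the documented top-level keys (`check_summary` is a hypothetical helper; concrete field values are run-specific):

```python
import json


def check_summary(payload: str) -> dict:
    """Parse the printed summary and verify the documented top-level shape.

    Hypothetical validation helper for tests; checks structure only,
    not run-specific values.
    """
    summary = json.loads(payload)
    expected_keys = {"success", "final_output", "terminated_reason", "error", "trace"}
    missing = expected_keys - summary.keys()
    if missing:
        raise ValueError(f"summary missing keys: {sorted(missing)}")
    # The trace block should point at a JSONL file under the trace dir.
    if not summary["trace"]["trace_path"].endswith(".jsonl"):
        raise ValueError("trace_path is not a JSONL file")
    return summary
```

Because the example pins `request_id`, a test can additionally assert that `summary["trace"]["request_id"]` matches the fixed id without depending on model output.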