Multi-Step JSON Tool-Calling Agent
Source: examples/agents/multi_step_json_tool_calling_agent.py
Introduction
Toolformer motivates tool-use planning, JSON Schema defines stable machine-readable contracts, and OpenAI function-calling guidance captures operational patterns for structured tool dispatch. This example shows a JSON-mode agent that repeatedly selects tools through explicit schema-constrained payloads.
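As a minimal illustration of what a schema-constrained tool-call contract looks like (the field names below are hypothetical, not this library's actual wire format), a JSON payload can be checked against its required keys before dispatch:

```python
import json

# Hypothetical JSON-mode tool-call payload; "tool" and "arguments" are
# illustrative field names, not the example's real schema.
raw = '{"tool": "text.word_count", "arguments": {"text": "design research agents"}}'

REQUIRED_KEYS = {"tool", "arguments"}

def parse_tool_call(payload: str) -> dict:
    """Parse a JSON tool call and reject payloads missing required keys."""
    call = json.loads(payload)
    missing = REQUIRED_KEYS - call.keys()
    if missing:
        raise ValueError(f"tool call missing keys: {sorted(missing)}")
    return call

call = parse_tool_call(raw)
print(call["tool"])  # text.word_count
```

Validating the payload up front is what makes the contract "stable": malformed model output fails loudly at the boundary instead of propagating into tool execution.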
Technical Implementation
- Configure Tracer with JSONL + console output so each run emits machine-readable traces and lifecycle logs.
- Build the runtime surface (public APIs only) and execute MultiStepAgent.run(...) with a fixed request_id.
- Configure and invoke Toolbox integrations (core/script/MCP/callable) before assembling the final payload.
- Print a compact JSON payload including trace_info for deterministic tests and docs examples.
flowchart LR
A["Input prompt or scenario"] --> B["main(): runtime wiring"]
B --> C["MultiStepAgent.run(...)"]
C --> D["WorkflowRuntime loop enforces explicit final-answer and max-step policy"]
C --> E["Tracer JSONL + console events"]
D --> F["ExecutionResult/payload"]
E --> F
F --> G["Printed JSON output"]
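The loop node above enforces two policies: the run only succeeds on an explicit final answer, and it terminates if the step cap is reached first. A hedged sketch of that control flow (the names here are illustrative, not the library's actual WorkflowRuntime API):

```python
# `next_action` stands in for an LLM call that returns a structured action.
def next_action(step: int) -> dict:
    # Pretend the model calls a tool first, then finishes explicitly.
    if step == 0:
        return {"type": "tool_call", "tool": "text.word_count",
                "arguments": {"text": "design research agents"}}
    return {"type": "final_answer", "output": "3"}

def run_loop(max_steps: int = 3) -> dict:
    for step in range(max_steps):
        action = next_action(step)
        if action["type"] == "final_answer":
            # Policy 1: success requires an explicit final answer.
            return {"success": True, "final_output": action["output"],
                    "terminated_reason": None}
        # A real runtime would dispatch the tool call here and feed the
        # result back into the next model turn.
    # Policy 2: exhausting the step budget terminates the run unsuccessfully.
    return {"success": False, "final_output": None,
            "terminated_reason": "max_steps_exceeded"}

result = run_loop()
print(result["final_output"])  # 3
```

Keeping termination reasons in the result payload (rather than raising) is what lets the printed summary stay uniform across successful and capped runs.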
from __future__ import annotations

import json
from pathlib import Path

from design_research_agents import LlamaCppServerLLMClient, MultiStepAgent, Toolbox, Tracer

_EXAMPLE_LLAMA_CLIENT_KWARGS = {
    "model": "Qwen_Qwen3-4B-Instruct-2507-Q4_K_M.gguf",
    "hf_model_repo_id": "bartowski/Qwen_Qwen3-4B-Instruct-2507-GGUF",
    "api_model": "qwen3-4b-instruct-2507-q4km",
    "context_window": 8192,
    "startup_timeout_seconds": 240.0,
    "request_timeout_seconds": 240.0,
}


def main() -> None:
    """Execute one traced multi-step JSON tool-calling run."""
    # Stable ids make trace correlation and docs output easier to audit.
    request_id = "example-multi-step-json-design-001"
    tracer = Tracer(
        enabled=True,
        trace_dir=Path("artifacts/examples/traces"),
        enable_jsonl=True,
        enable_console=True,
    )
    # Run the JSON tool-calling example using public runtime surfaces. The with
    # statement shuts down the managed client and tool runtime when the example is done.
    with Toolbox() as tool_runtime, LlamaCppServerLLMClient(**_EXAMPLE_LLAMA_CLIENT_KWARGS) as llm_client:
        json_tool_agent = MultiStepAgent(
            mode="json",
            llm_client=llm_client,
            tool_runtime=tool_runtime,
            max_steps=3,
            # Constrain selection so the example exercises an explicit tool surface.
            allowed_tools=("text.word_count",),
            tracer=tracer,
        )
        result = json_tool_agent.run(
            prompt=(
                "Use text.word_count once to count the words in the phrase "
                "'design research agents', then finish by returning only the word_count."
            ),
            request_id=request_id,
        )

    # Print the run summary as deterministic, sorted JSON.
    summary = result.summary()
    print(json.dumps(summary, ensure_ascii=True, indent=2, sort_keys=True))


if __name__ == "__main__":
    main()
Expected Results
Run Command
PYTHONPATH=src python3 examples/agents/multi_step_json_tool_calling_agent.py
Example output shape (values vary by run):
{
"success": true,
"final_output": "<example-specific payload>",
"terminated_reason": "<string-or-null>",
"error": null,
"trace": {
"request_id": "<request-id>",
"trace_dir": "artifacts/examples/traces",
"trace_path": "artifacts/examples/traces/run_<timestamp>_<request_id>.jsonl"
}
}
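The trace file referenced in the output is JSONL, so each line is an independent JSON event. A small sketch of loading such a file for inspection (the event field names are illustrative; the real trace schema may differ):

```python
import json
import tempfile
from pathlib import Path

def load_trace(path: Path) -> list[dict]:
    """Read a JSONL trace where each non-empty line is one event object."""
    events = []
    with path.open(encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if line:
                events.append(json.loads(line))
    return events

# Demo against a synthetic trace file; a real run would point at
# artifacts/examples/traces/run_<timestamp>_<request_id>.jsonl instead.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as tmp:
    tmp.write('{"event": "run_started"}\n{"event": "run_finished"}\n')
events = load_trace(Path(tmp.name))
print(len(events))  # 2
```

Because every line parses on its own, traces remain readable even when a run is interrupted mid-write.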