Multi Step Code Tool Calling Agent
==================================

Source: ``examples/agents/multi_step_code_tool_calling_agent.py``

Introduction
------------

ReAct and Toolformer motivate interleaving external actions with model reasoning, while AutoGen highlights how multi-agent and tool ecosystems depend on explicit execution boundaries. This example focuses on code-tool calling so you can study how executable outputs are requested, validated, and traced in a controlled loop.

Technical Implementation
------------------------

1. Configure ``Tracer`` with JSONL and console output so each run emits machine-readable traces alongside lifecycle logs.
2. Build the runtime surface (public APIs only) and execute ``MultiStepAgent.run(...)`` with a fixed ``request_id``.
3. Configure and invoke ``Toolbox`` integrations (core/script/MCP/callable) before assembling the final payload.
4. Print a compact JSON payload, including ``trace_info``, for deterministic tests and documentation examples.

.. mermaid::

   flowchart LR
       A["Input prompt or scenario"] --> B["main(): runtime wiring"]
       B --> C["MultiStepAgent.run(...)"]
       C --> D["WorkflowRuntime loop enforces explicit final-answer and max-step policy"]
       C --> E["Tracer JSONL + console events"]
       D --> F["ExecutionResult/payload"]
       E --> F
       F --> G["Printed JSON output"]

.. literalinclude:: ../../../examples/agents/multi_step_code_tool_calling_agent.py
   :language: python
   :lines: 51-
   :linenos:

Expected Results
----------------

.. rubric:: Run Command

.. code-block:: bash

   PYTHONPATH=src python3 examples/agents/multi_step_code_tool_calling_agent.py

Example output shape (values vary by run):

.. code-block:: text

   {
     "success": true,
     "final_output": "",
     "terminated_reason": "",
     "error": null,
     "trace": {
       "request_id": "",
       "trace_dir": "artifacts/examples/traces",
       "trace_path": "artifacts/examples/traces/run__.jsonl"
     }
   }

For consuming this payload and the emitted trace programmatically, see the hedged sketches in the appendix after the references.

References
----------

- `ReAct: Synergizing Reasoning and Acting in Language Models <https://arxiv.org/abs/2210.03629>`_
- `Toolformer: Language Models Can Teach Themselves to Use Tools <https://arxiv.org/abs/2302.04761>`_
- `AutoGen <https://arxiv.org/abs/2308.08155>`_
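
Appendix: Programmatic Run Verification
---------------------------------------

.. rubric:: Run and Parse the Payload

The following is a minimal sketch, not part of the example itself. It assumes the documented run command works from the repository root and that the script prints the compact JSON payload as its final non-empty stdout line; the helper name ``run_example`` is hypothetical.

.. code-block:: python

   """Minimal sketch: run the example and inspect its JSON payload.

   Assumptions (not verified against the repository):
   - the run command from this page works from the repo root;
   - the compact JSON payload is the last non-empty stdout line.
   """
   import json
   import os
   import subprocess


   def run_example(repo_root: str = ".") -> dict:
       """Hypothetical helper: invoke the example and parse its JSON output."""
       env = dict(os.environ, PYTHONPATH="src")  # mirrors the documented run command
       proc = subprocess.run(
           ["python3", "examples/agents/multi_step_code_tool_calling_agent.py"],
           cwd=repo_root,
           env=env,
           capture_output=True,
           text=True,
           check=True,
       )
       # The payload is assumed to be the last non-empty line of stdout.
       last_line = [ln for ln in proc.stdout.splitlines() if ln.strip()][-1]
       return json.loads(last_line)


   if __name__ == "__main__":
       payload = run_example()
       print("success:", payload.get("success"))
       print("trace path:", payload.get("trace", {}).get("trace_path"))

Parsing only the final stdout line keeps the sketch tolerant of any console lifecycle logs the ``Tracer`` may emit before the payload.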
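.. rubric:: Summarize the JSONL Trace

Since each run is described as emitting a JSONL trace under ``artifacts/examples/traces``, a schema-agnostic reader needs only the standard library. The sketch below assumes only the JSONL convention (one JSON object per line); the per-event schema is not specified on this page, so it merely tallies top-level keys.

.. code-block:: python

   """Minimal sketch: summarize a JSONL trace file from a run.

   Assumes only the JSONL convention (one JSON object per line); the
   per-event schema is not documented here, so the summary is generic.
   """
   import json
   import sys
   from collections import Counter
   from pathlib import Path


   def summarize_trace(trace_path: str) -> Counter:
       """Count top-level keys across all events in the trace file."""
       keys: Counter = Counter()
       for line in Path(trace_path).read_text().splitlines():
           if not line.strip():
               continue  # tolerate blank lines
           event = json.loads(line)
           keys.update(event.keys())
       return keys


   if __name__ == "__main__":
       # Pass the "trace_path" value reported in the printed payload.
       for key, count in summarize_trace(sys.argv[1]).most_common():
           print(f"{key}: {count}")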