Plan Execute

Source: examples/patterns/plan_execute.py

Introduction

Plan-and-Solve separates planning from execution to reduce reasoning drift, ReAct interleaves reasoning with tool use, and AutoGen shows how these roles can be modularized across components. This example encodes planner-executor separation with tool-backed execution and deterministic trace artifacts.

Technical Implementation

  1. Configure Tracer with JSONL + console output so each run emits machine-readable traces and lifecycle logs.

  2. Build the runtime surface (public APIs only) and execute PlanExecutePattern.run(...) with a fixed request_id.

  3. Configure and invoke Toolbox integrations (core/script/MCP/callable) before assembling the final payload.

  4. Print a compact JSON payload including trace_info for deterministic tests and docs examples.
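Step 1's JSONL trace output can be sketched with the standard library alone. The event shapes and file naming below are illustrative assumptions, not the actual Tracer schema:

```python
import json
from pathlib import Path
from tempfile import mkdtemp

# Hypothetical trace events; the real Tracer event schema may differ.
request_id = "example-workflow-plan-execute-design-001"
events = [
    {"request_id": request_id, "phase": "plan", "step": 1},
    {"request_id": request_id, "phase": "execute", "step": 2},
]

trace_dir = Path(mkdtemp())
trace_path = trace_dir / f"run_demo_{request_id}.jsonl"

# JSONL: one JSON object per line, append-friendly and machine-readable.
with trace_path.open("w", encoding="utf-8") as fh:
    for event in events:
        fh.write(json.dumps(event, sort_keys=True) + "\n")

# Reading back preserves event order, which keeps trace-based tests deterministic.
loaded = [json.loads(line) for line in trace_path.read_text(encoding="utf-8").splitlines()]
print(len(loaded))  # 2
```

Because each line is an independent JSON object, a harness can tail or diff the trace file without parsing the whole run at once.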

    flowchart LR
        A["Input prompt or scenario"] --> B["main(): runtime wiring"]
        B --> C["PlanExecutePattern.run(...)"]
        C --> D["Planner and executor phases share tool/runtime state"]
        C --> E["Tracer JSONL + console events"]
        D --> F["ExecutionResult/payload"]
        E --> F
        F --> G["Printed JSON output"]
from __future__ import annotations

import json
from pathlib import Path

from design_research_agents import (
    LlamaCppServerLLMClient,
    MultiStepAgent,
    Toolbox,
    Tracer,
)
from design_research_agents.patterns import PlanExecutePattern

_EXAMPLE_LLAMA_CLIENT_KWARGS = {
    "model": "Qwen_Qwen3-4B-Instruct-2507-Q4_K_M.gguf",
    "hf_model_repo_id": "bartowski/Qwen_Qwen3-4B-Instruct-2507-GGUF",
    "api_model": "qwen3-4b-instruct-2507-q4km",
    "context_window": 8192,
    "startup_timeout_seconds": 240.0,
    "request_timeout_seconds": 240.0,
}


def main() -> None:
    """Run planner-executor orchestration with tracing."""
    # Fixed request ids keep trace paths and sample output stable for docs/tests.
    request_id = "example-workflow-plan-execute-design-001"
    tracer = Tracer(
        enabled=True,
        trace_dir=Path("artifacts/examples/traces"),
        enable_jsonl=True,
        enable_console=True,
    )
    # Run the planner/executor pattern through public runtime surfaces. The with
    # statement shuts down the managed LLM client and tool runtime automatically
    # when the example finishes.
    with Toolbox() as tool_runtime, LlamaCppServerLLMClient(**_EXAMPLE_LLAMA_CLIENT_KWARGS) as llm_client:
        executor_delegate = MultiStepAgent(
            mode="json",
            llm_client=llm_client,
            tool_runtime=tool_runtime,
            max_steps=3,
            allowed_tools=("text.word_count",),
            tracer=tracer,
        )
        workflow = PlanExecutePattern(
            llm_client=llm_client,
            tool_runtime=tool_runtime,
            executor_delegate=executor_delegate,
            max_iterations=1,
            tracer=tracer,
        )
        result = workflow.run(
            prompt=(
                "Create and execute a one-step plan that uses text.word_count to count the words "
                "in the phrase 'design system research workflow', then return only word_count."
            ),
            request_id=request_id,
        )

    # Print the summary payload as stable, machine-readable JSON.
    summary = result.summary()
    print(json.dumps(summary, ensure_ascii=True, indent=2, sort_keys=True))


if __name__ == "__main__":
    main()

Expected Results

Run Command

PYTHONPATH=src python3 examples/patterns/plan_execute.py

Example output shape (values vary by run):

{
  "success": true,
  "final_output": "<example-specific payload>",
  "terminated_reason": "<string-or-null>",
  "error": null,
  "trace": {
    "request_id": "<request-id>",
    "trace_dir": "artifacts/examples/traces",
    "trace_path": "artifacts/examples/traces/run_<timestamp>_<request_id>.jsonl"
  }
}
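A docs or test harness can consume this payload by asserting only on the stable fields and ignoring run-varying ones such as the timestamp embedded in trace_path. The concrete values below are placeholders, not real run output:

```python
import json

# Placeholder payload mirroring the shape above; values are illustrative.
raw = """
{
  "success": true,
  "final_output": "<example-specific payload>",
  "terminated_reason": null,
  "error": null,
  "trace": {
    "request_id": "example-workflow-plan-execute-design-001",
    "trace_dir": "artifacts/examples/traces",
    "trace_path": "artifacts/examples/traces/run_20250101T000000_example-workflow-plan-execute-design-001.jsonl"
  }
}
"""
summary = json.loads(raw)

# Assert on stable fields only; trace_path varies per run, so match its suffix.
assert summary["success"] is True
assert summary["error"] is None
assert summary["trace"]["trace_path"].endswith(f"{summary['trace']['request_id']}.jsonl")
print(summary["trace"]["request_id"])  # example-workflow-plan-execute-design-001
```

Keying assertions to the request_id suffix rather than the full path keeps the check valid across runs with different timestamps.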
