
design-research-agents

  • Quickstart
  • Installation
  • VS Code Setup Guide
  • Concepts
    • Typical Workflow
    • Philosophy
    • Examples Guide
    • API
    • LLM Clients
    • Tools
    • Agents
    • Workflows
    • Patterns
    • Module Reference
    • Dependencies and Extras
    • CONTRIBUTING.md
  • GitHub


Tree Search

Source: examples/patterns/tree_search.py

Introduction

Tree of Thoughts motivates explicit branching and ranking instead of single-pass revision. This example uses dedicated generator/evaluator delegates and a bounded beam search to show search-policy behavior (expand, score, prune) in a traceable way.

Technical Implementation

  1. Configure Tracer with JSONL + console output so each run emits machine-readable traces and lifecycle logs.

  2. Build generator and evaluator delegates with DirectLLMCall and a managed LlamaCppServerLLMClient.

  3. Execute TreeSearchPattern.run(...) with explicit search controls and preserve frontier diagnostics.

  4. Print a compact JSON payload including trace_info for deterministic tests and docs examples.
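The search policy applied in step 3 can be sketched independently of the library, with plain callables standing in for the generator and evaluator delegates. The function and parameter names below mirror the pattern's `max_depth`, `branch_factor`, and `beam_width` controls but are illustrative, not the library API:

```python
from typing import Callable

def beam_search(
    root: str,
    expand: Callable[[str], list[str]],   # stands in for the generator delegate
    score: Callable[[str], float],        # stands in for the evaluator delegate
    max_depth: int = 2,
    branch_factor: int = 2,
    beam_width: int = 1,
) -> str:
    """Bounded beam search: expand each frontier node, score children, prune to the beam."""
    frontier = [(score(root), root)]
    for _ in range(max_depth):
        children = []
        for _, node in frontier:
            # Expand: ask the generator for up to branch_factor candidates per node.
            for child in expand(node)[:branch_factor]:
                # Score: ask the evaluator for a numeric score per candidate.
                children.append((score(child), child))
        if not children:
            break
        # Prune: keep only the beam_width best-scoring candidates.
        frontier = sorted(children, key=lambda sc: sc[0], reverse=True)[:beam_width]
    return max(frontier, key=lambda sc: sc[0])[1]

# Toy run: candidates are strings, and longer strings score higher.
best = beam_search(
    "a",
    expand=lambda node: [node + "x", node + "y"],
    score=len,
)
print(best)  # highest-scoring surviving candidate after two expansion rounds
```

With `beam_width=1` this degenerates to greedy search over the best child at each depth, which is the configuration the example uses; widening the beam keeps more alternatives alive at the cost of more evaluator calls.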

flowchart LR
    A["Input prompt or scenario"] --> B["main(): runtime wiring"]
    B --> C["TreeSearchPattern.run(...)"]
    C --> D["generator/evaluator delegates expand and score candidate nodes"]
    C --> E["Tracer JSONL + console events"]
    D --> F["ExecutionResult/payload"]
    E --> F
    F --> G["Printed JSON output"]
from __future__ import annotations

import json
from pathlib import Path

from design_research_agents import DirectLLMCall, LlamaCppServerLLMClient, Tracer
from design_research_agents.patterns import TreeSearchPattern

_EXAMPLE_LLAMA_CLIENT_KWARGS = {
    "model": "Qwen_Qwen3-4B-Instruct-2507-Q4_K_M.gguf",
    "hf_model_repo_id": "bartowski/Qwen_Qwen3-4B-Instruct-2507-GGUF",
    "api_model": "qwen3-4b-instruct-2507-q4km",
    "context_window": 8192,
    "startup_timeout_seconds": 240.0,
    "request_timeout_seconds": 240.0,
}


def main() -> None:
    """Run one tree-search workflow and print JSON summary."""
    # Fixed request id keeps traces and docs output deterministic across runs.
    request_id = "example-pattern-tree-search-design-001"
    tracer = Tracer(
        enabled=True,
        trace_dir=Path("artifacts/examples/traces"),
        enable_jsonl=True,
        enable_console=True,
    )
    with LlamaCppServerLLMClient(**_EXAMPLE_LLAMA_CLIENT_KWARGS) as llm_client:
        generator_delegate = DirectLLMCall(
            llm_client=llm_client,
            system_prompt=(
                "You are a search-node generator. Return JSON with key `candidates` mapped to a list of"
                " 1-2 short candidate objects. Keep output concise."
            ),
            tracer=tracer,
        )
        evaluator_delegate = DirectLLMCall(
            llm_client=llm_client,
            system_prompt=(
                "You are a search-node evaluator. Return JSON with numeric key `score` in [0,1]"
                " for the candidate provided by the user."
            ),
            tracer=tracer,
        )
        pattern = TreeSearchPattern(
            generator_delegate=generator_delegate,
            evaluator_delegate=evaluator_delegate,
            max_depth=2,
            branch_factor=2,
            beam_width=1,
            search_strategy="beam",
            tracer=tracer,
        )
        result = pattern.run(
            "Find the most robust concept architecture for a serviceable edge-device enclosure.",
            request_id=request_id,
        )
    # Print the results
    summary = result.summary()
    print(json.dumps(summary, ensure_ascii=True, indent=2, sort_keys=True))


if __name__ == "__main__":
    main()

Expected Results

Run Command

PYTHONPATH=src python3 examples/patterns/tree_search.py

Example output shape (values vary by run):

{
  "success": true,
  "final_output": "<example-specific payload>",
  "terminated_reason": "<string-or-null>",
  "error": null,
  "trace": {
    "request_id": "<request-id>",
    "trace_dir": "artifacts/examples/traces",
    "trace_path": "artifacts/examples/traces/run_<timestamp>_<request_id>.jsonl"
  }
}
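Because each run appends one JSON object per line to the trace file, the JSONL can be inspected with nothing but the standard library. The sketch below uses a synthetic file with illustrative event fields, since the tracer's actual schema is not shown on this page:

```python
import json
from pathlib import Path

def load_trace(trace_path: Path) -> list[dict]:
    """Parse a JSONL trace file: one JSON object per non-empty line."""
    events = []
    with trace_path.open(encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                events.append(json.loads(line))
    return events

# Synthetic trace file standing in for artifacts/examples/traces/run_*.jsonl;
# the "event" and "score" field names are illustrative, not the library schema.
demo = Path("demo_trace.jsonl")
demo.write_text(
    '{"event": "node_expanded", "depth": 1}\n'
    '{"event": "node_scored", "score": 0.8}\n',
    encoding="utf-8",
)
events = load_trace(demo)
print(len(events), events[0]["event"])
```

This kind of post-hoc filtering (for example, keeping only scoring events) is how the frontier diagnostics mentioned in step 3 can be reviewed after a run.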

References#

  • Tree of Thoughts

  • Plan-and-Solve Prompting

  • ReAct: Synergizing Reasoning and Acting in Language Models


© Copyright 2026, design-research-agents contributors.
