Tree Search#
Source: examples/patterns/tree_search.py
Introduction#
Tree of Thoughts motivates explicit branching and ranking instead of single-pass revision. This example uses dedicated generator/evaluator delegates and a bounded beam search to show search-policy behavior (expand, score, prune) in a traceable way.
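The expand/score/prune loop can be sketched independently of any LLM. The snippet below is a minimal, self-contained illustration of the beam-search policy this example follows; the function and its toy `expand`/`score` callables are hypothetical stand-ins for the generator and evaluator delegates, not the library's API.

```python
from typing import Callable, List, Tuple


def beam_search(
    root: str,
    expand: Callable[[str], List[str]],  # generator role: propose children
    score: Callable[[str], float],       # evaluator role: score in [0, 1]
    max_depth: int = 2,
    branch_factor: int = 2,
    beam_width: int = 1,
) -> Tuple[str, float]:
    """Return the best-scoring node found within the depth budget."""
    frontier: List[Tuple[str, float]] = [(root, score(root))]
    best = frontier[0]
    for _ in range(max_depth):
        children = [
            (child, score(child))
            for node, _ in frontier
            for child in expand(node)[:branch_factor]  # expand
        ]
        if not children:
            break
        children.sort(key=lambda node: node[1], reverse=True)  # score and rank
        frontier = children[:beam_width]                       # prune to the beam
        best = max(best, frontier[0], key=lambda node: node[1])
    return best


# Toy run: nodes are strings; longer strings score higher (capped at 1.0).
expand = lambda s: [s + "a", s + "b"]
score = lambda s: min(len(s) / 5, 1.0)
print(beam_search("x", expand, score))  # -> ('xaa', 0.6)
```

With `beam_width=1` this degenerates to greedy depth-limited search, which matches the conservative defaults used in the example below.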
Technical Implementation#
1. Configure `Tracer` with JSONL + console output so each run emits machine-readable traces and lifecycle logs.
2. Build generator and evaluator delegates with `DirectLLMCall` and a managed `LlamaCppServerLLMClient`.
3. Execute `TreeSearchPattern.run(...)` with explicit search controls and preserve frontier diagnostics.
4. Print a compact JSON payload including `trace_info` for deterministic tests and docs examples.
```mermaid
flowchart LR
    A["Input prompt or scenario"] --> B["main(): runtime wiring"]
    B --> C["TreeSearchPattern.run(...)"]
    C --> D["generator/evaluator delegates expand and score candidate nodes"]
    C --> E["Tracer JSONL + console events"]
    D --> F["ExecutionResult/payload"]
    E --> F
    F --> G["Printed JSON output"]
```
```python
from __future__ import annotations

import json
from pathlib import Path

from design_research_agents import DirectLLMCall, LlamaCppServerLLMClient, Tracer
from design_research_agents.patterns import TreeSearchPattern

_EXAMPLE_LLAMA_CLIENT_KWARGS = {
    "model": "Qwen_Qwen3-4B-Instruct-2507-Q4_K_M.gguf",
    "hf_model_repo_id": "bartowski/Qwen_Qwen3-4B-Instruct-2507-GGUF",
    "api_model": "qwen3-4b-instruct-2507-q4km",
    "context_window": 8192,
    "startup_timeout_seconds": 240.0,
    "request_timeout_seconds": 240.0,
}


def main() -> None:
    """Run one tree-search workflow and print JSON summary."""
    # Fixed request id keeps traces and docs output deterministic across runs.
    request_id = "example-pattern-tree-search-design-001"
    tracer = Tracer(
        enabled=True,
        trace_dir=Path("artifacts/examples/traces"),
        enable_jsonl=True,
        enable_console=True,
    )
    with LlamaCppServerLLMClient(**_EXAMPLE_LLAMA_CLIENT_KWARGS) as llm_client:
        generator_delegate = DirectLLMCall(
            llm_client=llm_client,
            system_prompt=(
                "You are a search-node generator. Return JSON with key `candidates` mapped to a list of"
                " 1-2 short candidate objects. Keep output concise."
            ),
            tracer=tracer,
        )
        evaluator_delegate = DirectLLMCall(
            llm_client=llm_client,
            system_prompt=(
                "You are a search-node evaluator. Return JSON with numeric key `score` in [0,1]"
                " for the candidate provided by the user."
            ),
            tracer=tracer,
        )
        pattern = TreeSearchPattern(
            generator_delegate=generator_delegate,
            evaluator_delegate=evaluator_delegate,
            max_depth=2,
            branch_factor=2,
            beam_width=1,
            search_strategy="beam",
            tracer=tracer,
        )
        result = pattern.run(
            "Find the most robust concept architecture for a serviceable edge-device enclosure.",
            request_id=request_id,
        )
        # Print the results
        summary = result.summary()
        print(json.dumps(summary, ensure_ascii=True, indent=2, sort_keys=True))


if __name__ == "__main__":
    main()
```
Expected Results#
Run command:

```bash
PYTHONPATH=src python3 examples/patterns/tree_search.py
```
Example output shape (values vary by run):
```json
{
  "success": true,
  "final_output": "<example-specific payload>",
  "terminated_reason": "<string-or-null>",
  "error": null,
  "trace": {
    "request_id": "<request-id>",
    "trace_dir": "artifacts/examples/traces",
    "trace_path": "artifacts/examples/traces/run_<timestamp>_<request_id>.jsonl"
  }
}
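The JSONL trace file referenced in `trace_path` holds one JSON object per line, so it can be inspected with a few lines of Python. The reader below is a generic sketch; the event field names written to the demo file are invented for illustration and are not the tracer's actual schema.

```python
import json
from pathlib import Path


def load_trace_events(trace_path: Path) -> list:
    """Parse one JSON object per non-empty line of a JSONL trace file."""
    events = []
    for line in trace_path.read_text(encoding="utf-8").splitlines():
        if line.strip():
            events.append(json.loads(line))
    return events


# Toy trace standing in for artifacts/examples/traces/run_<timestamp>_<request_id>.jsonl;
# the "event" keys below are hypothetical, not the tracer's real field names.
demo = Path("demo_trace.jsonl")
demo.write_text(
    '{"event": "node_expanded", "depth": 1}\n'
    '{"event": "node_scored", "score": 0.8}\n',
    encoding="utf-8",
)
events = load_trace_events(demo)
print([e["event"] for e in events])  # -> ['node_expanded', 'node_scored']
```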