
Beam Search

Source: examples/patterns/beam_search.py

Introduction

Tree of Thoughts motivates branching deliberation over single-chain prompting, while Plan-and-Solve and ReAct contribute complementary principles for stepwise control. This example instantiates tree-search reasoning as an inspectable pattern so branch quality can be compared under fixed runtime controls (depth, branch factor, and beam width).
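The core mechanic is the same regardless of library: expand candidates level by level, score each one, and keep only the best few before descending. The `beam_search` helper below is a minimal plain-Python sketch of that loop using the same context-dict convention the example's delegates use; it is illustrative only and not part of the package API.

```python
from collections.abc import Callable, Mapping


def beam_search(
    generator: Callable[[Mapping[str, object]], list[dict[str, object]]],
    evaluator: Callable[[Mapping[str, object]], float],
    max_depth: int,
    beam_width: int,
) -> list[dict[str, object]]:
    """Expand candidates depth by depth, keeping only the top beam_width."""
    beam: list[dict[str, object]] = [{}]  # start from an empty root candidate
    for depth in range(1, max_depth + 1):
        expanded: list[dict[str, object]] = []
        for parent in beam:
            # The generator proposes children for this depth and parent.
            expanded.extend(generator({"depth": depth, "parent": parent}))
        # The evaluator scores each candidate; prune down to the beam width.
        expanded.sort(key=lambda c: evaluator({"candidate": c}), reverse=True)
        beam = expanded[:beam_width]
    return beam
```

Run against the example's `_generator` and `_evaluator` with `beam_width=1`, this loop keeps only "modular frame" at depth 1 and "modular frame + fail-safe" at depth 2.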

Technical Implementation

  1. Configure Tracer with JSONL + console output so each run emits machine-readable traces and lifecycle logs.

  2. Build the runtime surface (public APIs only) and execute BeamSearchPattern.run(...) with a fixed request_id.

  3. Capture structured outputs from runtime execution and preserve termination metadata for analysis.

  4. Print a compact JSON payload including trace_info for deterministic tests and docs examples.

flowchart LR
    A["Input prompt or scenario"] --> B["main(): runtime wiring"]
    B --> C["BeamSearchPattern.run(...)"]
    C --> D["generator/evaluator loop expands and prunes candidate tree"]
    C --> E["Tracer JSONL + console events"]
    D --> F["ExecutionResult/payload"]
    E --> F
    F --> G["Printed JSON output"]
from __future__ import annotations

import json
from collections.abc import Mapping
from pathlib import Path

from design_research_agents import Tracer
from design_research_agents.patterns import BeamSearchPattern


def _generator(context: Mapping[str, object]) -> list[dict[str, object]]:
    """Generate deterministic design candidates for one search depth."""
    depth = int(context.get("depth", 0))
    if depth == 1:
        return [
            {"concept": "lightweight frame", "score_hint": 0.45},
            {"concept": "modular frame", "score_hint": 0.7},
        ]
    return [
        {"concept": "modular frame + fail-safe", "score_hint": 0.92},
        {"concept": "modular frame + low cost", "score_hint": 0.61},
    ]


def _evaluator(context: Mapping[str, object]) -> float:
    """Return a deterministic score for one candidate payload."""
    candidate = context.get("candidate")
    if isinstance(candidate, Mapping):
        score = candidate.get("score_hint")
        if isinstance(score, (int, float)):
            return float(score)
    return 0.0


def main() -> None:
    """Run one beam-search workflow and print a JSON summary."""
    # Fixed request id keeps traces and docs output deterministic across runs.
    request_id = "example-workflow-beam-search-design-001"
    tracer = Tracer(
        enabled=True,
        trace_dir=Path("artifacts/examples/traces"),
        enable_jsonl=True,
        enable_console=True,
    )
    pattern = BeamSearchPattern(
        generator_delegate=_generator,
        evaluator_delegate=_evaluator,
        max_depth=2,
        branch_factor=2,
        beam_width=1,
        tracer=tracer,
    )
    result = pattern.run(
        "Find the most robust concept architecture for a serviceable edge-device enclosure.",
        request_id=request_id,
    )
    # Print a stable JSON summary (sorted keys, ASCII-only) for docs and tests.
    summary = result.summary()
    print(json.dumps(summary, ensure_ascii=True, indent=2, sort_keys=True))


if __name__ == "__main__":
    main()

Expected Results

Run Command

PYTHONPATH=src python3 examples/patterns/beam_search.py

Example output shape (values vary by run):

{
  "success": true,
  "final_output": "<example-specific payload>",
  "terminated_reason": "<string-or-null>",
  "error": null,
  "trace": {
    "request_id": "<request-id>",
    "trace_dir": "artifacts/examples/traces",
    "trace_path": "artifacts/examples/traces/run_<timestamp>_<request_id>.jsonl"
  }
}
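Assuming the Tracer writes standard JSONL (one JSON object per line), the emitted trace file can be inspected after a run with a small helper. The `load_trace_events` function and the commented path below are illustrative assumptions, not part of the library API.

```python
import json
from pathlib import Path


def load_trace_events(trace_path: Path) -> list[dict]:
    """Parse a JSONL trace file: one JSON object per non-empty line."""
    events = []
    with trace_path.open(encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if line:  # tolerate blank lines between records
                events.append(json.loads(line))
    return events


# Hypothetical usage against the trace_path printed in the summary:
# events = load_trace_events(Path("artifacts/examples/traces") / "run_... .jsonl")
```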

References

  • Tree of Thoughts

  • Plan-and-Solve Prompting

  • ReAct: Synergizing Reasoning and Acting in Language Models


© Copyright 2026, design-research-agents contributors.
