Nominal Team#
Source: examples/patterns/nominal_team.py
Introduction#
In a nominal team, each member explores the same task independently, then all candidate outputs are handed to a dedicated evaluator for best-of-N selection. This example fans a design prompt out to three focused contributors and selects the strongest result via a structured evaluator response.
Technical Implementation#
- Configure Tracer with JSONL + console output so each run emits machine-readable traces and lifecycle logs.
- Build three focused DirectLLMCall delegates and one evaluator over a shared LlamaCppServerLLMClient.
- Execute NominalTeamPattern.run(...) with member-specific prompt templates for diverse independent drafts.
- Print a compact JSON payload including trace_info for deterministic tests and docs examples.
flowchart LR
A["Input prompt or scenario"] --> B["NominalTeamPattern.run(...)"]
B --> C["repairability / reliability / manufacturability members generate independently"]
C --> D["evaluator compares candidates and selects best member"]
D --> E["ExecutionResult/payload"]
E --> F["Printed JSON output"]
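The fan-out-then-select flow in the diagram can be sketched independently of the library. This is a minimal, hypothetical illustration: `run_nominal_team` and the toy member/evaluator callables below are stand-ins for the pattern's delegates, not the actual `NominalTeamPattern` internals.

```python
from typing import Callable, Mapping


def run_nominal_team(
    task: str,
    members: Mapping[str, Callable[[str], str]],
    evaluate: Callable[[Mapping[str, str]], str],
) -> str:
    """Fan one task out to each member independently, then let the
    evaluator pick the winning candidate (best-of-N selection)."""
    # Each member drafts without seeing the others' output.
    candidates = {member_id: draft(task) for member_id, draft in members.items()}
    # The evaluator compares all candidates and returns the best member id.
    best_member_id = evaluate(candidates)
    return candidates[best_member_id]


# Toy stand-ins: two "members" and an evaluator that prefers the longest draft.
members = {
    "a": lambda task: f"short plan for {task}",
    "b": lambda task: f"much more detailed plan for {task}",
}
best = run_nominal_team(
    "enclosure", members, lambda c: max(c, key=lambda k: len(c[k]))
)
# best == "much more detailed plan for enclosure"
```

The key property the sketch preserves is independence: no member's prompt contains another member's draft, so diversity comes from the per-member perspective rather than from debate.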
from __future__ import annotations

import json
from pathlib import Path

from design_research_agents import DirectLLMCall, LlamaCppServerLLMClient, Tracer
from design_research_agents.patterns import NominalTeamPattern

_EXAMPLE_LLAMA_CLIENT_KWARGS = {
    "model": "Qwen_Qwen3-4B-Instruct-2507-Q4_K_M.gguf",
    "hf_model_repo_id": "bartowski/Qwen_Qwen3-4B-Instruct-2507-GGUF",
    "api_model": "qwen3-4b-instruct-2507-q4km",
    "context_window": 8192,
    "startup_timeout_seconds": 240.0,
    "request_timeout_seconds": 240.0,
}


def main() -> None:
    """Run one nominal-team workflow and print a JSON summary."""
    request_id = "example-pattern-nominal-team-design-001"
    tracer = Tracer(
        enabled=True,
        trace_dir=Path("artifacts/examples/traces"),
        enable_jsonl=True,
        enable_console=True,
    )
    with LlamaCppServerLLMClient(**_EXAMPLE_LLAMA_CLIENT_KWARGS) as llm_client:
        repairability = DirectLLMCall(
            llm_client=llm_client,
            system_prompt=(
                "You are a repairability-focused designer. Return concise JSON with concept, strengths, and risks."
            ),
            tracer=tracer,
        )
        reliability = DirectLLMCall(
            llm_client=llm_client,
            system_prompt=(
                "You are a reliability-focused designer. Return concise JSON with concept, strengths, and risks."
            ),
            tracer=tracer,
        )
        manufacturability = DirectLLMCall(
            llm_client=llm_client,
            system_prompt=(
                "You are a manufacturability-focused designer. Return concise JSON with concept, strengths, and risks."
            ),
            tracer=tracer,
        )
        evaluator = DirectLLMCall(
            llm_client=llm_client,
            system_prompt=(
                "Compare the candidate concepts and return JSON with best_member_id, "
                "scores keyed by member id, and a short rationale."
            ),
            tracer=tracer,
        )

        pattern = NominalTeamPattern(
            team_members=(
                NominalTeamPattern.MemberSpec(
                    member_id="repairability",
                    delegate=repairability,
                    prompt_template=(
                        "Task: {task}\nPerspective: maximize field-service speed and tool simplicity.\n"
                        "Return concise JSON candidate output."
                    ),
                ),
                NominalTeamPattern.MemberSpec(
                    member_id="reliability",
                    delegate=reliability,
                    prompt_template=(
                        "Task: {task}\nPerspective: maximize sealing reliability and failure tolerance.\n"
                        "Return concise JSON candidate output."
                    ),
                ),
                NominalTeamPattern.MemberSpec(
                    member_id="manufacturability",
                    delegate=manufacturability,
                    prompt_template=(
                        "Task: {task}\nPerspective: maximize fabrication simplicity and repeatability.\n"
                        "Return concise JSON candidate output."
                    ),
                ),
            ),
            evaluator_delegate=evaluator,
            tracer=tracer,
        )

        result = pattern.run(
            "Propose a field-serviceable enclosure concept for a remote environmental sensor.",
            request_id=request_id,
        )
        print(json.dumps(result.summary(), ensure_ascii=True, indent=2, sort_keys=True))


if __name__ == "__main__":
    main()
Expected Results#
Run Command
PYTHONPATH=src python3 examples/patterns/nominal_team.py
Example output shape (values vary by run):
{
"success": true,
"final_output": "<selected-candidate-payload>",
"terminated_reason": "<string-or-null>",
"error": null,
"trace": {
"request_id": "<request-id>",
"trace_dir": "artifacts/examples/traces",
"trace_path": "artifacts/examples/traces/run_<timestamp>_<request_id>.jsonl"
}
}
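Because the script prints a stable, sorted JSON summary, a smoke test can assert on the payload shape without pinning model-dependent values. The payload below is illustrative (it mirrors the expected shape above with placeholder strings); the assertions check structure only.

```python
import json

# Illustrative summary matching the documented shape; values vary by run.
raw = """
{
  "success": true,
  "final_output": "<selected-candidate-payload>",
  "terminated_reason": null,
  "error": null,
  "trace": {
    "request_id": "example-pattern-nominal-team-design-001",
    "trace_dir": "artifacts/examples/traces",
    "trace_path": "artifacts/examples/traces/run_<timestamp>_<request_id>.jsonl"
  }
}
"""
summary = json.loads(raw)

# Assert on shape and internal consistency, not on model output.
assert summary["success"] is True
assert summary["error"] is None
assert set(summary["trace"]) == {"request_id", "trace_dir", "trace_path"}
assert summary["trace"]["trace_path"].startswith(summary["trace"]["trace_dir"])
```

Checking that `trace_path` sits under `trace_dir` keeps the test deterministic while still catching regressions in how the tracer names its JSONL files.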