# Two Speaker Conversation

Source: `examples/patterns/two_speaker_conversation.py`

## Introduction
AutoGen-style multi-agent conversations can externalize reasoning roles; work on human-AI collaboration by design explains why role separation matters for oversight; and AI-assisted design synthesis research motivates structured dialogue in design ideation. This example implements a two-agent conversation loop with trace visibility at each turn.
## Technical Implementation
1. Configure `Tracer` with JSONL + console output so each run emits machine-readable traces and lifecycle logs.
2. Build the runtime surface (public APIs only) and execute `TwoSpeakerConversationPattern.run(...)` with a fixed `request_id`.
3. Capture structured outputs from runtime execution and preserve termination metadata for analysis.
4. Print a compact JSON payload including `trace_info` for deterministic tests and docs examples.
```mermaid
flowchart LR
A["Input prompt or scenario"] --> B["main(): runtime wiring"]
B --> C["TwoSpeakerConversationPattern.run(...)"]
C --> D["turn-based conversation state drives each step"]
C --> E["Tracer JSONL + console events"]
D --> F["ExecutionResult/payload"]
E --> F
F --> G["Printed JSON output"]
```
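The turn-based conversation state in the diagram boils down to alternating speakers until a turn budget is exhausted. A minimal sketch of that loop under plain-Python assumptions (this is not the library's implementation; `speak_a`/`speak_b` stand in for LLM-backed speaker calls):

```python
from typing import Callable

# A speaker maps the transcript so far to its next reply string.
Speaker = Callable[[list[tuple[str, str]]], str]


def run_two_speaker(
    prompt: str,
    speak_a: Speaker,
    speak_b: Speaker,
    max_turns: int = 2,
) -> list[tuple[str, str]]:
    """Alternate two speakers for max_turns rounds, accumulating a transcript."""
    transcript: list[tuple[str, str]] = [("user", prompt)]
    for _ in range(max_turns):
        # Speaker A proposes, seeing everything said so far.
        transcript.append(("Speaker A", speak_a(transcript)))
        # Speaker B responds, seeing A's latest contribution.
        transcript.append(("Speaker B", speak_b(transcript)))
    return transcript
```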
```python
from __future__ import annotations

import json
from pathlib import Path

import design_research_agents as drag

_EXAMPLE_LLAMA_CLIENT_KWARGS = {
    "context_window": 8192,
    "startup_timeout_seconds": 180.0,
    "request_timeout_seconds": 180.0,
}


def main() -> None:
    """Run a two-speaker brainstorming loop for a serviceable device enclosure."""
    # A fixed request id keeps traces and docs output deterministic across runs.
    request_id = "example-workflow-two-speaker-conversation-design-001"
    tracer = drag.Tracer(
        enabled=True,
        trace_dir=Path("artifacts/examples/traces"),
        enable_jsonl=True,
        enable_console=True,
    )
    # Run the two-speaker conversation using the managed local client. The with
    # statement shuts the client down automatically when the example finishes.
    with drag.LlamaCppServerLLMClient(**_EXAMPLE_LLAMA_CLIENT_KWARGS) as llm_client:
        pattern = drag.TwoSpeakerConversationPattern(
            llm_client_a=llm_client,
            max_turns=2,
            speaker_a_name="Concept Designer",
            speaker_b_name="Validation Engineer",
            speaker_a_system_prompt=(
                "You are Concept Designer. Propose practical ideas for a field-serviceable sensor enclosure."
            ),
            speaker_b_system_prompt=(
                "You are Validation Engineer. Stress-test ideas for manufacturability, safety, and maintenance time."
            ),
            tracer=tracer,
        )
        result = pattern.run(
            prompt=(
                "Brainstorm a modular enclosure for a wearable biosensor. Cover sealing strategy, "
                "fastener choices, and quick battery replacement."
            ),
            request_id=request_id,
        )

    # Print the run summary as stable, sorted JSON.
    summary = result.summary()
    print(json.dumps(summary, ensure_ascii=True, indent=2, sort_keys=True))


if __name__ == "__main__":
    main()
```
## Expected Results

Run Command:

```shell
PYTHONPATH=src python3 examples/patterns/two_speaker_conversation.py
```

Example output shape (values vary by run):
```json
{
  "success": true,
  "final_output": "<example-specific payload>",
  "terminated_reason": "<string-or-null>",
  "error": null,
  "trace": {
    "request_id": "<request-id>",
    "trace_dir": "artifacts/examples/traces",
    "trace_path": "artifacts/examples/traces/run_<timestamp>_<request_id>.jsonl"
  }
}
```
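Since the payload shape is stable even when values vary, a docs test can validate the printed JSON against the expected top-level keys. A minimal sketch using only the key names shown above (the helper name is hypothetical):

```python
import json

# Top-level keys taken from the example output shape above.
REQUIRED_KEYS = {"success", "final_output", "terminated_reason", "error", "trace"}


def check_summary_shape(payload: str) -> bool:
    """Return True if payload is JSON with all required top-level keys present."""
    summary = json.loads(payload)
    return isinstance(summary, dict) and REQUIRED_KEYS.issubset(summary)
```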