Direct LLM Compiled Execution
Source: examples/agents/direct_llm_compiled_execution.py
Introduction
Compiled delegate execution is useful when you want to inspect the bound workflow and tracing metadata
before running it. This example makes the intermediate CompiledExecution object explicit so the compile
step itself is visible and testable.
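The compile/run split can be sketched in plain Python. This is a hypothetical stand-in, not the library's implementation: MiniCompiledExecution and compile_call are illustrative names, while the delegate_name and request_id attributes mirror the fields the real CompiledExecution exposes.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)
class MiniCompiledExecution:
    # Hypothetical stand-in for the library's CompiledExecution: it binds
    # the delegate name, the request id, and a zero-argument runner.
    delegate_name: str
    request_id: str
    runner: Callable[[], str]

    def run(self) -> str:
        # Execution is deferred until run() is called, so the bound
        # metadata can be inspected and asserted on first.
        return self.runner()


def compile_call(prompt: str, request_id: str) -> MiniCompiledExecution:
    # The "compile" step captures everything needed to run later.
    return MiniCompiledExecution(
        delegate_name="DirectLLMCall",
        request_id=request_id,
        runner=lambda: f"response to: {prompt}",
    )


compiled = compile_call("hello", "req-001")
# The compiled object is inspectable before anything executes.
assert compiled.delegate_name == "DirectLLMCall"
print(compiled.run())
```

Because the intermediate object is a plain value, tests can verify the bound metadata without ever invoking the model.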
Technical Implementation
1. Configure Tracer with JSONL + console output so each run emits machine-readable traces and lifecycle logs.
2. Build the runtime surface (public APIs only) and execute DirectLLMCall.compile(...) to obtain CompiledExecution.
3. Validate the compiled wrapper shape, then call CompiledExecution.run() with the bound request metadata.
4. Print a compact JSON payload that includes compile metadata alongside the normalized execution summary.
flowchart LR
A["Input prompt or scenario"] --> B["main(): runtime wiring"]
B --> C["DirectLLMCall.compile(...)"]
C --> D["CompiledExecution"]
D --> E["CompiledExecution.run()"]
E --> F["ExecutionResult/payload"]
F --> G["Printed JSON output"]
from __future__ import annotations

import json
from pathlib import Path

from design_research_agents import (
    CompiledExecution,
    DirectLLMCall,
    LlamaCppServerLLMClient,
    Tracer,
)


def main() -> None:
    """Compile one direct model call, then run the compiled execution."""
    # Fixed request id keeps traces and docs output deterministic across runs.
    request_id = "example-direct-llm-compiled-design-001"
    tracer = Tracer(
        enabled=True,
        trace_dir=Path("artifacts/examples/traces"),
        enable_jsonl=True,
        enable_console=True,
    )
    # Run the compiled direct LLM example using public runtime APIs. The with
    # statement shuts down the managed client automatically when the example
    # finishes.
    with LlamaCppServerLLMClient() as llm_client:
        llm = DirectLLMCall(llm_client=llm_client, tracer=tracer)
        prompt = (
            "Write one sentence describing the one primary engineering specification for a "
            "field-repairable wearable sensor enclosure."
        )
        compiled: CompiledExecution = llm.compile(
            prompt=prompt,
            request_id=request_id,
        )
        result = compiled.run()

    # Print the compile metadata alongside the execution summary.
    payload = {
        "compiled_delegate_name": compiled.delegate_name,
        "compiled_request_id": compiled.request_id,
        **result.summary(),
    }
    print(json.dumps(payload, ensure_ascii=True, indent=2, sort_keys=True))


if __name__ == "__main__":
    main()
Expected Results
Run Command
PYTHONPATH=src python3 examples/agents/direct_llm_compiled_execution.py
Example output shape (values vary by run):
{
"compiled_delegate_name": "DirectLLMCall",
"compiled_request_id": "<request-id>",
"success": true,
"final_output": "<example-specific payload>",
"terminated_reason": "<string-or-null>",
"error": null,
"trace": {
"request_id": "<request-id>",
"trace_dir": "artifacts/examples/traces",
"trace_path": "artifacts/examples/traces/run_<timestamp>_<request_id>.jsonl"
}
}
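One way to sanity-check the printed payload in a test is to parse it and assert the keys shown above. A sketch, using a hand-written sample in place of real captured output; the summary keys beyond the two compiled_* fields come from the execution summary:

```python
import json

# Hand-written sample mirroring the documented output shape
# (values vary per run; final_output is elided here).
raw = """
{
  "compiled_delegate_name": "DirectLLMCall",
  "compiled_request_id": "example-direct-llm-compiled-design-001",
  "success": true,
  "final_output": "<example-specific payload>",
  "terminated_reason": null,
  "error": null
}
"""

payload = json.loads(raw)

# Compile metadata and execution summary are flattened into one object,
# so a shape check only needs to look at top-level keys.
required = {"compiled_delegate_name", "compiled_request_id", "success", "error"}
missing = required - payload.keys()
assert not missing, f"missing keys: {missing}"
assert payload["success"] is True
assert payload["error"] is None
```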