Multi Step JSON With Memory
Source: examples/agents/multi_step_json_with_memory.py
Introduction
Reflexion, Generative Agents, and MemGPT each emphasize that iterative performance improves when prior state is persisted and reused rather than recomputed from scratch. This example adds memory reads/writes to JSON tool-calling so multi-step behavior remains auditable across turns.
Technical Implementation
- Configure `Tracer` with JSONL + console output so each run emits machine-readable traces and lifecycle logs.
- Build the runtime surface (public APIs only) and execute `MultiStepAgent.run(...)` with a fixed `request_id`.
- Configure and invoke `Toolbox` integrations (core/script/MCP/callable) before assembling the final payload.
- Persist and query context via `SQLiteMemoryStore` to demonstrate memory-backed workflow behavior.
- Print a compact JSON payload including `trace_info` for deterministic tests and docs examples.
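The persist-and-query step above can be sketched with a small SQLite-backed store. `MiniMemoryStore` below is a hypothetical stand-in, not the real `SQLiteMemoryStore` interface from `design_research_agents`; it only illustrates namespace-scoped writes plus a naive token-overlap query, assuming the real store does something richer.

```python
import sqlite3


class MiniMemoryStore:
    """Toy namespace-scoped memory store; the real SQLiteMemoryStore API may differ."""

    def __init__(self, db_path: str = ":memory:") -> None:
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS memory (namespace TEXT, content TEXT)"
        )

    def write(self, namespace: str, records: list[dict]) -> None:
        # Mirrors the example's memory.write payload: a namespace plus content records.
        self.conn.executemany(
            "INSERT INTO memory VALUES (?, ?)",
            [(namespace, record["content"]) for record in records],
        )
        self.conn.commit()

    def query(self, namespace: str, text: str, top_k: int = 1) -> list[str]:
        # Naive relevance: rank records by shared lowercase tokens with the query.
        rows = self.conn.execute(
            "SELECT content FROM memory WHERE namespace = ?", (namespace,)
        ).fetchall()
        tokens = set(text.lower().split())
        scored = sorted(rows, key=lambda r: -len(tokens & set(r[0].lower().split())))
        return [row[0] for row in scored[:top_k]]


store = MiniMemoryStore()
store.write("design_examples", [{"content": "Prior design note: favor reusable fasteners."}])
print(store.query("design_examples", "design note", top_k=1)[0])
```

The `memory_read_top_k=1` setting in the example maps onto the `top_k` cut here: only the best-scoring record is injected back into the agent's context.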
```mermaid
flowchart LR
A["Input prompt or scenario"] --> B["main(): runtime wiring"]
B --> C["MultiStepAgent.run(...)"]
C --> D["WorkflowRuntime loop enforces explicit final-answer and max-step policy"]
C --> E["Tracer JSONL + console events"]
D --> F["ExecutionResult/payload"]
E --> F
F --> G["Printed JSON output"]
```
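The WorkflowRuntime box enforces two policies: the model must emit an explicit final answer, and the loop halts at `max_steps`. A minimal sketch of that loop follows; the step and payload shapes are illustrative assumptions, not the library's actual types, though the returned keys mirror the summary fields shown later.

```python
from typing import Callable


def run_loop(step_fn: Callable[[int], dict], max_steps: int) -> dict:
    """Drive an agent step function until it yields a final answer or steps run out."""
    for step in range(1, max_steps + 1):
        action = step_fn(step)
        if action.get("type") == "final_answer":
            # Explicit final-answer policy: only this action type ends the run successfully.
            return {"success": True, "final_output": action["content"], "terminated_reason": None}
        # Otherwise the action is a tool call; its observation would feed the next step.
    # Max-step policy: the loop never spins forever waiting for an answer.
    return {"success": False, "final_output": None, "terminated_reason": "max_steps_exceeded"}


# A scripted agent that calls a tool on step 1, then answers on step 2.
script = {
    1: {"type": "tool_call", "tool": "text.word_count"},
    2: {"type": "final_answer", "content": "12"},
}
print(run_loop(lambda s: script[s], max_steps=3))
```

With `max_steps=3` in the example, the agent has room for one memory-conditioned tool call and a final answer before the runtime terminates the run.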
```python
from __future__ import annotations

import json
from pathlib import Path

from design_research_agents import LlamaCppServerLLMClient, MultiStepAgent, Toolbox, Tracer
from design_research_agents.memory import SQLiteMemoryStore

_EXAMPLE_LLAMA_CLIENT_KWARGS = {
    "model": "Qwen_Qwen3-4B-Instruct-2507-Q4_K_M.gguf",
    "hf_model_repo_id": "bartowski/Qwen_Qwen3-4B-Instruct-2507-GGUF",
    "api_model": "qwen3-4b-instruct-2507-q4km",
    "context_window": 8192,
    "startup_timeout_seconds": 240.0,
    "request_timeout_seconds": 240.0,
}


def main() -> None:
    """Run one multi-step JSON tool call with memory retrieval and write-back."""
    # Keep the request id stable so trace filenames and test snapshots stay comparable.
    request_id = "example-multi-step-json-memory-design-001"
    tracer = Tracer(
        enabled=True,
        trace_dir=Path("artifacts/examples/traces"),
        enable_jsonl=True,
        enable_console=True,
    )
    db_path = Path("artifacts/examples/multi_step_json_with_memory.sqlite3")
    db_path.parent.mkdir(parents=True, exist_ok=True)
    # Recreate the DB per run to keep the example deterministic across repeated executions.
    if db_path.exists():
        db_path.unlink()

    # Run the memory-backed JSON example using public runtime surfaces. The with statement
    # automatically closes the tool runtime, memory store, and managed client when the example is done.
    with (
        Toolbox() as tool_runtime,
        SQLiteMemoryStore(db_path=db_path) as store,
        LlamaCppServerLLMClient(**_EXAMPLE_LLAMA_CLIENT_KWARGS) as llm_client,
    ):
        # Seed one memory item so the agent can demonstrate retrieval-conditioned behavior.
        tool_runtime.invoke_dict(
            "memory.write",
            {
                "db_path": str(db_path),
                "namespace": "design_examples",
                "records": [
                    {
                        "content": (
                            "Prior design note: target quick maintenance by minimizing tool changes and "
                            "favoring reusable fasteners."
                        )
                    }
                ],
            },
            request_id=f"{request_id}:seed_memory",
            dependencies={},
        )
        memory_agent = MultiStepAgent(
            mode="json",
            llm_client=llm_client,
            tool_runtime=tool_runtime,
            max_steps=3,
            memory_store=store,
            memory_namespace="design_examples",
            memory_read_top_k=1,
            memory_write_observations=False,
            allowed_tools=("text.word_count",),
            tracer=tracer,
        )
        result = memory_agent.run(
            (
                "Use the retrieved design note as your input, count its words with text.word_count, "
                "then return only the resulting word_count."
            ),
            request_id=request_id,
        )
        # Print the result summary as sorted, indented JSON.
        summary = result.summary()
        print(json.dumps(summary, ensure_ascii=True, indent=2, sort_keys=True))


if __name__ == "__main__":
    main()
```
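Each run appends newline-delimited JSON events to the trace file under `artifacts/examples/traces`. Such a JSONL file can be inspected with just the standard library; the event field names below are illustrative stand-ins, not the tracer's actual schema.

```python
import json
import tempfile
from pathlib import Path


def read_jsonl(path: Path) -> list[dict]:
    """Parse one JSON object per non-empty line."""
    return [json.loads(line) for line in path.read_text().splitlines() if line.strip()]


# Write a tiny stand-in trace so the sketch is self-contained.
trace = Path(tempfile.gettempdir()) / "demo_trace.jsonl"
trace.write_text('{"event": "step_start", "step": 1}\n{"event": "final_answer", "step": 2}\n')

events = read_jsonl(trace)
print([event["event"] for event in events])  # → ['step_start', 'final_answer']
```

Because JSONL keeps one event per line, a partially written trace from an interrupted run still parses up to the last complete line.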
Expected Results
Run Command
```shell
PYTHONPATH=src python3 examples/agents/multi_step_json_with_memory.py
```
Example output shape (values vary by run):
```json
{
  "success": true,
  "final_output": "<example-specific payload>",
  "terminated_reason": "<string-or-null>",
  "error": null,
  "trace": {
    "request_id": "<request-id>",
    "trace_dir": "artifacts/examples/traces",
    "trace_path": "artifacts/examples/traces/run_<timestamp>_<request_id>.jsonl"
  }
}
```
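Since values vary by run, snapshot tests should assert the payload's shape rather than exact contents. A minimal shape check against the keys shown above; `check_summary_shape` is a hypothetical helper, not part of the library.

```python
import json

# Key sets taken from the documented output shape.
EXPECTED_TOP_KEYS = {"success", "final_output", "terminated_reason", "error", "trace"}
EXPECTED_TRACE_KEYS = {"request_id", "trace_dir", "trace_path"}


def check_summary_shape(payload: str) -> bool:
    """Validate the printed summary JSON against the documented key sets."""
    summary = json.loads(payload)
    return (
        set(summary) == EXPECTED_TOP_KEYS
        and set(summary["trace"]) == EXPECTED_TRACE_KEYS
        and isinstance(summary["success"], bool)
    )


sample = json.dumps({
    "success": True,
    "final_output": "12",
    "terminated_reason": None,
    "error": None,
    "trace": {
        "request_id": "r-1",
        "trace_dir": "artifacts/examples/traces",
        "trace_path": "artifacts/examples/traces/run_x_r-1.jsonl",
    },
})
print(check_summary_shape(sample))  # → True
```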