MLX Local Client#

Source: examples/clients/mlx_local_client.py

Introduction#

MLX-LM provides an Apple-silicon-native local inference stack, HELM motivates reproducible evaluation baselines, and AI-assisted design synthesis work connects these runtimes to educational design workflows. This example exercises the MLX local client path and emits trace artifacts suitable for repeatable comparisons.

Technical Implementation#

  1. Configure Tracer with JSONL + console output so each run emits machine-readable traces and lifecycle logs.

  2. Build the runtime surface (public APIs only): construct MLXLocalLLMClient as a context manager so loaded model resources are released on exit.

  3. Construct an LLMRequest and call MLXLocalLLMClient.generate(...); the surrounding tracer.run_callable supplies a fixed request_id.

  4. Print a compact JSON payload including trace_info for deterministic tests and docs examples.

flowchart LR
    A["Input prompt or scenario"] --> B["main(): runtime wiring"]
    B --> C["MLXLocalLLMClient.generate(...)"]
    C --> D["LLMRequest/LLMResponse contracts wrap provider behavior"]
    C --> E["Tracer JSONL + console events"]
    D --> F["ExecutionResult/payload"]
    E --> F
    F --> G["Printed JSON output"]
    
from __future__ import annotations

import json
from pathlib import Path

import design_research_agents as drag


def _build_payload() -> dict[str, object]:
    # Run the local MLX client using public runtime APIs. The with statement
    # automatically releases any loaded model resources when the example is done.
    with drag.MLXLocalLLMClient(
        name="mlx-local-dev",
        model_id="mlx-community/Qwen2.5-1.5B-Instruct-4bit",
        default_model="mlx-community/Qwen2.5-1.5B-Instruct-4bit",
        quantization="4bit",
        max_retries=2,
        model_patterns=("mlx-community/*", "qwen2.5-*"),
    ) as client:
        description = client.describe()
        prompt = "Give one concise guideline for maintainable design telemetry schemas."
        response = client.generate(
            drag.LLMRequest(
                messages=(
                    drag.LLMMessage(role="system", content="You are a concise engineering design assistant."),
                    drag.LLMMessage(role="user", content=prompt),
                ),
                model=client.default_model(),
                temperature=0.0,
                max_tokens=120,
            )
        )
        llm_call = {
            "prompt": prompt,
            "response_text": response.text,
            "response_model": response.model,
            "response_provider": response.provider,
            "response_has_text": bool(response.text.strip()),
        }
        return {
            "client_class": description["client_class"],
            "default_model": description["default_model"],
            "llm_call": llm_call,
            "backend": description["backend"],
            "capabilities": description["capabilities"],
            "server": description["server"],
        }


def main() -> None:
    """Run a traced MLX client call and print the resulting payload."""
    # A fixed request id keeps traces and docs output deterministic across runs.
    request_id = "example-clients-mlx-local-call-001"
    tracer = drag.Tracer(
        enabled=True,
        trace_dir=Path("artifacts/examples/traces"),
        enable_jsonl=True,
        enable_console=True,
    )
    payload = tracer.run_callable(
        agent_name="ExamplesMlxClientCall",
        request_id=request_id,
        input_payload={"scenario": "mlx-local-client-call"},
        function=_build_payload,
    )
    assert isinstance(payload, dict)
    payload["example"] = "clients/mlx_local_client.py"
    payload["trace"] = tracer.trace_info(request_id)
    # Print the payload as stable, sorted JSON for docs and tests.
    print(json.dumps(payload, ensure_ascii=True, indent=2, sort_keys=True))


if __name__ == "__main__":
    main()
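
The JSONL trace referenced by trace_info can be consumed programmatically when comparing runs. A minimal reader sketch, assuming one self-contained JSON event per line (the exact event schema is not documented here):

import json
from pathlib import Path


def read_trace_events(trace_path: str) -> list[dict]:
    """Parse a JSONL trace file into a list of event dicts (one JSON object per line)."""
    events: list[dict] = []
    for line in Path(trace_path).read_text(encoding="utf-8").splitlines():
        if line.strip():  # skip blank lines defensively
            events.append(json.loads(line))
    return events


# Hypothetical usage with the trace_path reported by trace_info:
# events = read_trace_events("artifacts/examples/traces/run_<timestamp>_example-clients-mlx-local-call-001.jsonl")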

Expected Results#

Run Command

PYTHONPATH=src python3 examples/clients/mlx_local_client.py
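
To reproduce the deterministic capture shown below, the same command can be run with the example LLM mode pinned (assuming the example harness reads this variable from the environment):

DRA_EXAMPLE_LLM_MODE=deterministic PYTHONPATH=src python3 examples/clients/mlx_local_client.py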

Example output captured with DRA_EXAMPLE_LLM_MODE=deterministic (timestamps, durations, and trace filenames vary by run):

{
  "backend": {
    "base_url": null,
    "default_model": "mlx-community/Qwen2.5-1.5B-Instruct-4bit",
    "kind": "mlx_local",
    "max_retries": 2,
    "model_id": "mlx-community/Qwen2.5-1.5B-Instruct-4bit",
    "model_patterns": [
      "mlx-community/*",
      "qwen2.5-*"
    ],
    "name": "mlx-local-dev",
    "quantization": "4bit"
  },
  "capabilities": {
    "json_mode": "prompt+validate",
    "max_context_tokens": null,
    "streaming": false,
    "tool_calling": "best_effort",
    "vision": false
  },
  "client_class": "MLXLocalLLMClient",
  "default_model": "mlx-community/Qwen2.5-1.5B-Instruct-4bit",
  "example": "clients/mlx_local_client.py",
  "llm_call": {
    "prompt": "Give one concise guideline for maintainable design telemetry schemas.",
    "response_has_text": true,
    "response_model": "mlx-community/Qwen2.5-1.5B-Instruct-4bit",
    "response_provider": "example-test-monkeypatch",
    "response_text": "Keep schema fields stable, documented, and versioned for comparability."
  },
  "server": null,
  "trace": {
    "request_id": "example-clients-mlx-local-call-001",
    "trace_dir": "artifacts/examples/traces",
    "trace_path": "artifacts/examples/traces/run_20260222T162206Z_example-clients-mlx-local-call-001.jsonl"
  }
}
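
Because the request id and payload keys are fixed, a downstream test can assert on the stable fields of this document while ignoring run-varying values such as trace_path. A minimal sketch, assuming the example writes only the JSON document to stdout (console trace events going elsewhere):

import json
import os
import subprocess

# Run the example in deterministic mode and parse its stdout as JSON.
env = {**os.environ, "PYTHONPATH": "src", "DRA_EXAMPLE_LLM_MODE": "deterministic"}
proc = subprocess.run(
    ["python3", "examples/clients/mlx_local_client.py"],
    env=env,
    capture_output=True,
    text=True,
    check=True,
)
payload = json.loads(proc.stdout)

# Assert only on fields that do not vary between runs.
assert payload["client_class"] == "MLXLocalLLMClient"
assert payload["default_model"] == "mlx-community/Qwen2.5-1.5B-Instruct-4bit"
assert payload["trace"]["request_id"] == "example-clients-mlx-local-call-001"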

References#