Quickstart

This quickstart requires Python 3.12+ and assumes all commands are run from the repository root.

Create and activate a virtual environment:

python -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
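Before installing anything, it can help to confirm the interpreter meets the 3.12 floor. A minimal check (the `check_python` helper is illustrative, not part of the package):

```python
import sys

def check_python(minimum: tuple[int, int] = (3, 12)) -> bool:
    """Return True when the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= minimum

if not check_python():
    # Warn early rather than hitting a confusing error during install.
    print("warning: this quickstart assumes Python 3.12+")
```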

Path A: Hosted (fastest)

Use this when you want the shortest path to a working run.

  1. Install the default development toolchain:

make dev
  2. Set API key:

export OPENAI_API_KEY="<your-key>"
  3. Run one agent call:

from design_research_agents import DirectLLMCall, OpenAIServiceLLMClient

with OpenAIServiceLLMClient() as llm_client:
    agent = DirectLLMCall(llm_client=llm_client)
    result = agent.run("List three interview themes about onboarding friction.")
    print(result.output)
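`OpenAIServiceLLMClient` presumably picks the key up from the environment, so a small pre-flight guard can turn a missing key into an immediate, readable error. The `require_env` helper below is an illustrative sketch, not part of the package:

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, or raise with a clear message."""
    value = os.environ.get(name, "").strip()
    if not value:
        raise RuntimeError(f"{name} is not set; run `export {name}=...` first")
    return value
```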

Path B: Local (privacy-first)

Use this when you want local execution and are willing to manage local runtime/model setup.

  1. Install backend-specific extras for local inference:

pip install -e ".[dev,llama_cpp]"      # managed llama.cpp server client
# or: pip install -e ".[dev,transformers]"  # in-process transformers backend
# or: pip install -e ".[dev,mlx]"           # Apple MLX backend
# or: pip install -e ".[dev,full]"          # all optional backend extras
  2. Run one agent call with the managed llama.cpp server client:

from design_research_agents import DirectLLMCall, LlamaCppServerLLMClient

with LlamaCppServerLLMClient() as llm_client:
    agent = DirectLLMCall(llm_client=llm_client)
    result = agent.run("Summarize this study brief in five bullets.")
    print(result.output)
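Whichever backend you use, `result.output` is treated here as plain text (an assumption about the return type). A small post-processing helper like the sketch below, which assumes the model emits `-`/`*`-style bullets, can pull the individual bullet lines out of a bulleted answer:

```python
def extract_bullets(text: str) -> list[str]:
    """Collect lines that look like bullets, stripped of their markers."""
    bullets = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith(("-", "*", "•")):
            bullets.append(stripped.lstrip("-*• ").strip())
    return bullets
```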

Checks and Docs

Run the test suite and documentation checks from the repository root:

make test
make docs-check
make docs-build

Next Steps