# Quickstart
Requires Python 3.12+ and assumes you are working from the repository root.
Create and activate a virtual environment:

```bash
python -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
```
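Since the quickstart requires Python 3.12+, it can help to confirm the interpreter version before creating the venv. This is a minimal sketch, not part of the project's tooling:

```python
import sys

def meets_minimum(version_info, minimum=(3, 12)):
    """Return True when the interpreter satisfies the quickstart's minimum version."""
    return tuple(version_info[:2]) >= minimum

# Report whether the current interpreter is new enough.
print("Python OK" if meets_minimum(sys.version_info) else "Python too old: upgrade to 3.12+")
```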
## Path A: Hosted (fastest)
Use this when you want the shortest path to a working run.
Install the default development toolchain:

```bash
make dev
```
Set your API key:

```bash
export OPENAI_API_KEY="<your-key>"
```
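Before the first call, you can confirm the key is actually visible to the Python process. This sketch assumes the client reads `OPENAI_API_KEY` from the environment, which matches the export above:

```python
import os

def api_key_present(env=os.environ, name="OPENAI_API_KEY"):
    """Return True when the named variable is set and non-empty."""
    return bool(env.get(name, "").strip())

if not api_key_present():
    print("OPENAI_API_KEY is not set; export it before running the agent.")
```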
Run one agent call:

```python
from design_research_agents import DirectLLMCall, OpenAIServiceLLMClient

with OpenAIServiceLLMClient() as llm_client:
    agent = DirectLLMCall(llm_client=llm_client)
    result = agent.run("List three interview themes about onboarding friction.")
    print(result.output)
```
## Path B: Local (privacy-first)

Use this when you want local execution and are willing to manage the local runtime and model setup.
Install backend-specific extras for local inference:

```bash
pip install -e ".[dev,llama_cpp]"            # managed llama.cpp server client
# or: pip install -e ".[dev,transformers]"   # in-process transformers backend
# or: pip install -e ".[dev,mlx]"            # Apple MLX backend
# or: pip install -e ".[dev,full]"           # all optional backend extras
```
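After installing an extra, a quick way to see which optional backends landed in the environment is to probe their import names. The module names below (`llama_cpp`, `transformers`, `mlx`) are an assumption about what each extra pulls in; adjust them if your version of the project differs:

```python
import importlib.util

# Assumed import names for the optional backend extras.
BACKEND_MODULES = ("llama_cpp", "transformers", "mlx")

def installed_backends(candidates=BACKEND_MODULES):
    """Return the subset of candidate module names that are importable."""
    return [m for m in candidates if importlib.util.find_spec(m) is not None]

print("available backends:", installed_backends() or "none")
```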
Run one agent call with the managed llama.cpp server client:

```python
from design_research_agents import DirectLLMCall, LlamaCppServerLLMClient

with LlamaCppServerLLMClient() as llm_client:
    agent = DirectLLMCall(llm_client=llm_client)
    result = agent.run("Summarize this study brief in five bullets.")
    print(result.output)
```
## Checks and Docs

```bash
make test
make docs-check
make docs-build
```
## Next Steps
- Optional dependency profiles and platform notes: Dependencies and Extras
- Scenario-driven examples and expected outputs: Examples Guide
- Explore runnable examples: `examples/README.md`
- LLM client setup details: LLM Clients
- Agent behavior tradeoffs: Agents
- Workflow builder primitives: Workflows
- Prebuilt workflow implementations: Patterns
- Tool runtime and integrations: Tools