# OllamaLLMClient

`OllamaLLMClient` targets local or self-hosted Ollama chat inference.
## Default behavior

- Default managed mode: `manage_server=True`
- Default model: `qwen2.5:1.5b-instruct`
- Default managed endpoint: `http://127.0.0.1:11434`
## Constructor-first usage

```python
from design_research_agents import OllamaLLMClient
from design_research_agents.llm import LLMMessage, LLMRequest

with OllamaLLMClient() as client:
    response = client.generate(
        LLMRequest(
            messages=(LLMMessage(role="user", content="Summarize one design principle."),),
            model=client.default_model(),
        )
    )
```
Prefer the context-manager form so that managed local runtime processes shut down
deterministically. `close()` remains available for explicit lifecycle control.
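The two lifecycle styles are equivalent. A minimal sketch with a hypothetical stand-in class (not the real `OllamaLLMClient`) illustrates the pattern:

```python
class StubClient:
    """Hypothetical stand-in that models only OllamaLLMClient's lifecycle."""

    def __init__(self):
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()
        return False

    def close(self):
        # In the real client this would terminate a managed `ollama serve` process.
        self.closed = True


# Context-manager form: close() runs even if the body raises.
with StubClient() as c1:
    pass

# Explicit form: the caller is responsible for calling close().
c2 = StubClient()
try:
    pass
finally:
    c2.close()
```

Both `c1` and `c2` end up closed; the `with` form simply guarantees it without a `try/finally` at every call site.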
## Dependencies and environment

- Connect mode: install Ollama locally and run `ollama serve` yourself.
- Managed mode: starts `ollama serve` automatically using the configured `ollama_executable`.
- Optional model prefetch: `auto_pull_model=True`.
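In connect mode it can be useful to verify that a server is actually listening before constructing the client. A stdlib-only sketch, where `server_is_up` is our own helper name (not part of `OllamaLLMClient`'s API) and `GET /api/tags` is the standard Ollama endpoint that lists locally available models:

```python
import urllib.request


def server_is_up(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at base_url.

    Probes GET /api/tags, which lists locally available models.
    """
    try:
        # Any HTTP response means a server is listening at this address.
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout):
            return True
    except OSError:  # covers URLError, connection refused, and timeouts
        return False
```

For example, `server_is_up("http://127.0.0.1:11434")` returns `True` only when a local `ollama serve` is running at the default managed endpoint.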
## Examples

- `examples/clients/ollama_local_client.py`
## Attribution

- Docs: Ollama API docs
- Homepage: Ollama