# design-research-agents
A modular framework for building and studying AI agents in engineering design workflows.
## What This Library Does
design-research-agents provides reusable abstractions for agent behavior,
tool use, workflow composition, and multi-step reasoning. It is built for
research workflows where traceability, reproducibility, and controlled
comparison matter as much as raw model capability.
Interpretable traces, explicit tool boundaries, and documented workflow contracts are core features. They make agent studies easier to reproduce, compare, and audit across experiments.
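To make the idea of an interpretable trace concrete, here is a minimal stand-in sketch of a structured, JSON-serializable trace event. The `TraceEvent` shape and field names are illustrative assumptions, not the library's actual trace API.

```python
# Illustrative only: a minimal trace-event shape, not the library's real API.
from dataclasses import dataclass, field, asdict
import json
import time

@dataclass
class TraceEvent:
    step: str      # e.g. "model_call" or "tool_call"
    payload: dict  # structured inputs/outputs for the step
    timestamp: float = field(default_factory=time.time)

events: list[TraceEvent] = []
events.append(TraceEvent("tool_call", {"tool": "search", "query": "gear ratio"}))

# A JSON-serializable trace makes runs easy to diff and audit across experiments.
trace = json.dumps([asdict(e) for e in events], indent=2)
print(trace)
```

Because every step is a plain record, traces from two experimental conditions can be compared line by line rather than by eyeballing free-form logs.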
## Highlights
- Two core agent entry points: `DirectLLMCall` and `MultiStepAgent`
- Explicit multi-step modes for `direct`, `json`, and `code` execution
- Workflow primitives for model, tool, delegate, loop, and memory steps
- Model-selection policies with local and remote catalogs
- Tool contracts and schemas for safe, structured I/O
- Tracing hooks and emitters for debugging, evaluation, and reproducibility
- Workflow-native memory and reusable reasoning patterns, including tree search, Ralph loops, nominal teams, debate, and RAG
- Runnable examples for deterministic validation and experimentation
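The "tool contracts and schemas" idea can be sketched as a boundary check that runs before a tool does. This is a hypothetical illustration under assumed names (`ToolContract`, `input_schema`); the package's actual contract types may differ.

```python
# Hypothetical sketch of a tool contract that enforces a declared input schema.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolContract:
    name: str
    input_schema: dict  # required field name -> expected Python type
    fn: Callable[[dict], dict]

    def call(self, args: dict) -> dict:
        # Validate at the tool boundary so malformed agent output fails loudly.
        for key, typ in self.input_schema.items():
            if key not in args or not isinstance(args[key], typ):
                raise TypeError(f"{self.name}: expected {key}: {typ.__name__}")
        return self.fn(args)

area = ToolContract(
    name="rect_area",
    input_schema={"w": float, "h": float},
    fn=lambda a: {"area": a["w"] * a["h"]},
)
print(area.call({"w": 2.0, "h": 3.0}))  # {'area': 6.0}
```

Rejecting malformed arguments at the boundary, rather than inside the tool, keeps failures attributable to a specific step in the trace.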
## Typical Workflow
1. Start from `DirectLLMCall` or `MultiStepAgent`, depending on the level of control you need.
2. Configure the runtime mode, tools, models, and any workflow or memory helpers.
3. Run a deterministic example or local quickstart to validate the environment.
4. Inspect traces, tool boundaries, and structured outputs for debugging and evaluation.
5. Reuse the same runtime contracts inside broader experiments and downstream analysis.
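The configure-run-inspect loop above can be sketched as a chain of steps that each transform shared state and append to a trace. Every name here (`run_steps`, `configure`, `execute`) is a stand-in for illustration and does not reflect the package's real import surface.

```python
# Stand-in sketch of a multi-step loop: each step transforms state and is traced.
def run_steps(state: dict, steps) -> dict:
    trace = []
    for step in steps:
        state = step(state)
        trace.append({"step": step.__name__, "state": dict(state)})
    state["trace"] = trace  # the full trace travels with the result
    return state

def configure(state):
    # Stand-in for choosing a runtime mode, tools, and models.
    return {**state, "mode": "json"}

def execute(state):
    # Stand-in for running a deterministic example.
    return {**state, "output": {"ok": True}}

result = run_steps({}, [configure, execute])
print(result["output"])  # {'ok': True}
```

Because the trace records the state after every step, a failed run can be replayed and diffed against a passing one, which is the reproducibility property the workflow is designed around.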
> **Note:** Start with the Quickstart to run a first agent workflow, inspect the public runtime surface, and get the package into a stable local loop before diving into the broader patterns and reference material.
## Guides
Learn the base concepts, setup flow, and execution patterns that shape a stable agent-research pipeline.
## Examples
Browse runnable examples and guided landing pages for the major public subsystems.
## Reference
Look up the stable import surface, package extras, and deeper API reference material for the runtime boundaries that matter in CI and downstream studies.
## Integration With The Ecosystem
The Design Research Collective maintains a modular ecosystem of libraries for studying human and AI design behavior.
- `design-research-agents` implements AI participants, workflows, and tool-using reasoning patterns.
- `design-research-problems` provides benchmark design tasks, prompts, grammars, and evaluators.
- `design-research-analysis` analyzes the traces, event tables, and outcomes generated during studies.
- `design-research-experiments` sits above the stack as the study-design and orchestration layer, defining hypotheses, factors, conditions, replications, and artifact flows across agents, problems, and analysis.
Together these libraries support end-to-end design research pipelines, from study design through execution and interpretation.