Contracts Modules
This page documents internal contract modules for contributor visibility. These underscored module paths are intentionally unstable.
Delegate runtime protocol shared by all concrete executable delegates.
- class design_research_agents._contracts._delegate.Delegate(*args, **kwargs)[source]
Protocol that every direct delegate implementation must satisfy.
The protocol intentionally keeps the execution contract small: one compile phase plus one non-streaming execution call.
- compile(prompt, *, request_id=None, dependencies=None)[source]
Compile one delegate run into a bound workflow execution.
- run(prompt, *, request_id=None, dependencies=None)[source]
Execute one delegate run and return the final ExecutionResult payload.
Implementations should treat prompt as the prompt text for one run. Use request_id and dependencies for run metadata and upstream dependency payloads.
- Parameters:
prompt – Prompt text for the run.
request_id – Optional caller-provided request id for tracing.
dependencies – Optional dependency payload mapping.
- Returns:
Final execution result payload.
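Because the real Delegate protocol lives in an internal module, the sketch below uses a locally defined stand-in with the same documented surface (compile plus one non-streaming run). The EchoDelegate class and its plain-dict return value are illustrative assumptions; a real implementation would return an ExecutionResult.

```python
from typing import Any, Mapping, Optional, Protocol

# Local stand-in mirroring the documented Delegate protocol surface;
# the real class lives in design_research_agents._contracts._delegate.
class Delegate(Protocol):
    def compile(self, prompt: str, *, request_id: Optional[str] = None,
                dependencies: Optional[Mapping[str, Any]] = None) -> Any: ...
    def run(self, prompt: str, *, request_id: Optional[str] = None,
            dependencies: Optional[Mapping[str, Any]] = None) -> dict: ...

class EchoDelegate:
    """Toy delegate that satisfies the protocol structurally."""
    def compile(self, prompt, *, request_id=None, dependencies=None):
        return {"prompt": prompt, "request_id": request_id}

    def run(self, prompt, *, request_id=None, dependencies=None):
        # A real delegate returns an ExecutionResult; a dict stands in here.
        return {"success": True, "output": {"final_output": prompt.upper()}}

delegate: Delegate = EchoDelegate()  # structural typing: no inheritance needed
result = delegate.run("hello", request_id="req-1")
```

Because the protocol is structural, any object exposing compile and run with these signatures satisfies it without subclassing.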
- class design_research_agents._contracts._delegate.ExecutionResult(*, success, output=<factory>, tool_results=<factory>, model_response=None, step_results=<factory>, execution_order=<factory>, metadata=<factory>)[source]
Structured output produced by one execution entrypoint.
This shape intentionally covers both agent-like executions and workflow-like executions so callers can consume one result contract everywhere.
- property error
Return terminal error payload when present.
- Returns:
Error payload from the output mapping, or None.
- execution_order
Step ids in the order they were executed for workflow-style runs.
- property final_output
Return the workflow/agent final_output payload when present.
- Returns:
Final output value from the output payload, or None.
- metadata
Additional diagnostics, runtime counters, and trace metadata.
- model_response
Final model response associated with the run, when available.
- output
Primary payload produced by the entrypoint.
- output_dict(key)[source]
Return one output value normalized to a dictionary.
- Parameters:
key – Output key to read.
- Returns:
Dictionary value when the output value is mapping-like, else {}.
- output_list(key)[source]
Return one output value normalized to a list.
- Parameters:
key – Output key to read.
- Returns:
List value when the output value is a list/tuple, else [].
- output_value(key, default=None)[source]
Return one output value by key with optional default.
- Parameters:
key – Output key to read.
default – Value returned when key is absent.
- Returns:
Output value for key when present, else default.
- step_results
Per-step results keyed by step id for workflow-style runs.
- success
True when the overall run completed without terminal failure.
- summary()[source]
Return one compact summary payload for user-facing output.
- Returns:
Compact summary payload with canonical execution fields.
- property terminated_reason
Return normalized termination reason when present.
- Returns:
Termination reason string, or None.
- to_dict()[source]
Return a JSON-serializable dictionary representation of the result.
- Returns:
Dictionary representation of the result payload.
- to_json(*, ensure_ascii=True, indent=2, sort_keys=True)[source]
Return JSON string for deterministic pretty-printing.
- Parameters:
ensure_ascii – Forwarded to json.dumps.
indent – Forwarded to json.dumps.
sort_keys – Forwarded to json.dumps.
- Returns:
JSON representation of this result.
- tool_results
Tool invocation results captured during execution, in call order.
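The accessor semantics documented above (output_value, output_dict, output_list) can be sketched with a minimal local stand-in; the real dataclass carries additional fields (tool_results, step_results, execution_order, metadata) omitted here for brevity.

```python
from dataclasses import dataclass, field
from typing import Any, Mapping

# Minimal stand-in for the documented ExecutionResult accessors.
@dataclass
class ExecutionResult:
    success: bool
    output: dict = field(default_factory=dict)

    def output_value(self, key: str, default: Any = None) -> Any:
        return self.output.get(key, default)

    def output_dict(self, key: str) -> dict:
        # Normalize to a dict: mapping-like values pass through, else {}.
        value = self.output.get(key)
        return dict(value) if isinstance(value, Mapping) else {}

    def output_list(self, key: str) -> list:
        # Normalize to a list: list/tuple values pass through, else [].
        value = self.output.get(key)
        return list(value) if isinstance(value, (list, tuple)) else []

result = ExecutionResult(success=True,
                         output={"final_output": "done", "steps": ("a", "b")})
```

The normalizing accessors let callers read optional output keys without defensive isinstance checks at every call site.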
Unified execution result contract shared by agents and workflows.
- class design_research_agents._contracts._execution.ExecutionResult(*, success, output=<factory>, tool_results=<factory>, model_response=None, step_results=<factory>, execution_order=<factory>, metadata=<factory>)[source]
Structured output produced by one execution entrypoint. This is the canonical definition of ExecutionResult; the _delegate module re-exports it, and its full API is documented above under that re-export.
Provider-agnostic LLM interfaces, payloads, and normalized error taxonomy.
These contracts are shared across agent code and backend adapters so call sites can stay provider-neutral while still supporting both chat-style and request-object-style execution paths.
- class design_research_agents._contracts._llm.BackendCapabilities(*, streaming, tool_calling, json_mode, vision, max_context_tokens)[source]
Capabilities supported by a backend.
- json_mode
Structured JSON output mode supported by backend.
- max_context_tokens
Maximum context window tokens, if known.
- streaming
Whether backend supports incremental streaming responses.
- tool_calling
Tool-calling capability mode supported by backend.
- vision
Whether backend accepts vision/image inputs.
- class design_research_agents._contracts._llm.BackendStatus(*, ok, message=None, details=None, checked_at=None)[source]
Healthcheck status returned by a backend.
- checked_at
ISO timestamp for when the healthcheck was performed.
- details
Optional structured diagnostics from healthcheck.
- message
Optional human-readable healthcheck summary.
- ok
True when backend healthcheck succeeded.
- class design_research_agents._contracts._llm.EmbeddingResult(*, vectors, model_id=None, usage=None)[source]
Embedding response payload returned by a backend.
- model_id
Model identifier used for embedding generation.
- usage
Usage counters associated with the embedding request.
- vectors
Embedding vectors in request input order.
- exception design_research_agents._contracts._llm.LLMAuthError[source]
Authentication or authorization failure raised by provider backends.
- exception design_research_agents._contracts._llm.LLMBadResponseError[source]
Raised when a provider returns an invalid or empty response payload.
- exception design_research_agents._contracts._llm.LLMCapabilityError[source]
Raised when a backend cannot satisfy required capabilities.
- class design_research_agents._contracts._llm.LLMChatParams(*, temperature=None, max_tokens=None, response_schema=None, provider_options=<factory>)[source]
Provider-neutral generation controls passed with chat requests.
- max_tokens
Maximum number of output tokens to generate.
- provider_options
Provider-specific raw options forwarded to backend adapters.
- response_schema
Optional JSON schema describing required structured output.
- temperature
Sampling temperature override for the request.
- class design_research_agents._contracts._llm.LLMClient(*args, **kwargs)[source]
Protocol implemented by provider-agnostic LLM clients.
Implementations may support one or both call styles used in this package: chat-style methods (chat/stream_chat) and request-object methods (generate/stream).
- capabilities()[source]
Return declared backend capabilities for this client.
- Returns:
Backend capability payload.
- chat(messages, *, model, params)[source]
Generate and return a full chat completion response.
- Parameters:
messages – Ordered chat messages for this completion request.
model – Target model identifier.
params – Request controls such as temperature and max token limits.
- Returns:
Normalized completion response payload.
- close()[source]
Release any client-owned resources.
Implementations that do not own external resources may implement this as a no-op so callers can use a uniform lifecycle pattern.
- config_snapshot()[source]
Return stable client/backend configuration metadata.
- Returns:
Mapping safe for diagnostics and example output.
- default_model()[source]
Return default model identifier for the configured backend.
- Returns:
Model identifier used when no model is supplied on a request.
- describe()[source]
Return a composed client configuration and capability summary.
- Returns:
JSON-serializable runtime description mapping.
- generate(request)[source]
Generate and return a full response from a request object.
- Parameters:
request – Provider-neutral request payload.
- Returns:
Normalized completion response payload.
- server_snapshot()[source]
Return managed-server metadata when this client owns a server.
- Returns:
Server metadata mapping, or None when not managed.
- stream(request)[source]
Stream a response from a request object.
- Parameters:
request – Provider-neutral request payload.
- Returns:
Iterator of normalized response deltas.
- stream_chat(messages, *, model, params)[source]
Generate a streaming chat completion event sequence.
- Parameters:
messages – Ordered chat messages for this completion request.
model – Target model identifier.
params – Request controls such as temperature and max token limits.
- Returns:
Iterator of normalized streaming completion events.
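A chat-style call against this protocol can be sketched with toy payload classes; StaticClient and its canned response are illustrative assumptions, not the package's shipped client, but they follow the documented chat/default_model/close surface.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-ins for the documented payload shapes; the real
# classes live in design_research_agents._contracts._llm.
@dataclass
class LLMMessage:
    role: str
    content: str

@dataclass
class LLMResponse:
    text: str
    model: Optional[str] = None

class StaticClient:
    """Toy LLMClient satisfying the chat-style call shape."""
    def default_model(self) -> str:
        return "toy-model"

    def chat(self, messages, *, model, params):
        last = messages[-1].content
        return LLMResponse(text=f"echo: {last}", model=model)

    def close(self) -> None:
        pass  # no external resources: documented no-op lifecycle

client = StaticClient()
response = client.chat(
    [LLMMessage(role="user", content="ping")],
    model=client.default_model(),
    params=None,
)
client.close()
```

Callers that hold only the protocol type can swap provider-backed clients without touching call sites.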
- class design_research_agents._contracts._llm.LLMDelta(*, text_delta=None, tool_call_delta=None, usage_delta=None)[source]
Incremental delta emitted by streaming model responses.
- text_delta
Incremental text token/segment.
- tool_call_delta
Incremental tool-call payload, when provided by backend.
- usage_delta
Incremental usage counters emitted mid-stream.
- exception design_research_agents._contracts._llm.LLMError[source]
Base exception for provider-independent LLM runtime failures.
- exception design_research_agents._contracts._llm.LLMInvalidRequestError[source]
Invalid request payload or unsupported provider/backend configuration.
- class design_research_agents._contracts._llm.LLMMessage(*, role, content, name=None, tool_call_id=None, tool_name=None)[source]
One chat message in the provider-neutral completion format.
- content
Plain-text message content.
- name
Optional participant name, when supported by the provider.
- role
Message role used by chat-compatible backends.
- tool_call_id
Tool call identifier for tool-response messages.
- tool_name
Tool name associated with a tool-response message.
- class design_research_agents._contracts._llm.LLMProviderAdapter(*args, **kwargs)[source]
Backend adapter contract consumed by LLMClient implementations.
- chat(messages, *, model, params)[source]
Generate one provider-backed chat response in normalized format.
- Parameters:
messages – Ordered chat messages for this completion request.
model – Target model identifier.
params – Request controls such as temperature and max token limits.
- Returns:
Normalized completion response payload.
- provider_name
Stable provider name reported by this adapter.
- stream_chat(messages, *, model, params)[source]
Stream provider-backed chat events in normalized format.
- Parameters:
messages – Ordered chat messages for this completion request.
model – Target model identifier.
params – Request controls such as temperature and max token limits.
- Returns:
Iterator of normalized streaming completion events.
- exception design_research_agents._contracts._llm.LLMProviderError[source]
General provider runtime failure not covered by specialized subclasses.
- exception design_research_agents._contracts._llm.LLMRateLimitError[source]
Provider rate-limit failure indicating callers should throttle or retry.
- class design_research_agents._contracts._llm.LLMRequest(*, messages, model=None, temperature=None, max_tokens=None, tools=(), response_schema=None, response_format=None, metadata=<factory>, provider_options=<factory>, task_profile=None)[source]
Provider-neutral request payload for LLM generation.
- max_tokens
Maximum output token limit.
- messages
Ordered conversation/messages sent to the model.
- metadata
Caller metadata forwarded for tracing and diagnostics.
- model
Explicit model identifier override for this request.
- provider_options
Backend/provider-specific low-level options.
- response_format
Provider-specific response-format hints.
- response_schema
Optional schema for structured output validation.
- task_profile
Optional routing profile used by selector-aware clients.
- temperature
Sampling temperature override.
- tools
Tool specifications exposed for model tool-calling.
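Assembling a provider-neutral request can be sketched with a local dataclass following the documented field names; the default values and the "req-42" metadata entry are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

# Sketch of the documented LLMRequest field surface (subset).
@dataclass
class LLMRequest:
    messages: tuple
    model: Optional[str] = None
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None
    response_schema: Optional[dict] = None
    metadata: dict = field(default_factory=dict)

request = LLMRequest(
    messages=(("user", "Summarize the design notes."),),
    model="toy-model",
    temperature=0.2,
    max_tokens=512,
    # Structured-output schema validated downstream by the backend.
    response_schema={"type": "object",
                     "properties": {"summary": {"type": "string"}}},
    metadata={"request_id": "req-42"},
)
```

Keeping generation controls and routing metadata on the request object (rather than client state) lets one client serve heterogeneous callers.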
- class design_research_agents._contracts._llm.LLMResponse(*, text, model=None, provider=None, finish_reason=None, usage=None, latency_ms=None, raw_output=None, tool_calls=(), raw=None, provenance=None)[source]
Normalized non-streaming response payload returned by a backend.
- finish_reason
Provider-specific completion reason.
- latency_ms
End-to-end latency in milliseconds.
- model
Model identifier reported by the backend.
- provenance
Execution provenance metadata for auditability.
- provider
Provider/backend name that produced this response.
- raw
Canonical raw backend payload snapshot.
- raw_output
Legacy/raw backend payload for debugging.
- text
Primary response text emitted by the model.
- tool_calls
Tool calls requested by the model in this response.
- usage
Token usage counters when available.
- class design_research_agents._contracts._llm.LLMStreamEvent(*, kind, delta_text=None, response=None)[source]
One event emitted from a streaming model response.
- delta_text
Incremental text fragment for kind='delta' events.
- kind
Event kind, either incremental delta or stream completion.
- response
Final assembled response for kind='completed' events.
- class design_research_agents._contracts._llm.LLMToolResult(*, call_id, output_json, error=None)[source]
Result payload used to feed tool outputs back into model turns.
- call_id
Identifier of the originating tool call.
- error
Optional error text when tool execution failed.
- output_json
JSON-encoded tool output returned to the model.
- class design_research_agents._contracts._llm.Provenance(*, backend_name, backend_kind, model_id, base_url, started_at, completed_at, config_hash)[source]
Provenance metadata for reproducibility and audit trails.
- backend_kind
Backend implementation family/type identifier.
- backend_name
Configured backend instance name.
- base_url
Backend endpoint base URL, if network-backed.
- completed_at
ISO timestamp when request execution completed.
- config_hash
Stable hash of backend configuration inputs.
- model_id
Resolved model identifier used for the request.
- static now_iso()[source]
Return the current UTC timestamp in ISO 8601 format.
- Returns:
Current UTC timestamp as an ISO 8601 string.
- started_at
ISO timestamp when request execution started.
- class design_research_agents._contracts._llm.TaskProfile(*, priority='balanced', max_cost_usd=None, max_latency_ms=None, tags=())[source]
Routing hints for selecting a backend.
- max_cost_usd
Upper cost bound for a single request, when enforced.
- max_latency_ms
Upper latency target for a single request, when enforced.
- priority
Primary optimization objective for backend selection.
- tags
Free-form tags used by model selection policies.
- class design_research_agents._contracts._llm.ToolCall(*, name, arguments_json, call_id)[source]
Tool-call intent emitted by a backend.
- arguments_json
JSON-encoded argument payload to pass to the tool.
- call_id
Stable call identifier used to pair call and result.
- name
Resolved tool name selected by the model.
- class design_research_agents._contracts._llm.ToolCallDelta(*, call_id=None, name=None, arguments_json_delta=None)[source]
Incremental tool-call delta used for streaming responses.
- arguments_json_delta
Incremental JSON argument text for the streamed call.
- call_id
Tool call id fragment or full id as streamed by provider.
- name
Tool name fragment or full name for the streamed call.
- class design_research_agents._contracts._llm.Usage(*, prompt_tokens=None, completion_tokens=None, total_tokens=None)[source]
Token accounting information for an LLM call.
- completion_tokens
Completion token count reported by the backend.
- prompt_tokens
Prompt token count reported by the backend.
- total_tokens
Total token count if reported by the backend.
Memory contracts for pluggable retrieval and persistence backends.
- class design_research_agents._contracts._memory.MemoryRecord(*, item_id, namespace, content, metadata=<factory>, created_at=None, updated_at=None, score=None, lexical_score=None, vector_score=None)[source]
Retrieved or persisted memory record.
- content
Record content text.
- created_at
ISO timestamp for record creation.
- item_id
Stable memory record identifier.
- lexical_score
Lexical relevance score.
- metadata
Record metadata fields.
- namespace
Namespace partition that owns this record.
- score
Combined ranking score when returned by search.
- to_dict()[source]
Return a JSON-serializable dictionary representation.
- Returns:
Serialized dataclass payload.
- updated_at
ISO timestamp for last record update.
- vector_score
Vector similarity score when embeddings are available.
- class design_research_agents._contracts._memory.MemorySearchQuery(*, text, namespace='default', top_k=5, min_score=None, metadata_filters=<factory>)[source]
Structured memory search query.
- metadata_filters
Exact-match metadata filters applied before ranking.
- min_score
Optional minimum score threshold for returned matches.
- namespace
Namespace partition used for isolation.
- text
Natural language query text.
- to_dict()[source]
Return a JSON-serializable dictionary representation.
- Returns:
Serialized dataclass payload.
- top_k
Maximum number of matches to return.
- class design_research_agents._contracts._memory.MemoryStore(*args, **kwargs)[source]
Protocol implemented by memory stores used by workflows and agents.
- close()[source]
Release any store-owned resources.
Implementations that do not own external resources may implement this as a no-op so callers can use a uniform lifecycle pattern.
- search(query)[source]
Search memory records using lexical/vector relevance.
- Parameters:
query – Structured memory search query.
- Returns:
Ordered list of matching records.
- write(records, *, namespace='default')[source]
Persist one or more records and return normalized stored entries.
- Parameters:
records – Record payloads to persist.
namespace – Namespace partition to store records under.
- Returns:
Stored records including ids and timestamps.
- class design_research_agents._contracts._memory.MemoryWriteRecord(*, content, metadata=<factory>, item_id=None)[source]
One record payload to be persisted into a memory store.
- content
Primary text content to persist.
- item_id
Optional caller-supplied id for deterministic upserts.
- metadata
Optional metadata fields used for downstream filtering/ranking.
- to_dict()[source]
Return a JSON-serializable dictionary representation.
- Returns:
Serialized dataclass payload.
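The write/search/close lifecycle above can be sketched with a toy in-memory store. This simplifies the documented surface: search takes a plain text argument instead of a MemorySearchQuery, and scoring is naive substring matching rather than lexical/vector relevance.

```python
import uuid
from dataclasses import dataclass

# Minimal stand-in for the documented MemoryRecord fields (subset).
@dataclass
class MemoryRecord:
    item_id: str
    namespace: str
    content: str
    score: float = 0.0

class InMemoryStore:
    """Toy store following the MemoryStore write/search/close shape."""
    def __init__(self):
        self._records: list[MemoryRecord] = []

    def write(self, contents, *, namespace="default"):
        stored = []
        for content in contents:
            record = MemoryRecord(item_id=str(uuid.uuid4()),
                                  namespace=namespace, content=content)
            self._records.append(record)
            stored.append(record)
        return stored  # normalized entries with ids assigned

    def search(self, text, *, namespace="default", top_k=5):
        # Namespace isolation first, then naive relevance.
        hits = [r for r in self._records
                if r.namespace == namespace and text.lower() in r.content.lower()]
        return hits[:top_k]

    def close(self) -> None:
        pass  # no external resources

store = InMemoryStore()
store.write(["Latency budget is 200ms", "Use the DAG scheduler"],
            namespace="design")
matches = store.search("latency", namespace="design")
```

Namespace partitioning keeps unrelated workflows from polluting each other's retrieval results.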
Centralized termination-reason and decision-source constants.
- design_research_agents._contracts._termination.SOURCE_GUARDRAIL = 'guardrail'
Decision source for guardrail-enforced outcomes.
- design_research_agents._contracts._termination.SOURCE_INVALID_PAYLOAD = 'invalid_payload'
Decision source for invalid/unparseable structured payloads.
- design_research_agents._contracts._termination.SOURCE_MODEL = 'model'
Decision source for model-produced structured outputs.
- design_research_agents._contracts._termination.TERMINATED_APPROVED = 'approved'
Termination reason when critique loop approves the proposal.
- design_research_agents._contracts._termination.TERMINATED_COMPLETED = 'completed'
Termination reason when the execution plan completes successfully.
- design_research_agents._contracts._termination.TERMINATED_CONTINUATION_INVALID_PAYLOAD = 'continuation_invalid_payload'
Termination reason when continuation payload is invalid.
- design_research_agents._contracts._termination.TERMINATED_CONTROLLER_INVALID_PAYLOAD = 'controller_invalid_payload'
Termination reason when controller payload is invalid.
- design_research_agents._contracts._termination.TERMINATED_INVALID_ROUTE_SELECTION = 'invalid_route_selection'
Termination reason when selected route does not resolve to a known alternative.
- design_research_agents._contracts._termination.TERMINATED_INVALID_STEP_OUTPUT = 'invalid_step_output'
Termination reason when one step output payload is invalid.
- design_research_agents._contracts._termination.TERMINATED_MAX_ITERATIONS_REACHED = 'max_iterations_reached'
Termination reason when the iteration cap is reached.
- design_research_agents._contracts._termination.TERMINATED_MAX_STEPS_REACHED = 'max_steps_reached'
Termination reason when step cap is reached.
- design_research_agents._contracts._termination.TERMINATED_ROUTING_FAILURE = 'routing_failure'
Termination reason when routing fails before delegate execution.
- design_research_agents._contracts._termination.TERMINATED_STEP_FAILURE = 'step_failure'
Termination reason when one execution step fails.
- design_research_agents._contracts._termination.TERMINATED_UNKNOWN_ALTERNATIVE = 'unknown_alternative'
Termination reason when selected alternative is not registered.
- design_research_agents._contracts._termination.continuation_stopped_reason(source)[source]
Build continuation stop reason string from source label.
- Parameters:
source – Continuation decision source.
- Returns:
Prefixed continuation stop reason.
- design_research_agents._contracts._termination.stop_reason(source)[source]
Build stop reason string from source label.
- Parameters:
source – Stop decision source.
- Returns:
Prefixed stop reason.
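The source-to-reason mapping can be sketched as below. Note that this page does not show the actual prefix strings; the "stopped_by_" and "continuation_stopped_by_" prefixes here are hypothetical placeholders illustrating only the shape of the helpers.

```python
# Constants copied from the documented module values.
SOURCE_MODEL = "model"
SOURCE_GUARDRAIL = "guardrail"
SOURCE_INVALID_PAYLOAD = "invalid_payload"

def stop_reason(source: str) -> str:
    """Build a prefixed stop reason (placeholder prefix, see lead-in)."""
    return f"stopped_by_{source}"

def continuation_stopped_reason(source: str) -> str:
    """Build a prefixed continuation stop reason (placeholder prefix)."""
    return f"continuation_stopped_by_{source}"
```

Centralizing the prefixing keeps termination reasons greppable and prevents drift between workflow runtimes.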
Tool specification payloads and runtime protocol contracts.
These definitions describe how tools are registered, invoked, and reported across agents and runtimes in a provider-neutral manner.
- class design_research_agents._contracts._tools.ToolArtifact(*, path, mime)[source]
File-like artifact emitted by a tool invocation.
- mime
MIME type describing artifact content.
- path
Filesystem path to the emitted artifact.
- class design_research_agents._contracts._tools.ToolCostHints(*, token_cost_estimate=None, latency_ms_estimate=None, usd_cost_estimate=None)[source]
Approximate cost metadata associated with a tool invocation.
- latency_ms_estimate
Estimated end-to-end latency in milliseconds.
- Type:
int | None
- token_cost_estimate
Estimated token cost consumed by the tool for one invocation.
- Type:
int | None
- usd_cost_estimate
Estimated direct monetary cost in USD.
- Type:
float | None
- class design_research_agents._contracts._tools.ToolError(*, type, message)[source]
Structured tool failure details.
- message
Human-readable error message.
- type
Machine-readable error type identifier.
- class design_research_agents._contracts._tools.ToolMetadata(*, source='core', side_effects=<factory>, timeout_s=30, max_output_bytes=65536, risky=None, server_id=None)[source]
Tool source and guardrail metadata surfaced to runtimes/agents.
- max_output_bytes
Maximum serialized output size accepted by runtime wrappers.
- risky
Explicit risk marker; inferred from side effects when omitted.
- server_id
Owning server id for remote tools (for example MCP), when applicable.
- side_effects
Declared operational side effects used for policy enforcement.
- source
Origin of the tool implementation.
- timeout_s
Maximum allowed invocation time in seconds.
- class design_research_agents._contracts._tools.ToolResult(*, tool_name, ok, result=None, artifacts=(), warnings=(), error=None, metadata=None)[source]
Result payload emitted from a tool runtime invocation.
Initialize canonical tool result payload.
- Parameters:
tool_name – Name of the invoked tool.
ok – Invocation success flag.
result – Primary result payload (defaults to empty mapping).
artifacts – Raw or typed artifact entries to normalize.
warnings – Warning messages to attach to the result.
error – Error payload to normalize into ToolError.
metadata – Optional diagnostic metadata mapping.
- property artifact_paths
Return artifact paths in emitted order.
- Returns:
Tuple of artifact path strings.
- artifacts
Artifact list emitted by the invocation.
- error
Structured error details when ok is false.
- property error_message
Return the normalized tool error message when present.
- Returns:
Error message string, or None.
- metadata
Supplemental runtime metadata for diagnostics and tracing.
- ok
True when invocation succeeded.
- result
Primary tool return payload.
- result_dict()[source]
Return the primary result payload normalized to a dictionary.
- Returns:
Dictionary value when result is mapping-like, else {}.
- result_list()[source]
Return the primary result payload normalized to a list.
- Returns:
List value when result is a list/tuple, else [].
- tool_name
Name of the invoked tool.
- warnings
Non-fatal warnings produced during invocation.
- class design_research_agents._contracts._tools.ToolRuntime(*args, **kwargs)[source]
Protocol for registering and invoking named tools.
Implementations may be in-memory, remote, or hybrid, but must present the same listing and invocation interface to agents.
- close()[source]
Release any runtime-owned resources.
Implementations that do not own external resources may implement this as a no-op so callers can use a uniform lifecycle pattern.
- invoke(tool_name, input, *, request_id, dependencies)[source]
Invoke one tool using structured input and execution metadata payloads.
Implementations should avoid raising for expected tool failures and instead return ToolResult(ok=False) with error details.
- Parameters:
tool_name – Name of the tool to invoke.
input – Tool input payload mapping.
request_id – Request identifier for tracing.
dependencies – Dependency payload mapping for the tool.
- Returns:
Tool invocation result payload.
- list_tools()[source]
Return all currently registered tool specifications.
Returned specs describe every tool callable through invoke.
- Returns:
Sequence of registered tool specifications.
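The no-raise invocation contract above can be sketched with a toy dict-backed runtime; DictRuntime and the simplified ToolResult are local stand-ins, not the package's shipped runtime.

```python
from dataclasses import dataclass, field
from typing import Optional

# Simplified stand-in for the documented ToolResult payload.
@dataclass
class ToolResult:
    tool_name: str
    ok: bool
    result: dict = field(default_factory=dict)
    error: Optional[dict] = None

class DictRuntime:
    """Toy runtime: expected failures become ToolResult(ok=False)."""
    def __init__(self, tools):
        self._tools = tools  # name -> callable(input_payload) -> dict

    def list_tools(self):
        return tuple(self._tools)

    def invoke(self, tool_name, input, *, request_id, dependencies):
        handler = self._tools.get(tool_name)
        if handler is None:
            return ToolResult(tool_name=tool_name, ok=False,
                              error={"type": "unknown_tool", "message": tool_name})
        try:
            return ToolResult(tool_name=tool_name, ok=True, result=handler(input))
        except Exception as exc:  # expected failures: no raise across the boundary
            return ToolResult(tool_name=tool_name, ok=False,
                              error={"type": type(exc).__name__, "message": str(exc)})

    def close(self) -> None:
        pass

runtime = DictRuntime({"add": lambda payload: {"sum": payload["a"] + payload["b"]}})
good = runtime.invoke("add", {"a": 2, "b": 3}, request_id="r1", dependencies={})
bad = runtime.invoke("missing", {}, request_id="r1", dependencies={})
```

Returning structured failures rather than raising lets agents inspect and recover from tool errors inside a single model turn.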
- class design_research_agents._contracts._tools.ToolSideEffects(*, filesystem_read=False, filesystem_write=False, network=False, commands=())[source]
Declared side effects for one tool implementation.
- commands
Command names the tool may execute, when command execution is used.
- filesystem_read
Whether the tool reads from the filesystem.
- filesystem_write
Whether the tool writes to the filesystem.
- network
Whether the tool performs network I/O.
- class design_research_agents._contracts._tools.ToolSpec(*, name, description, input_schema, output_schema, metadata=<factory>, permissions=(), cost_hints=<factory>)[source]
Static description of a tool available to agent runtimes.
- cost_hints
Optional cost estimates used by planning heuristics.
- Type:
design_research_agents._contracts._tools.ToolCostHints
- description
Human-readable tool description used by planners and routers.
- Type:
str
- input_schema
JSON-schema-like input contract for tool calls.
- Type:
dict[str, object]
- property json_schema
Return the input schema for LLM tool-calling payloads.
- Returns:
Input JSON schema mapping for this tool.
- metadata
Operational metadata and policy hints for runtime enforcement.
- Type:
design_research_agents._contracts._tools.ToolMetadata
- name
Stable tool identifier used for invocation.
- Type:
str
- output_schema
JSON-schema-like output contract for tool results.
- Type:
dict[str, object]
- permissions
Permission tags surfaced to callers and policy layers.
- Type:
tuple[str, …]
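Constructing a spec can be sketched with a local stand-in covering a subset of the documented fields; the read_file tool, its schemas, and the flattened shape (no metadata/cost_hints) are illustrative assumptions.

```python
from dataclasses import dataclass

# Sketch of the documented ToolSpec field surface (subset).
@dataclass
class ToolSpec:
    name: str
    description: str
    input_schema: dict
    output_schema: dict
    permissions: tuple = ()

    @property
    def json_schema(self) -> dict:
        # Input schema exposed for LLM tool-calling payloads.
        return self.input_schema

spec = ToolSpec(
    name="read_file",
    description="Read a UTF-8 text file from the workspace.",
    input_schema={"type": "object",
                  "properties": {"path": {"type": "string"}},
                  "required": ["path"]},
    output_schema={"type": "object",
                   "properties": {"content": {"type": "string"}}},
    permissions=("filesystem_read",),
)
```

Declaring input and output schemas up front lets runtimes validate payloads on both sides of the invocation boundary.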
Workflow runtime contracts and typed step payloads.
- class design_research_agents._contracts._workflow.DelegateBatchCall(*, call_id, delegate, prompt, execution_mode='sequential', failure_policy='skip_dependents')[source]
One delegate call specification executed by DelegateBatchStep.
- call_id
Unique call identifier within the batch.
- delegate
Delegate object invoked for this call.
- execution_mode
Execution mode propagated when the delegate is workflow-like.
- failure_policy
Failure policy propagated when the delegate is workflow-like.
- prompt
Prompt passed to the delegate for this call.
- class design_research_agents._contracts._workflow.DelegateBatchStep(*, step_id, calls_builder, dependencies=(), fail_fast=True, artifacts_builder=None)[source]
Workflow step that executes multiple delegate invocations in sequence.
- artifacts_builder
Optional callback that extracts user-facing artifact manifests from step context.
- calls_builder
Callback that builds batch delegate call specs from runtime context.
- dependencies
Step ids that must complete before this step can run.
- fail_fast
Whether to stop executing additional calls after first failure.
- step_id
Unique step identifier used for dependency wiring and result lookup.
- class design_research_agents._contracts._workflow.DelegateRunner(*args, **kwargs)[source]
Protocol for configured orchestration chunks with fixed step topology.
- run(*, context=None, execution_mode='dag', failure_policy='skip_dependents', request_id=None, dependencies=None)[source]
Execute the configured orchestration and return aggregated results.
- Parameters:
context – Optional shared context mapping available to step builders.
execution_mode – Runtime scheduling mode (for example dag).
failure_policy – Failure behavior when a step fails.
request_id – Optional request id used for tracing and downstream calls.
dependencies – Optional dependency payload mapping exposed to steps.
- Returns:
Aggregated workflow execution result.
- class design_research_agents._contracts._workflow.DelegateStep(*, step_id, delegate, dependencies=(), prompt=None, prompt_builder=None, artifacts_builder=None)[source]
Workflow step that invokes one direct delegate.
- artifacts_builder
Optional callback that extracts user-facing artifact manifests from step context.
- delegate
Direct delegate object (agent, pattern, or workflow-like runner).
- dependencies
Step ids that must complete before this step can run.
- prompt
Static prompt passed to the delegate when prompt_builder is absent.
- prompt_builder
Optional callback that derives a prompt string from runtime step context.
- step_id
Unique step identifier used for dependency wiring and result lookup.
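Per the attribute docs above, the static prompt is used only when prompt_builder is absent. A minimal sketch of that resolution rule (resolve_prompt is a hypothetical helper, not the real step implementation):

```python
# Sketch of DelegateStep prompt resolution: prompt_builder, when provided,
# derives the prompt from runtime context and takes precedence over the
# static prompt field.
def resolve_prompt(context, *, prompt=None, prompt_builder=None):
    if prompt_builder is not None:
        return prompt_builder(context)
    return prompt

static = resolve_prompt({}, prompt="Summarize the findings.")
derived = resolve_prompt(
    {"topic": "fasteners"},
    prompt="unused fallback",
    prompt_builder=lambda ctx: f"Research {ctx['topic']}.",
)
```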
- class design_research_agents._contracts._workflow.LogicStep(*, step_id, handler, dependencies=(), route_map=None, artifacts_builder=None)[source]
Workflow step that executes deterministic local logic.
- artifacts_builder
Optional callback that extracts user-facing artifact manifests from step context.
- dependencies
Step ids that must complete before this step can run.
- handler
Deterministic local function that computes this step output.
- route_map
Optional route key to downstream-target mapping for conditional activation.
- step_id
Unique step identifier used for dependency wiring and result lookup.
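The handler/route_map pairing can be illustrated with a small stand-in: the handler deterministically computes an output, and the optional route_map maps a route key in that output to downstream targets to activate. The dispatch function and route key name below are assumptions for illustration.

```python
# Stand-in for the LogicStep contract: deterministic local logic plus
# optional conditional routing. Not the real runtime dispatch.
def run_logic_step(handler, context, route_map=None):
    output = handler(context)
    active_targets = None
    if route_map is not None:
        active_targets = route_map.get(output.get("route"))
    return output, active_targets

route_map = {"large": ["deep_review"], "small": ["quick_review"]}
handler = lambda ctx: {"route": "large" if ctx["count"] > 10 else "small"}

output, targets = run_logic_step(handler, {"count": 42}, route_map)
```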
- class design_research_agents._contracts._workflow.LoopStep(*, step_id, steps, dependencies=(), max_iterations=1, initial_state=None, continue_predicate=None, state_reducer=None, execution_mode='sequential', failure_policy='skip_dependents', artifacts_builder=None)[source]
Workflow step that executes an iterative nested workflow body.
- artifacts_builder
Optional callback that extracts user-facing artifact manifests from step context.
- continue_predicate
Predicate deciding whether to execute the next iteration.
- dependencies
Step ids that must complete before loop iteration begins.
- execution_mode
Execution mode used for nested loop-body workflow runs.
- failure_policy
Failure handling policy applied within each loop iteration run.
- initial_state
Initial loop state mapping provided to iteration context.
- max_iterations
Hard cap on the number of loop iterations.
- state_reducer
Reducer that computes next loop state from prior state and iteration result.
- step_id
Unique step identifier used for dependency wiring and result lookup.
- steps
Static loop body steps executed for each iteration.
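The interaction between max_iterations, continue_predicate, and state_reducer can be sketched as a plain loop driver. The real runtime executes nested workflow steps per iteration; run_loop and the single-function body below are illustrative stand-ins that follow the documented field semantics.

```python
# Toy LoopStep driver: run the body up to max_iterations, fold each
# iteration result into the loop state via state_reducer, and stop early
# when continue_predicate returns False.
def run_loop(body, *, max_iterations=1, initial_state=None,
             continue_predicate=None, state_reducer=None):
    state = dict(initial_state or {})
    for iteration in range(max_iterations):
        result = body(state, iteration)
        if state_reducer is not None:
            state = state_reducer(state, result)
        if continue_predicate is not None and not continue_predicate(state, result):
            break
    return state

final_state = run_loop(
    lambda state, i: {"draft_len": state.get("draft_len", 0) + 50},
    max_iterations=10,
    initial_state={"draft_len": 0},
    state_reducer=lambda state, result: {**state, **result},
    continue_predicate=lambda state, result: state["draft_len"] < 120,
)
```

Here the loop stops after three iterations, well under the max_iterations hard cap, because the predicate fails once the state crosses the threshold.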
- class design_research_agents._contracts._workflow.MemoryReadStep(*, step_id, query_builder, dependencies=(), namespace='default', top_k=5, min_score=None, artifacts_builder=None)[source]
Workflow step that reads relevant records from the memory store.
- artifacts_builder
Optional callback that extracts user-facing artifact manifests from step context.
- dependencies
Step ids that must complete before this step can run.
- min_score
Optional minimum score threshold for returned records.
- namespace
Namespace partition to read from.
- query_builder
Callback that builds query text or query payload from step context.
- step_id
Unique step identifier used for dependency wiring and result lookup.
- top_k
Maximum number of records to return.
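The top_k and min_score fields can be read as a filter-then-truncate retrieval contract. A toy sketch, assuming a naive word-overlap scorer (the real store's scoring is opaque, and read_memory is a hypothetical helper):

```python
# Stand-in for the MemoryReadStep read path: score records against the
# query, drop those below min_score, and return at most top_k, best first.
def word_overlap(query, text):
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q)

def read_memory(records, query, *, top_k=5, min_score=None):
    scored = [(word_overlap(query, text), text) for text in records]
    if min_score is not None:
        scored = [item for item in scored if item[0] >= min_score]
    scored.sort(key=lambda item: item[0], reverse=True)
    return [text for _, text in scored[:top_k]]

hits = read_memory(
    ["steel fastener torque specs", "walnut veneer samples", "fastener suppliers"],
    "fastener torque",
    top_k=2,
    min_score=0.5,
)
```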
- class design_research_agents._contracts._workflow.MemoryWriteStep(*, step_id, records_builder, dependencies=(), namespace='default', artifacts_builder=None)[source]
Workflow step that writes records into the memory store.
- artifacts_builder
Optional callback that extracts user-facing artifact manifests from step context.
- dependencies
Step ids that must complete before this step can run.
- namespace
Namespace partition to write into.
- records_builder
Callback that builds record payloads from step context.
- step_id
Unique step identifier used for dependency wiring and result lookup.
- class design_research_agents._contracts._workflow.ModelStep(*, step_id, llm_client, request_builder, dependencies=(), response_parser=None, artifacts_builder=None)[source]
Workflow step that executes one model request through an LLM client.
- artifacts_builder
Optional callback that extracts user-facing artifact manifests from step context.
- dependencies
Step ids that must complete before this step can run.
- llm_client
LLM client used to execute the request built for this step.
- request_builder
Callback that builds the LLMRequest payload from runtime context.
- response_parser
Optional callback that parses model response into structured output.
- step_id
Unique step identifier used for dependency wiring and result lookup.
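The build/execute/parse pipeline implied by the fields above can be sketched end to end. FakeClient and run_model_step are stand-ins; the real step takes an LLM client and builds an LLMRequest payload.

```python
# Sketch of the ModelStep pipeline: build the request from context, send it
# through the client, then optionally parse the response into structured
# output via response_parser.
def run_model_step(llm_client, context, request_builder, response_parser=None):
    request = request_builder(context)
    response = llm_client.complete(request)
    return response_parser(response) if response_parser else response

class FakeClient:
    """Echoing stand-in for a real LLM client."""
    def complete(self, request):
        return f"echo: {request['prompt']}"

parsed = run_model_step(
    FakeClient(),
    {"material": "oak"},
    request_builder=lambda ctx: {"prompt": f"Describe {ctx['material']}."},
    response_parser=lambda text: {"summary": text},
)
```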
- class design_research_agents._contracts._workflow.ToolStep(*, step_id, tool_name, dependencies=(), input_data=None, input_builder=None, artifacts_builder=None)[source]
Workflow step that invokes one runtime tool.
- artifacts_builder
Optional callback that extracts user-facing artifact manifests from step context.
- dependencies
Step ids that must complete before this step can run.
- input_builder
Optional callback that derives input payload from runtime step context.
- input_data
Static input payload used when input_builder is not provided.
- step_id
Unique step identifier used for dependency wiring and result lookup.
- tool_name
Registered tool name to invoke through the tool runtime.
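ToolStep follows the same static-versus-builder pattern for its input, plus a lookup of the tool by registered name. The registry dict and word_count tool below are illustrative assumptions, not the real tool runtime.

```python
# Hypothetical sketch of ToolStep dispatch: resolve the registered tool by
# name, then derive the payload from input_builder when present, falling
# back to the static input_data otherwise.
TOOLS = {"word_count": lambda payload: {"count": len(payload["text"].split())}}

def run_tool_step(tool_name, context, *, input_data=None, input_builder=None):
    payload = input_builder(context) if input_builder else input_data
    return TOOLS[tool_name](payload)

result = run_tool_step(
    "word_count",
    {"draft": "three word draft"},
    input_builder=lambda ctx: {"text": ctx["draft"]},
)
```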
- class design_research_agents._contracts._workflow.WorkflowArtifact(*, path, mime, title=None, summary=None, audience=None, producer_step_id=None, sources=(), metadata=<factory>)[source]
User-facing workflow artifact manifest entry.
- audience
Optional target audience label (for example user).
- metadata
Supplemental artifact metadata.
- mime
MIME type for artifact consumers.
- path
Filesystem path to the artifact.
- producer_step_id
Step id that produced the artifact when known.
- sources
Provenance entries describing source steps/fields.
- summary
Optional artifact summary for user-facing rendering.
- title
Optional short artifact title for UIs.
- to_dict()[source]
Return a JSON-serializable dictionary representation.
- Returns:
Dictionary representation of this artifact entry.
- class design_research_agents._contracts._workflow.WorkflowArtifactSource(*, step_id, field=None, note=None)[source]
Provenance entry describing one artifact source edge.
- field
Optional output field or source label within the step payload.
- note
Optional human-readable provenance note.
- step_id
Step id that contributed to this artifact.
- class design_research_agents._contracts._workflow.WorkflowDelegate(*args, **kwargs)[source]
Protocol for raw Workflow objects used as delegates.
- run(input=None, *, execution_mode='sequential', failure_policy='skip_dependents', request_id=None, dependencies=None)[source]
Execute a workflow object and return one aggregate result.
- Parameters:
input – Optional workflow input payload.
execution_mode – Runtime scheduling mode (for example dag).
failure_policy – Failure behavior when a step fails.
request_id – Optional request id used for tracing and downstream calls.
dependencies – Optional dependency payload mapping exposed to steps.
- Returns:
Aggregated workflow execution result.
- class design_research_agents._contracts._workflow.WorkflowRunner(*args, **kwargs)[source]
Protocol implemented by workflow runtime implementations.
- run(steps, *, context=None, execution_mode='dag', failure_policy='skip_dependents', request_id=None, dependencies=None)[source]
Execute a workflow definition and return aggregated results.
- Parameters:
steps – Workflow step sequence to execute.
context – Optional shared context mapping available to step builders.
execution_mode – Global runtime scheduling mode (for example dag).
failure_policy – Global failure behavior when a step fails.
request_id – Optional request id used for tracing and downstream calls.
dependencies – Optional dependency payload mapping exposed to steps.
- Returns:
Aggregated workflow execution result.
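The dependency wiring that WorkflowRunner consumes can be illustrated with a toy scheduler: each step names the step ids it depends on, and a step runs only once those are done. This sketch uses (step_id, dependencies, handler) tuples rather than the real step classes, and a simple ready-set loop rather than the real dag/sequential modes.

```python
# Toy dependency-ordered executor illustrating the WorkflowRunner contract.
def run_workflow(steps):
    """steps: list of (step_id, dependencies, handler) tuples; each handler
    receives the results of already-completed steps keyed by step_id."""
    done, results = set(), {}
    pending = list(steps)
    while pending:
        progressed = False
        for step in list(pending):
            step_id, dependencies, handler = step
            if set(dependencies) <= done:
                results[step_id] = handler(results)
                done.add(step_id)
                pending.remove(step)
                progressed = True
        if not progressed:
            raise ValueError("cycle or unsatisfiable dependency")
    return results

results = run_workflow([
    ("report", ("research", "review"), lambda r: r["research"] + "+reviewed"),
    ("research", (), lambda r: "findings"),
    ("review", ("research",), lambda r: "ok"),
])
```

Declaration order does not matter: "report" is listed first but runs last because its dependencies gate it.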
- class design_research_agents._contracts._workflow.WorkflowStepResult(*, step_id, status, success, output=<factory>, error=None, metadata=<factory>, artifacts=())[source]
Result payload for one workflow step execution.
- artifacts
User-facing artifact manifests produced by this step.
- error
Human-readable error message when step fails.
- property final_output
Return final_output value from output when present.
- Returns:
Final output payload value, or None.
- metadata
Supplemental runtime metadata for diagnostics or tracing.
- output
Step output payload produced by the runtime.
- output_dict(key)[source]
Return one output value normalized to a dictionary.
- Parameters:
key – Output key to read.
- Returns:
Dictionary value when the output value is mapping-like, else {}.
- output_list(key)[source]
Return one output value normalized to a list.
- Parameters:
key – Output key to read.
- Returns:
List value when the output value is a list/tuple, else [].
- output_value(key, default=None)[source]
Return one output value by key with optional default.
- Parameters:
key – Output key to read.
default – Value returned when key is absent.
- Returns:
Output value for key when present, else default.
- status
Execution status for the step.
- step_id
Step id this result belongs to.
- success
True when step completed successfully.
- property terminated_reason
Return normalized step termination reason when present.
- Returns:
Termination reason string, or None.
- to_dict()[source]
Return a JSON-serializable dictionary representation.
- Returns:
Dictionary representation of this step result.
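The output_value / output_dict / output_list normalization described above lets callers read step output defensively without type checks at every call site. A stand-in sketch of that contract over a plain output mapping (these free functions mirror the documented methods, not the real class; "mapping-like" is simplified to dict here):

```python
# Defensive accessors mirroring the WorkflowStepResult output helpers.
def output_value(output, key, default=None):
    return output.get(key, default)

def output_dict(output, key):
    value = output.get(key)
    return dict(value) if isinstance(value, dict) else {}

def output_list(output, key):
    value = output.get(key)
    return list(value) if isinstance(value, (list, tuple)) else []

output = {"final_output": "done", "items": ("a", "b"), "stats": {"n": 2}}
```

A mismatched shape degrades to an empty container rather than raising, so downstream steps can iterate or index unconditionally.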