Problem Classes

Public problem-family exports.

class design_research_problems.problems.Citation(key, kind, authors, title, year, raw_text, venue=None, doi=None, formatted_text=None, url=None, provisional=False)[source]

Citation metadata for one problem entry.

authors

Structured author list used for search and summary rendering.

doi = None

Digital object identifier when known.

formatted_text = None

Optional curated display string.

key

Stable citation key, typically a BibTeX identifier.

kind

Citation format label such as bibtex or inline.

provisional = False

Whether the record is intentionally incomplete and needs follow-up.

raw_text

Canonical raw citation text.

summary_text()[source]

Return a short human-readable citation line.

Returns:

Curated citation text when available, otherwise a synthesized summary.

title

Canonical work title.

url = None

Optional canonical source URL.

venue = None

Journal, conference, or source venue when known.

year

Publication year when known.

class design_research_problems.problems.ComputableProblem(metadata, statement_markdown='', resource_bundle=None)[source]

Shared base for problems that can evaluate one candidate.

Store the shared packaged metadata and resource handle.

Parameters:
  • metadata – Shared packaged metadata.

  • statement_markdown – Canonical Markdown statement.

  • resource_bundle – Optional package-resource loader for problem assets.

abstractmethod evaluate(candidate)[source]

Evaluate one candidate in the problem’s native representation.

Parameters:

candidate – Candidate value expressed in the problem’s native input representation.

Returns:

Problem-specific evaluation result for candidate.

class design_research_problems.problems.DecisionChoiceBenchmark(key, label, top_choice_share, mean_rating, median_rating, std_rating)[source]

One empirical categorical choice benchmark entry.

key

Stable option key used for evaluation.

label

Human-readable option label.

mean_rating

Mean 0-10 source rating for this choice.

median_rating

Median 0-10 source rating for this choice.

std_rating

Sample standard deviation of the source ratings.

top_choice_share

Tie-adjusted fraction of experts whose top score includes this choice.

class design_research_problems.problems.DecisionConstraintSpec(key, label, relation, domain, expression, variables, executable)[source]

Structured description of a decision constraint.

domain

Model domain such as continuous-design.

executable

Whether the package can evaluate this constraint directly.

expression

Symbolic formula text for the constraint.

key

Stable identifier for the constraint.

label

Human-readable constraint label.

relation

Constraint relation such as <=.

variables

Referenced variable names.

class design_research_problems.problems.DecisionEvaluation(candidate_kind, candidate, candidate_label, objective_value, objective_metric, higher_is_better=True, option=None, utility=None, predicted_share=None, expected_demand_units=None, choice_key=None, choice_label=None, top_choice_share=None, mean_rating=None, median_rating=None, std_rating=None, response_count=None)[source]

Unified evaluation result for one decision candidate.

candidate

Canonical candidate representation used for evaluation.

candidate_kind

Whether the result came from a discrete or empirical decision problem.

candidate_label

Human-readable candidate label.

choice_key = None

Canonical empirical choice key when in empirical mode.

choice_label = None

Human-readable empirical choice label when in empirical mode.

expected_demand_units = None

Expected demand under the fixed market-size assumption.

higher_is_better = True

Whether larger objective values are better for ranking.

mean_rating = None

Mean 0-10 source rating for the choice.

median_rating = None

Median 0-10 source rating for the choice.

objective_metric

Metric or objective identifier used to populate objective_value.

objective_value

Objective scalar used for ranking.

option = None

Normalized option that was evaluated when in discrete mode.

predicted_share = None

Predicted logit market share against the competitor set.

response_count = None

Number of valid respondents included in the aggregate.

std_rating = None

Sample standard deviation of the source ratings.

top_choice_share = None

Tie-adjusted fraction of expert top matches.

utility = None

Discrete part-worth utility of the option.

class design_research_problems.problems.DecisionFactor(key, label, unit, levels, part_worths)[source]

One discrete factor in an explicit option space.

key

Stable factor key used in discrete option mappings.

label

Human-readable factor label.

levels

Ordered discrete levels in source order.

part_worths

Utility coefficients aligned with levels.

unit

Display unit when the source provides one.

class design_research_problems.problems.DecisionObjectiveSpec(key, label, sense, domain, expression, variables, executable)[source]

Structured description of a decision objective.

domain

Model domain such as discrete-option.

executable

Whether the package can evaluate this objective directly.

expression

Symbolic formula text for the objective.

key

Stable identifier for the objective.

label

Human-readable objective label.

sense

Optimization sense such as maximize.

variables

Referenced variable or factor names.

class design_research_problems.problems.DecisionOption(values)[source]

One explicit discrete option candidate.

values

Exact factor-key to level-value mapping.
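
The discrete-mode quantities documented above (part-worth utility of an option, logit market share against competitor profiles) can be sketched under the standard multinomial-logit assumption. The factor table and part-worth values below are invented for illustration and do not come from any packaged problem; the package's actual model details (e.g. any scale parameter) may differ.

```python
import math

# Toy factor table mirroring DecisionFactor's levels/part_worths pairing:
# factor key -> {level value -> part-worth}. Values are illustrative.
factors = {
    "battery_hours": {"8": 0.0, "12": 0.4, "16": 0.9},
    "price_usd": {"199": 0.7, "249": 0.3, "299": 0.0},
}

def utility(option: dict) -> float:
    # Sum the part-worth of the chosen level for every factor.
    return sum(factors[k][v] for k, v in option.items())

def predicted_share(option: dict, competitors: list) -> float:
    # Standard multinomial-logit share against a fixed competitor set.
    u = math.exp(utility(option))
    denom = u + sum(math.exp(utility(c)) for c in competitors)
    return u / denom

ours = {"battery_hours": "16", "price_usd": "249"}
rival = {"battery_hours": "12", "price_usd": "199"}
print(round(predicted_share(ours, [rival]), 3))  # 0.525
```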

class design_research_problems.problems.DecisionProblem(*, metadata, statement_markdown='', parameters, resource_bundle=None)[source]

Concrete decision problem with a unified candidate/evaluation workflow.

Parse the structured decision payload and cache shared lookups.

Parameters:
  • metadata – Shared packaged metadata.

  • statement_markdown – Canonical Markdown statement.

  • parameters – Structured decision payload to parse.

  • resource_bundle – Optional package-resource loader for problem assets.

Raises:

Exception – Raised when the callable encounters an invalid state.

property assumptions

Return the modeling assumptions or caveats.

Returns:

Assumption or caveat strings in source order.

best_evaluation(metric=None)[source]

Return the highest-ranked evaluation in the active mode.

Parameters:

metric – Optional metric name used for ranking; defaults to the problem's default choice metric.

Returns:

Highest-ranked evaluation for the active decision mode.

Raises:

Exception – Raised when the callable encounters an invalid state.

property candidate_count

Return the number of candidates exposed by the active decision mode.

Returns:

Number of candidates exposed by the active decision mode.

property candidate_kind

Return the active decision-candidate mode.

Returns:

Active decision-candidate mode, either discrete or empirical.

property choice_benchmarks

Return the empirical categorical choice benchmarks.

Returns:

Parsed empirical choice benchmarks in source order.

property competitor_profiles

Return the observed competitor profiles.

Returns:

Parsed competitor profiles in source order.

property constraint_specs

Return typed constraint descriptors.

Returns:

Parsed constraint specs in source order.

property constraints

Return the curated constraint descriptions.

Returns:

Constraint descriptions in source order.

property decision_variable_specs

Return typed engineering variable specifications.

Returns:

Parsed engineering variable specs in source order.

property decision_variables

Return the curated decision-variable descriptions.

Returns:

Decision-variable descriptions in source order.

property default_choice_metric

Return the default metric used for empirical choice ranking.

Returns:

Supported metric name.

evaluate(candidate)[source]

Evaluate one candidate using the active decision mode.

Parameters:

candidate – Candidate to score in the active decision mode.

Returns:

Unified decision evaluation for candidate.

Raises:

Exception – Raised when the callable encounters an invalid state.

classmethod from_manifest(manifest)[source]

Construct the problem directly from a packaged manifest.

Parameters:

manifest – Parsed manifest used to initialize the problem instance.

Returns:

Decision problem populated from the manifest data.

iter_candidates()[source]

Yield candidates in deterministic source order.

Yields:

Candidates in deterministic source order.

iter_evaluations(metric=None)[source]

Yield evaluations for every candidate in deterministic order.

Parameters:

metric – Optional metric name used for ranking; defaults to the problem's default choice metric.

Yields:

One evaluation per candidate in deterministic order.

Raises:

Exception – Raised when the callable encounters an invalid state.

property objective_specs

Return typed objective descriptors.

Returns:

Parsed objective specs in source order.

property objectives

Return the stated objective descriptions.

Returns:

Objective descriptions in source order.

property option_count

Return the total size of the explicit discrete option space.

Returns:

Product of all discrete factor cardinalities.

property option_factors

Return the typed discrete conjoint factors.

Returns:

Parsed discrete factors in source order.

rank_evaluations(metric=None)[source]

Return all candidate evaluations ranked by the active objective.

Parameters:

metric – Optional metric name used for ranking; defaults to the problem's default choice metric.

Returns:

All candidate evaluations ranked best-first by the active objective.

Raises:

Exception – Raised when the callable encounters an invalid state.
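
The ranking behavior above, together with the higher_is_better flag documented on the evaluation classes, amounts to a sign-aware sort. A minimal sketch over illustrative (candidate, objective_value) pairs, not actual package output:

```python
# Generic best-first ranking over (candidate, objective_value) pairs.
# The evaluation tuples are illustrative stand-ins.
evaluations = [("A", 3.2), ("B", 5.7), ("C", 4.1)]

def rank(evals, higher_is_better=True):
    # Sort descending when larger objective values are better,
    # ascending otherwise.
    return sorted(evals, key=lambda e: e[1], reverse=higher_is_better)

print(rank(evaluations)[0])         # ('B', 5.7) for maximization
print(rank(evaluations, False)[0])  # ('A', 3.2) for minimization
```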

render_brief(include_citation=True, citation_mode='summary')[source]

Render the decision statement plus its extracted structure.

Parameters:
  • include_citation – Whether to append bundled source citations.

  • citation_mode – Citation rendering mode for the Sources section.

Returns:

Markdown brief suitable for review or reuse.

to_mcp_server(*, server_name=None, include_citation=True, citation_mode='summary')[source]

Expose this decision problem through FastMCP.

The exported server exposes:
  • problem://design-brief resource (structured decision brief)

  • problem://decision-candidates resource (deterministic index mapping)

  • list_candidates() tool for deterministic candidate indexing

  • evaluate(choice_index) tool

  • submit_final(choice_index, justification?) tool

Parameters:
  • server_name – Optional explicit server name.

  • include_citation – Whether the brief includes citation sections.

  • citation_mode – Citation rendering mode for the brief resource.

Returns:

Configured FastMCP server.

class design_research_problems.problems.DecisionProfile(name, values)[source]

One observed alternative profile, such as a competitor.

name

Display name for the observed profile.

values

Observed factor-key to value mapping.

class design_research_problems.problems.DecisionVariableSpec(symbol, label, unit, lower_bound, upper_bound)[source]

One bounded engineering-side design variable.

label

Human-readable variable label.

lower_bound

Inclusive lower bound for the variable.

symbol

Stable symbol used in equations and references.

unit

Display unit when the source provides one.

upper_bound

Inclusive upper bound for the variable.

class design_research_problems.problems.GrammarProblem(metadata, statement_markdown='', resource_bundle=None)[source]

Abstract base for grammar-defined discrete design problems.

Store shared metadata for one grammar problem.

Parameters:
  • metadata – Shared packaged metadata for the problem.

  • statement_markdown – Human-readable problem statement.

  • resource_bundle – Optional package-resource loader.

enumerate_next_states(state)[source]

Return the legal successor states for the given state.

This convenience method supports generic grammar-family tooling that only needs the next design states, not the richer transition metadata.

Parameters:

state – Current grammar state.

Returns:

Deterministic next states in the same order as enumerate_transitions().

abstractmethod enumerate_transitions(state)[source]

Return deterministic legal transitions for the given state.

Parameters:

state – Current grammar state.

Returns:

Fully specified legal transitions in deterministic order.

abstractmethod evaluate(state)[source]

Evaluate one design state.

Parameters:

state – Grammar state to evaluate.

Returns:

Problem-specific evaluation result.

abstractmethod initial_state()[source]

Return the canonical starting state.

Returns:

Library-defined initial design state.

to_mcp_server(*, server_name=None, include_citation=True, citation_mode='summary', include_grammar_helpers=True)[source]

Expose this grammar problem through a stateful FastMCP server.

The server maintains one mutable current_state per server instance and is intended, by contract, for single-agent, single-client usage.

Parameters:
  • server_name – Optional explicit server name.

  • include_citation – Whether the design brief includes citations.

  • citation_mode – Citation rendering mode for the design brief.

  • include_grammar_helpers – Whether to include helper tools reset_design, get_design, list_transitions, and evaluate.

Returns:

Configured FastMCP server.

class design_research_problems.problems.GrammarTransition(rule_name, parameters, next_state)[source]

One deterministic grammar transition produced by a concrete rule method.

next_state

State returned by applying the rule.

parameters

Ordered keyword arguments that fully specify the rule call.

rule_name

Concrete public method name used to produce the transition.
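
The GrammarTransition shape and the enumerate_transitions()/enumerate_next_states() relationship can be sketched with a toy string-building grammar. The rule set here is invented for illustration; only the (rule_name, parameters, next_state) structure and the deterministic-ordering contract come from the documentation above.

```python
from dataclasses import dataclass

# Toy stand-in mirroring the GrammarTransition fields.
@dataclass(frozen=True)
class Transition:
    rule_name: str
    parameters: tuple
    next_state: str

def enumerate_transitions(state: str):
    # Deterministic order: append "a" before "b"; stop at length 3.
    if len(state) >= 3:
        return []
    return [Transition("append", (c,), state + c) for c in "ab"]

def enumerate_next_states(state: str):
    # Convenience view: next states only, in the same order
    # as enumerate_transitions().
    return [t.next_state for t in enumerate_transitions(state)]

print(enumerate_next_states(""))    # ['a', 'b']
print(enumerate_next_states("ab"))  # ['aba', 'abb']
```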

class design_research_problems.problems.MCPProblem(*, metadata, statement_markdown='', command, args=(), cwd=None, env=None, resource_bundle=None)[source]

Problem wrapper that ingests one external MCP server over stdio.

Store metadata and upstream stdio server launch configuration.

Parameters:
  • metadata – Shared packaged metadata.

  • statement_markdown – Canonical Markdown statement.

  • command – Executable command used to start the upstream MCP server.

  • args – Command-line arguments passed to the upstream server.

  • cwd – Optional working directory for the subprocess.

  • env – Optional environment variables merged over inherited defaults.

  • resource_bundle – Optional package-resource loader for problem assets.

classmethod from_manifest(manifest)[source]

Construct one MCP-backed problem directly from a packaged manifest.

Parameters:

manifest – Parsed manifest used to initialize the problem instance.

Returns:

Problem instance populated from the manifest data.

Raises:

ProblemEvaluationError – If the manifest parameters are invalid.

classmethod from_stdio(*, metadata, command, args=(), cwd=None, env=None, statement_markdown='', resource_bundle=None)[source]

Construct one MCP-backed problem directly from stdio launch parameters.

Parameters:
  • metadata – Shared packaged metadata.

  • command – Executable command used to start the upstream MCP server.

  • args – Command-line arguments passed to the upstream server.

  • cwd – Optional working directory for the subprocess.

  • env – Optional environment variables merged over inherited defaults.

  • statement_markdown – Canonical Markdown statement.

  • resource_bundle – Optional package-resource loader for problem assets.

Returns:

MCP problem instance with validated stdio configuration.

to_mcp_server(*, server_name=None, include_citation=True, citation_mode='summary')[source]

Expose this MCP-backed problem through a local FastMCP proxy server.

The exported server exposes the standard problem://design-brief resource and proxies upstream MCP tools one-for-one.

Parameters:
  • server_name – Optional explicit server name.

  • include_citation – Whether the design brief includes citations.

  • citation_mode – Citation rendering mode for the design brief.

Returns:

Configured FastMCP server.

Raises:

ProblemEvaluationError – If discovered upstream tool schemas are not compatible with keyword-argument proxy wrapping.

class design_research_problems.problems.OptimizationEvaluation(x, objective_value, total_constraint_violation, max_constraint_violation, is_feasible, higher_is_better=False)[source]

Standardized evaluation result for one optimization candidate.

higher_is_better = False

Whether larger objective values are better for ranking.

is_feasible

Whether x is feasible under the default tolerance.

max_constraint_violation

Largest single bound or constraint violation.

objective_value

Objective value at x.

total_constraint_violation

Sum of all bound and constraint violations.

x

Evaluated candidate vector.

class design_research_problems.problems.OptimizationProblem(metadata, statement_markdown='', resource_bundle=None)[source]

Abstract base for optimization problems.

Store shared metadata and initialize empty bounds and constraints.

Parameters:
  • metadata – Shared packaged metadata for the problem.

  • statement_markdown – Human-readable problem statement.

  • resource_bundle – Optional package-resource loader.

bound_violation(variables)[source]

Return the total amount by which bounds are violated.

Parameters:

variables – Candidate design vector.

Returns:

Sum of lower- and upper-bound violations.

constraint_violation(variables)[source]

Return the total equality, inequality, and bound violation.

Parameters:

variables – Candidate design vector.

Returns:

Total scalar violation measure.
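
The violation accounting described above (bound shortfalls and excesses plus constraint violations, summed for the total and maxed for the worst case) can be sketched as follows. The bounds and the single inequality constraint are invented for illustration; the g(x) <= 0 sign convention is an assumption.

```python
# Illustrative bounds and one inequality constraint (g(x) <= 0;
# positive values count as violations).
lower = [0.0, 0.0]
upper = [10.0, 5.0]

def inequality(x):
    return [x[0] + x[1] - 12.0]

def violations(x):
    v = []
    for xi, lo, hi in zip(x, lower, upper):
        v.append(max(0.0, lo - xi))   # lower-bound shortfall
        v.append(max(0.0, xi - hi))   # upper-bound excess
    v.extend(max(0.0, g) for g in inequality(x))
    return v

x = [11.0, 4.0]
total = sum(violations(x))       # total_constraint_violation analogue
worst = max(violations(x))       # max_constraint_violation analogue
feasible = total == 0.0          # is_feasible under zero tolerance
print(total, worst, feasible)    # 4.0 3.0 False
```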

evaluate(variables)[source]

Evaluate one candidate vector without invoking the solver.

Parameters:

variables – Candidate design vector to score.

Returns:

Standardized optimization evaluation for variables.

abstractmethod generate_initial_solution(seed=None)[source]

Generate a deterministic or seeded initial solution.

Parameters:

seed – Optional random seed.

Returns:

Initial solution vector.

max_constraint_violation(variables)[source]

Return the largest single equality, inequality, or bound violation.

Parameters:

variables – Candidate design vector.

Returns:

Maximum scalar violation.

abstractmethod objective(variables)[source]

Evaluate the objective function.

Parameters:

variables – Candidate design vector.

Returns:

Scalar objective value.

abstractmethod solve(initial_solution=None, seed=None, maxiter=200)[source]

Run the problem’s representative baseline optimization routine.

Parameters:
  • initial_solution – Optional candidate vector supplied by the caller.

  • seed – Optional random seed for any stochastic baseline logic.

  • maxiter – Problem-specific iteration or candidate budget.

Returns:

Baseline optimization result.

to_mcp_server(*, server_name=None, include_citation=True, citation_mode='summary')[source]

Expose this optimization problem through FastMCP.

The exported server exposes:
  • problem://design-brief resource

  • evaluate(x) tool for stateless candidate scoring

  • submit_final(final_x, justification?) tool

Parameters:
  • server_name – Optional explicit server name.

  • include_citation – Whether the design brief includes citations.

  • citation_mode – Citation rendering mode for the design brief.

Returns:

Configured FastMCP server.

class design_research_problems.problems.Problem(metadata, statement_markdown='', resource_bundle=None)[source]

Shared documentation and resource container for one packaged problem.

Store the shared packaged metadata and resource handle.

Parameters:
  • metadata – Shared packaged metadata.

  • statement_markdown – Canonical Markdown statement.

  • resource_bundle – Optional package-resource loader for problem assets.

classmethod from_manifest(manifest)[source]

Construct one problem directly from a manifest entry.

Parameters:

manifest – Parsed manifest used to initialize the problem instance.

Returns:

Problem instance of cls populated from the manifest data.

read_asset(name)[source]

Read an asset by logical asset name.

Parameters:

name – Logical asset name declared on the problem metadata.

Returns:

Raw bytes for the requested packaged asset.

Raises:
  • RuntimeError – If the problem was created without a resource bundle.

  • KeyError – If name does not match any declared asset.

render_brief(include_citation=True, citation_mode='summary')[source]

Render a human-readable problem brief.

Parameters:
  • include_citation – Whether to append citation content to the brief.

  • citation_mode – Citation rendering mode to include summary text, raw citation text, or both.

Returns:

Markdown brief ready for display or export.

static resource_bundle_from_manifest(manifest)[source]

Build a package-resource loader rooted at one manifest entry.

Parameters:

manifest – Parsed manifest describing the packaged resource location.

Returns:

Resource bundle rooted at the manifest’s package directory.

to_mcp_server(*, server_name=None, include_citation=True, citation_mode='summary')[source]

Expose the problem through a minimal FastMCP interface.

This default implementation fits text-style problems: one design-brief resource plus one free-text submit_final tool.

Parameters:
  • server_name – Optional explicit server name.

  • include_citation – Whether the design brief includes citations.

  • citation_mode – Citation rendering mode used for the design brief.

Returns:

Configured FastMCP server.

class design_research_problems.problems.ProblemAsset(name, media_type, description, resource_path)[source]

Metadata for a non-code packaged asset.

description

Short human-readable asset description.

media_type

Media type such as image/png or text/plain.

name

Logical asset name used by the public API.

resource_path

Path to the packaged resource relative to the problem directory.

class design_research_problems.problems.ProblemKind(*values)[source]

Supported high-level problem families.

DECISION = 'decision'

GRAMMAR = 'grammar'

MCP = 'mcp'

OPTIMIZATION = 'optimization'

TEXT = 'text'

class design_research_problems.problems.ProblemMetadata(problem_id, title, summary, kind, taxonomy, citations, assets, capabilities, study_suitability, implementation=None)[source]

Packaged metadata for one problem entry.

assets

Packaged non-code assets associated with the problem.

capabilities

Machine-readable implementation and packaging capabilities.

citations

Canonical source citations.

property feature_flags

Return the compatibility union of capabilities and study-suitability flags.

Returns:

Sorted union of both controlled-vocabulary flag sets.

has_feature(feature_flag)[source]

Return whether this problem advertises one feature flag.

Parameters:

feature_flag – Feature flag name to test.

Returns:

True when the normalized feature flag is present.
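
The feature_flags union and has_feature() lookup described above can be sketched as simple set operations. The flag names and the normalization rule (trimmed, lowercased) are assumptions for illustration, not the package's controlled vocabulary.

```python
# Illustrative flag sets; real entries come from the packaged metadata.
capabilities = {"evaluate", "mcp-server"}
study_suitability = {"timeboxed", "individual"}

def feature_flags():
    # Compatibility union of both controlled-vocabulary sets, sorted.
    return sorted(capabilities | study_suitability)

def has_feature(flag: str) -> bool:
    # Normalization rule assumed: trim whitespace, lowercase.
    return flag.strip().lower() in capabilities | study_suitability

print(feature_flags())
print(has_feature("MCP-Server"))  # True
```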

implementation = None

Optional module:attribute import path for executable problem types.

kind

High-level problem family.

problem_id

Stable catalog identifier.

study_suitability

Machine-readable study-use labels for the entry.

summary

Short one-paragraph description.

taxonomy

Shared descriptive taxonomy.

title

Display title shown to users.

class design_research_problems.problems.ProblemTaxonomy(formulation, convexity, design_variable_type, is_dynamic, orientation, feasibility_ratio_hint, objective_mode, constraint_nature, bounds_summary, tags, deliverable_type=None, timebox_hint_minutes=None, participants=None, evaluation_mode=None)[source]

Shared descriptive taxonomy for all problem families.

bounds_summary

Human-readable summary of bounds or design-space limits.

constraint_nature

Constraint strictness label such as hard, soft, or informal.

convexity

Convexity characterization when applicable.

deliverable_type = None

Expected participant output such as sketches or concepts.

design_variable_type

Variable domain such as continuous, discrete, or mixed.

evaluation_mode = None

Typical study outcome mode such as novelty or requirement coverage.

feasibility_ratio_hint

Optional rough estimate of the feasible design-space fraction.

formulation

High-level mathematical or representational formulation.

is_dynamic

Whether the problem changes over time during evaluation or optimization.

objective_mode

Objective structure such as single, multi, or qualitative.

orientation

Problem framing such as engineering-practical or mathematical.

participants = None

Typical participation mode such as individual or team.

tags

Searchable descriptive tags.

timebox_hint_minutes = None

Typical task duration for the prompt when known.

class design_research_problems.problems.TextProblem(metadata, statement_markdown='', resource_bundle=None)[source]

Concrete text-only problem.

Store the shared packaged metadata and resource handle.

Parameters:
  • metadata – Shared packaged metadata.

  • statement_markdown – Canonical Markdown statement.

  • resource_bundle – Optional package-resource loader for problem assets.