Evaluation Metric Failure

Observations report the value of your metric evaluated on a given set of Assignments. Sometimes, though, a set of Assignments is not feasible to evaluate. For example, a machine runs out of memory, a chemical mixture fails to stabilize and cannot be measured, or the Assignments are simply outside the domain of the function you're trying to optimize.
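One common pattern is to wrap your metric evaluation in exception handling and report a failed Observation whenever the trial cannot produce a value. The sketch below is illustrative only: `run_experiment_trial` is a hypothetical stand-in for your own metric code, and the returned dictionary stands in for whatever reporting call your client uses.

```python
import math


def run_experiment_trial(assignments):
    # Hypothetical metric: raises ValueError for inputs
    # outside the function's domain (here, x <= 0).
    return math.log(assignments["x"])


def safe_evaluate(assignments):
    """Evaluate the metric, mapping any failure to a failed Observation."""
    try:
        value = run_experiment_trial(assignments)
        return {"failed": False, "value": value}
    except (MemoryError, ValueError, ArithmeticError):
        # The trial could not produce a measurable value, so report
        # a failed Observation instead of a metric value.
        return {"failed": True, "value": None}
```

For example, `safe_evaluate({"x": -1.0})` yields a failed Observation, while `safe_evaluate({"x": 1.0})` reports a metric value of 0.0.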

Learning Failure Regions

Let's say we have an Experiment with Bounds defined as:

{
  "parameters": [
    {
      "bounds": {
        "max": 1.0,
        "min": 0.0
      },
      "name": "x",
      "type": "double"
    },
    {
      "bounds": {
        "max": 1.0,
        "min": 0.0
      },
      "name": "y",
      "type": "double"
    }
  ]
}

with the following constraint: x + y < 1. We could provide a Suggestion with Assignments {"x": 0.3, "y": 0.8}, which violate this constraint. In situations like this, it's your responsibility to report a failed Observation, which tells us that this Suggestion's Assignments are infeasible.
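When the constraint is known in advance, you can check feasibility before running the trial at all. The sketch below assumes the example constraint x + y < 1; `handle_suggestion` is a hypothetical helper, and the returned dictionaries stand in for the Observation your client would actually report.

```python
def is_feasible(assignments):
    """Feasibility check for the example constraint x + y < 1."""
    return assignments["x"] + assignments["y"] < 1.0


def evaluate_metric(assignments):
    # Placeholder objective; replace with your real metric.
    return assignments["x"] * assignments["y"]


def handle_suggestion(assignments):
    """Report a failed Observation for infeasible Assignments,
    otherwise evaluate the metric and report its value."""
    if not is_feasible(assignments):
        return {"failed": True, "assignments": assignments}
    value = evaluate_metric(assignments)
    return {"failed": False, "assignments": assignments, "value": value}
```

With the Assignments from the example above, `handle_suggestion({"x": 0.3, "y": 0.8})` produces a failed Observation, since 0.3 + 0.8 ≥ 1.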

As you report more of these failed Observations, our internal machine learning algorithms will figure out the feasible region and only recommend points there, optimizing your Experiment within this restricted, non-rectangular domain.

Note: The complexity of the failure region and the tightness of your Parameter Bounds both affect how quickly we learn an Experiment's failure region.