Defining just the right metrics to assess a model in the context of your business is no easy feat. Over the past couple of years, we’ve introduced a number of features to help you track a variety of metrics, optimize against one or more of them, and even apply thresholds to your tradeoff metrics. Today we’re happy to announce that Metric Constraints are generally available on SigOpt to all enterprise customers. This feature is part of our ongoing commitment to augment black-box optimization by empowering our customers with “human-in-the-loop” optimization. What does this mean? You or your modelers can establish “guardrails” that ensure the evaluated outcome of a specific model fits within certain auxiliary criteria, based on prior knowledge, domain expertise, or business requirements.
For example, a machine learning engineer tuning a computer vision model might optimize for validation accuracy subject to a hardware limit on model size and an engineering constraint on inference time. With Metric Constraints, the engineer can specify the desired thresholds on inference time and model size, and SigOpt will concentrate on suggesting model configurations that meet her problem-specific criteria.
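As a rough sketch of how this scenario could be set up with the SigOpt Python client, the experiment below declares validation accuracy as the optimized metric and treats inference time and model size as constraint metrics. The parameter names, metric names, bounds, and threshold values are illustrative, not prescriptive:

```python
# Illustrative sketch with the SigOpt Python client; parameter names, metric
# names, and thresholds below are hypothetical placeholders for your own model.
from sigopt import Connection

conn = Connection(client_token="SIGOPT_API_TOKEN")  # replace with your API token

experiment = conn.experiments().create(
    name="CNN tuning with Metric Constraints",
    parameters=[
        dict(name="learning_rate", type="double", bounds=dict(min=1e-4, max=1e-1)),
        dict(name="num_filters", type="int", bounds=dict(min=16, max=128)),
    ],
    metrics=[
        # The metric SigOpt optimizes
        dict(name="validation_accuracy", objective="maximize", strategy="optimize"),
        # Constraint metrics: SigOpt concentrates on configurations that satisfy these thresholds
        dict(name="inference_time_ms", objective="minimize", strategy="constraint", threshold=50),
        dict(name="model_size_mb", objective="minimize", strategy="constraint", threshold=100),
    ],
    observation_budget=60,
)
```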
Inspecting the Experiment History for each individual metric.
With simple single-metric optimization, one way to account for these problem-specific criteria is to mark observations that don’t meet the thresholds as failed observations. This creates additional bookkeeping for the modeler and is especially cumbersome if the criteria need to be updated ad hoc during the optimization process. With Metric Constraints, modelers can adjust their thresholds at any point during the optimization with one line of code, or from the web app, without interrupting the workflow.
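Continuing from the experiment sketched above, tightening the inference-time threshold partway through the optimization might look like the following. This is a hedged sketch: the exact update payload can vary, and here we assume the full metrics list is passed with only the thresholds changing.

```python
# Hypothetical example: tighten the inference-time threshold mid-experiment
# without restarting the optimization loop.
conn.experiments(experiment.id).update(
    metrics=[
        dict(name="validation_accuracy"),              # optimized metric, unchanged
        dict(name="inference_time_ms", threshold=40),  # tightened from 50 ms
        dict(name="model_size_mb", threshold=100),     # unchanged
    ],
)
```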
Updating the threshold on a constraint metric on the Experiment Properties page.
Metric Constraints also work in conjunction with SigOpt Multimetric Experiments. For example, you can explore the tradeoff between accuracy and the inference time of the network under a constraint on the size of the network. Comparable examples exist in quantitative trading, where a trader must balance the return and risk of her portfolio subject to market-neutrality limitations.
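A sketch of that combination, again with hypothetical names, bounds, and thresholds: two metrics are optimized jointly to trace the tradeoff frontier, while a third acts as a constraint.

```python
# Hypothetical sketch: explore the accuracy vs. inference-time tradeoff
# while constraining model size. Reuses the `conn` client from above.
experiment = conn.experiments().create(
    name="Multimetric experiment with a Metric Constraint",
    parameters=[
        dict(name="learning_rate", type="double", bounds=dict(min=1e-4, max=1e-1)),
        dict(name="num_filters", type="int", bounds=dict(min=16, max=128)),
    ],
    metrics=[
        # Two optimized metrics define the tradeoff to explore
        dict(name="validation_accuracy", objective="maximize", strategy="optimize"),
        dict(name="inference_time_ms", objective="minimize", strategy="optimize"),
        # One constraint metric bounds the acceptable configurations
        dict(name="model_size_mb", objective="minimize", strategy="constraint", threshold=100),
    ],
    observation_budget=100,
)
```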
In a follow-up to this blog post, we work through an example of tuning a CNN to classify traffic signs with a constraint on network size. In particular, we compare the Metric Constraints, Multimetric Optimization, and Metric Thresholds features and show how each can be applied to address different problems and needs.
To learn more about Metric Constraints and how they fit into SigOpt’s overall approach to Metric Management, register for our webinar on Thursday, May 28, 2020, at 10 AM PT. If you’re interested in learning more about how SigOpt can serve your modelers, fill out this form to try out SigOpt today. If you’re engaged in academic research, check out our free academic usage plan by submitting this form for access (or here if it’s COVID-19 focused).
Use SigOpt free. Sign up today.