Defining, selecting, and optimizing with the right set of metrics is critical to every modeling process, but these steps are often hard to execute well. Building a useful model requires that the modeler select the right metrics, then maximize or minimize them during training and tuning. Metrics are essential for exploring model problems, understanding model behavior, and advancing any modeling project to production. But a variety of metrics often matter at once, and it can be tough to track, analyze, and optimize all of them through the experimentation process.
At SigOpt, we’ve built out a comprehensive set of tools to help you understand, adjust, and adapt your metrics to suit your business needs, using them to advance your modeling process. To get started, set the Metric Strategy to determine whether a metric is stored, optimized (maximized or minimized), or constrained. This classification is flexible and can be adjusted run-over-run as you iterate through your model development process. We capture it in our web dashboard so you have a full history of these metrics and visualizations of their behavior through training and tuning runs.
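As a rough illustration of how the three strategies fit together, here is a minimal sketch of an experiment definition following SigOpt's documented metric format; the metric names, parameter bounds, and budget are illustrative, not prescriptive.

```python
# Sketch of an experiment definition mixing the three metric strategies.
# Names, bounds, and budget are illustrative.
experiment_meta = {
    "name": "fraud-classifier-tuning",
    "parameters": [
        {"name": "learning_rate", "type": "double",
         "bounds": {"min": 1e-5, "max": 1e-1}},
    ],
    "metrics": [
        # Optimized: the tuner searches for parameters that maximize this.
        {"name": "f1", "objective": "maximize", "strategy": "optimize"},
        # Stored: recorded with every run, but never optimized.
        {"name": "train_loss", "strategy": "store"},
        # Constrained: bounds which observations count as valid.
        {"name": "inference_ms", "objective": "minimize",
         "strategy": "constraint", "threshold": 50.0},
    ],
    "observation_budget": 30,
}

# Summarize how each metric is classified.
strategies = {m["name"]: m["strategy"] for m in experiment_meta["metrics"]}
```

Because the strategy is just a field on each metric, reclassifying a metric between runs is a one-line change to the experiment definition.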
Some metrics are worth tracking but not optimizing. In our platform, you can store up to 50 metrics for each model as you train and tune. No need to track 50 if you only care about 1, but this capacity is useful if you need to design compound metrics and store them alongside the standard metrics your model likely already produces, such as loss, F1 score, or accuracy.
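A compound metric is just a value you derive from the standard ones before reporting. The helper below is a hypothetical sketch: the `quality_per_ms` metric and its definition are illustrative choices, not part of the platform.

```python
def compile_metrics(accuracy, f1, inference_ms):
    """Combine standard metrics with a derived compound metric.

    `quality_per_ms` (accuracy per millisecond of inference) is an
    illustrative compound metric; define whatever ratio or weighted
    sum matters for your use case.
    """
    compound = accuracy / max(inference_ms, 1e-9)  # guard against divide-by-zero
    return [
        {"name": "accuracy", "value": accuracy},
        {"name": "f1", "value": f1},
        {"name": "quality_per_ms", "value": compound},
    ]

values = compile_metrics(accuracy=0.92, f1=0.88, inference_ms=4.0)
```

All three values would then be stored with the run, so the compound metric accumulates the same history and visualizations as the standard ones.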
If you’re debating whether optimizing a secondary metric might be necessary, you can add it in to seamlessly run a multimetric optimization experiment. This generates a Pareto frontier of observations plotted along axes representing both metrics, such as classification accuracy versus inference time.
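To make the Pareto frontier concrete, here is a small standalone sketch of how the frontier is defined for the accuracy-versus-inference-time example: a point is on the frontier if no other observation is at least as accurate and at least as fast. The sample observations are invented for illustration.

```python
def pareto_frontier(observations):
    """Return the non-dominated (accuracy, inference_ms) points,
    maximizing accuracy and minimizing inference time.

    A point is dominated if some other point is at least as good
    on both axes (and different on at least one).
    """
    frontier = []
    for acc, ms in observations:
        dominated = any(
            a >= acc and m <= ms and (a, m) != (acc, ms)
            for a, m in observations
        )
        if not dominated:
            frontier.append((acc, ms))
    return frontier

# Illustrative observations: (accuracy, inference time in ms)
obs = [(0.90, 10.0), (0.85, 5.0), (0.80, 20.0), (0.92, 15.0)]
frontier = pareto_frontier(obs)
```

Here (0.80, 20.0) is dominated by (0.90, 10.0), which is both more accurate and faster, so it falls off the frontier; the other three points each represent a distinct accuracy/latency trade-off.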
This multimetric optimization approach may help you set either a metric threshold, which lets you specify that a certain minimum or maximum is required for one or both metrics, or a metric constraint, which lets you apply an arbitrary function as the boundary for valid observations.
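The difference between the two is easiest to see side by side. The sketch below follows SigOpt's documented metric format; metric names and bound values are illustrative.

```python
# Threshold: both metrics remain optimized, but you declare a bar
# (here, accuracy of at least 0.90) that acceptable solutions must clear.
threshold_metrics = [
    {"name": "accuracy", "objective": "maximize",
     "strategy": "optimize", "threshold": 0.90},
    {"name": "inference_ms", "objective": "minimize",
     "strategy": "optimize"},
]

# Constraint: the second metric is no longer optimized at all; it only
# bounds which observations count as valid (here, memory under 512 MB).
constraint_metrics = [
    {"name": "accuracy", "objective": "maximize", "strategy": "optimize"},
    {"name": "memory_mb", "objective": "minimize",
     "strategy": "constraint", "threshold": 512.0},
]
```

With a threshold, the search still trades the two metrics off against each other above the bar; with a constraint, the search spends its entire budget on the remaining optimized metric inside the valid region.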
Now, what happens when your model runs out of memory, or hits a numerical error because of the parameters suggested? You can report these runs as failures. You can then assess whether your model is needlessly executing training runs that fail in a certain region of the search space, and our backend will start to avoid suggesting parameter sets that induce observation (or model) failure, saving you time and compute cost.
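A failed observation is reported with a failure flag instead of metric values. The helper below is a sketch that builds the report body, following SigOpt's convention of a `failed` flag on an observation; the function name and suggestion IDs are illustrative.

```python
def observation_payload(suggestion_id, metric_values=None, failed=False):
    """Build the body of an observation report.

    When a run crashes (out of memory, numerical error), report it with
    failed=True and no metric values, so the optimizer can learn to
    avoid that parameter region.
    """
    if failed:
        return {"suggestion": suggestion_id, "failed": True}
    return {
        "suggestion": suggestion_id,
        "values": [{"name": n, "value": v}
                   for n, v in metric_values.items()],
    }

# A run that completed normally, and one that ran out of memory.
ok = observation_payload("suggestion-1", {"accuracy": 0.91})
oom = observation_payload("suggestion-2", failed=True)
```

The key point is that a failure report carries no metric values at all; it only tells the optimizer that this parameter set did not produce a usable observation.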
Our API makes it easy to vary the way you use metrics as you iterate on your modeling problem through training and tuning runs. Another possible workflow begins with multimetric optimization for a long-training model, after which the modeler realizes that generating a dense Pareto frontier will be time-prohibitive. In this scenario, the modeler can swap out one of the two metrics for either a threshold or a constraint, bounding the problem so that parameter set suggestions return more efficiently and fewer training runs, and therefore less compute, are required.
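That swap is just an edit to the metric list between runs. The helper below is a hypothetical sketch of the change (the function name and threshold value are illustrative, assuming SigOpt's `strategy`/`threshold` metric fields).

```python
def demote_to_constraint(metrics, name, threshold):
    """Return a copy of a metric list with one optimized metric
    swapped for a constraint at the given threshold.

    The original list is left untouched, so the run-over-run
    history of metric definitions stays intact.
    """
    out = []
    for m in metrics:
        m = dict(m)  # shallow copy so the original definition is preserved
        if m["name"] == name:
            m["strategy"] = "constraint"
            m["threshold"] = threshold
        out.append(m)
    return out

# Run 1: full multimetric optimization over accuracy and training time.
multimetric = [
    {"name": "accuracy", "objective": "maximize", "strategy": "optimize"},
    {"name": "train_hours", "objective": "minimize", "strategy": "optimize"},
]

# Run 2: demote training time to a constraint (illustrative 6-hour bound),
# so the budget is spent optimizing accuracy alone within that bound.
bounded = demote_to_constraint(multimetric, "train_hours", 6.0)
```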
Metric Management enables a new way to explore, understand, and advance your metric selection process. The suite of tools at your disposal includes:

- Stored metrics, for tracking up to 50 metrics per model without optimizing them
- Multimetric optimization, with Pareto frontier visualization of the trade-off between two metrics
- Metric thresholds, for requiring a minimum or maximum value on an optimized metric
- Metric constraints, for bounding which observations count as valid
- Metric failure reporting, for steering the optimizer away from failing parameter regions
To learn more about Metric Management and how it can enhance your modelers’ approach to metrics, register for our webinar on Thursday, May 28 at 10am PT/1pm ET. If you’re an enterprise interested in learning more about SigOpt, schedule a meeting for a demo, or if you’re engaged in academic research, submit this form for free access.