Modeling can be a messy ordeal. At times, it is confusing, frustrating, and disorganized, and if you’re still tracking progress in spreadsheets or on a physical notepad, there’s a better option. When you want to decide what approach to take next, it’s essential that you have all relevant information at your disposal. In the first post in this series, we reviewed the concept of “runs” and how tracking individual training runs can make your modeling process more successful and repeatable. In the second post, we described a common deep learning-focused workflow you might deploy, complete with dashboard and plot customization.
Today, we are tackling a different part of this tricky problem. How can you boost your workflow with a combination of training and tuning to quickly identify and select the best model? We’ll demonstrate how, with just a few lines of code, you can incorporate SigOpt’s best-in-class optimization capabilities into your training and tuning workflows.
Choosing the best model to explore
In our last post, we showed how you can visualize training runs alongside the rest of the plots on the project Analysis page. Today, we’ll begin with a model for which we’ve already logged a few manual training runs, trained in a notebook during the experimentation phase of the modeling process. On a single plot, with model type indicated by color, we can compare the performance of two model types: an XGBoost gradient-boosted tree and a sequential Keras MLP. Given this dataset and classification problem, the XGBoost learner is the clear winner, with a cluster of promising results that score highly on both the AUPRC and F1 metrics.
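As a refresher, a manual run of this kind can be tracked from the notebook with the %%run cell magic described in the earlier posts. The cell below is a minimal sketch rather than the exact code from our notebook: the data variables (X_train, y_train, X_valid, y_valid) and hyperparameter values are hypothetical, and it assumes the SigOpt notebook extension is loaded so that %%run and sigopt.log_metric are available.

```python
%%run
# Train one manual XGBoost run and log its metrics to SigOpt.
# Assumes the SigOpt notebook extension is loaded and that
# X_train, y_train, X_valid, y_valid already exist (hypothetical names).
import sigopt
import xgboost as xgb
from sklearn.metrics import average_precision_score, f1_score

model = xgb.XGBClassifier(
    max_depth=6,
    learning_rate=0.1,
    min_child_weight=1,
    n_estimators=200,
)
model.fit(X_train, y_train)

preds = model.predict(X_valid)
probas = model.predict_proba(X_valid)[:, 1]

# Log the two metrics we compare on the Analysis page.
sigopt.log_metric("AUPRC", average_precision_score(y_valid, probas))
sigopt.log_metric("F1", f1_score(y_valid, preds))
```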
Building insights that help you find a viable parameter space
Seeing which model performs better is useful, but for that cluster of results at the top right, it’s also worth digging in to determine which hyperparameter regions produce it.
The linked plots found on the Analysis page can help you better understand which hyperparameter values and ranges contribute to that successful cluster.
Defining the parameter search space
Having established viable values for max_depth (the maximum depth of each tree in the gradient-boosted ensemble), log10_learning_rate, min_child_weight, and n_estimators, we can now specify the ranges over which we want SigOpt to optimize our model. We should also tell SigOpt to maximize the AUPRC metric.
To define the parameter ranges and kick off the SigOpt experiment in the notebook, we need only add the %%experiment cell magic, which calls SigOpt right from the notebook. In this example, we pair Experiment Management with random search, which is accessible to all Experiment Management beta users. You can see an example here:
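(What follows is a minimal sketch rather than the exact beta syntax: the configuration format is assumed to mirror SigOpt’s experiment API, and the parameter bounds and budget shown are illustrative, not prescriptive.)

```python
%%experiment
{
    # Experiment configuration passed to SigOpt (illustrative values).
    'name': 'XGBoost AUPRC optimization',
    'metrics': [{'name': 'AUPRC', 'objective': 'maximize'}],
    'parameters': [
        {'name': 'max_depth', 'type': 'int', 'bounds': {'min': 3, 'max': 12}},
        {'name': 'log10_learning_rate', 'type': 'double', 'bounds': {'min': -4, 'max': 0}},
        {'name': 'min_child_weight', 'type': 'double', 'bounds': {'min': 1, 'max': 10}},
        {'name': 'n_estimators', 'type': 'int', 'bounds': {'min': 50, 'max': 500}},
    ],
    'budget': 30,
    # Random search for this first experiment; removing this line later
    # enables SigOpt's intelligent optimization (see below).
    'type': 'random',
}
```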
Then, in the notebook discussed above, simply change %%run to %%optimize to execute the SigOpt experiment, as follows:
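(Again a sketch: the training cell from the manual run becomes an optimization loop by swapping the magic and reading hyperparameter values from sigopt.params; the data variable names remain hypothetical.)

```python
%%optimize
# Same training code as the manual run, now driven by SigOpt-suggested
# hyperparameters exposed through sigopt.params.
import sigopt
import xgboost as xgb
from sklearn.metrics import average_precision_score, f1_score

model = xgb.XGBClassifier(
    max_depth=sigopt.params.max_depth,
    learning_rate=10 ** sigopt.params.log10_learning_rate,
    min_child_weight=sigopt.params.min_child_weight,
    n_estimators=sigopt.params.n_estimators,
)
model.fit(X_train, y_train)

preds = model.predict(X_valid)
probas = model.predict_proba(X_valid)[:, 1]

# Log the optimized metric (AUPRC) plus F1 for reference.
sigopt.log_metric("AUPRC", average_precision_score(y_valid, probas))
sigopt.log_metric("F1", f1_score(y_valid, preds))
```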
If you implemented your model in a Python file, you can define the parameter ranges in a sigopt.yml file.
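Such a file might look roughly like the sketch below; the field names are assumed to mirror the experiment configuration shown in the notebook cell, and the values are only illustrative.

```yaml
# sigopt.yml (illustrative sketch of an experiment definition)
experiment:
  name: XGBoost AUPRC optimization
  metrics:
    - name: AUPRC
      objective: maximize
  parameters:
    - name: max_depth
      type: int
      bounds:
        min: 3
        max: 12
    - name: log10_learning_rate
      type: double
      bounds:
        min: -4
        max: 0
    - name: min_child_weight
      type: double
      bounds:
        min: 1
        max: 10
    - name: n_estimators
      type: int
      bounds:
        min: 50
        max: 500
  budget: 30
```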
Now, with just a single command, you can launch the optimization job via the command-line interface (CLI). In this example we’re using the Python client available with the Experiment Management beta:
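(Sketch only: the command form is an assumption based on the Python client’s CLI, and model.py is a hypothetical training script that reads sigopt.params and logs metrics as in the notebook cells above.)

```bash
# Launch the optimization loop defined in sigopt.yml against model.py
# (command form is an assumption based on the Python client's CLI).
sigopt optimize python model.py
```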
Now that the experiment is created and running, we can switch over to the Project page in SigOpt (this view is enabled only when you’ve signed up for the Experiment Management beta here), where we can expand the experiment to see the best run so far and its optimized metric value (in this case, again, AUPRC).
Additional metrics are listed below the Best Results table; other tracked metrics for this experiment include F1 Score (as seen above), as well as the dollar value of errors caused by incorrect predictions. Depending on your use case, these other metrics may be invaluable when selecting your final parameter set. (And they might even inspire you to use Metric Constraints.)
Intelligent Optimization with just a single line
Optimizing your model with random search is helpful at first, but SigOpt is designed to tune your model as efficiently as the latest research permits. All you need to do is remove the line type='random' from the %%experiment cell of your notebook, and intelligent optimization will be enabled for your experiment. As expected, you’ll start to see the performance of your model improve significantly faster, again with the best results bubbling up to the top of the table.
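Continuing the earlier sketch, the same assumed %%experiment configuration with the random-search line removed would look like this:

```python
%%experiment
{
    # Same illustrative configuration as before, minus "'type': 'random'",
    # so SigOpt's intelligent optimization drives the suggestions.
    'name': 'XGBoost AUPRC optimization (intelligent)',
    'metrics': [{'name': 'AUPRC', 'objective': 'maximize'}],
    'parameters': [
        {'name': 'max_depth', 'type': 'int', 'bounds': {'min': 3, 'max': 12}},
        {'name': 'log10_learning_rate', 'type': 'double', 'bounds': {'min': -4, 'max': 0}},
        {'name': 'min_child_weight', 'type': 'double', 'bounds': {'min': 1, 'max': 10}},
        {'name': 'n_estimators', 'type': 'int', 'bounds': {'min': 50, 'max': 500}},
    ],
    'budget': 30,
}
```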
You can really see the difference that intelligent optimization makes on the Analysis page. Create a new scatter plot with AUPRC on one axis and F1 Score on the other, colored by experiment. The resulting plot shows how much better the runs from our intelligently optimized experiment perform compared to random search and manual experimentation.
Comparing across all runs to find the best one
Once we’ve run a few experiments, we can compare runs, even across experiments, to determine whether we’ve found a winning model. To focus on only the best results, we set up a filter as follows, so that we see only the runs that yield an AUPRC value of more than 0.9:
When we switch back to view plots, we see the following:
At this stage you can return to the Runs page to view the statistics and metrics of your best run compared to prior runs, along with the final Parameter Values for your best found run:
Streamlining your workflow with SigOpt Experiment Management
As you can see, SigOpt now enables you to leverage valuable information from past runs to help you select the best model and define your parameter space for further optimization. Using the same code you use to track Runs in SigOpt, you only need to add a .yml configuration file to state your parameter bounds, and add one line of “cell magic” in your notebook. With a one-line code change, it’s simple to use SigOpt’s intelligent optimization to search for the best hyperparameters for the model you’re training. Lastly, comparing all prior runs in a single dashboard now gives you the confidence that you’ve chosen the best model for your data, and for your chosen metric. You can read more about Experiment Management in the documentation, or if you’d like to review prior posts in this series, you can find them here and here. If you’re interested in signing up for our Experiment Management private beta, you can do so here.
Use SigOpt free. Sign up today.