Multisolution: A deeper dive

Joyce Tang and Gustavo Malkomes

Last week we shared our latest changes and updates to our advanced feature, Multisolution. In this post, we’re going to take a deeper dive into the intuition behind Multisolution and shed some light on how it differs from our other advanced offerings.

The Methodology behind Multisolution

Our Multisolution backend combines several techniques to balance two goals: finding high-performing models and maintaining diversity in parameter space. To address this challenge, we developed a novel methodology, Constraint Active Search for Multiobjective Experimental Design, presented at ICML 2021. This approach powers the ensemble of strategies behind both Multisolution and All-Constraints experiments, which aim to efficiently search for promising configurations that satisfy several known constraints while simultaneously sampling diverse configurations within those bounds. If you are interested in exploring All-Constraints further, check out our blog post here or listen to this podcast session with our research engineer, Gustavo Malkomes.

Intuitively, our Multisolution search strategy balances seeking out high-performing function values with favoring parameter configurations that are distinctly different from previous runs. This balancing happens automatically within SigOpt, so you don’t need to make these choices manually. It is also the reason a set budget is required to run Multisolution; you can view our recommendations around budget setting here. Because SigOpt searches for more than just the maximum value with Multisolution, the overall complexity of the problem increases, and the number of runs should be increased accordingly to give the search the best shot at success.
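To build intuition for that trade-off, here is a toy sketch in Python. It is not SigOpt’s actual algorithm; it simply greedily picks a handful of configurations from a pool of already-evaluated candidates, scoring each by its metric value plus a bonus for being far from configurations chosen so far. The candidate pool, diversity weight, and distance measure are all illustrative.

```python
# Toy illustration of the quality-vs-diversity trade-off (not SigOpt's internal method).
import numpy as np

def pick_diverse_solutions(X, y, num_solutions=5, diversity_weight=1.0):
    """Greedily select configurations with high metric values (y) that are
    also far apart (in normalized parameter space) from one another."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    # Normalize each parameter so distances are comparable across dimensions.
    X_norm = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)
    chosen = [int(np.argmax(y))]  # start with the single best configuration
    while len(chosen) < num_solutions:
        # Distance from every candidate to its nearest already-chosen configuration.
        dists = np.linalg.norm(X_norm[:, None, :] - X_norm[chosen][None, :, :], axis=-1)
        nearest = dists.min(axis=1)
        score = y + diversity_weight * nearest  # reward both quality and distance
        score[chosen] = -np.inf                 # never re-select a point
        chosen.append(int(np.argmax(score)))
    return chosen

# Example: 200 random two-parameter configurations with a synthetic metric to maximize.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
y = -((X[:, 0] - 0.3) ** 2 + (X[:, 1] - 0.7) ** 2)
print(pick_diverse_solutions(X, y, num_solutions=5))
```

SigOpt makes these choices for you during the search itself; the sketch only illustrates why more runs are needed when diversity matters, since good-but-different points have to be found in addition to the single best one.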

How does Multisolution Differ from Multimetric?

You can take advantage of both the Multisolution and Multimetric optimization features to enhance your model experimentation process. While the two features share the ability to return multiple solutions for an experiment, their optimization goals are vastly different, as are their potential applications.

In a Multimetric experiment, you define two distinct measurements of success, for example model accuracy and inference time, and SigOpt returns several optimal points that trade off between the two metrics. SigOpt searches for a Pareto-efficient frontier, from which you can decide how well you want one metric to perform without losing too much of the other. In other words, it looks for solutions that are optimal in metric space.

Figure 1: In a Multimetric experiment, two opposing metrics (f1, f2) are defined and a Pareto-efficient frontier (the blue points) is returned for you to pick and choose solutions from.
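For concreteness, here is a minimal sketch of how a Multimetric experiment could be defined with the classic SigOpt Python client; the parameter names, bounds, and budget are illustrative, and the exact fields may differ in the newer runs-based interface.

```python
# Sketch of a Multimetric experiment: two metrics, SigOpt searches for a Pareto frontier.
from sigopt import Connection

conn = Connection(client_token="YOUR_SIGOPT_API_TOKEN")
experiment = conn.experiments().create(
    name="Multimetric example (illustrative)",
    parameters=[
        dict(name="max_depth", type="int", bounds=dict(min=2, max=12)),
        dict(name="learning_rate", type="double", bounds=dict(min=1e-3, max=3e-1)),
    ],
    metrics=[
        dict(name="accuracy", objective="maximize"),
        dict(name="inference_time", objective="minimize"),
    ],
    observation_budget=100,
)
```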

Multisolution experiments, on the other hand, aim to find separate solutions that are distinct in parameter space. They also allow only a single metric to be optimized, focusing on uncovering parameter groupings that perform well yet differ from one another. The emphasis is on finding multiple solutions in parameter space, not necessarily on defining an optimal frontier.

Figure 2: Multisolution searches the input space for optimal values and returns the most diverse top-performing solutions (in this case, five of them), represented by the opaque white dots within the feasibly performing orange region.
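By contrast, a Multisolution experiment optimizes a single metric and requests several diverse solutions via num_solutions (the same field the example later in this post sets to 5). As above, this is a sketch against the classic SigOpt Python client with illustrative parameters and budget.

```python
# Sketch of a Multisolution experiment: one metric, several diverse high-performing solutions.
from sigopt import Connection

conn = Connection(client_token="YOUR_SIGOPT_API_TOKEN")
experiment = conn.experiments().create(
    name="Multisolution example (illustrative)",
    parameters=[
        dict(name="max_depth", type="int", bounds=dict(min=2, max=12)),
        dict(name="learning_rate", type="double", bounds=dict(min=1e-3, max=3e-1)),
    ],
    metrics=[dict(name="offline_ndcg", objective="maximize")],
    observation_budget=120,  # Multisolution requires a set budget
    num_solutions=5,         # ask for five diverse, high-performing solutions
)
```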

Why diverse solutions matter

As we mentioned in the previous blog post, in some applications finding different models is the key to reconciling the differences between your validation and test scenarios. The two may be only weakly correlated, which is enough to justify your development process, but it is still sometimes necessary to consider alternate solutions to ensure that the solution you deploy is one you can depend on.

Let’s consider an example from Learning-to-Rank for Information Retrieval using the MQ2008 dataset. In this example, we trained XGBoost models on the training set and performed hyperparameter optimization using the validation set to compute the offline metric (NDCG). We used the performance on the test set as our “online metric”. In the plot below, we can see that the offline and online metrics are weakly correlated, meaning we can assume that models which perform well offline (in the development environment) also yield roughly desirable online (deployment) results.

Figure 3: Example of offline vs. online metrics achieved by XGBoost models on the Learning-to-Rank task using the MQ2008 dataset. The colored dots (blue and green) represent feasible, diverse solutions returned by SigOpt Multisolution; the blue dots are the additional high-performing solutions returned by Multisolution, while the green dot is the single best model from the development phase.

The correlation, however, is not perfect. The best model from the offline optimization (green) is not the best performer during the online phase. Using SigOpt’s Multisolution, we found five alternative models (setting ‘num_solutions = 5’) that are high performing according to our optimized metric (the offline metric) but are different in parameter space. That diversity yields different results on the online metric, and in our case we uncovered a solution that offers up to a 10% boost in performance. Keep in mind, however, that SigOpt cannot observe the online metric, so it’s up to you to decide which of the diverse solutions works best for your use case.
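To make the offline/online setup above concrete, here is a minimal sketch of how one such XGBoost ranking model could be trained and scored on MQ2008. It assumes a recent xgboost release, uses scikit-learn to load the SVMlight-formatted folds and compute NDCG, and the file paths and hyperparameter values are placeholders for what SigOpt would suggest during the search.

```python
# Sketch: offline (validation) vs. online (test) evaluation of an XGBoost ranker on MQ2008.
import numpy as np
import xgboost as xgb
from sklearn.datasets import load_svmlight_file
from sklearn.metrics import ndcg_score

def load_fold(path):
    # MQ2008 ships in SVMlight format with per-query ids, already grouped by query.
    X, y, qid = load_svmlight_file(path, query_id=True)
    return X, y, qid

X_train, y_train, qid_train = load_fold("MQ2008/Fold1/train.txt")  # illustrative paths
X_vali, y_vali, qid_vali = load_fold("MQ2008/Fold1/vali.txt")
X_test, y_test, qid_test = load_fold("MQ2008/Fold1/test.txt")

def mean_ndcg(model, X, y, qid, k=10):
    # Average NDCG@k over queries; single-document queries are skipped.
    scores = model.predict(X)
    per_query = []
    for q in np.unique(qid):
        mask = qid == q
        if mask.sum() < 2:
            continue
        per_query.append(ndcg_score([y[mask]], [scores[mask]], k=k))
    return float(np.mean(per_query))

# Hyperparameters are placeholders for a configuration suggested by SigOpt.
model = xgb.XGBRanker(objective="rank:ndcg", n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train, qid=qid_train)

offline_metric = mean_ndcg(model, X_vali, y_vali, qid_vali)  # drives the optimization
online_metric = mean_ndcg(model, X_test, y_test, qid_test)   # only inspected afterwards
print(f"offline NDCG@10: {offline_metric:.4f}, online NDCG@10: {online_metric:.4f}")
```

In a Multisolution run, the offline value would be reported back to SigOpt for each suggested configuration, and the online value would only be computed afterwards for the handful of diverse solutions returned.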

Share your thoughts

Do you have examples where Multisolution could enhance your modeling experimentation? How are you planning on adding Multisolution to your workflow? Drop a comment on the SigOpt Community page to let us know! If you don’t already have a SigOpt account, you can sign up here for free.

Happy Modeling,

Joyce and Gustavo

Joyce Tang, Machine Learning Specialist
Gustavo Malkomes, Research Engineer
