ICYMI Recap: T4ST Talk 1: Intuition behind Bayesian optimization with and without multiple metrics

Barrett Williams
Advanced Optimization Techniques, Company news, Focus Area

In Case You Missed It

Recap for Tuning for Systematic Trading Talk 1:
Intuition behind Bayesian optimization with and without multiple metrics

Although much of the world is working from home (where and if possible) due to the COVID-19 pandemic, the markets are still—mostly—online. If your business focuses on systematic trading, we have a few handy tips to adjust, re-train, and tune your models as you adapt to changing market conditions.

Tobias Andreassen, who supports a number of our systematic trading customers, presented the intuition behind Bayesian optimization for tuning models against a single metric or multiple (often competing) metrics. It often makes sense to track a second metric to avoid myopic training runs that overfit your data or otherwise fail to reflect real-world performance. Here is a quick set of takeaways:

  • Black-box optimization can tune many kinds of models efficiently, especially if you report multiple metrics about your model back to SigOpt
  • SigOpt handles the challenging task of finding the tradeoffs between two competing metrics, so that you can choose to prioritize one, the other, or both
  • Providing metric thresholds will help you tune your model faster, based on the constraints of your business needs
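To make the thresholding idea concrete, here is a minimal sketch in plain Python (this is an illustration of the concept, not SigOpt's API; the metric names and numbers are hypothetical): candidate configurations are filtered down to only those that satisfy your business constraints.

```python
# Hypothetical tuning results: each configuration reports two metrics.
# Metric names and values are illustrative, not from the talk.
results = [
    {"accuracy": 0.91, "latency_ms": 140},
    {"accuracy": 0.88, "latency_ms": 60},
    {"accuracy": 0.95, "latency_ms": 300},
    {"accuracy": 0.84, "latency_ms": 45},
]

def meets_thresholds(r, min_accuracy, max_latency_ms):
    """Keep only configurations that satisfy both metric thresholds."""
    return r["accuracy"] >= min_accuracy and r["latency_ms"] <= max_latency_ms

# e.g. require at least 87% accuracy and at most 150 ms latency
feasible = [r for r in results if meets_thresholds(r, 0.87, 150)]
# Only the 0.91/140ms and 0.88/60ms configurations remain.
```

Restricting the search to the feasible region this way is what lets the optimizer spend its budget only on configurations you could actually deploy.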

And here is a more detailed summary of what Tobias covered:

  • SigOpt’s role as a black-box optimization service (6:40)
  • Definition of tunable parameters and hyperparameters (10:41)
  • Comparison of search strategies (11:52)
  • Why Bayesian optimization is hard to parallelize (13:34)
  • Four examples of complexity in optimization (16:45)
  • Optimizing the optimizer is also hard, but this is our specialty (19:31)
  • Competing metrics: a definition, and why you should report multiple metrics (21:11)
  • SigOpt adjusts the lambda value to prioritize one metric over another, returning a Pareto frontier (25:44)
  • Metric thresholds let you limit your frontier to valid or useful regions (26:31)
  • Using feedback to reduce compute cost and experiment time (29:09)
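The lambda point above can be sketched in plain Python (a hedged illustration of weighted-sum scalarization, not SigOpt's internals; the metric values are hypothetical): sweeping lambda from 0 to 1 shifts priority from one metric to the other, and each maximizer of the weighted sum is a point on the Pareto frontier.

```python
# Hypothetical configurations with two competing metrics to maximize,
# e.g. (return, stability) pairs; values are purely illustrative.
candidates = [(0.10, 0.9), (0.18, 0.7), (0.25, 0.4), (0.30, 0.1)]

def scalarize(point, lam):
    """Weighted sum of the two objectives; lam in [0, 1]."""
    ret, stab = point
    return lam * ret + (1 - lam) * stab

# Sweep lambda: lam=0 cares only about stability, lam=1 only about return.
# Each best point for some lambda lies on the Pareto frontier.
frontier = set()
for lam in [i / 10 for i in range(11)]:
    best = max(candidates, key=lambda p: scalarize(p, lam))
    frontier.add(best)
```

Note that a coarse lambda grid can skip frontier points that only win over a narrow band of weights, which is one reason having the service manage this sweep adaptively is useful.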

If you joined us, we appreciate you taking the time! If you’d like to watch the recording, you can find it here, or find just the slides here. This is a monthly series, so join us on the third Tuesday of April. We’ll be covering a new topic related to technical use cases, modeling best practices, or insights from our research on algorithms that support model optimization. You can learn more about the series and register for the next event here.

Barrett Williams, Product Marketing Lead

Want more content from SigOpt? Sign up now.