In Case You Missed It:
Efficient Training and Tuning for Deep Learning Models
Although much of the world is working from home (where and if possible) due to the COVID-19 pandemic, the markets are still online. If your business focuses on systematic trading, we have a few handy tips for adjusting, re-training, and tuning your deep learning models as you adapt to fluid market conditions.
In our second talk in the Tuning for Systematic Trading series, Tobias Andreasen, a machine learning engineer who supports a number of our systematic trading customers, described how parallelism and multitask optimization can speed up the training and tuning of your deep learning models, so your models achieve better results in less time.
Here are two quick takeaways:
- Parallel optimization more efficiently utilizes your infrastructure and reduces wall-clock time
- Multitask optimization helps you train lower fidelity models quickly, to seed better parameters for higher fidelity or full versions of your model
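To make the first takeaway concrete, here is a minimal sketch of asynchronous parallel optimization in plain Python. This is an illustration of the pattern, not SigOpt's actual implementation: the objective, the `suggest` step (plain random search here, where a Bayesian optimizer would also condition on pending evaluations), and all parameter names are hypothetical. The key point is that each worker reports its result and receives a new suggestion immediately, without waiting for the other workers to finish.

```python
import random
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def evaluate(params):
    """Stand-in for training + validating a model; returns a metric to maximize."""
    return -(params["lr"] - 0.1) ** 2  # toy objective, peaked at lr = 0.1

def suggest():
    """Stand-in for the optimizer's suggestion step (random search here).
    An asynchronous Bayesian optimizer would also account for pending points."""
    return {"lr": random.uniform(0.0, 1.0)}

def optimize_async(budget=20, workers=4):
    best_value, best_params = float("-inf"), None
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Fill every worker slot up front.
        in_flight = {}
        for _ in range(workers):
            p = suggest()
            in_flight[pool.submit(evaluate, p)] = p
        submitted = workers
        while in_flight:
            # Wake up as soon as ANY evaluation completes, not all of them.
            done, _ = wait(in_flight, return_when=FIRST_COMPLETED)
            for fut in done:
                params = in_flight.pop(fut)
                value = fut.result()
                if value > best_value:
                    best_value, best_params = value, params
                # Immediately request a fresh suggestion for the freed worker,
                # while the other evaluations are still pending.
                if submitted < budget:
                    p = suggest()
                    in_flight[pool.submit(evaluate, p)] = p
                    submitted += 1
    return best_value, best_params
```

Because idle workers are refilled one at a time, wall-clock time is bounded by the slowest evaluations rather than the slowest batch, which is what "more efficiently utilizes your infrastructure" means in practice.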
And here is a more detailed summary of what Tobias covered:
- Parallel function evaluations have numerous efficiency benefits (7:04)
- Parallel evaluations are challenging because some results are still pending while others are already complete; SigOpt addresses this with asynchronous parallel optimization (10:16)
- Asynchronous parallelism allows you to explore and exploit at the same time (14:34)
- Long training and expensive models: how to tune efficiently (16:43)
- Multitask optimization lets you define and train lower-cost functions to learn parameters for the full function (18:56)
- To create low-fidelity versions of your model, you can subsample by training for fewer epochs or on a fraction of your training data (23:33)
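The multitask idea above can be sketched in a few lines. This is a hypothetical illustration rather than SigOpt's multitask algorithm: `fidelity` stands in for the fraction of epochs or training data used, and the noisy toy objective simply models the fact that a cheap, subsampled run is a rough proxy for the full metric. Many candidates are screened cheaply, and only the most promising few are promoted to a full-fidelity run.

```python
import random

def train(lr, fidelity):
    """Hypothetical training run. fidelity is the fraction of the full budget
    (epochs or data); lower fidelity gives a cheaper but noisier estimate."""
    true_score = -(lr - 0.1) ** 2          # toy validation metric, peaked at 0.1
    noise = (1.0 - fidelity) * random.gauss(0, 0.05)
    return true_score + noise

def multitask_search(n_cheap=30, n_full=3):
    # Phase 1: screen many candidates at low fidelity (e.g. 10% of the budget).
    candidates = [random.uniform(0.0, 1.0) for _ in range(n_cheap)]
    cheap_scores = [(train(lr, fidelity=0.1), lr) for lr in candidates]
    # Phase 2: promote only the top candidates to a full-fidelity run.
    top = sorted(cheap_scores, reverse=True)[:n_full]
    full_scores = [(train(lr, fidelity=1.0), lr) for _, lr in top]
    return max(full_scores)
```

With this split, most of the compute budget goes to cheap screening runs, and the expensive full runs are seeded with parameters that already look promising, which is the efficiency argument behind multitask optimization.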
If you joined us live, thank you for taking the time to do so. If you’d like to watch the recording, you can find it here, or find just the slides here. This is a monthly series, so join us on May 19th for the next session. We’ll be covering a new topic related to technical use cases, modeling best practices or insights from our research on algorithms that support model optimization. You can learn more about the series and register for the next event here.