In Case You Missed It: Training, Tuning, and Metric Strategy
If your business focuses on systematic trading, we’d like to share how you can most effectively retrain, adjust, and tune your deep learning models as market conditions shift.
In the third talk of our Tuning for Systematic Trading series, Tobias Andreasen, a machine learning engineer who supports a number of our financial services customers, described how to iterate from a simple regression model with grid search to a deep neural network for classifying sample radiological images. At first the performance metrics don’t meet business requirements, but SigOpt’s tools surface useful parameter sets and cut excess training and optimization time.
Here are two quick takeaways:
- Metric Constraints help you infuse your modeling process with validity requirements on certain tracked (but unoptimized) metrics
- Training Monitor with Early Stopping helps you automatically halt your training process when your metric needs are met, saving time and compute budget
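The two takeaways above can be sketched in a few lines of plain Python. Everything here (`satisfies_constraints`, `train_with_early_stopping`, the toy training curve) is a hypothetical stand-in to illustrate the ideas, not the SigOpt API:

```python
def satisfies_constraints(metrics, constraints):
    """Metric constraints: a candidate model only counts as feasible if
    every tracked guardrail metric meets its threshold."""
    return all(metrics[name] >= threshold
               for name, threshold in constraints.items())

def train_with_early_stopping(train_one_epoch, target, max_epochs=50):
    """Early stopping: halt as soon as the monitored metric reaches the
    target, instead of always running the full epoch budget."""
    history = []
    for epoch in range(max_epochs):
        metric = train_one_epoch(epoch)
        history.append(metric)
        if metric >= target:
            break
    return history

# Toy "training curve" that improves each epoch: stops after 5 epochs
# instead of running all 50.
curve = train_with_early_stopping(lambda e: 0.5 + 0.1 * e, target=0.85)
print(len(curve))

# A guardrail check: feasible only if both per-class accuracies clear 0.9.
print(satisfies_constraints({"benign_acc": 0.95, "malignant_acc": 0.85},
                            {"benign_acc": 0.90, "malignant_acc": 0.90}))
```

Note the design split: guardrail metrics filter candidate models rather than being folded into the objective, and early stopping trades a fixed epoch budget for a threshold test on the monitored metric.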
And here is a more detailed summary of what Tobias covered:
- Briefly reviewing black-box optimization, parallelism, and multitask optimization from a theoretical point of view (1:29)
- Considering how to apply the advanced features of the optimization engine, with a focus on experiment insights and the enterprise platform (3:04)
- Starting with project hygiene: deciding whether you can set up a repeatable, configurable training process (5:21)
- Setting up a breast cancer radiology dataset as a realistic example with some class imbalance (7:36)
- Starting with a simple regression-based classifier and a suitable grid search budget as a baseline for comparison with more sophisticated modeling and tuning approaches (10:34)
- Showing how to make an Experiment Create call via the API, then write the optimization loop into your model training code (13:30)
- Using stored metrics to assess model quality from multiple angles after tuning (15:37)
- Setting up Metric Constraints as guardrails for your model (minimum accuracy for both malignant and benign samples) (19:52)
- Switching to a Keras neural network with 3 parameters as the underlying model to achieve higher accuracy (21:51)
- Setting up Training Monitor to detect convergence, reporting metrics to SigOpt every epoch (24:49)
- Mapping this image classification task to systematic trading (28:03)
- Setting up early stopping criteria to save time and compute cost by halting training once a metric reaches a given threshold (29:11)
- Drawing conclusions and sharing projects with teammates (29:51)
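The core workflow above can be sketched end to end. This is a hedged stand-in, not the talk's actual code: scikit-learn's built-in tabular breast cancer dataset substitutes for the radiology images, log-uniform random sampling substitutes for SigOpt's suggest/observe API, and the names `evaluate`, `feasible`, and `best` are illustrative:

```python
import random
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Imbalanced dataset: 212 malignant (class 0) vs. 357 benign (class 1).
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def evaluate(c):
    """Train a simple regression-based classifier and return overall
    accuracy plus the per-class accuracies used as guardrail metrics."""
    model = make_pipeline(StandardScaler(),
                          LogisticRegression(C=c, max_iter=1000))
    pred = model.fit(X_tr, y_tr).predict(X_te)
    acc = (pred == y_te).mean()
    malignant = (pred[y_te == 0] == 0).mean()
    benign = (pred[y_te == 1] == 1).mean()
    return acc, malignant, benign

# Suggest/observe loop: each iteration requests a parameter set, evaluates
# it, and records the observation -- here with random sampling in place of
# the optimizer's suggestions.
random.seed(0)
budget, best = 20, None
for _ in range(budget):
    c = 10 ** random.uniform(-3, 3)          # suggested hyperparameter
    acc, mal, ben = evaluate(c)              # observed metrics
    feasible = mal >= 0.90 and ben >= 0.90   # metric constraints as guardrails
    if feasible and (best is None or acc > best[0]):
        best = (acc, c)

print(best)
```

The loop only promotes a candidate to `best` if both per-class guardrails clear 0.90, mirroring the talk's point that a model with high overall accuracy can still be unacceptable on the minority class.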
If you joined us live, thank you for taking the time to do so. If you’d like to watch the recording, you can find it here, or view just the slides here. This is a monthly series, so join us on June 16th for the next session. We’ll be covering a new topic related to technical use cases, modeling best practices, or insights from our research on algorithms that support model optimization. You can learn more about the series and register for the next event here.