Deep Learning, Experiment Management, Natural Language
Lessons from using SigOpt to weigh tradeoffs for BERT size and accuracy
Machine Learning Engineer Meghana Ravikumar explains how she used Experiment Management to track and optimize training runs during her most recent project to effectively reduce the size of BERT.
Advanced Optimization Techniques, Application, Deep Learning, Experiment Management, Modeling Best Practices, Natural Language
Efficient BERT: Find your optimal model with Multimetric Bayesian Optimization
SigOpt ML Engineer Meghana Ravikumar shares how concurrently tuning metrics like model accuracy and number of model parameters allows us to distill BERT and assess the trade-offs between model size and performance.
SigOpt Webinar: Warm Start Tuning with Prior Beliefs
SigOpt is designed to accelerate hyperparameter optimization by intelligently trading off exploration and exploitation with an ensemble of Bayesian and global optimization algorithms that do not require access to proprietary prior insights on a model or parameters. But our users often have deep insights on their models and parameters that could guide this tuning process. Michael McCourt, Head of Research, walks through how SigOpt's newest feature, Prior Beliefs, incorporates this type of prior knowledge into SigOpt's hyperparameter optimization process.
All Model Types, Multimetric Optimization, Training & Tuning
SigOpt Webinar: Introducing Metric Management
SigOpt provides a “human-in-the-loop” process so that you can guide the platform’s discovery of a more effective set of parameters for your model, based on external business factors or domain-specific expertise. This process is called Metric Management, and is enabled by a variety of tools built into our API and web interface.
Advanced Optimization Techniques, Deep Learning, Training & Tuning
Tuning for Systematic Trading: Training, Tuning and Metric Strategy
In deep learning, it can be particularly tough to select the right metric and to know when a model has converged during training. In this talk, we discuss ways to monitor convergence, automate early stopping, and set the right metric strategy for deep learning training and tuning jobs.
Advanced Optimization Techniques, All Model Types, Multimetric Optimization
Tuning for Systematic Trading: Intuition behind Bayesian optimization with and without multiple metrics
To kick off our first session of Tuning for Systematic Trading, SigOpt ML Engineer Tobias Andreasen walks us through an introduction to Bayesian optimization and some advanced features that can help you optimize model performance.
Advanced Optimization Techniques, Data Augmentation, Deep Learning, Modeling Best Practices
SigOpt Webinar: Tuning Data Augmentation to Boost Model Performance
SigOpt ML Engineer Meghana Ravikumar shares her work exploring tradeoffs between transfer learning methods, optimization strategies, and automated data augmentation.
Advanced Optimization Techniques, Data Augmentation, Deep Learning
SigOpt at MLconf: Optimized Image Classification on the Cheap
Abstract: In this talk, we anchor on building an image classifier trained on the Stanford Cars dataset to evaluate two approaches to transfer learning (fine-tuning and feature extraction) and the impact of hyperparameter optimization on these techniques.
Advanced Hyperparameter Optimization for Deep Learning with MLflow
Building on the “Best Practices for Hyperparameter Tuning with MLflow” talk, we will present advanced topics in HPO for deep learning, including early stopping, multi-metric optimization, and robust optimization.
SigOpt Webinar: Modeling at Scale in Systematic Trading
SigOpt works with trading firms who represent $300B in AUM. This talk draws lessons from these engagements to provide insights on how to model at scale.
Sneak away and catch up with Scott Clark, Co-Founder and CEO of SigOpt, a company whose software is focused on automatically tuning your model's parameters through Bayesian optimization.
Using SigOpt and Tensorflow for Convolutional Neural Networks
Short tutorial on how to use SigOpt, AWS and TensorFlow to efficiently build a convolutional neural network for classifying digits in the SVHN dataset.
In this video I’m going to show you how SigOpt can help you amplify your trading models by optimally tuning them using our black-box optimization platform.
Using Optimal Learning to Tune Machine Learning Models
Using Bayesian Optimization to Tune Machine Learning Models: In this talk we briefly introduce Bayesian Global Optimization as an efficient way to optimize machine learning model parameters, especially when evaluating different parameters is time-consuming or expensive.
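To make the idea concrete, here is a minimal, illustrative Bayesian optimization loop in pure NumPy: fit a Gaussian process surrogate to the observed (parameter, loss) pairs, then pick the next parameter to evaluate by expected improvement. This is a toy sketch, not SigOpt's actual ensemble of algorithms; the RBF kernel, the quadratic stand-in objective, and all settings are assumptions chosen for demonstration.

```python
# Toy Bayesian optimization sketch (illustrative only, not SigOpt's algorithm).
import numpy as np

def rbf_kernel(a, b, length=0.3):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-6):
    # Standard GP regression posterior mean and stddev at query points.
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_obs, x_query)
    Kss = rbf_kernel(x_query, x_query)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y_obs
    var = np.clip(np.diag(Kss - Ks.T @ K_inv @ Ks), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # Expected improvement below the best (lowest) observed loss.
    from math import erf
    z = (best - mu) / sigma
    cdf = 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (best - mu) * cdf + sigma * pdf

def objective(x):
    # Stand-in for an expensive black-box evaluation (e.g. a training run).
    return (x - 0.7) ** 2

rng = np.random.default_rng(0)
x_obs = rng.uniform(0, 1, 3)          # a few random initial evaluations
y_obs = objective(x_obs)
grid = np.linspace(0, 1, 200)         # candidate parameter values
for _ in range(10):
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y_obs.min()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))
print(x_obs[np.argmin(y_obs)])        # best parameter found, near 0.7
```

The key property this sketch shares with the methods described in the talk is sample efficiency: each new evaluation is chosen where the surrogate model predicts the most improvement, which matters when each evaluation is a time-consuming training run.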
Using SigOpt and scikit-learn for machine learning
Short tutorial on how to use SigOpt, AWS and our scikit-learn wrapper to quickly optimize the hyperparameters, train and evaluate several classification models on a given dataset.
Bayesian Optimization methods used by SigOpt, coupled with the incredibly scalable deep learning architecture provided with the Nervana Cloud and neon, allow anyone to easily tune their models to quickly achieve higher accuracy.