SigOpt’s Experiment Management solution helps you track, filter, and reproduce your entire modeling workflow, with interactive visuals that help you augment your intuition, understand your progress, and explain your results.
Keeping track of it all: recording and organizing model training runs
This blog discusses the Runs feature and how to add it via API to track training runs as part of Experiment Management.
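As a rough, tool-agnostic sketch of what a tracked training run typically records (this is illustrative only, not the SigOpt client API; all field names are invented):

```python
# Minimal illustration of a training-run record: the hyperparameters used,
# the metrics produced, and any context needed to reproduce the run later.
import time

def record_run(parameters, metrics, metadata=None):
    """Assemble a single training-run record as a serializable dict."""
    return {
        "created": int(time.time()),
        "parameters": parameters,    # hyperparameters used for this run
        "metrics": metrics,          # resulting evaluation metrics
        "metadata": metadata or {},  # e.g. dataset version, git commit
    }

run = record_run(
    parameters={"learning_rate": 3e-4, "batch_size": 64},
    metrics={"accuracy": 0.91, "loss": 0.27},
    metadata={"dataset": "v2"},
)
```

Collecting records like this per run is what makes filtering and reproducing past work possible; a run-tracking product layers storage, search, and visualization on top of the same basic shape.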
Deep Learning, Experiment Management, Machine Learning, Modeling Best Practices
This blog post will go over why you should use Bayesian optimization in your modeling process, the basics of Bayesian optimization, and how to effectively leverage Bayesian optimization for your modeling problems.
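The core loop described above (observe, model, acquire, evaluate) can be sketched in a few lines. This toy deliberately uses a crude nearest-neighbor surrogate with a distance-based exploration bonus in place of the Gaussian process a real Bayesian optimizer would fit; the objective and all names are made up for illustration:

```python
# Toy Bayesian-optimization-style loop: predict at x using the nearest
# observed point, add an "uncertainty" bonus that grows with distance to it,
# and evaluate the candidate with the best resulting score.
import random

def objective(x):
    return -(x - 0.3) ** 2  # unknown-to-the-optimizer; maximum at x = 0.3

def acquire(observations, kappa=1.0):
    """Pick the candidate with the best crude upper confidence bound."""
    candidates = [random.random() for _ in range(100)]
    def ucb(x):
        nearest_x, nearest_y = min(observations, key=lambda o: abs(o[0] - x))
        return nearest_y + kappa * abs(x - nearest_x)
    return max(candidates, key=ucb)

random.seed(0)
observations = [(x, objective(x)) for x in (0.05, 0.95)]  # initial design
for _ in range(20):
    x = acquire(observations)
    observations.append((x, objective(x)))

best_x, best_y = max(observations, key=lambda o: o[1])
```

A real implementation replaces the nearest-neighbor prediction with a Gaussian process posterior mean and the distance bonus with the posterior standard deviation, but the sequential structure is the same.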
Experiment Management, Healthcare, Hyperparameter Optimization, Natural Language, Time Series
SigOpt collaborated with MLconf on a webinar to discuss best practices for metrics, training, and hyperparameter optimization for developing high-performing models.
Advanced Optimization Techniques, Applied AI Insights, BERT, Deep Learning, Experiment Management, Hyperparameter Optimization, Natural Language, Transformers
SigOpt is a proud sponsor of and contributor to QMCPy, an open source project and community, launched last month, dedicated to making quasi-Monte Carlo (QMC) methods more accessible.
Application, Applied, Convolutional Neural Networks, Deep Learning, Natural Language
Experimental Design with SigOpt for Natural Language at Luleå Technical University
Just as we’ve recently witnessed broader and broader use cases from some of our industrial customers, numerous academic groups are finding success using SigOpt to optimize and improve the accuracy of Natural Language models.
Models that were trained on pre-pandemic datasets and fine-tuned with pre-pandemic intuition may no longer make relevant predictions. Learn how you can solve this challenge of model drift.
Advanced Optimization Techniques, Application, Augmented ML Workflow, Deep Learning, Experiment Management, Natural Language, Training & Tuning
As you build out your modeling practice, and the team necessary to support it, how will you know when you need a managed hyperparameter solution to support your team’s productivity? As you first start to optimize your business’s fraud detection algorithm or recommender system, you can tune simpler models with easy-to-code techniques such as grid […].
Deep Learning, Experiment Management, Natural Language
ICYMI Recap: Lessons from using SigOpt to weigh tradeoffs for BERT size and accuracy
In this webinar, Machine Learning Engineer Meghana Ravikumar, shares how she used Experiment Management to guide her model development process for BERT.
Advanced Optimization Techniques, Application, Applied, Applied AI Insights, Augmented ML Workflow, Deep Learning, Focus Area, Model Type, Multimetric Optimization, Natural Language, Research
Efficient BERT with Multimetric Optimization, part 2
This is the second post in this series about distilling BERT with Multimetric Bayesian Optimization.
Advanced Optimization Techniques, Application, Applied, Applied AI Insights, Augmented ML Workflow, Deep Learning, Focus Area, Model Type, Natural Language, Research, Training & Tuning
Efficient BERT with Multimetric Optimization, part 1
This is the first post in this series about distilling BERT with Multimetric Bayesian Optimization.
Application, Augmented ML Workflow, Deep Learning, Experiment Management, Focus Area, Model Type, Modeling Best Practices, Natural Language, SigOpt 101
I think we can all agree that modeling can often feel like a crapshoot (if you didn’t know before, surprise!).
Advanced Optimization Techniques, Application, Applied, Augmented ML Workflow, Convolutional Neural Networks, Deep Learning, Focus Area, Model Type, Natural Language, Research
BERT is a strong and generalizable architecture that can be transferred for a variety of NLP tasks (for more on this see our previous post or Sebastian Ruder’s excellent analysis).
How Teams Use SigOpt to Build Differentiated Models
Researchers Rafael Gomez-Bombarelli from MIT and Simon Axelrod from Harvard collaborated on training and tuning models that generate three-dimensional geometric data based on 2D geometric definition strings for molecules.
In this discussion, Fay Kallel, Head of Product, and Jim Blomo, Head of Engineering, joined Product Marketing Lead Barrett Williams to demo Experiment Management, the latest solution from the SigOpt team.
Advanced Optimization Techniques, Applied AI Insights, Augmented ML Workflow, Experiment Management
SigOpt’s Experiment Management solution helps you track, filter, and reproduce your entire modeling workflow, with interactive visuals that help you augment your intuition, understand your progress, and explain your results.
Why Enterprise AI is Actually Three Markets in One
When it’s time to ensure that your model, whether that’s a recommendation system, computer vision classifier, or trading strategy is performing as well as it possibly can, you’ll need to choose the right hyperparameter optimization strategy.
Advanced Optimization Techniques, Augmented ML Workflow, Modeling Best Practices
In Case You Missed It: Training, Tuning, and Metric Strategy
If your business focuses on systematic trading, we’d like to share how you can most effectively re-train, adjust, and tune your deep learning models as you adapt to fluid market conditions.
Deep Learning, Modeling Best Practices, Multimetric Optimization, Natural Language
Researchers Xavier Bouthillier of Inria and Gaël Varoquaux of Mila surveyed the use of model experimentation methods across NeurIPS 2019 and ICLR 2020, two of the most prestigious international academic ML conferences.
Defining, selecting, and optimizing with the right set of metrics is critical to every modeling process, but these steps are often hard to execute well.
Advanced Optimization Techniques, All Model Types, Augmented ML Workflow, Company news, Focus Area, Multimetric Optimization, Training & Tuning
Metric Constraints help you closely align your models with your business objectives
Defining just the right metrics to assess a model in the context of your business is no easy feat. Metric Constraints helps you establish guardrails for your optimization and experimentation loops.
Applied AI Insights, Augmented ML Workflow, Training & Tuning
How We Scaled SigOpt to Handle the Most Relentless of Workloads: Part 1
To boost any modeling projects that support these efforts, SigOpt is offering our solution for free to any researcher working on a COVID-19 related modeling project.
Advanced Optimization Techniques, Company news, Convolutional Neural Networks, Deep Learning, Focus Area, Methodology
ICYMI Recap: T4ST Talk 2: Efficient Training and Tuning for Deep Learning Models
Although much of the world is working from home (where and if possible) due to the COVID-19 pandemic, the markets are still online.
Advanced Optimization Techniques, Applied AI Insights, Company news, Deep Learning, Focus Area, Machine Learning, Modeling Best Practices
ICYMI Recap: Modeling in Place—a panel discussion with Alectio, NVIDIA, Pandora, and Yelp
In today’s challenging sheltered environment, work from home is the new norm for many data scientists and engineers building and tuning models.
Advanced Optimization Techniques, Company news, Focus Area, Multimetric Optimization
In Case You Missed It: Recap for Tuning for Systematic Trading, Talk 1: Intuition behind Bayesian optimization with and without multiple metrics
Although much of the world is working from home (where and if possible) due to the COVID-19 pandemic, the markets are still (mostly) online.
Active Differential Inference for Screening Hearing Loss
SigOpt was proud to sponsor ICML for the 4th consecutive year, and we sent several team members to represent, both at the exhibitor booth and at the sessions.
Metric Thresholds, a New Feature to Supercharge Multimetric Optimization
Today, we are excited to announce the general availability of Metric Thresholds, a new feature that supercharges the performance of Multimetric optimization. Metric Thresholds helps you discover better models that meet your problem-specific needs by allowing you to define “thresholds” on your metrics.
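As a rough illustration of how thresholds reshape a multimetric search (the runs, metric names, and threshold values below are all invented), the idea is to discard configurations that fail problem-specific floors or ceilings, then compare the survivors on the metric you care most about:

```python
# Threshold-style filtering: keep only runs meeting problem-specific
# requirements, then pick the best feasible run on the primary metric.
runs = [
    {"accuracy": 0.92, "latency_ms": 140},
    {"accuracy": 0.89, "latency_ms": 35},
    {"accuracy": 0.94, "latency_ms": 260},
    {"accuracy": 0.90, "latency_ms": 60},
]

def meets_thresholds(run, min_accuracy=0.88, max_latency_ms=100):
    return run["accuracy"] >= min_accuracy and run["latency_ms"] <= max_latency_ms

feasible = [r for r in runs if meets_thresholds(r)]
best = max(feasible, key=lambda r: r["accuracy"])
# The most accurate run overall (0.94) is excluded for being too slow.
```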
At SigOpt, we provide a reliable, scalable service for enterprises and academics. In order to provide uptime that meets the demanding needs of our customers, we use AWS’ “Classic” Elastic Load Balancer (ELB) to distribute incoming requests among several AWS instances.
Highlight: Bayesian Optimization for Antireflective, Superomniphobic Glass
We collaborate with the University of Pittsburgh on fabricating nanostructured glass with ultrahigh transmittance, ultralow haze, and superomniphobicity.
Highlight: SigOpt Collaborates with the University of Pittsburgh
We are excited to release a new SigOpt feature into alpha testing: Training Monitor. This feature was designed to better empower our neural network developers.
Congratulations, Roman Garnett, on your NSF CAREER Award
Machine learning infrastructure tools can help bridge the gap between the modeler and the cluster by inserting an abstraction layer between the model builder and any infrastructure tools used to communicate with the cluster.
We are pleased to announce the addition of Projects, a feature that allows you and your team to organize experiments for easier access, viewing, and sharing.
Integration of in vitro and in silico Models Using Bayesian Optimization With an Application to Stochastic Modeling of Mesenchymal 3D Cell Migration
SigOpt, Inc. (“SigOpt”), a leading provider of solutions that maximize the performance of machine learning, deep learning and simulation models, announced a strategic partnership with Two Sigma, a leading systematic investment manager.
As part of our mission to make hyperparameter optimization accessible to modelers everywhere, we want to share our naming choices with other developers of HPO APIs.
In this latest Academic Interview, Brady Neal from Mila (the Quebec Artificial Intelligence Institute) discusses the paper “A Modern Take on the Bias-Variance Tradeoff in Neural Networks” and its implications for deep learning practitioners.
The NeurIPS conference is held in Montreal this year, where leaders in the community will present innovative research over the course of tutorials, main conference proceedings and workshops.
Highlight: Bayesian Optimization of High Transparency, Low Haze, and High Oil Contact Angle Rigid and Flexible Optoelectronic Substrates
Our research team at SigOpt has been very fortunate to be able to collaborate with outstanding researchers around the world, including through our internship program.
Highlight: A Nonstationary Designer Space-Time Kernel
This post introduces the article A Nonstationary Designer Space-Time Kernel by Michael McCourt, Gregory Fasshauer, and David Kozak, appearing at the upcoming NeurIPS 2018 spatiotemporal modeling workshop.
Highlight: Automating Bayesian Optimization with Bayesian Optimization
This post introduces the article Automating Bayesian Optimization with Bayesian Optimization by Gustavo Malkomes and Roman Garnett, appearing in the upcoming NeurIPS 2018 proceedings.
Highlight: Efficient Nonmyopic Batch Active Search
This post introduces the article Efficient Nonmyopic Batch Active Search by Shali Jiang, Gustavo Malkomes, Matthew Abbott, Benjamin Moseley and Roman Garnett, appearing in the upcoming NeurIPS 2018 proceedings.
This post is an interview with the DeepDIVA team from the University of Fribourg to learn more about their framework and what it means for the broader community.
Advanced Optimization Techniques, All Model Types
Multimetric Updates in the Experiment Insights Dashboard
Competitions like those held at the International Conference on Machine Learning (ICML) help explore the complexities of developing effective ML pipelines under severe time restrictions and without expert intuition regarding the desired or expected output.
The research team at SigOpt works to provide the best Bayesian optimization platform for our customers. In our spare time, we also engage in research projects in a variety of other fields.
SigOpt 101, Simulations & Backtests
SigOpt Enters Strategic Investment Agreement with In-Q-Tel
This is the first edition of our new quarterly newsletter. In these updates, we will discuss newly released features, showcase content we have produced, published, or been cited in, and share interesting machine learning research that our research team has found. We hope you find these valuable and informative!
Today we announce the availability of SigOpt on AWS Marketplace. Now, with a single click, data scientists and researchers can access SigOpt’s powerful optimization-as-a-service platform designed to automatically fine-tune their machine learning (ML) and artificial intelligence (AI) workloads.
We have revisited one of the traditional acquisition functions of Bayesian optimization, the upper confidence bound (UCB), and looked to a slightly different interpretation to better inform its optimization. In our article, we present a number of examples of this clustering-guided UCB method being applied to various optimization problems.
All Model Types, Machine Learning, Training & Tuning
In this blog post, we are going to show solutions to some of the most common problems we’ve seen people run into when implementing hyperparameter optimization.
In this post, we will show you how Bayesian optimization was able to dramatically improve the performance of a reinforcement learning algorithm in an AI challenge.
Advanced Optimization Techniques, All Model Types
Building a Better Mousetrap via Multicriteria Bayesian Optimization
In this post, we want to analyze a more complex situation in which the parameters of a given model produce a random output, and our multicriteria problem involves maximizing the mean while minimizing the variance of that random variable.
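A minimal sketch of the setup described above, assuming a made-up stochastic system where raising a scale parameter raises both the mean and the noise: estimate each setting's mean and variance from repeated evaluations, then keep only the settings no other setting dominates (higher mean and lower variance).

```python
# Mean-vs-variance multicriteria analysis of a noisy system.
import random
from statistics import mean, variance

def noisy_output(scale, rng):
    # Hypothetical system: larger scale raises the mean but also the noise.
    return scale + rng.gauss(0, scale)

rng = random.Random(1)
summaries = []
for scale in (0.5, 1.0, 2.0):
    samples = [noisy_output(scale, rng) for _ in range(200)]
    summaries.append({"scale": scale, "mean": mean(samples), "var": variance(samples)})

def dominated(a, b):
    """True if b is at least as good as a on both criteria and better on one."""
    ge = b["mean"] >= a["mean"] and b["var"] <= a["var"]
    gt = b["mean"] > a["mean"] or b["var"] < a["var"]
    return ge and gt

pareto = [a for a in summaries if not any(dominated(a, b) for b in summaries)]
```

In this toy every setting trades mean against variance, so all three survive the Pareto filter; in practice the filter prunes dominated configurations and leaves the genuine tradeoff curve.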
All Model Types, Augmented ML Workflow
Solving Common Issues in Distributed Hyperparameter Optimization
SigOpt’s API for hyperparameter optimization leaves us well-positioned to build exciting features for anyone who wants to perform Bayesian hyperparameter optimization in parallel.
In this post we will discuss the topic of multicriteria optimization, when you need to optimize a model for more than a single metric, and how to use SigOpt to solve these problems.
Machine Learning, Training & Tuning
Bayesian Optimization for Collaborative Filtering with MLlib
In this post we will show how to tune an MLlib collaborative filtering pipeline using Bayesian optimization via SigOpt. Code examples from this post can be found on our GitHub repo.
Today is a big day at SigOpt. Since the seed round we secured last year, we’ve continued to build toward our mission to ‘optimize everything,’ and are now helping dozens of companies amplify their research and development and drive business results with our cloud Bayesian optimization platform.
In this post, we review the history of some of the tools implemented within SigOpt, and then we discuss the original solution to this black box optimization problem, known as full factorial experiments or grid search. Finally, we compare both naive and intelligent grid-based strategies to the latest advances in Bayesian optimization, delivered via the SigOpt platform.
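The full factorial approach described above is easy to sketch: evaluate every combination in the cross-product of the per-parameter grids and keep the best. The objective and grids here are invented stand-ins for a real training metric:

```python
# Full factorial experiment ("grid search") over two hyperparameters.
from itertools import product

def objective(lr, depth):
    return -(lr - 0.1) ** 2 - (depth - 4) ** 2  # best at lr=0.1, depth=4

lr_grid = [0.001, 0.01, 0.1, 1.0]
depth_grid = [2, 4, 6, 8]

results = [((lr, d), objective(lr, d)) for lr, d in product(lr_grid, depth_grid)]
(best_lr, best_depth), best_score = max(results, key=lambda r: r[1])
# 4 x 4 = 16 evaluations; the cost grows exponentially with each added
# parameter, which is what motivates the Bayesian alternative.
```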
Advanced Optimization Techniques, Deep Learning
Deep Neural Network Optimization with SigOpt and Nervana Cloud
In this post, we will detail how to use SigOpt with the Nervana Cloud and show results on how SigOpt and Nervana are able to reproduce, and beat, the state of the art performance in two papers.
Applied AI Insights, Simulations & Backtests
Winning on Wall Street: Tuning Trading Models with Bayesian Optimization
SigOpt helps customers build better machine learning and financial models by giving users a path to efficiently maximize the key metrics that define their success. In this post we demonstrate the relevance of model tuning on a basic prediction strategy for investing in bond futures.
This blog post provides solutions for comparing different optimization strategies for any optimization problem, using hyperparameter tuning as the motivating example.
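One simple, commonly used comparison device consistent with the post above is the best-seen-so-far curve: run each strategy under the same evaluation budget and track the best value found after each evaluation. Everything here (the objective and both "strategies") is an invented stand-in:

```python
# Compare two optimization strategies by their best-seen-so-far curves.
import random

def best_so_far(values):
    out, best = [], float("-inf")
    for v in values:
        best = max(best, v)
        out.append(best)
    return out

def objective(x):
    return -(x - 0.7) ** 2  # maximum value 0.0 at x = 0.7

rng = random.Random(0)
random_search = [objective(rng.random()) for _ in range(30)]
coarse_grid = [objective(i / 29) for i in range(30)]

curve_random = best_so_far(random_search)
curve_grid = best_so_far(coarse_grid)
# Each curve is nondecreasing; the final entries (best value found within
# the budget) are the headline comparison, ideally averaged over many runs.
```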
In this post on integrating SigOpt with machine learning frameworks, we will show you how to use SigOpt and TensorFlow to efficiently search for an optimal configuration of a convolutional neural network (CNN).
Advanced Optimization Techniques, Deep Learning
Unsupervised Learning with Even Less Supervision Using Bayesian Optimization
In this post on integrating SigOpt with machine learning frameworks, we will show you how to use SigOpt and XGBoost to efficiently optimize an unsupervised learning algorithm’s hyperparameters to increase performance on a classification task.
SigOpt allows experts to build the next great model and apply their domain expertise instead of searching in the dark for the best experiment to run next. With SigOpt you can conquer this tedious, but necessary, element of development and unleash your experts on designing better products with less trial and error.
In this first post on integrating SigOpt with machine learning frameworks, we’ll show you how to use SigOpt and scikit-learn to train and tune a model for text sentiment classification in under 50 lines of Python.
SigOpt lets customers attach uncertainty to their observations; using this knowledge, we can balance observed results against their variance to make predictions and identify the true behavior behind the uncertainty.
Gaussian processes are powerful because they allow you to exploit previous observations about a system to make informed, and provably optimal, predictions about unobserved behavior. They do this by defining an expected relationship between all possible situations; this relationship is called the covariance and is the topic of this post.
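The covariance idea can be made concrete with the common squared-exponential (RBF) kernel; this is a generic sketch, not code from the post. Nearby inputs get covariance near 1, encoding the belief that they behave similarly, while distant inputs get covariance near 0:

```python
# Build a covariance matrix over a few inputs with an RBF kernel.
import math

def rbf(x1, x2, length_scale=1.0):
    """Squared-exponential covariance between two scalar inputs."""
    return math.exp(-((x1 - x2) ** 2) / (2 * length_scale ** 2))

xs = [0.0, 0.5, 3.0]
K = [[rbf(a, b) for b in xs] for a in xs]
# Diagonal entries are 1.0 (every point fully covaries with itself);
# K[0][1] (nearby points) is much larger than K[0][2] (distant points).
```

The length scale controls how quickly that expected similarity decays with distance, which is exactly the kind of modeling choice the post discusses.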
Rescale gives users the dynamic computational resources to run their simulations and SigOpt provides tools to optimize them. By using Rescale’s platform and SigOpt’s tuning, efficient cloud simulation is easier than ever.
At SigOpt, we can help you raise this metric automatically and optimally whether you are optimizing an A/B test, machine learning system, or physical experiment.