In Case You Missed It: Training, Tuning, and Metric Strategy. If your business focuses on systematic trading, we’d like to share how you can most effectively re-train, adjust, and tune your deep learning models as you adapt to fluid market conditions.
Deep Learning, Modeling Best Practices, Multimetric Optimization, Natural Language
Researchers Xavier Bouthillier of Mila and Gaël Varoquaux of Inria surveyed the use of model experimentation methods across NeurIPS 2019 and ICLR 2020, two of the most prestigious international academic ML conferences.
In Case You Missed It: Efficient Training and Tuning for Deep Learning Models. Although much of the world is working from home (where and if possible) due to the COVID-19 pandemic, the markets are still online.
Advanced Optimization Techniques, Applied AI Insights, Company news, Deep Learning, Focus Area, Machine Learning, Modeling Best Practices
ICYMI Recap: Modeling in Place—a panel discussion with Alectio, NVIDIA, Pandora, and Yelp
In today’s challenging sheltered environment, work from home is the new norm for many data scientists and engineers building and tuning models.
Advanced Optimization Techniques, Company news, Focus Area, Multimetric Optimization
In Case You Missed It: Recap for Tuning for Systematic Trading. Talk 1: Intuition behind Bayesian optimization with and without multiple metrics. Although much of the world is working from home (where and if possible) due to the COVID-19 pandemic, the markets are still—mostly—online.
Active Differential Inference for Screening Hearing Loss
Today, we are excited to announce the general availability of Metric Thresholds, a new feature that supercharges the performance of Multimetric optimization. Metric Thresholds helps you discover better models that meet your problem-specific needs by allowing you to define “thresholds” on your metrics.
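The idea behind metric thresholds can be sketched in a few lines: among candidate configurations, discard any that violate a threshold on one metric, then optimize the remaining metric. This is a minimal illustration of the concept only, not SigOpt’s API; the metric names and values below are hypothetical.

```python
# Illustrative sketch of metric thresholds in multimetric optimization.
# Each candidate model reports two metrics (values are hypothetical).
candidates = [
    {"accuracy": 0.91, "latency_ms": 120.0},
    {"accuracy": 0.89, "latency_ms": 45.0},
    {"accuracy": 0.93, "latency_ms": 300.0},
    {"accuracy": 0.88, "latency_ms": 30.0},
]

def best_within_threshold(candidates, max_latency_ms):
    """Keep only candidates meeting the latency threshold,
    then return the one with the highest accuracy."""
    feasible = [c for c in candidates if c["latency_ms"] <= max_latency_ms]
    return max(feasible, key=lambda c: c["accuracy"]) if feasible else None

best = best_within_threshold(candidates, max_latency_ms=150.0)
print(best)  # the 0.91-accuracy model: most accurate under 150 ms
```

The threshold turns a two-metric trade-off into a constrained search: the 0.93-accuracy model is excluded because it misses the latency requirement.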
At SigOpt, we provide a reliable, scalable service for enterprises and academics. In order to provide uptime that meets the demanding needs of our customers, we use AWS’ “Classic” Elastic Load Balancer (ELB) to distribute incoming requests among several AWS instances.
Highlight: Bayesian Optimization for Antireflective, Superomniphobic Glass
Machine learning infrastructure tools can help bridge the gap between the modeler and the cluster by inserting an abstraction layer between the model builder and any infrastructure tools used to communicate with the cluster.
SigOpt, Inc. (“SigOpt”), a leading provider of solutions that maximize the performance of machine learning, deep learning and simulation models, announced a strategic partnership with Two Sigma, a leading systematic investment manager.
In this latest Academic Interview, Brady Neal from Mila (Quebec Artificial Intelligence Institute) discusses the paper, “A Modern Take on the Bias-Variance Tradeoff in Neural Networks,” and its implications for deep learning practitioners.
This post introduces the article A Nonstationary Designer Space-Time Kernel by Michael McCourt, Gregory Fasshauer, and David Kozak, appearing at the upcoming NeurIPS 2018 spatiotemporal modeling workshop.
Highlight: Automating Bayesian Optimization with Bayesian Optimization
This post introduces the article Efficient Nonmyopic Batch Active Search by Shali Jiang, Gustavo Malkomes, Matthew Abbott, Benjamin Moseley and Roman Garnett, appearing in the upcoming NeurIPS 2018 proceedings.
Competitions like those hosted at the International Conference on Machine Learning (ICML) help explore the complexities of developing effective ML pipelines under severe time restrictions and without expert intuition regarding the desired or expected output.
This is the first edition of our new quarterly newsletter. In these updates, we will discuss newly released features, showcase content we have produced, published, or been cited in, and share interesting machine learning research that our research team has found. We hope you find these valuable and informative!
Today we announce the availability of SigOpt on AWS Marketplace. Now, with a single click, data scientists and researchers can access SigOpt’s powerful optimization-as-a-service platform designed to automatically fine-tune their machine learning (ML) and artificial intelligence (AI) workloads.
We have revisited one of the traditional acquisition functions of Bayesian optimization, the upper confidence bound (UCB), and looked to a slightly different interpretation to better inform its optimization. In our article, we present a number of examples of this clustering-guided UCB method applied to various optimization problems.
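For readers unfamiliar with the acquisition function mentioned above, the standard (non-clustering) UCB scores a candidate point by its predicted mean plus a multiple of its predictive uncertainty, trading off exploitation against exploration. The posterior values below are invented for illustration; this is not the clustering-guided variant from the article.

```python
def ucb(mean, std, kappa=2.0):
    """Upper confidence bound acquisition: favor points with a high
    predicted mean (exploitation) or high uncertainty (exploration)."""
    return mean + kappa * std

# Hypothetical posterior (mean, std) at three candidate points.
posterior = [(0.5, 0.05), (0.4, 0.30), (0.55, 0.01)]
scores = [ucb(m, s) for m, s in posterior]
best_idx = max(range(len(scores)), key=scores.__getitem__)
# The second point wins despite the lowest mean, because its large
# uncertainty makes it the most promising point to explore.
```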
All Model Types, Machine Learning, Training & Tuning
In this post, we want to analyze a more complex situation in which the parameters of a given model produce a random output, and our multicriteria problem involves maximizing the mean while minimizing the variance of that random variable.
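The multicriteria setup described above can be made concrete with a small sketch: each parameter configuration yields repeated random outputs, we summarize them by mean and variance, and one configuration dominates another when it is no worse on both criteria and strictly better on at least one. The sample values are hypothetical.

```python
import statistics

def summarize(samples):
    """Summarize a random output by its mean (to be maximized)
    and its sample variance (to be minimized)."""
    return statistics.mean(samples), statistics.variance(samples)

def dominates(a, b):
    """a, b are (mean, variance) pairs. a dominates b if it is at least
    as good on both criteria and strictly better on one."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

# Two hypothetical configurations with equal means but different noise.
cfg_a = summarize([1.0, 1.1, 0.9, 1.0])  # low variance
cfg_b = summarize([1.0, 2.0, 0.0, 1.0])  # same mean, high variance
```

Here `cfg_a` dominates `cfg_b`: identical expected performance, but far less risk, which is exactly the trade-off the multicriteria formulation captures.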
All Model Types, Augmented ML Workflow
Solving Common Issues in Distributed Hyperparameter Optimization
Today is a big day at SigOpt. Since the seed round we secured last year, we’ve continued to build toward our mission to ‘optimize everything,’ and are now helping dozens of companies amplify their research and development and drive business results with our cloud Bayesian optimization platform.
In this post, we review the history of some of the tools implemented within SigOpt, and then we discuss the original solution to this black box optimization problem, known as full factorial experiments or grid search. Finally, we compare both naive and intelligent grid-based strategies to the latest advances in Bayesian optimization, delivered via the SigOpt platform.
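A full factorial experiment, as discussed above, simply enumerates every combination of discretized parameter values. The toy objective below is a stand-in for a real model’s metric; it is only meant to show how grid cost multiplies across dimensions.

```python
import itertools

def objective(lr, depth):
    """Hypothetical stand-in for a model's validation metric,
    peaked at lr = 0.1, depth = 5."""
    return -(lr - 0.1) ** 2 - 0.01 * (depth - 5) ** 2

# Full factorial grid: every combination of the discretized values.
grid = {"lr": [0.01, 0.1, 1.0], "depth": [3, 5, 7]}
combos = list(itertools.product(grid["lr"], grid["depth"]))
best = max(combos, key=lambda c: objective(*c))
# 3 values x 3 values = 9 evaluations; adding a third parameter with
# 3 values would triple that, which is why adaptive Bayesian strategies
# scale better than naive grids as dimensionality grows.
```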
Advanced Optimization Techniques, Deep Learning
Deep Neural Network Optimization with SigOpt and Nervana Cloud
SigOpt helps customers build better machine learning and financial models by giving users a path to efficiently maximize the key metrics that define their success. In this post we demonstrate the relevance of model tuning on a basic prediction strategy for investing in bond futures.
In this post on integrating SigOpt with machine learning frameworks, we will show you how to use SigOpt and TensorFlow to efficiently search for an optimal configuration of a convolutional neural network (CNN).
Advanced Optimization Techniques, Deep Learning
Unsupervised Learning with Even Less Supervision Using Bayesian Optimization
In this post on integrating SigOpt with machine learning frameworks, we will show you how to use SigOpt and XGBoost to efficiently optimize an unsupervised learning algorithm’s hyperparameters to increase performance on a classification task.
SigOpt allows experts to build the next great model and apply their domain expertise instead of searching in the dark for the best experiment to run next. With SigOpt you can conquer this tedious, but necessary, element of development and unleash your experts on designing better products with less trial and error.
In this first post on integrating SigOpt with machine learning frameworks, we’ll show you how to use SigOpt and scikit-learn to train and tune a model for text sentiment classification in under 50 lines of Python.
SigOpt allows customers to report uncertainty alongside their observations; using this knowledge, we can balance observed results against their variance to make predictions and identify the true behavior behind the uncertainty.
Gaussian processes are powerful because they allow you to exploit previous observations about a system to make informed, and provably optimal, predictions about unobserved behavior. They do this by defining an expected relationship between all possible situations; this relationship is called the covariance and is the topic of this post.
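The covariance relationship described above is typically encoded by a kernel function. A common choice (one illustrative option, not necessarily the one used in the post) is the squared-exponential kernel, under which nearby inputs are strongly correlated and distant inputs are nearly independent:

```python
import math

def rbf_kernel(x1, x2, length_scale=1.0):
    """Squared-exponential (RBF) covariance between two 1-D inputs:
    close inputs -> covariance near 1, far inputs -> covariance near 0."""
    return math.exp(-((x1 - x2) ** 2) / (2 * length_scale ** 2))

# Covariance matrix over a few 1-D points.
xs = [0.0, 0.5, 3.0]
K = [[rbf_kernel(a, b) for b in xs] for a in xs]
# K[0][1] is large (0.0 and 0.5 are close); K[0][2] is near zero
# (0.0 and 3.0 are far), so observations there barely inform each other.
```

The `length_scale` parameter controls how quickly this correlation decays, and tuning it is itself part of fitting a Gaussian process to data.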
Rescale gives users the dynamic computational resources to run their simulations and SigOpt provides tools to optimize them. By using Rescale’s platform and SigOpt’s tuning, efficient cloud simulation is easier than ever.