Augmented ML Workflow, Bayesian Optimization, Experiment Management, Intelligent Experimentation
Introducing our Core and AI modules
First of all, thank you for using SigOpt. We have had many exciting opportunities arise since our November 2020 acquisition by Intel, and we look forward to serving both users who are new to the platform and those who have been with us since we started this journey many years ago.
Earlier this year, SigOpt launched the Experiment Exchange podcast series as a platform for scientists, researchers, engineers and developers to share lessons from how they designed experiments to develop models that solve some of the most pressing real-world problems.
Artificial Intelligence, Deep Learning, Energy Industry, Hyperparameter Optimization, LSTM, Prediction, Predictive Maintenance
How Accenture Minimizes Downtime with Predictive Maintenance Models: Experiment Exchange Episode 8
Maintaining oil and gas machinery is expensive, but predictive models can help engineers minimize repairs and downtime.
Bayesian Optimization, Machine Learning, Materials Science, Simulations & Backtests
How Paul Leu is Reinventing Glass with Machine Learning: Experiment Exchange Episode 7
How do you design a better glass? In this week’s episode of Experiment Exchange, “How Paul Leu is Reinventing Glass with Advanced Machine Learning,” join Michael McCourt as he interviews Paul Leu, an Associate Professor of Industrial Engineering at the University of Pittsburgh.
Artificial Intelligence, Chemistry, Graph Neural Networks, Hyperparameter Optimization, Intelligent Experimentation, Knowledge Graphs, Materials Science, Natural Language, Research, Simulations & Backtests, Transformers, Vision
How MIT Explores New Materials with Inverse Design ML: Experiment Exchange Episode 6
Given a property, what’s the material or the molecule that achieves it? In this week’s episode of Experiment Exchange, “How Rafael Gomez-Bombarelli Explores New Materials with Inverse Design ML,” join Michael McCourt as he interviews Rafael Gomez-Bombarelli, an Assistant Professor of Materials Processing at MIT whose work is focused on the development of machine learning […].
Active Search, Aluminum Design, Graph Neural Networks, Hyperparameter Optimization, Materials Science, Multimetric Optimization, Physics-Based Models, Research, Simulations & Backtests
How Novelis Applies Cutting-Edge Methodologies to Optimize Aluminum Design: Experiment Exchange Episode 5
Aluminum design is an incredibly complicated business.
Advanced Optimization Techniques, AI at Scale, Applied AI Insights, CNN, Experiment Management, Intelligent Experimentation, Machine Learning, Neuroscience, Research
Numenta Uses Intelligent Experimentation for Neuroscience
SigOpt hosted our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021.
Advanced Optimization Techniques, Application, Artificial Intelligence, Bayesian Optimization, Deep Learning, Hyperparameter Optimization
ICYMI – SigOpt Summit Recap: Accelerating Model Training with Habana® Gaudi® Processors and SigOpt Hyperparameter Optimization
In this blog post, we review the SigOpt Summit presentation by Habana on how they used SigOpt to improve their “home-grown” optimizer when training a model for an MLPerf submission.
Company news, Gradient Boosting, Hyperparameter Optimization, Intelligent Experimentation, SigOpt Company News
Live Now! New XGBoost Integration
SigOpt’s XGBoost Integration is now live! This is an enhanced SigOpt API dedicated to making the hyperparameter optimization experience for XGBoost users more streamlined, easier to use, and suited to your needs.
Advanced Optimization Techniques, Applied AI Insights, Convolutional Neural Networks, Graph Neural Networks, Hyperparameter Optimization, Machine Learning, Training & Tuning
Representation Learning for Materials Science with MIT
SigOpt hosted our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021.
AI at Scale, Application, Applied AI Insights, Artificial Intelligence, Augmented ML Workflow, Hyperparameter Optimization, Intelligent Experimentation, Model Type, RNN, Time Series, Training & Tuning
SigOpt Recap with Anastasia AI: Democratizing Time Series Forecasting for Any Industry
In this blog post, we review the SigOpt Summit presentation by Anastasia on how they are democratizing AI for their customers.
Application, Experiment Management, Hyperparameter Optimization, Industry, Intelligent Experimentation, Model Type
SigOpt Recap with Accenture – A Novel Framework for Monitoring the Health of Production-Critical Machinery
In this blog post, we review the webinar Accenture presented at the SigOpt Summit.
AI at Scale, Artificial Intelligence, CNN, Convolutional Neural Networks, Hyperparameter Optimization, Intelligent Experimentation, Machine Learning, Multimetric Optimization, Neuroscience, Research
Optimizing Efficiency with Sparse Neural Networks
SigOpt hosted our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021.
Application, Hyperparameter Optimization, Industry, Intelligent Experimentation, Model Type, Natural Language
How to Achieve 50% Faster (and Better) Contact Center Bots with MindTree
In this blog post, we present a novel approach to fine-tuning natural language processing (NLP) models which ultimately leads to 50% faster chatbots! We also introduce the SigOpt Intelligent Experimentation (IE) Platform and discuss some of the examples that we have used to familiarize ourselves with […].
Hugging Face is on a mission to democratize state-of-the-art Machine Learning, and a critical part of their work is to make these state-of-the-art models as efficient as possible, to use less energy and memory at scale, and to be more affordable to run by companies of all sizes.
Artificial Intelligence, Classification, Deep Learning, Hyperparameter Optimization, Machine Learning, ResNet, Supervised, Training & Tuning
OpenVino Quantization with SigOpt
Deep Learning for classification tasks involves training the parameters of a neural network to identify a variety of object classes.
AI at Scale, Artificial Intelligence, Hyperparameter Optimization, Intelligent Experimentation, Machine Learning, Model Type, Research
SigOpt Summit Keynote Recap: Boost AI Experimentation to Design, Explore, and Optimize Your Model
Artificial intelligence is beginning to provide value in a wide variety of business use cases, but successfully training and deploying a machine learning model is an experimental process that is tough to get right.
AI at Scale, Artificial Intelligence, Clustering, Experiment Management, Hyperparameter Optimization, Intelligent Experimentation, Machine Learning, Simulations & Backtests
Intel HPC Workloads use Intelligent Experimentation for Double Digit Improvement
The world’s most important scientific discoveries depend on the ability to simulate real-world scenarios using computational resources.
Artificial Intelligence, Deep Learning, Experiment Management, Hyperparameter Optimization, Intelligent Experimentation, Machine Learning, Research, Time Series
Anastasia AI Relies on SigOpt to Make Artificial Intelligence Available for All
Anastasia AI relies on SigOpt to build a world-class time series forecasting platform.
Artificial Intelligence, Classification, Experiment Management, Hyperparameter Optimization, Intelligent Experimentation, Machine Learning, Model Type, Research
SigOpt Summit – How to intelligently explore results and decide on the best models to deploy
SigOpt is hosting our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021.
Advanced Optimization Techniques, Applied AI Insights, Artificial Intelligence, Augmented ML Workflow, Deep Learning, Experiment Management, Hyperparameter Optimization, Intelligent Experimentation, Machine Learning, Modeling Best Practices, Training & Tuning
ICYMI – Recap: Intelligent Experimentation Overview for Ai4 Webinar
In this discussion, Scott Clark, SigOpt Co-founder and General Manager, reviews the Intelligent Experimentation framework at the Ai4 Webinar.
AI at Scale, Artificial Intelligence, Bayesian Optimization, Classification, Deep Learning, Experiment Management, Graph Neural Networks, Hyperparameter Optimization, Intelligent Experimentation, Machine Learning, Natural Language, Prediction, Time Series, Vision
How do leading researchers ask the right questions during experiment design?
SigOpt is hosting our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021.
Advanced Optimization Techniques, Applied AI Insights, Experiment Management, Hyperparameter Optimization, SigOpt 101, SigOpt Company News
A Better Approach to Experimentation
Artificial intelligence is beginning to provide value in a wide variety of business use cases, but successfully training and deploying a machine learning model is an experimental process that is tough to get right.
Active Search, Advanced Optimization Techniques, AI at Scale, Artificial Intelligence, BERT, Biology, Classification, Clustering, CNN, Convolutional Neural Networks, Deep Learning, Experiment Management, Finance, Fraud Detection, Gradient Boosting, Graph Neural Networks, Healthcare, Human Activity Recognition, Hyperparameter Optimization, Intelligent Experimentation, Knowledge Graphs, LSTM, Machine Learning, Materials Science, Multimetric Optimization, Natural Language, Prediction, Recommendation System, Regression, RNN, Robotics, Segmentation, Simulations & Backtests, Speech, Supervised, Time Series, Topic Modeling, Transformers, Vision
SigOpt Summit 2021
Join the free and virtual SigOpt Summit at https://sigopt.com/summit
AI at Scale, Artificial Intelligence, Deep Learning, Experiment Management, Finance, Fraud Detection, Graph Neural Networks, Hyperparameter Optimization, Industry, Intelligent Experimentation, Machine Learning, Payments
Optimizing Graph Neural Networks at PayPal with SigOpt
PayPal uses SigOpt to scale hyperparameter optimization for graph neural networks.
After joining Intel, the SigOpt team has continued to work closely with groups both inside and outside of Intel in order to enable modelers everywhere to accelerate and amplify their impact with the SigOpt intelligent experimentation platform.
Advanced Optimization Techniques, Hyperparameter Optimization, Research
Cost matters: on the importance of cost-aware hyperparameter optimization
We are excited to present recent research that we published in the proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI) 2021, “A nonmyopic approach to cost-constrained Bayesian optimization.”
Active Search, Advanced Optimization Techniques, Materials Science, Multimetric Optimization
Highlight: Beyond the Pareto Efficient Frontier: Constraint Active Search for Multiobjective Experimental Design, ICML 2021
We are excited to present our most recent work, “Beyond the Pareto Efficient Frontier: Constraint Active Search for Multiobjective Experimental Design,” at the upcoming International Conference on Machine Learning (ICML) 2021.
Hyperparameter Optimization, Modeling Best Practices, SigOpt 101, Training & Tuning
Bayesian Optimization 101
This blog post goes over why you should use Bayesian optimization in your modeling process, the basics of Bayesian optimization, and how to effectively leverage it for your modeling problems.
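The loop behind Bayesian optimization can be sketched end to end in a few dozen lines. The following is a minimal, self-contained illustration (a tiny Gaussian-process surrogate with an RBF kernel plus an upper-confidence-bound acquisition maximized over a grid); it is not SigOpt's implementation or API, and the toy objective, lengthscale, and grid resolution are all hypothetical choices for the example.

```python
import math

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel: correlation decays with distance.
    return math.exp(-((a - b) ** 2) / (2 * ls * ls))

def solve(A, b):
    # Gaussian elimination with partial pivoting, for small dense systems.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq, noise=1e-6):
    # Standard GP equations: mu = k_q . K^-1 y, var = k(xq,xq) - k_q . K^-1 k_q.
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    kq = [rbf(x, xq) for x in xs]
    mu = sum(k * a for k, a in zip(kq, alpha))
    v = solve(K, kq)
    var = max(rbf(xq, xq) - sum(k * vi for k, vi in zip(kq, v)), 0.0)
    return mu, var

def objective(x):
    # Hypothetical expensive black box to maximize (optimum at x = 0.7).
    return -(x - 0.7) ** 2

xs = [0.0, 0.5, 1.0]                 # initial design
ys = [objective(x) for x in xs]
grid = [i / 200 for i in range(201)]
for _ in range(10):                  # BO loop: pick the UCB maximizer, observe it
    def ucb(x):
        mu, var = gp_posterior(xs, ys, x)
        return mu + 2.0 * math.sqrt(var)
    xn = max(grid, key=ucb)
    xs.append(xn)
    ys.append(objective(xn))
best_y, best_x = max(zip(ys, xs))
```

The loop balances exploration (high posterior variance) against exploitation (high posterior mean), which is why it homes in on the optimum with far fewer evaluations than a grid of the same resolution would need.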
Advanced Optimization Techniques, Applied AI Insights, Deep Learning, Experiment Management, Hyperparameter Optimization, Machine Learning
A Better Approach to Metrics, Training, and Tuning with MLconf
SigOpt collaborated with MLconf on a webinar to discuss best practices for metrics, training, and hyperparameter optimization for developing high-performing models.
Advanced Optimization Techniques, Deep Learning, Experiment Management, Hyperparameter Optimization, Machine Learning
Dealing with Model Performance Drift in the Pandemic
Models that were trained on pre-pandemic datasets and fine-tuned with pre-pandemic intuition may no longer make relevant predictions. Learn how you can solve this challenge of model drift.
Advanced Optimization Techniques, Application, Applied, Applied AI Insights, Augmented ML Workflow, Deep Learning, Focus Area, Model Type, Multimetric Optimization, Natural Language, Research
Efficient BERT with Multimetric Optimization, part 2
This is the second post in this series about distilling BERT with Multimetric Bayesian Optimization.
Advanced Optimization Techniques, Application, Applied, Applied AI Insights, Augmented ML Workflow, Deep Learning, Focus Area, Model Type, Natural Language, Research, Training & Tuning
Efficient BERT with Multimetric Optimization, part 1
This is the first post in this series about distilling BERT with Multimetric Bayesian Optimization.
Application, Augmented ML Workflow, Deep Learning, Experiment Management, Focus Area, Model Type, Modeling Best Practices, Natural Language, SigOpt 101
Why is Experiment Management Important for NLP?
I think we can all agree that modeling can often feel like a crapshoot (if you didn’t know before, surprise!).
Advanced Optimization Techniques, Application, Applied, Augmented ML Workflow, Convolutional Neural Networks, Deep Learning, Focus Area, Model Type, Natural Language, Research
Efficient BERT: An Overview
BERT is a strong and generalizable architecture that can be transferred for a variety of NLP tasks (for more on this see our previous post or Sebastian Ruder’s excellent analysis).
Augmented ML Workflow, Experiment Management, Modeling Best Practices, Training & Tuning
Modeling Managed
SigOpt’s Experiment Management solution helps you track, filter, and reproduce your entire modeling workflow, with interactive visuals that help you augment your intuition, understand your progress, and explain your results.
Augmented ML Workflow, Deep Learning, Machine Learning, Modeling Best Practices, Research, Training & Tuning
The Growing Abundance of Optimization in Peer-Reviewed Research
Researchers Xavier Bouthillier of Inria and Gaël Varoquaux of Mila surveyed the use of model experimentation methods across NeurIPS 2019 and ICLR 2020, two of the most prestigious international academic ML conferences.
Advanced Optimization Techniques, All Model Types, Augmented ML Workflow, Company news, Focus Area, Multimetric Optimization, Training & Tuning
Metric Constraints help you closely align your models with your business objectives
Defining just the right metrics to assess a model in the context of your business is no easy feat. Metric Constraints helps you establish guardrails for your optimization and experimentation loops.
Advanced Optimization Techniques, Convolutional Neural Networks, Data Augmentation, Deep Learning, Healthcare, Multimetric Optimization
Parametrizing Data Augmentation in COVID-Net
In this blog post, we discuss our recent collaboration to improve the performance of a COVID-19 X-ray classification tool.
Today, we are excited to announce the general availability of Metric Thresholds, a new feature that supercharges the performance of Multimetric optimization. Metric Thresholds helps you discover better models that meet your problem-specific needs by allowing you to define “thresholds” on your metrics.
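Conceptually, a metric threshold acts as a feasibility bar during multimetric search: configurations that miss any bar are ruled out before the best candidate is chosen. A hypothetical sketch of that idea (the data, metric names, and limits are invented for illustration; this is not SigOpt's API):

```python
# Hypothetical multimetric results: (name, accuracy, latency_ms) per configuration.
runs = [
    ("a", 0.91, 120.0),
    ("b", 0.88, 40.0),
    ("c", 0.95, 300.0),
    ("d", 0.93, 80.0),
]

# Threshold-style filtering: keep only configurations that meet both bars,
# then pick the best remaining one by the primary metric.
ACC_MIN, LATENCY_MAX = 0.90, 150.0
feasible = [r for r in runs if r[1] >= ACC_MIN and r[2] <= LATENCY_MAX]
best = max(feasible, key=lambda r: r[1])
```

Here configuration "c" has the highest raw accuracy but is excluded by the latency threshold, so "d" wins among the feasible set; that is the problem-specific trade-off thresholds are meant to encode.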
At SigOpt, we provide a reliable, scalable service for enterprises and academics. In order to provide uptime that meets the demanding needs of our customers, we use AWS’ “Classic” Elastic Load Balancer (ELB) to distribute incoming requests among several AWS instances.
SigOpt, Inc. (“SigOpt”), a leading provider of solutions that maximize the performance of machine learning, deep learning and simulation models, announced a strategic partnership with Two Sigma, a leading systematic investment manager.
Our research team at SigOpt has been very fortunate to be able to collaborate with outstanding researchers around the world, including through our internship program.
Advanced Optimization Techniques, Machine Learning
AutoML at ICML 2018
Competitions like those at the International Conference on Machine Learning (ICML) help explore the complexities of developing effective ML pipelines under severe time restrictions and without expert intuition regarding the desired or expected output.
All Model Types, SigOpt 101
SigOpt Fall Update
This is the first edition of our new quarterly newsletter. In these updates, we will discuss newly released features, showcase content we have produced, published, or been cited in, and share interesting machine learning research that our research team has found. We hope you find these valuable and informative!
All Model Types, Augmented ML Workflow
SigOpt is Now Available on AWS Marketplace
Today we announce the availability of SigOpt on AWS Marketplace. Now, with a single click, data scientists and researchers can access SigOpt’s powerful optimization-as-a-service platform designed to automatically fine-tune their machine learning (ML) and artificial intelligence (AI) workloads.
All Model Types, Modeling Best Practices
Clustering Applied to Acquisition Functions
We have revisited one of the traditional acquisition functions of Bayesian optimization, the upper confidence bound (UCB), and looked to a slightly different interpretation to better inform its optimization. In our article, we present a number of examples of this clustering-guided UCB method being applied to various optimization problems.
Advanced Optimization Techniques, All Model Types
Building a Better Mousetrap via Multicriteria Bayesian Optimization
In this post, we analyze a more complex situation in which the parameters of a given model produce a random output, and our multicriteria problem involves maximizing the mean while minimizing the variance of that random variable.
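One simple way to see the mean-versus-variance tension is to scalarize the two criteria and score candidate settings on repeated noisy evaluations. The toy black box, penalty weight, and candidate grid below are all hypothetical choices for illustration, not the method from the post:

```python
import random
import statistics

def noisy_output(setting, rng):
    # Hypothetical black box: the mean peaks at setting = 0.6,
    # but noise grows with the setting.
    mean = 1.0 - (setting - 0.6) ** 2
    spread = 0.05 + 0.5 * setting
    return rng.gauss(mean, spread)

def score(setting, lam=1.0, n=200, seed=0):
    # Scalarize the two criteria: reward a high sample mean,
    # penalize a high sample variance.
    rng = random.Random(seed)
    samples = [noisy_output(setting, rng) for _ in range(n)]
    return statistics.mean(samples) - lam * statistics.pvariance(samples)

candidates = [i / 10 for i in range(11)]
best = max(candidates, key=score)
```

With the variance penalty active, the chosen setting sits below the mean-maximizing point: giving up a little expected output buys a much steadier one, which is exactly the trade-off multicriteria optimization formalizes.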
All Model Types, SigOpt 101
We Raised $6.6 Million To Amplify Your Research
Today is a big day at SigOpt. Since the seed round we secured last year, we’ve continued to build toward our mission to ‘optimize everything,’ and are now helping dozens of companies amplify their research and development and drive business results with our cloud Bayesian optimization platform.
All Model Types, Training & Tuning
Breaking Free of the Grid
In this post, we review the history of some of the tools implemented within SigOpt, then discuss the original solution to this black-box optimization problem, known as full factorial experiments or grid search. Finally, we compare both naive and intelligent grid-based strategies to the latest advances in Bayesian optimization, delivered via the SigOpt platform.
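A full factorial experiment is easy to state in code, which also makes its cost obvious. The sketch below uses a hypothetical response surface in place of a real training run; the parameter names and grid values are invented for the example:

```python
from itertools import product

def validation_loss(lr, depth):
    # Hypothetical response surface standing in for a real training run;
    # the true optimum is at lr = 0.1, depth = 6.
    return (lr - 0.1) ** 2 + 0.01 * (depth - 6) ** 2

# Full factorial design: every combination of every value gets evaluated.
lr_grid = [0.001, 0.01, 0.1, 1.0]
depth_grid = [2, 4, 6, 8]
results = {(lr, d): validation_loss(lr, d)
           for lr, d in product(lr_grid, depth_grid)}
best_config = min(results, key=results.get)

# Cost grows multiplicatively: len(lr_grid) * len(depth_grid) evaluations here,
# and exponentially as more hyperparameters are added.
```

That multiplicative blow-up, and the fact that the grid only ever tests the values you guessed in advance, is the motivation for the sequential, model-based alternatives the post compares it against.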
Advanced Optimization Techniques, Deep Learning
Deep Neural Network Optimization with SigOpt and Nervana Cloud
In this post, we detail how to use SigOpt with the Nervana Cloud and show results on how SigOpt and Nervana are able to reproduce, and beat, state-of-the-art performance in two papers.
Applied AI Insights, Simulations & Backtests
Winning on Wall Street: Tuning Trading Models with Bayesian Optimization
SigOpt helps customers build better machine learning and financial models by giving users a path to efficiently maximize the key metrics that define their success. In this post, we demonstrate the relevance of model tuning on a basic prediction strategy for investing in bond futures.
Advanced Optimization Techniques, Deep Learning
TensorFlow ConvNets on a Budget with Bayesian Optimization
In this post on integrating SigOpt with machine learning frameworks, we will show you how to use SigOpt and TensorFlow to efficiently search for an optimal configuration of a convolutional neural network (CNN).
Advanced Optimization Techniques, Deep Learning
Unsupervised Learning with Even Less Supervision Using Bayesian Optimization
In this post on integrating SigOpt with machine learning frameworks, we will show you how to use SigOpt and XGBoost to efficiently optimize an unsupervised learning algorithm’s hyperparameters to increase performance on a classification task.
All Model Types, Modeling Best Practices
Intuition behind Gaussian Processes
SigOpt allows experts to build the next great model and apply their domain expertise instead of searching in the dark for the best experiment to run next. With SigOpt you can conquer this tedious, but necessary, element of development and unleash your experts on designing better products with less trial and error.
All Model Types, Modeling Best Practices
Approximation of Data
SigOpt gives customers the opportunity to define uncertainty for their observations; using this knowledge, we can balance observed results against their variance to make predictions and identify the true behavior behind the uncertainty.
All Model Types, Modeling Best Practices
Intuition Behind Covariance Kernels
Gaussian processes are powerful because they allow you to exploit previous observations about a system to make informed, and provably optimal, predictions about unobserved behavior. They do this by defining an expected relationship between all possible situations; this relationship is called the covariance and is the topic of this post.
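The "expected relationship between all possible situations" can be made concrete with the most common choice, the squared-exponential (RBF) kernel: evaluating it on every pair of inputs yields the covariance matrix a Gaussian process builds its predictions from. A minimal sketch, with an illustrative lengthscale and input set:

```python
import math

def rbf_kernel(x1, x2, lengthscale=1.0):
    # Squared-exponential covariance: nearby inputs are strongly correlated,
    # distant inputs nearly independent; equal inputs have covariance 1.
    return math.exp(-((x1 - x2) ** 2) / (2 * lengthscale ** 2))

points = [0.0, 0.5, 2.0]
cov = [[rbf_kernel(a, b) for b in points] for a in points]
```

The resulting matrix is symmetric with ones on the diagonal, and the entry for the pair (0.0, 0.5) is larger than for (0.0, 2.0), which is precisely the "closer inputs behave more alike" assumption the kernel encodes; the lengthscale controls how quickly that correlation decays.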
Applied AI Insights, Simulations & Backtests
Making a Better Airplane using SigOpt and Rescale
Rescale gives users the dynamic computational resources to run their simulations, and SigOpt provides tools to optimize them. By using Rescale’s platform and SigOpt’s tuning, efficient cloud simulation is easier than ever.