Research & Company Blog

Enhance Multi-Model Hardware-Aware Train-Free NAS with SigOpt
Intel® End-to-End AI Optimization Kit is a composable toolkit developed and open sourced by Intel to make the end-to-end AI pipeline faster, simpler, and more accessible, broadening AI access to everyone, everywhere.
SigOpt Is Now Open Source
Augmented ML Workflow, Bayesian Optimization, Experiment Management, Intelligent Experimentation
Introducing our Core and AI modules
First of all, thank you for using SigOpt; we have had many exciting opportunities arise since our November 2020 acquisition by Intel, and we are looking forward to serving both users who are new to the platform and those who have been with us since we started on this journey many years ago.
Experiment Exchange Season 1: Lessons for AI and HPC Modeling
Earlier this year, SigOpt launched the Experiment Exchange podcast series as a platform for scientists, researchers, engineers and developers to share lessons from how they designed experiments to develop models that solve some of the most pressing real-world problems.
Artificial Intelligence, Deep Learning, Energy Industry, Hyperparameter Optimization, LSTM, Prediction, Predictive Maintenance
How Accenture Minimizes Downtime with Predictive Maintenance Models: Experiment Exchange Episode 8
Maintaining oil and gas machinery is expensive—but predictive models can help engineers minimize repairs and downtime.
Bayesian Optimization, Machine Learning, Materials Science, Simulations & Backtests
How Paul Leu is Reinventing Glass with Machine Learning: Experiment Exchange Episode 7
How do you design a better glass? In this week’s episode of Experiment Exchange, “How Paul Leu is Reinventing Glass with Advanced Machine Learning,” join Michael McCourt as he interviews Paul Leu, an Associate Professor of Industrial Engineering at the University of Pittsburgh.
Artificial Intelligence, Chemistry, Graph Neural Networks, Hyperparameter Optimization, Intelligent Experimentation, Knowledge Graphs, Materials Science, Natural Language, Research, Simulations & Backtests, Transformers, Vision
How MIT Explores New Materials with Inverse Design ML: Experiment Exchange Episode 6
Given a property, what’s the material or the molecule that achieves it? In this week’s episode of Experiment Exchange, “How Rafael Gomez-Bombarelli Explores New Materials with Inverse Design ML,” join Michael McCourt as he interviews Rafael Gomez-Bombarelli, an Assistant Professor of Materials Processing at MIT whose work is focused on the development of machine learning […].
CNN, Deep Learning, Fraud Detection, Graph Neural Networks, Knowledge Graphs
How PayPal Uses Large Graph Neural Networks to Detect Fraud: Experiment Exchange Episode 4
PayPal Director of Data Science Discusses His Use of SigOpt to Scale GNNs
Using Prior Knowledge to Enhance Scikit-Learn Models
Use your modeling knowledge to help the SigOpt optimizer provide you with quality parameters faster than ever.
Modeling Best Practices, Regression, XGBoost
Using SigOpt’s XGBoost Integration for Regression
SigOpt’s XGBoost integration does more than provide a streamlined SigOpt API for your XGBoost modeling.
Classification, Fraud Detection, Modeling Best Practices, XGBoost
Using SigOpt’s XGBoost Integration for Fraud Classification
SigOpt’s XGBoost integration does more than provide a streamlined SigOpt API for your XGBoost modeling.
How Alexander Johansen is Pioneering the Role of ML within Health and Bio Science: Experiment Exchange Episode 3
Machine learning holds significant promise for fields like proteomics, therapeutics, and more—but blockers like access to datasets and issues of health privacy make progress complicated.
How Anastasia AI is Democratizing Access to AI for SMEs: Experiment Exchange Episode 2
Small to medium enterprises make up the majority of the companies in the world, yet they’re often underserved when it comes to accessing AI.
How Numenta Builds Neural Networks Inspired by Sparsity: Experiment Exchange Episode 1
Our brains only use about 30-40 watts of power, yet are more powerful than neural networks which take extensive amounts of energy to run.
Multisolution: A deeper dive
Last week we shared our latest changes and updates to our advanced feature, Multisolution.
Artificial Intelligence, Augmented ML Workflow, Company news, Experiment Management, Hyperparameter Optimization, Intelligent Experimentation, Research
Live Now! Our Updated Take on Multisolution
Here at SigOpt, your thoughts and problems matter to us.
Graph Neural Networks
Overview of Temporal Graph Neural Networks
The popularity of Graph Neural Networks (GNNs) has risen in recent years.
New Hyperopt Integration now Live!
Last week we touched briefly on what the Hyperopt framework is and some reasons you may choose to experiment with it as an optimizer.
What is Hyperopt?
Hyperopt is a black-box optimization framework where users specify a search space for an optimization problem, an objective function to optimize, and a database to store the optimization history. The framework provides several search algorithms.
Recommendation System
SigOpt Powers the Next Generation of Intel
SigOpt was acquired by Intel back in late 2020.
Machine Learning
VAEs Generate Novel Materials
MIT’s Rafael Gomez-Bombarelli works on cutting edge research to develop new materials.
Not All Experiments are Made Equal
Our research team at SigOpt has been working with many customers to improve how they define and optimize their experiments.
ICYMI – SigOpt Summit Recap: Democratizing End-to-End Recommendation Systems with Jian Zhang
In this blog post we are reviewing the SigOpt Summit presentation by Intel on how to democratize end-to-end recommendation systems.
Artificial Intelligence, CNN, Convolutional Neural Networks
How to Win a Kaggle Competition with Hyperparameter Optimization
In this blog post we highlight some of the key takeaways from David Austin’s presentation on how to supercharge a 1st place Kaggle solution to higher performance.
Deep Learning, Recommendation System
SigOpt Recap with Intel AI – Faster, Better Training for Recommendation Systems
SigOpt hosted our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021.
Graph Neural Networks
What is a Relational Graph Convolutional Network (RGCN)?
Like GraphSAGE, Relational Graph Convolutional Networks extend the notion of the Graph Convolution Network (GCN).
ICYMI – MLConf Webinar featuring SigOpt and Habana
Habana and SigOpt, both Intel companies, collaborated with MLConf to host a webinar showing how SigOpt was used to optimize a home-grown Habana optimizer.
Natural Language
ICYMI – SigOpt Summit Recap: Deep Learning for Proteomics and the Future of Medicine
In this blog post we highlight some of the key takeaways from Alexander Rosenberg’s presentation on deep learning for proteomics and the future of medicine at the SigOpt summit.
Graph Neural Networks
What is GraphSAGE?
According to the authors of GraphSAGE: “GraphSAGE is a framework for inductive representation learning on large graphs.”
Bayesian Optimization, Experiment Management, Materials Science, Modeling Best Practices
Experimental Design in Materials Science
SigOpt hosted our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021.
Easy Professional Upgrade – Get more out of your SigOpt experience
What’s new? SigOpt is introducing a new option to get more out of your modeling experience: a new professional plan you can upgrade to yourself.
Gradient Boosting, Hyperparameter Optimization, Intelligent Experimentation
Overview of XGBoost
We recently released the XGBoost Integration into SigOpt to make it that much easier to leverage Gradient Boosting on the SigOpt platform.
SigOpt Summit Recap with Novelis: Streamlining Materials Design with Intelligent Experimentation
In this blog post we are reviewing the SigOpt Summit presentation by Novelis on how they are streamlining materials design with intelligent experimentation.
Advanced Optimization Techniques, AI at Scale, Applied AI Insights, CNN, Experiment Management, Intelligent Experimentation, Machine Learning, Neuroscience, Research
Numenta Uses Intelligent Experimentation for Neuroscience
SigOpt hosted our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021.
Advanced Optimization Techniques, Application, Artificial Intelligence, Bayesian Optimization, Deep Learning, Hyperparameter Optimization
ICYMI – SigOpt Summit Recap: Accelerating Model Training with Habana® Gaudi® Processors and SigOpt Hyperparameter Optimization
In this blog post we are reviewing the SigOpt Summit presentation by Habana on how they used SigOpt to improve their “home-grown” optimizer when training a model for an MLPerf submission.
Company news, Gradient Boosting, Hyperparameter Optimization, Intelligent Experimentation, SigOpt Company News
Live Now! New XGBoost Integration
SigOpt’s XGBoost Integration is now live! This is an enhanced SigOpt API that is dedicated to making the hyperparameter optimization experience for XGBoost users more streamlined, easier to use, and suited to your needs.
Experiment Management, Graph Neural Networks, Machine Learning, Modeling Best Practices
Guide to Iteratively Tuning GNNs
This blog walks through a process for experimenting with hyperparameters, training algorithms and other parameters of Graph Neural Networks.
Advanced Optimization Techniques, Applied AI Insights, Convolutional Neural Networks, Graph Neural Networks, Hyperparameter Optimization, Machine Learning, Training & Tuning
Representation Learning for Materials Science with MIT
SigOpt hosted our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021.
Company news, SigOpt Company News
Live Now! New SigOpt Community Page
SigOpt’s Community Page is now live! This is an open space dedicated to connecting, discussing, and discovering the newest developments with SigOpt.
Experiment Management, Intelligent Experimentation
How does Design fit into Intelligent Experimentation?
SigOpt is an Intelligent Experimentation platform designed to accelerate and amplify the impact of modelers everywhere.
AI at Scale, Application, Applied AI Insights, Artificial Intelligence, Augmented ML Workflow, Hyperparameter Optimization, Intelligent Experimentation, Model Type, RNN, Time Series, Training & Tuning
SigOpt Recap with Anastasia AI: Democratizing Time Series Forecasting for Any Industry
In this blog post we are reviewing the SigOpt Summit presentation by Anastasia on how they are democratizing AI for their customers.
Graph Neural Networks
What is the Open Graph Benchmark (OGB)?
According to the OGB website, the Open Graph Benchmark is a “collection of realistic, large-scale, and diverse benchmark datasets for ML on graphs.”
Application, Experiment Management, Hyperparameter Optimization, Industry, Intelligent Experimentation, Model Type
SigOpt Recap with Accenture – A Novel Framework for Monitoring the Health of Production Critical Machinery
In this blog post we are reviewing the webinar Accenture presented at the SigOpt Summit.
Materials Science, Multimetric Optimization, Research
Reduce Glass Display Glare with Multimetric Optimization
SigOpt hosted our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021.
AI at Scale, Hyperparameter Optimization, Intelligent Experimentation
Intel Neural Compressor Quantization with SigOpt
We are pleased to share that Intel Neural Compressor (INC) now has easy to use integration with SigOpt.
AI at Scale, Artificial Intelligence, CNN, Convolutional Neural Networks, Hyperparameter Optimization, Intelligent Experimentation, Machine Learning, Multimetric Optimization, Neuroscience, Research
Optimizing Efficiency with Sparse Neural Networks
SigOpt hosted our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021.
Application, Hyperparameter Optimization, Industry, Intelligent Experimentation, Model Type, Natural Language
How to achieve 50% faster (and better) Contact Center Bots with MindTree
In this blog post we present a novel approach to fine-tuning natural language processing (NLP) models which ultimately leads to 50% faster chatbots! We also give the reader an introduction to the SigOpt Intelligent Experimentation (IE) Platform and talk about some of the examples that we have used to familiarize ourselves with […].
Artificial Intelligence, Graph Neural Networks, Knowledge Graphs, Machine Learning
Overview of Graph Neural Networks
What is a Graph Neural Network? Graph Neural Networks are neural networks that operate on graph data.
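To make “operate on graph data” concrete, here is a toy sketch (our own illustration, not from the post) of the neighborhood aggregation at the heart of many GNN layers, assuming a hypothetical 4-node path graph; real layers add learned weight matrices and nonlinearities on top of this step:

```python
# One graph-convolution step: each node averages features over itself
# and its neighbors. The 4-node path graph is an illustrative assumption.
import numpy as np

# Adjacency matrix with self-loops added on the diagonal.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)

# One scalar feature per node.
x = np.array([1.0, 2.0, 3.0, 4.0])

# Row-normalize so each node takes the mean over its neighborhood.
deg = A.sum(axis=1, keepdims=True)
h = (A / deg) @ x

print(h)  # [1.5 2.  3.  3.5] — e.g. node 0 averages features 1 and 2
```

Stacking several such steps lets information propagate along multi-hop paths, which is what gives GNNs their expressive power on graphs.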
Mini Batch Sampling with GNNs
SigOpt hosted our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021.
Optimize Hugging Face Transformers with SigOpt
Hugging Face is on a mission to democratize state-of-the-art Machine Learning, and a critical part of their work is to make these state-of-the-art models as efficient as possible, to use less energy and memory at scale, and to be more affordable to run by companies of all sizes.
Artificial Intelligence, Classification, Deep Learning, Hyperparameter Optimization, Machine Learning, ResNet, Supervised, Training & Tuning
OpenVino Quantization with SigOpt
Deep Learning for classification tasks involves training the parameters of a neural network to identify a variety of object classes.
AI at Scale, Artificial Intelligence, Hyperparameter Optimization, Intelligent Experimentation, Machine Learning, Model Type, Research
SigOpt Summit Keynote Recap: Boost AI Experimentation to Design, Explore, and Optimize Your Model
Artificial intelligence is beginning to provide value in a wide variety of business use cases, but successfully training and deploying a machine learning model is an experimental process that is tough to get right.
Lessons from the SigOpt Summit
SigOpt hosted our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021.
AI at Scale, Artificial Intelligence, Clustering, Experiment Management, Hyperparameter Optimization, Intelligent Experimentation, Machine Learning, Simulations & Backtests
Intel HPC Workloads use Intelligent Experimentation for Double Digit Improvement
The world’s most important scientific discoveries depend on the ability to simulate real-world scenarios using computational resources.
Come see who is speaking at the SigOpt Summit
SigOpt is hosting our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021.
Optimizing at AI & HPC Scale
SigOpt is hosting our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021.
Artificial Intelligence, Deep Learning, Experiment Management, Hyperparameter Optimization, Intelligent Experimentation, Machine Learning, Research, Time Series
Anastasia AI Relies on SigOpt to Make Artificial Intelligence Available for All
Anastasia AI relies on SigOpt to build a world-class time series forecasting platform
Artificial Intelligence, Classification, Experiment Management, Hyperparameter Optimization, Intelligent Experimentation, Machine Learning, Model Type, Research
SigOpt Summit – How to intelligently explore results and decide on the best models to deploy
SigOpt is hosting our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021.
Advanced Optimization Techniques, Applied AI Insights, Artificial Intelligence, Augmented ML Workflow, Deep Learning, Experiment Management, Hyperparameter Optimization, Intelligent Experimentation, Machine Learning, Modeling Best Practices, Training & Tuning
ICYMI – Recap: Intelligent Experimentation Overview for Ai4 Webinar
In this discussion, Scott Clark, SigOpt Co-founder and General Manager, reviews the Intelligent Experimentation framework at the Ai4 Webinar.
Deep Learning, Recommendation System
Optimize the Deep Learning Recommendation Model with Intelligent Experimentation
There is so much content and data these days that users can have a hard time finding what they’re looking for, and they can just give up.
AI at Scale, Artificial Intelligence, Bayesian Optimization, Classification, Deep Learning, Experiment Management, Graph Neural Networks, Hyperparameter Optimization, Intelligent Experimentation, Machine Learning, Natural Language, Prediction, Time Series, Vision
How do leading researchers ask the right questions during experiment design?
SigOpt is hosting our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021.
Advanced Optimization Techniques, Deep Learning, Experiment Management, Hyperparameter Optimization, Intelligent Experimentation, Neuroscience, Research, RNN, Vision
Numenta Brings Brain-Based Principles to AI with SigOpt
Numenta uses SigOpt for AI experimentation and hyperparameter optimization
A Novel Scale-Out Training Solution for Deep Learning Recommender Systems
This post was originally published in the Parallel Universe Magazine.
Advanced Optimization Techniques, Applied AI Insights, Experiment Management, Hyperparameter Optimization, SigOpt 101, SigOpt Company News
A Better Approach to Experimentation
Artificial intelligence is beginning to provide value in a wide variety of business use cases, but successfully training and deploying a machine learning model is an experimental process that is tough to get right.
AI at Scale, Artificial Intelligence, Deep Learning, Experiment Management, Finance, Fraud Detection, Graph Neural Networks, Hyperparameter Optimization, Industry, Intelligent Experimentation, Machine Learning, Payments
Optimizing Graph Neural Networks at PayPal with SigOpt
PayPal uses SigOpt to scale hyperparameter optimization for graph neural networks
Best-On-Intel: Intel Labs On Their Development Of AutoQ, An Efficient Framework For Quantization Using OpenVINO Toolkit And SigOpt
After joining Intel, the SigOpt team has continued to work closely with groups both inside and outside of Intel in order to enable modelers everywhere to accelerate and amplify their impact with the SigOpt intelligent experimentation platform.
Introducing the New SigOpt Experience
We are pleased to announce that today we are releasing a new SigOpt experience! Not to worry, you’ll be able to continue to use SigOpt Python clients < 8.
Advanced Optimization Techniques, Experiment Management, Hyperparameter Optimization, Robotics
Academic Interview: Optimizing a Robotics Use Case with SigOpt
Multimetric optimization for a robotics use case
Advanced Optimization Techniques, Hyperparameter Optimization, Research
Cost matters: on the importance of cost-aware hyperparameter optimization
We are excited to present recent research that we published in the proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI) 2021, “A nonmyopic approach to cost-constrained Bayesian optimization.”
Biology, Deep Learning, Experiment Management, Hyperparameter Optimization, RNN, Vision
Academic Interview: NetGPI for GPI Signal Prediction in Biology
NetGPI applies deep learning to a sequence classification task in biology
Advanced Optimization Techniques, Augmented ML Workflow, Hyperparameter Optimization, Machine Learning, Training & Tuning
Simple Neural Architecture Search with SigOpt
Neural Architecture Search with Ray Tune and SigOpt helps you discover better performing model configurations.
All Model Types, SigOpt Company News
Kisaco Research designates SigOpt as a Leader in AI Software Optimization
With more and more businesses building robust and repeatable machine learning pipelines, enhancing business efficiency with AI is no longer merely a research project.
Augmented ML Workflow, Human Activity Recognition, Hyperparameter Optimization, Machine Learning, Recommendation System, SigOpt 101, Training & Tuning
A Simpler Recommendation System with Surprise Lib and SigOpt
Recommendation systems are some of the most fundamental and useful applications that machine learning can deliver to businesses.
Hyperparameter Optimization, Machine Learning, Regression, SigOpt 101, Supervised, Training & Tuning
XGBoost Regression with SigOpt
In this blog post I’m going to contrast our previous classifier example with another gradient-boosted tree model, a regression.
Classification, Gradient Boosting, Hyperparameter Optimization, Machine Learning, SigOpt 101, Supervised, Training & Tuning
XGBoost Classification with SigOpt
Today I’m going to walk you through training a simple classification model.
All Model Types, Augmented ML Workflow, Experiment Management, Hyperparameter Optimization, Modeling Best Practices, Training & Tuning
Webinar in review: MLOps to reduce friction, technical debt, and modeling
Establishing a robust pipeline for successful models requires more than a Jupyter notebook and a GPU instance.
Advanced Optimization Techniques, Methodology, Multimetric Optimization, Quasi Monte Carlo, Research, SigOpt Company News
Naivete be gone: budget-constrained Bayesian Optimization for better results
Each new observation in the process of black-box optimization assumes there will be infinite possibilities to guess again, typically in multidimensional hyperparameter space.
All Model Types, Augmented ML Workflow, Hyperparameter Optimization, Model Type, Modeling Best Practices, Training & Tuning
Model Optimization at the Center of MLOps
When it comes to predictive analytics, there are many factors that influence whether your model is performant for the real-world business problem you are trying to address.
Augmented ML Workflow, Machine Learning, Modeling Best Practices, Training & Tuning
How SigOpt can Anchor your MLOps Strategy
Experimenting with data and machine learning can prove useful, but productively collaborating with a team to hand off models from training to production is an entirely different process, and one that can lead to real business transformation.
Deep Learning, Experiment Management, Hyperparameter Optimization, Machine Learning
Advanced Experimentation: Boost Model Optimization
SigOpt offers the most complete set of advanced experimentation features for model optimization
Deep Learning, Experiment Management, Hyperparameter Optimization, Machine Learning
Metrics: Track, Constrain, Optimize Across Many Metrics
Metric Strategy enables you to track metrics, set them as constraints, or optimize across multiple at once
Advanced Optimization Techniques, Company news, Modeling Best Practices, Training & Tuning
Store Visual Artifacts in SigOpt to See the Bigger Modeling Picture
When designing models, it’s essential to track as many metrics as you find useful, and ultimately pick one or two to optimize when developing your model.
Biology, CNN, Convolutional Neural Networks, Hyperparameter Optimization, Segmentation, Vision
Using SigOpt to Optimize Neural Networks in Biology
SigOpt offers our software free to academics, nonprofits, or labs who are performing research they plan to publish.
Deep Learning, Experiment Management, Hyperparameter Optimization, Machine Learning
Experiments: Automate Intelligent Hyperparameter Optimization
Experiments automates hyperparameter optimization with an ensemble of Bayesian and global optimization algorithms
Deep Learning, Experiment Management, Hyperparameter Optimization, Machine Learning
Dashboard: Organize, Visualize, Compare Models
The SigOpt Dashboard helps you organize model development and visualize training and tuning runs
Deep Learning, Experiment Management, Machine Learning, Modeling Best Practices
Runs: An Easy Way to Track Training
SigOpt Runs allows you to track machine learning training runs with a few lines of code
Deep Learning, Experiment Management, Hyperparameter Optimization, Machine Learning
Exploring the SigOpt Features that Supercharge Model Development
SigOpt includes a unique combination of operational features and intelligent hyperparameter optimization
Deep Learning, Experiment Management, Hyperparameter Optimization, Machine Learning
Insights for Simplifying Hyperparameter Optimization
SigOpt partnered with MLconf on a webinar that focused on practical best practices for metrics, training, and hyperparameter optimization.
Experiment Management, Hyperparameter Optimization, Modeling Best Practices
Tips for Tracking & Analyzing Training Runs
SigOpt partnered with MLconf on a webinar that focused on practical best practices for metrics, training, and hyperparameter optimization.
Advanced Optimization Techniques, Hyperparameter Optimization, SigOpt Company News
Results from the NeurIPS Black Box Optimization Competition
SigOpt Research Lead Michael McCourt discusses results from the NeurIPS Black Box Competition
Applied AI Insights, Deep Learning, Experiment Management, Hyperparameter Optimization, Machine Learning
How to Apply Metrics to See the Bigger Modeling Picture
SigOpt partnered with MLconf on a webinar that focused on practical best practices for metrics, training, and hyperparameter optimization.
Advanced Optimization Techniques, Deep Learning, Experiment Management, Hyperparameter Optimization, Machine Learning
Best Practices for Metrics, Training, Tuning
How to think about the combination of metrics, training, and tuning for machine learning
Hyperparameter Optimization, Modeling Best Practices, SigOpt 101, Training & Tuning
Bayesian Optimization 101
This blog post will go over why you should use Bayesian optimization in your modeling process, the basics of Bayesian optimization, and how to effectively leverage Bayesian optimization for your modeling problems.
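As a taste of those basics, the sketch below (a from-scratch toy for illustration, not SigOpt's optimizer) runs a 1D Bayesian optimization loop: a Gaussian process with an RBF kernel models an assumed quadratic objective, and expected improvement picks each next evaluation point:

```python
# Toy 1D Bayesian optimization loop: GP surrogate + expected improvement.
# The objective, domain, and kernel length scale are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def f(x):                                   # toy objective to minimize
    return (x - 3.0) ** 2

def rbf(a, b, ls=1.0):                      # RBF (squared-exponential) kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-5.0, 8.0, 3)               # a few random initial evaluations
y = f(X)
grid = np.linspace(-5.0, 8.0, 400)          # candidate points

for _ in range(15):
    yn = (y - y.mean()) / (y.std() + 1e-12) # standardize targets for the GP
    K = rbf(X, X) + 1e-8 * np.eye(len(X))   # jitter for numerical stability
    Ks = rbf(grid, X)
    mu = Ks @ np.linalg.solve(K, yn)        # GP posterior mean on the grid
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    sigma = np.sqrt(np.maximum(var, 1e-12)) # GP posterior standard deviation
    best = yn.min()
    z = (best - mu) / sigma
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = grid[np.argmax(ei)]            # evaluate where EI is largest
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

print(X[np.argmin(y)])                      # best x found, near the minimizer 3.0
```

The key design choice is that expected improvement balances exploiting regions the GP predicts to be good against exploring regions where the GP is still uncertain, which is why Bayesian optimization needs far fewer evaluations than grid or random search.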
Experiment Management, Healthcare, Hyperparameter Optimization, Natural Language, Time Series
How B.Next Applies SigOpt to COVID-19 Research
How B.Next applies SigOpt to optimize models in COVID-19 research
Deep Learning, Experiment Management, Hyperparameter Optimization, Time Series
Insights from Interviewing Dozens of Modelers
SigOpt partnered with MLconf on a webinar that focused on practical best practices for metrics, training, and hyperparameter optimization.
Deep Learning, Experiment Management, Hyperparameter Optimization
Poll Insights: What Makes Modeling Hard?
Learn the results from a recent set of polls during a SigOpt + MLconf Webinar
Advanced Optimization Techniques, Applied AI Insights, Deep Learning, Experiment Management, Hyperparameter Optimization, Machine Learning
A Better Approach to Metrics, Training, and Tuning with MLconf
SigOpt collaborated with MLconf on a webinar to discuss best practices for metrics, training, and hyperparameter optimization for developing high-performing models.
Advanced Optimization Techniques, Applied AI Insights, BERT, Deep Learning, Experiment Management, Hyperparameter Optimization, Natural Language, Transformers
Efficient BERT at Ray Summit
In a talk at Ray Summit, Meghana Ravikumar discusses Efficient BERT
Hyperparameter Optimization, Training & Tuning
Integrating SigOpt and RayTune
Modeling can oftentimes feel like a crapshoot.
AFWERX Selects SigOpt as a Leader in Machine Learning Operations
AFWERX Selects SigOpt as a Machine Learning Operations Leader
Application, Applied AI Insights, Augmented ML Workflow, Multimetric Optimization, Natural Language
Efficient BERT: How to get up and running
This post walks through how to get Efficient BERT up and running in your modeling workflow.
Advanced Optimization Techniques, Applied AI Insights, Augmented ML Workflow, Company news, Deep Learning, Experiment Management
Learn Experiment Management for Image Classification and Fraud Detection at MLconf 2020
When you’re trying to train the best model, especially on a deadline, there should be a method to the “madness.”
Quasi Monte Carlo
SigOpt and QMCPy: What Makes a Sequence “Low Discrepancy”?
This work was originally published here, and has been reproduced with the author’s permission.
Quasi Monte Carlo
SigOpt and QMCPy: A QMCPy Quick Start
This work was originally published here, and has been reproduced with the author’s permission.
Quasi Monte Carlo
SigOpt and QMCPy: Why Add Q to MC?
Quasi-Monte Carlo (QMC) methods can sometimes speed up simple Monte Carlo (MC) calculations by orders of magnitude. What makes them work so well?
Quasi Monte Carlo, Research
SigOpt’s Support for QMCPy
SigOpt is a proud sponsor of and contributor to QMCPy, an open source project and community dedicated to making quasi-Monte Carlo (QMC) methods more accessible that was launched last month.
Application, Applied, Convolutional Neural Networks, Deep Learning, Natural Language
Experimental Design with SigOpt for Natural Language at Luleå Technical University
Just as we’ve recently witnessed broader and broader use cases from some of our industrial customers, numerous academic groups are finding success using SigOpt to optimize and improve the accuracy of Natural Language models.
Advanced Optimization Techniques, Deep Learning, Experiment Management, Hyperparameter Optimization, Machine Learning
Dealing with Model Performance Drift in the Pandemic
Models that were trained on pre-pandemic datasets and fine-tuned with pre-pandemic intuition may no longer make relevant predictions. Learn how you can solve this challenge of model drift.
Advanced Optimization Techniques, Application, Augmented ML Workflow, Deep Learning, Experiment Management, Natural Language, Training & Tuning
Using SigOpt on BERT for an NLP Task
BERT performs well on question-answering tasks, but as a model, it is both structurally large and also resource-consuming.
Augmented ML Workflow, Modeling Best Practices, Training & Tuning
5 Signs You Need to Invest in Hyperparameter Optimization
As you build out your modeling practice, and the team necessary to support it, how will you know when you need a managed hyperparameter solution to support your team’s productivity? As you first start to optimize your business’s fraud detection algorithm or recommender system, you can tune simpler models with easy-to-code techniques such as grid […].
Deep Learning, Experiment Management, Natural Language
ICYMI Recap: Lessons from using SigOpt to weigh tradeoffs for BERT size and accuracy
In this webinar, Machine Learning Engineer Meghana Ravikumar, shares how she used Experiment Management to guide her model development process for BERT.
Advanced Optimization Techniques, Application, Applied AI Insights, Augmented ML Workflow, Deep Learning, Focus Area, Model Type, Natural Language
Efficient BERT with Multimetric Optimization, part 3
This is the third post in this series about distilling BERT with Multimetric Bayesian Optimization.
Advanced Optimization Techniques, Application, Applied, Applied AI Insights, Augmented ML Workflow, Deep Learning, Focus Area, Model Type, Multimetric Optimization, Natural Language, Research
Efficient BERT with Multimetric Optimization, part 2
This is the second post in this series about distilling BERT with Multimetric Bayesian Optimization.
Advanced Optimization Techniques, Application, Applied, Applied AI Insights, Augmented ML Workflow, Deep Learning, Focus Area, Model Type, Natural Language, Research, Training & Tuning
Efficient BERT with Multimetric Optimization, part 1
This is the first post in this series about distilling BERT with Multimetric Bayesian Optimization.
Application, Augmented ML Workflow, Deep Learning, Experiment Management, Focus Area, Model Type, Modeling Best Practices, Natural Language, SigOpt 101
Why is Experiment Management Important for NLP?
I think we can all agree that modeling can often feel like a crapshoot (if you didn’t know before, surprise!).
Advanced Optimization Techniques, Application, Applied, Augmented ML Workflow, Convolutional Neural Networks, Deep Learning, Focus Area, Model Type, Natural Language, Research
Efficient BERT: An Overview
BERT is a strong and generalizable architecture that can be transferred to a variety of NLP tasks (for more on this see our previous post or Sebastian Ruder’s excellent analysis).
How Teams Use SigOpt to Build Differentiated Models
How teams use SigOpt to explore modeling problems, understand model behavior, and advance models to production at scale
Finding a third dimension for molecules without simulation, with SigOpt
Researchers Rafael Gomez-Bombarelli from MIT and Simon Axelrod from Harvard collaborated on training and tuning models that generate three-dimensional geometric data based on 2D geometric definition strings for molecules.
Modeling with the Modern Machine Learning Stack
Essential capabilities for any modeling team to scale machine learning development and production
Applied, Reinforcement Learning, Robotics
Bayesian optimization for robot planning and reinforcement learning
This blog post presents some results by myself and my Ph.D. […]
ICYMI Recap: Introducing Experiment Management
In this discussion, Fay Kallel, Head of Product, and Jim Blomo, Head of Engineering, joined Product Marketing Lead Barrett Williams to demo Experiment Management, the latest solution from the SigOpt team.
Advanced Optimization Techniques, Applied AI Insights, Augmented ML Workflow, Experiment Management
Take the Pain out of Training and Tuning
Modeling can be a messy ordeal.
Congratulations to the ICML 2020 Test of Time Award Winners
Srinivas, Krause, Kakade and Seeger’s work on upper confidence bounds in Bayesian optimization has received ICML’s 2020 Test of Time award.
Technology Considerations for Machine Learning Operations
Considerations and requirements for any team building their machine learning technology stack
Advanced Optimization Techniques, Company news, Training & Tuning
Finding your way to the finish line: model comparisons and visualizations
If you’re a SigOpt veteran, you’ll know that we provide best-in-class optimization for a wide variety of machine learning models.
All Model Types, Modeling Best Practices, SigOpt 101
A Comparison of Bayesian Packages for Hyperparameter Optimization
This is the second of a three-part series covering different practical approaches to hyperparameter optimization.
Differentiated Models are Eating the World
This post is part of a five-part series.
Applied AI Insights, Natural Language
ICYMI Recap: Efficient BERT
Efficient BERT uses Multimetric Bayesian Optimization to explore the tradeoff between model size and model accuracy
Advanced Optimization Techniques, All Model Types, Augmented ML Workflow, Experiment Management, Modeling Best Practices, Training & Tuning
Keeping track of it all: recording and organizing model training runs
This blog discusses the Runs feature and how to add it via API to track training runs as part of Experiment Management.
Augmented ML Workflow, Experiment Management, Modeling Best Practices, Training & Tuning
Modeling Managed
SigOpt’s Experiment Management solution helps you track, filter and reproduce your entire modeling workflow, with interactive visuals that augment your intuition, help you understand your progress, and explain your results.
Why Enterprise AI is Actually Three Markets in One
Defining Enterprise AI in terms of the modeling approach companies adopt, and the outcomes they expect
Advanced Optimization Techniques, Applied, Augmented ML Workflow, Convolutional Neural Networks, Deep Learning, Focus Area, Methodology, Vision
ICYMI Recap: Detecting COVID-19 with Deep Learning and DarwinAI
Dr Michael McCourt and Dr Alexander Wong share their experience working on COVID-Net, a neural network to detect COVID-19
Advanced Optimization Techniques, Applied, Augmented ML Workflow, Focus Area, Methodology, Modeling Best Practices
ICYMI Recap: Warm Start Tuning with Prior Beliefs
SigOpt has expanded to include a number of advanced features to help you save time while increasing model performance through experimentation.
Advanced Optimization Techniques, Focus Area, SigOpt 101, Training & Tuning
A Comparison of Hyperparameter Optimization Methods
When it’s time to ensure that your model, whether that’s a recommendation system, computer vision classifier, or trading strategy, is performing as well as it possibly can, you’ll need to choose the right hyperparameter optimization strategy.
Advanced Optimization Techniques, Augmented ML Workflow, Modeling Best Practices
ICYMI Recap: Introducing Metric Management
Defining, selecting, and optimizing with the right set of metrics is critical to every modeling process, but can be tough to get right.
Advanced Optimization Techniques, All Model Types, Modeling Best Practices
Applying Prior Beliefs in SigOpt
SigOpt customers can now pass Prior Beliefs to experiments to help inform SigOpt about their expertise.
Advanced Optimization Techniques, Company news, Convolutional Neural Networks, Deep Learning, Focus Area, Methodology
ICYMI Recap: T4ST Talk 3: Training, Tuning, and Metric Strategy
If your business focuses on systematic trading, we’d like to share how you can most effectively re-train, adjust, and tune your deep learning models as you adapt to fluid market conditions.
Deep Learning, Modeling Best Practices, Multimetric Optimization, Natural Language
Why are Transformers important?
In this post, we explore how Transformers enable developing high-performing NLP models under resource constraints.
Augmented ML Workflow, Deep Learning, Machine Learning, Modeling Best Practices, Research, Training & Tuning
The Growing Abundance of Optimization in Peer-Reviewed Research
Researchers Xavier Bouthillier of Inria and Gaël Varoquaux of Mila surveyed the use of model experimentation methods across NeurIPS 2019 and ICLR 2020, two of the most prestigious international academic ML conferences.
Advanced Optimization Techniques, Applied, Multimetric Optimization
Feature Walkthrough: Metric Constraints
SigOpt's Metric Constraints feature walkthrough: designing an accurate and efficient convolutional neural network.
Company news, Focus Area, Modeling Best Practices
How We Scaled SigOpt to Handle the Most Relentless of Workloads: Part 2
Scalability is one of the main factors to consider when deciding whether an optimization solution will work for you and your organization.
Advanced Optimization Techniques, All Model Types, Augmented ML Workflow, Company news, Focus Area
Metric Management, a Comprehensive Stack to Track and Adapt Metrics to Fit Your Business
Defining, selecting, and optimizing with the right set of metrics is critical to every modeling process, but these steps are often hard to execute well.
Advanced Optimization Techniques, All Model Types, Augmented ML Workflow, Company news, Focus Area, Multimetric Optimization, Training & Tuning
Metric Constraints help you closely align your models with your business objectives
Defining just the right metrics to assess a model in the context of your business is no easy feat. Metric Constraints helps you establish guardrails for your optimization and experimentation loops.
Applied AI Insights, Augmented ML Workflow, Training & Tuning
How We Scaled SigOpt to Handle the Most Relentless of Workloads: Part 1
Editor’s note: you can find Part 2 in this series here.
Advanced Optimization Techniques, Convolutional Neural Networks, Data Augmentation, Deep Learning, Healthcare, Multimetric Optimization
Parametrizing Data Augmentation in COVID-Net
In this blog post, we discuss our recent collaboration trying to improve performance of a COVID-19 X-ray classification tool.
Company news, Research
Donating SigOpt to the COVID-19 Research Cause
To boost any modeling projects that support these efforts, SigOpt is offering our solution for free to any researcher working on a COVID-19 related modeling project.
Advanced Optimization Techniques, Company news, Convolutional Neural Networks, Deep Learning, Focus Area, Methodology
ICYMI Recap: T4ST Talk 2: Efficient Training and Tuning for Deep Learning Models
Although much of the world is working from home (where and if possible) due to the COVID-19 pandemic, the markets are still online.
Advanced Optimization Techniques, Applied AI Insights, Company news, Deep Learning, Focus Area, Machine Learning, Modeling Best Practices
ICYMI Recap: Modeling in Place—a panel discussion with Alectio, NVIDIA, Pandora, and Yelp
In today’s challenging sheltered environment, work from home is the new norm for many data scientists and engineers building and tuning models.
Advanced Optimization Techniques, Company news, Focus Area, Multimetric Optimization
Tuning for Systematic Trading
SigOpt works with algorithmic trading firms who represent over $600B in assets under management.
Advanced Optimization Techniques, Company news, Focus Area
ICYMI Recap: T4ST Talk 1: Intuition behind Bayesian optimization with and without multiple metrics
Although much of the world is working from home (where and if possible) due to the COVID-19 pandemic, the markets are still—mostly—online.
Active Differential Inference for Screening Hearing Loss
At SigOpt, we are thrilled to collaborate with the outstanding community of experts from around the world.
NeurIPS in Review
The conference on Neural Information Processing Systems (NeurIPS) was held in December in Vancouver, Canada.
Advanced Optimization Techniques, Materials Science, Simulations & Backtests
Reducing Reflection in Additive Manufacturing of Optoelectronics
In this blog post, we discuss our recent progress applying multiobjective Bayesian optimization to minimizing reflectivity of glass.
Advanced Optimization Techniques, Applied, Applied AI Insights, Convolutional Neural Networks, Research
University of Fribourg uses SigOpt to develop a spectral initialization strategy to increase classification accuracy on medical images by 2.2%
Research on neural networks has become an international and fast-moving academic pursuit in the past several years.
Company news, Model Type, SigOpt 101
SigOpt Partners with AI Leaders to Empower their Experts
SigOpt’s mission is to accelerate and amplify the impact of modelers everywhere.
All Model Types, Applied, Modeling Best Practices
Our new SigOpt user experience puts insights at your fingertips
Data science should never be a solo sport.
Black-Box Image Augmentation for Better Classification
This blog was originally published on MLconf.
2019 ICML Takeaways
SigOpt was proud to sponsor ICML for the 4th consecutive year, and we sent several team members to represent, both at the exhibitor booth and at the sessions. 
Metric Thresholds, a New Feature to Supercharge Multimetric Optimization
Today, we are excited to announce the general availability of Metric Thresholds, a new feature that supercharges the performance of Multimetric optimization. Metric Thresholds helps you discover better models that meet your problem-specific needs by allowing you to define “thresholds” on your metrics. 
The Case of the Mysterious AWS ELB 504 Errors
At SigOpt, we provide a reliable, scalable service for enterprises and academics.  In order to provide uptime that meets the demanding needs of our customers, we use AWS’ “Classic” Elastic Load Balancer (ELB) to distribute incoming requests among several AWS instances.
Highlight: Bayesian Optimization for Antireflective, Superomniphobic Glass
We collaborate with the University of Pittsburgh on fabricating nanostructured glass with ultrahigh transmittance, ultralow haze, and superomniphobicity.
Highlight: SigOpt Collaborates with the University of Pittsburgh
SigOpt teams with the University of Pittsburgh for solar panel innovation.
Highlight: Bayesian Optimization of Composite Functions
The research team at SigOpt is fortunate to be part of an outstanding community of experts from around the world.  
Insights for Building High-Performing Image Classification Models
If you’re interested in more things computer vision, NeurIPS is coming up (we’ll be there, so come say hi).
New Alpha Feature: Monitor Training Convergence
We are excited to release a new SigOpt feature into alpha testing: Training Monitor. This feature was designed to better empower our neural network developers.
Congratulations, Roman Garnett, on your NSF CAREER Award
SigOpt is thrilled to congratulate our friend Roman Garnett on recently being named an NSF CAREER award recipient.
Machine Learning Infrastructure Tools for Hyperparameter Optimization
Machine learning infrastructure tools can help bridge the gap between the modeler and the cluster by inserting an abstraction layer between the model builder and any infrastructure tools used to communicate with the cluster.
Introducing Projects
We are pleased to announce the addition of Projects, a feature that allows you and your team to organize experiments for easier access, viewing, and sharing.
Integration of in vitro and in silico Models Using Bayesian Optimization With an Application to Stochastic Modeling of Mesenchymal 3D Cell Migration
Using SigOpt’s technology we have devised a method to generate high fidelity simulations of biological processes.
SigOpt Partners with Two Sigma to Extend its Leadership in Model Experimentation and Optimization Solutions
SigOpt, Inc. (“SigOpt”), a leading provider of solutions that maximize the performance of machine learning, deep learning and simulation models, announced a strategic partnership with Two Sigma, a leading systematic investment manager.
Augmented ML Workflow, Modeling Best Practices
A New Experiment List and Search Interface
We are excited to announce improvements to our experiment list and search functionality in the Experiment Insights dashboard.
All Model Types, Modeling Best Practices
Experiment, Suggestion, Observation: How We Named Our HPO API Objects
As part of our mission to make hyperparameter optimization accessible to modelers everywhere, we want to share our naming choices with other developers of HPO APIs.
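The Experiment, Suggestion, and Observation objects compose into a simple propose-evaluate-report loop. Here is a stdlib-only sketch of that loop, with random search standing in for a real Bayesian strategy; the classes and method names below are illustrative inventions, not SigOpt’s actual API.

```python
import random

class Suggestion:
    """A proposed parameter assignment to evaluate."""
    def __init__(self, assignments):
        self.assignments = assignments

class Observation:
    """A reported result: which suggestion was tried, and its metric value."""
    def __init__(self, suggestion, value):
        self.suggestion = suggestion
        self.value = value

class Experiment:
    """Holds the search space and all observations; proposes suggestions."""
    def __init__(self, bounds, seed=0):
        self.bounds = bounds  # {name: (low, high)}
        self.observations = []
        self._rng = random.Random(seed)

    def create_suggestion(self):
        # Random search stands in for a real Bayesian strategy here.
        return Suggestion({name: self._rng.uniform(lo, hi)
                           for name, (lo, hi) in self.bounds.items()})

    def create_observation(self, suggestion, value):
        self.observations.append(Observation(suggestion, value))

    def best(self):
        return max(self.observations, key=lambda o: o.value)

# Usage: maximize a toy objective over two "hyperparameters".
experiment = Experiment({"x": (-5.0, 5.0), "y": (-5.0, 5.0)})
for _ in range(200):
    s = experiment.create_suggestion()
    value = -(s.assignments["x"] ** 2) - (s.assignments["y"] ** 2)
    experiment.create_observation(s, value)

print(experiment.best().value)  # the true maximum is 0.0 at (0, 0)
```

The key design point the post discusses is that the optimizer proposes (Suggestion) and the user reports (Observation), so the evaluation itself stays entirely in the user’s code.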
Academic Interview with Brady Neal from Mila
In this latest Academic Interview, Brady Neal from Mila (the Quebec Artificial Intelligence Institute) discusses the paper “A Modern Take on the Bias-Variance Tradeoff in Neural Networks” and its implications for deep learning practitioners.
Advanced Optimization Techniques, All Model Types
SigOpt at NeurIPS 2018
The NeurIPS conference is held in Montreal this year, where leaders in the community will present innovative research over the course of tutorials, main conference proceedings and workshops.
Highlight: Bayesian Optimization of High Transparency, Low Haze, and High Oil Contact Angle Rigid and Flexible Optoelectronic Substrates
Our research team at SigOpt has been very fortunate to be able to collaborate with outstanding researchers around the world, including through our internship program.  
Highlight: A Nonstationary Designer Space-Time Kernel
This post introduces the article A Nonstationary Designer Space-Time Kernel by Michael McCourt, Gregory Fasshauer, and David Kozak, appearing at the upcoming NeurIPS 2018 spatiotemporal modeling workshop.
Highlight: Automating Bayesian Optimization with Bayesian Optimization
This post introduces the article Automating Bayesian Optimization with Bayesian Optimization by Gustavo Malkomes and Roman Garnett, appearing in the upcoming NeurIPS 2018 proceedings.
Highlight: Efficient Nonmyopic Batch Active Search
This post introduces the article Efficient Nonmyopic Batch Active Search by Shali Jiang, Gustavo Malkomes, Matthew Abbott, Benjamin Moseley and Roman Garnett, appearing in the upcoming NeurIPS 2018 proceedings.
All Model Types, SigOpt 101
The SigOpt Academic Program
If you are an academic working on peer-reviewed research and need parameter tuning, we would love to help supercharge your research with SigOpt.
Augmented ML Workflow, Deep Learning
Academic Interview with the DIVA team from the University of Fribourg
This post is an interview with the DeepDIVA team from the University of Fribourg to learn more about their framework and what it means for the broader community.
Advanced Optimization Techniques, All Model Types
Multimetric Updates in the Experiment Insights Dashboard
We are excited today to announce improvements in analyzing and viewing Multimetric experiments with the Experiment Insights dashboard.
All Model Types, Modeling Best Practices
Uncertainty 3: Balancing Multiple Metrics with Uncertainty
This is the third of three blog posts during which we explore the concept of uncertainty - or noise - and its implications for Bayesian optimization.
All Model Types, Modeling Best Practices
Uncertainty 2: Bayesian Optimization with Uncertainty
This is the second of three blog posts during which we explore the concept of uncertainty - or noise - and its implications for Bayesian optimization.
All Model Types, Modeling Best Practices
Uncertainty 1: Modeling with Uncertainty
This is the first of three blog posts during which we explore the concept of uncertainty - or noise - and its implications for Bayesian optimization.
Advanced Optimization Techniques, Machine Learning
AutoML at ICML 2018
Competitions like the AutoML challenge at the International Conference on Machine Learning (ICML) explore the complexities of developing effective ML pipelines under severe time restrictions and without expert intuition regarding the desired or expected output.
Advanced Optimization Techniques, All Model Types
New Advanced Feature: Constraints
SigOpt is constantly working to extend the capabilities of our platform by supporting different types of optimization problems.
All Model Types, Modeling Best Practices
Circulant Binary Embeddings
The research team at SigOpt works to provide the best Bayesian optimization platform for our customers. In our spare time, we also engage in research projects in a variety of other fields.
SigOpt 101, Simulations & Backtests
SigOpt Enters Strategic Investment Agreement with In-Q-Tel
We’re happy to announce a strategic investment and technology development agreement with In-Q-Tel (IQT).
All Model Types, SigOpt 101
SigOpt Winter Update
Happy New Year! 2017 was a great year for SigOpt.
All Model Types, Augmented ML Workflow
SigOpt Working with AWS to Provide PrivateLink Support for Customers on AWS
AWS PrivateLink is a new solution that will enable SigOpt to connect directly with any AWS customer that has an Amazon Virtual Private Cloud (VPC).
All Model Types, SigOpt 101
SigOpt Fall Update
This is the first edition of our new quarterly newsletter. In these updates, we will discuss newly released features, showcase content we have produced, published, or been cited in, and share interesting machine learning research that our research team has found. We hope you find these valuable and informative!
All Model Types, Augmented ML Workflow
SigOpt is Now Available on AWS Marketplace
Today we announce the availability of SigOpt on AWS Marketplace. Now, with a single click, data scientists and researchers can access SigOpt’s powerful optimization-as-a-service platform designed to automatically fine-tune their machine learning (ML) and artificial intelligence (AI) workloads.
All Model Types, SigOpt 101
At SigOpt, We’re Stronger as a Team
No one person is going to have all of the answers, so we're focused on making ourselves stronger as a team.
All Model Types, SigOpt 101
SigOpt Wins Barclays 2017 Open Innovation Challenge
Congratulations to the other innovative startups that were honored this year!
All Model Types, Modeling Best Practices
Covariance Kernels for Avoiding Boundaries
Here at SigOpt, Gaussian processes and reproducing kernel Hilbert spaces (RKHS) are important components of our Bayesian optimization methodology.
All Model Types, SigOpt 101
SigOpt Named A Gartner “Cool Vendor” in AI Core Technologies
Today, we’re honored to share that Gartner has listed us in its Cool Vendor 2017 report for AI Core Technologies.
All Model Types, Modeling Best Practices
Expected Improvement vs. Knowledge Gradient
In this blog post, we discuss another ingredient of Bayesian optimization: the sampling policy (or acquisition function).
All Model Types, Modeling Best Practices
Clustering Applied to Acquisition Functions
We have revisited one of the traditional acquisition functions of Bayesian optimization, the upper confidence bound, and looked to a slightly different interpretation to better inform its optimization. In our article, we present a number of examples of this clustering-guided UCB method applied to various optimization problems.
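The standard UCB rule scores each candidate by its posterior mean plus a multiple of its posterior standard deviation, trading off exploitation against exploration. A toy sketch of that baseline rule follows; the candidate values and beta are made up, and the clustering-guided variant from the article is not reproduced here.

```python
# Candidates with assumed posterior (mean, std) from a surrogate model.
candidates = {
    "a": (0.80, 0.05),
    "b": (0.70, 0.30),
    "c": (0.60, 0.10),
}

def ucb(mean, std, beta=2.0):
    # Higher beta favors exploration of uncertain candidates.
    return mean + beta * std

# Pick the candidate with the largest upper confidence bound.
best = max(candidates, key=lambda k: ucb(*candidates[k]))
print(best)  # "b": 0.70 + 2 * 0.30 = 1.30 beats a's 0.90 and c's 0.80
```

Note how the uncertain candidate "b" wins despite its lower mean; shrinking beta toward 0 would flip the choice back to the highest-mean candidate "a".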
All Model Types, Machine Learning, Training & Tuning
Common Problems in Hyperparameter Optimization
In this blog post, we are going to show solutions to some of the most common problems we’ve seen people run into when implementing hyperparameter optimization.
All Model Types, SigOpt 101
Announcing SigOpt Organizations
We are excited to announce SigOpt Organizations, the next step in the evolution of our web dashboard.
Reinforcement Learning, Training & Tuning
Using Bayesian Optimization for Reinforcement Learning
In this post, we will show you how Bayesian optimization was able to dramatically improve the performance of a reinforcement learning algorithm in an AI challenge.
Advanced Optimization Techniques, All Model Types
Building a Better Mousetrap via Multicriteria Bayesian Optimization
In this post, we want to analyze a more complex situation in which the parameters of a given model produce a random output, and our multicriteria problem involves maximizing the mean while minimizing the variance of that random variable.
All Model Types, Augmented ML Workflow
Solving Common Issues in Distributed Hyperparameter Optimization
SigOpt’s API for hyperparameter optimization leaves us well-positioned to build exciting features for anyone who wants to perform Bayesian hyperparameter optimization in parallel.
Advanced Optimization Techniques, All Model Types
Intro to Multicriteria Optimization
In this post we will discuss the topic of multicriteria optimization, when you need to optimize a model for more than a single metric, and how to use SigOpt to solve these problems.
Machine Learning, Training & Tuning
Bayesian Optimization for Collaborative Filtering with MLlib
In this post we will show how to tune an MLlib collaborative filtering pipeline using Bayesian optimization via SigOpt. Code examples from this post can be found on our github repo.
All Model Types, SigOpt 101
We Raised $6.6 Million To Amplify Your Research
Today is a big day at SigOpt. Since the seed round we secured last year, we’ve continued to build toward our mission to ‘optimize everything,’ and are now helping dozens of companies amplify their research and development and drive business results with our cloud Bayesian optimization platform.
All Model Types, Training & Tuning
Breaking Free of the Grid
In this post, we review the history of some of the tools implemented within SigOpt, and then we discuss the original solution to this black box optimization problem, known as full factorial experiments or grid search.  Finally, we compare both naive and intelligent grid-based strategies to the latest advances in Bayesian optimization, delivered via the SigOpt platform.
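A full factorial experiment, as the name suggests, evaluates every combination of a fixed grid of values. A minimal sketch, with a made-up objective and grid chosen purely for illustration:

```python
from itertools import product

def objective(lr, reg):
    # Stand-in for a model's validation score; peaks at lr=0.1, reg=0.01.
    return -((lr - 0.1) ** 2) - ((reg - 0.01) ** 2)

lr_grid = [0.001, 0.01, 0.1, 1.0]
reg_grid = [0.0001, 0.001, 0.01, 0.1]

# Full factorial: evaluate every combination of grid values.
results = {(lr, reg): objective(lr, reg)
           for lr, reg in product(lr_grid, reg_grid)}
best_config = max(results, key=results.get)
print(best_config)  # (0.1, 0.01), the grid point at the optimum
```

The cost here is 4 × 4 = 16 evaluations, and it grows exponentially with the number of parameters, which is the motivation for the intelligent strategies the post compares against.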
Advanced Optimization Techniques, Deep Learning
Deep Neural Network Optimization with SigOpt and Nervana Cloud
In this post, we will detail how to use SigOpt with the Nervana Cloud and show results on how SigOpt and Nervana are able to reproduce, and beat, the state of the art performance in two papers.
Applied AI Insights, Simulations & Backtests
Winning on Wall Street: Tuning Trading Models with Bayesian Optimization
SigOpt provides customers the opportunity to build better machine learning and financial models by providing users a path to efficiently maximizing key metrics which define their success.  In this post we demonstrate the relevance of model tuning on a basic prediction strategy for investing in bond futures.
All Model Types, Training & Tuning
Evaluating Hyperparameter Optimization Strategies
This blog post provides solutions for comparing different optimization strategies for any optimization problem, using hyperparameter tuning as the motivating example.
All Model Types, Modeling Best Practices
Dealing with Troublesome Metrics
At SigOpt, our goal is to help our customers build better models, simulations, and processes by maximizing key metrics for them.
Advanced Optimization Techniques, Deep Learning
TensorFlow ConvNets on a Budget with Bayesian Optimization
In this post on integrating SigOpt with machine learning frameworks, we will show you how to use SigOpt and TensorFlow to efficiently search for an optimal configuration of a convolutional neural network (CNN).
Advanced Optimization Techniques, Deep Learning
Unsupervised Learning with Even Less Supervision Using Bayesian Optimization
In this post on integrating SigOpt with machine learning frameworks, we will show you how to use SigOpt and XGBoost to efficiently optimize an unsupervised learning algorithm’s hyperparameters to increase performance on a classification task.
All Model Types, Augmented ML Workflow
Lessons from a RESTful API Redesign
When building the new and improved v1 API, we stepped back and chose to more closely adhere to RESTful architectural principles.
Applied AI Insights, Machine Learning
Using Model Tuning to Beat Vegas
Is it possible to use optimized machine learning models to beat Vegas? The short answer is yes; read on to find out how.
All Model Types, Modeling Best Practices
Intuition behind Gaussian Processes
SigOpt allows experts to build the next great model and apply their domain expertise instead of searching in the dark for the best experiment to run next. With SigOpt you can conquer this tedious, but necessary, element of development and unleash your experts on designing better products with less trial and error.
All Model Types, Modeling Best Practices
Profile Likelihood vs. Kriging Variance
The first post in our SigOpt in Depth series, which deals with more technical aspects of the topics that appear in the SigOpt Fundamentals series.
All Model Types, Modeling Best Practices
Likelihood for Gaussian Processes
Using the best approximation gives our customers the fastest path to optimal behavior, which minimizes the costs of experimentation.
All Model Types, Modeling Best Practices
Automatically Tuning Text Classifiers
In this first post on integrating SigOpt with machine learning frameworks, we’ll show you how to use SigOpt and scikit-learn to train and tune a model for text sentiment classification in under 50 lines of Python.
All Model Types, Modeling Best Practices
Approximation of Data
SigOpt lets customers attach uncertainty to their observations; using this knowledge, we can balance observed results against their variance to make predictions and identify the true behavior behind the uncertainty.
All Model Types, Modeling Best Practices
Intuition Behind Covariance Kernels
Gaussian processes are powerful because they allow you to exploit previous observations about a system to make informed, and provably optimal, predictions about unobserved behavior. They do this by defining an expected relationship between all possible situations; this relationship is called the covariance and is the topic of this post.
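That expected relationship is typically encoded as a kernel function. A small sketch using the common squared-exponential (RBF) kernel, where nearby inputs are strongly correlated and distant ones nearly independent; the lengthscale and sample points are arbitrary choices for illustration.

```python
import math

def rbf_kernel(x1, x2, lengthscale=1.0):
    # Squared-exponential covariance: 1 at zero distance, decaying with distance.
    return math.exp(-((x1 - x2) ** 2) / (2.0 * lengthscale ** 2))

points = [0.0, 0.5, 2.0]
K = [[rbf_kernel(a, b) for b in points] for a in points]

# Diagonal entries are 1 (a point is perfectly correlated with itself);
# off-diagonal entries shrink as points move apart.
print(K[0][0])           # 1.0
print(K[0][1] > K[0][2])  # True: 0.5 is closer to 0.0 than 2.0 is
```

The lengthscale controls how quickly the correlation decays, which is one of the hyperparameters a Gaussian process fits to its observations.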
Applied AI Insights, Simulations & Backtests
Making a Better Airplane using SigOpt and Rescale
Rescale gives users the dynamic computational resources to run their simulations and SigOpt provides tools to optimize them. By using Rescale’s platform and SigOpt’s tuning, efficient cloud simulation is easier than ever.
All Model Types, SigOpt 101
We Raised Seed Financing from Andreessen Horowitz
When we created SigOpt, our mission was simple: optimize everything. What do we mean by that?
All Model Types, Modeling Best Practices
Picking the Right Metric
At SigOpt, we can help you raise this metric automatically and optimally whether you are optimizing an A/B test, machine learning system, or physical experiment.
Machine Learning, Training & Tuning
Tuning Machine Learning Models
SigOpt provides a simple API and web interface for quickly and easily leveraging cutting-edge optimization research to solve this problem for you.