Modeling can oftentimes feel like a crapshoot. It has many moving parts that need to be tracked, including datasets, features, architectures, hyperparameters, infrastructure, and everything in between. And many times, it feels like all of this is happening at once. At each stage of the process, there is a vast amount of discovery to be done and there are a wide variety of critical decisions to be made.
To be effective, modeling requires efficient and rapid experimentation. Together, SigOpt and Ray address the operational burdens of modeling, including managing experimentation, orchestrating clusters, and intelligently scaling training and tuning. Using SigOpt and Ray together lets you gather insights, scale your modeling easily, and experiment effectively.
What is SigOpt?
At SigOpt, we understand that modeling is messy, training is difficult to troubleshoot, and hyperparameter optimization is hard to scale and apply effectively. We’ve built stack-agnostic tooling to solve these problems. Delivered through a REST API and an interactive web dashboard, it helps you make better modeling decisions.
Experiment Management
- Runs: Capture modeling attributes, code snapshots, metadata and more with just a few lines of code using the SigOpt Runs API (see the sketch after this list)
- Dashboard: Organize, track, compare, and collaborate on these runs in a web experience designed to capture your full modeling history for reproducibility
- Insights: Visualize, compare, and analyze runs to select the best model
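Here’s a minimal sketch of what instrumenting a run looks like. The dataset, model, metadata, and metric values are illustrative placeholders, and it assumes your SigOpt API token is configured in your environment:

import sigopt  # pip install sigopt

# Minimal Runs API sketch; all logged values below are placeholders.
with sigopt.create_run(name="example-run") as run:
    run.log_dataset("CIFAR-10")                # which dataset this run used
    run.log_model("resnet-18")                 # which architecture was trained
    run.log_metadata("git_commit", "abc1234")  # arbitrary key/value metadata
    run.params.learning_rate = 0.001           # hyperparameters attached to the run

    validation_accuracy = 0.91                 # stand-in for your real evaluation
    run.log_metric("validation_accuracy", validation_accuracy)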
Modeling Insights
- Model training insights: Visualize, compare, and analyze training runs to select the best model
- Hyperparameter space insights: Visualize and understand parameter and hyperparameter importance for your modeling problem
Hyperparameter Optimization at Scale
- Effective hyperparameter optimization: Use our HPO solution to intelligently and automatically optimize your models (a minimal optimization loop is sketched after this list)
- Advanced optimization: Leverage your knowledge of the modeling problem to inform the optimization process
- Scale: Easily scale your hyperparameter optimization with our effective parallelism
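To make this concrete, here is a minimal sketch of SigOpt’s suggest-and-observe loop using the Core API. The parameter ranges and budget are illustrative, and evaluate_model is a hypothetical stand-in for your own training and evaluation code:

import sigopt

conn = sigopt.Connection()  # reads SIGOPT_API_TOKEN from the environment

# Create an experiment; parameter bounds, metric, and budget are illustrative.
experiment = conn.experiments().create(
    name="Example optimization",
    parameters=[
        {"name": "learning_rate", "type": "double", "bounds": {"min": 1e-4, "max": 1e-1}},
        {"name": "momentum", "type": "double", "bounds": {"min": 0.85, "max": 0.99}},
    ],
    metrics=[{"name": "validation_accuracy", "objective": "maximize"}],
    observation_budget=30,
    parallel_bandwidth=1,  # raise this to evaluate suggestions in parallel
)

# Standard suggest-and-observe loop.
for _ in range(experiment.observation_budget):
    suggestion = conn.experiments(experiment.id).suggestions().create()
    value = evaluate_model(suggestion.assignments)  # hypothetical: train and evaluate
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id,
        value=value,
    )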
What is Ray?
Ray is a fast and simple framework for building and running distributed applications.
Ray accomplishes this mission by:
- Providing simple primitives for building and running distributed applications.
- Enabling end users to parallelize single machine code, with little to zero code changes.
- Including a large ecosystem of applications, libraries, and tools on top of the core Ray to enable complex applications.
Ray Core provides the simple primitives for building applications. On top of Ray Core, several libraries have been developed to solve scaling problems in machine learning. RayTune is one of these libraries. With RayTune, modelers can easily orchestrate their model tuning across a set of machines. RayTune also offers wrappers around a collection of hyperparameter optimization algorithms for modelers to try as tuning strategies.
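To make those primitives concrete, here is a tiny Ray example (independent of SigOpt) that parallelizes an ordinary Python function across workers:

import ray

ray.init()  # start (or connect to) a Ray runtime

# A decorator turns an ordinary function into a distributed task.
@ray.remote
def square(x):
    return x * x

# Launch tasks in parallel; ray.get blocks until the results are ready.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]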
Combining SigOpt and Ray
Combining SigOpt and Ray empowers modelers to unlock scale in every aspect of their modeling workflow without sacrificing insight. SigOpt and Ray seamlessly integrate to:
- Enable seamless decision tracing and collaboration
- Uncover modeling and parameter insights
- Analyze training cycles
- Effortlessly scale hyperparameter optimization
- Provide easy cluster orchestration
Using the native SigOpt and Ray Integration
To start using SigOpt and RayTune together, try out RayTune’s native SigOpt integration. To use this integration, you will need a SigOpt account. Sign up for free here.
Here’s an example setup for using the native integration. For an interactive version of the integration, follow this Colab notebook.
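The snippets below assume imports and variables along these lines. The module paths may differ across Ray versions, and raytune_train_wrapper and the dataloaders come from your own code; the budget and epoch counts are illustrative:

import os

import numpy as np
import sigopt
from ray import tune
from ray.tune.schedulers import FIFOScheduler
from ray.tune.suggest.sigopt import SigOptSearch  # path may vary by Ray version

sigopt_api_token = os.environ["SIGOPT_API_TOKEN"]  # your SigOpt API token
num_observations = 30  # total optimization budget (illustrative)
num_epochs = 10        # training epochs per trial (illustrative)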
Running SigOpt’s Bayesian Optimization with RayTune
Step 1: Set up your hyperparameter space
hyperparameter_space = [
    {
        'name': 'learning_rate',
        'type': 'double',
        'bounds': {'max': np.log(0.01), 'min': np.log(0.0001)},
    },
    {
        'name': 'momentum',
        'type': 'double',
        'bounds': {'min': 0.85, 'max': 0.99},
    },
]
Step 2: Set up your SigOpt connection and create a SigOptSearch object
sigopt_connection = sigopt.Connection(client_token=sigopt_api_token)
sigopt_search = SigOptSearch(
    hyperparameter_space,
    name="SigOpt Example Run",
    max_concurrent=1,
    observation_budget=num_observations,
    project="sigopt-ray-integration",
    connection=sigopt_connection,
    metric="validation_accuracy",
    mode="max")
print("SigOpt experiment dashboard at: %s"
      % ("https://app.sigopt.com/experiment/" + str(sigopt_search.experiment.id)))
Step 3: Set up a RayTune config
config = {"num_epochs": num_epochs, "training_dataloader": training_dataloader, "validation_dataloader": validation_dataloader, "is_multimetric": False}
Step 4: Run Tune
result = tune.run(
    raytune_train_wrapper,
    name="sigopt-ray-integration",
    local_dir=os.path.abspath("./ray_output"),
    search_alg=sigopt_search,
    num_samples=num_observations,
    scheduler=FIFOScheduler(),
    resources_per_trial=dict(cpu=1, gpu=1),
    config=config,
)
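The call above references raytune_train_wrapper, which you supply. Here is a rough sketch of what such a trainable might look like; build_model, train_one_epoch, and evaluate are hypothetical stand-ins for your own code, and the learning rate is exponentiated because the bounds above are defined in log space:

def raytune_train_wrapper(config):
    # Tune merges SigOpt's suggested hyperparameters into `config`
    # alongside the fixed entries defined in Step 3.
    learning_rate = np.exp(config["learning_rate"])  # undo the log-space bounds
    momentum = config["momentum"]

    model = build_model()  # hypothetical: construct your network
    for _ in range(config["num_epochs"]):
        train_one_epoch(model, config["training_dataloader"],
                        lr=learning_rate, momentum=momentum)  # hypothetical helper
        validation_accuracy = evaluate(model, config["validation_dataloader"])  # hypothetical
        # Report the metric named in SigOptSearch so SigOpt receives an observation.
        tune.report(validation_accuracy=validation_accuracy)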
Running SigOpt’s Multimetric Bayesian Optimization with RayTune
Step 1: Set up your hyperparameter space
hyperparameter_space = [
    {
        'name': 'learning_rate',
        'type': 'double',
        'bounds': {'max': np.log(0.01), 'min': np.log(0.0001)},
    },
    {
        'name': 'momentum',
        'type': 'double',
        'bounds': {'min': 0.85, 'max': 0.99},
    },
]
Step 2: Set up your SigOpt connection and create a SigOptSearch object
sigopt_connection = sigopt.Connection(client_token=sigopt_api_token)
sigopt_search = SigOptSearch(
    hyperparameter_space,
    name="SigOpt Multimetric Example Run",
    max_concurrent=1,
    observation_budget=num_observations,
    project="sigopt-ray-integration",
    connection=sigopt_connection,
    metric=["validation_accuracy", "training_loss"],
    mode=["max", "min"])
print("SigOpt experiment dashboard at: %s"
      % ("https://app.sigopt.com/experiment/" + str(sigopt_search.experiment.id)))
Note: For Multimetric Optimization, you must specify two metrics and two modes in the SigOptSearch instantiation.
Step 3: Set up RayTune config
config = {"num_epochs": num_epochs, "training_dataloader": training_dataloader, "validation_dataloader": validation_dataloader, "is_multimetric":True}
Step 4: Run Tune
result = tune.run(
    raytune_train_wrapper,
    name="sigopt-ray-integration",
    local_dir=os.path.abspath("./ray_output"),
    search_alg=sigopt_search,
    num_samples=num_observations,
    scheduler=FIFOScheduler(),
    resources_per_trial=dict(cpu=1, gpu=1),
    config=config,
)
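For the multimetric case, your trainable must report both metrics named in SigOptSearch. Here is a minimal sketch of the reporting logic inside raytune_train_wrapper, where training_loss is a stand-in for your own loss value:

# Inside raytune_train_wrapper, replace the single-metric report with:
if config["is_multimetric"]:
    # Both keyword names must match the metrics passed to SigOptSearch.
    tune.report(validation_accuracy=validation_accuracy, training_loss=training_loss)
else:
    tune.report(validation_accuracy=validation_accuracy)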
Recommended usage
Currently, the RayTune wrapper does not expose SigOpt’s advanced optimization features and strategies. These include Prior Beliefs (use your domain knowledge to tell the optimizer how parameter values are likely to behave), Experiment Transfer (use a previously executed optimization to inform your current tuning), Metric Constraints (tell the optimizer which regions of metric space don’t work for you), and others. To use these features with RayTune, you will need to write your own extension of RayTune’s SigOptSearch. Thanks to the modularity of both RayTune and SigOpt, this is relatively straightforward.
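As an illustration of the kind of customization involved, the sketch below creates a SigOpt experiment directly with a Prior Belief on the learning rate. The prior fields and values here are illustrative and should be checked against the SigOpt documentation; wiring such an experiment into a SigOptSearch subclass is left to your implementation:

import sigopt

conn = sigopt.Connection()  # reads SIGOPT_API_TOKEN from the environment

# Illustrative experiment with a Prior Belief on learning_rate.
experiment = conn.experiments().create(
    name="Tuning with prior beliefs",
    parameters=[
        {
            "name": "learning_rate",
            "type": "double",
            "bounds": {"min": 1e-4, "max": 1e-1},
            # Prior Belief: we expect good values to cluster near 1e-2 (example values).
            "prior": {"name": "normal", "mean": 1e-2, "scale": 2e-2},
        },
        {"name": "momentum", "type": "double", "bounds": {"min": 0.85, "max": 0.99}},
    ],
    metrics=[{"name": "validation_accuracy", "objective": "maximize"}],
    observation_budget=30,
)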
Best Practices
When using RayTune and SigOpt together, to get the best of both products, we recommend the following best practices:
- Create customized RayTune and SigOpt Integrations based on your optimization needs
- Integrate SigOpt’s Experiment Management into your training to easily troubleshoot
- Watch the Ray dashboard to check for cluster health
- Use RayTune’s FIFO scheduler with SigOpt’s parallelism (SigOpt’s scheduler will schedule jobs for you using our in-house algorithm)
- Store checkpoints and other model files in scalable storage (e.g., Amazon S3) to free up space on your cluster for computation
- Play around with Ray’s autoscaling options to find what’s best for you
What’s next?
Try our products out together! Sign up to use SigOpt for free here and get started with Ray here.