# Real-Time SigOpt Demo

SigOpt automates the optimization of your model’s features, architecture, and hyperparameters using an ensemble of Bayesian optimization methods.

We can walk you through the SigOpt cloud-based API in just 60 seconds.

Let’s get started!

## Example function

In this tutorial, we’ll use Franke’s function as an example of a two-dimensional optimization problem. (This function stands in for whichever trading, ML, or banking model you’re looking to optimize.)

Franke’s function has two numeric parameters: x and y. So, we start by telling SigOpt to create a two-dimensional Experiment object containing two Parameter objects.

(Note: We don’t need to pass additional model data to SigOpt. Models and data stay within your system. Only parameter configuration information is ever sent to SigOpt.)

## Optimization Loop

Next, we start the SigOpt feedback loop to find the maximum of our target metric.

Here’s the loop:

1. Get a Suggestion object from our API. This contains our suggested parameter values.
2. Evaluate your function using the parameters from the Suggestion.
3. Send your function’s output to SigOpt as an Observation object.
4. Repeat steps 1-3 until the function is optimized (we suggest 10-20x the number of parameters).
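The loop above can be sketched with SigOpt’s Python client. The method names below (`Connection`, `suggestions().create()`, `observations().create()`) follow the classic SigOpt REST client and should be checked against the client version you install; the network-dependent lines only run if an API token is present.

```python
import os

NUM_PARAMETERS = 2            # our two parameters, x and y
BUDGET = 10 * NUM_PARAMETERS  # 10-20x the parameter count is the suggested range

def run_loop(conn, experiment, evaluate, budget=BUDGET):
    """Run the suggest -> evaluate -> observe loop `budget` times."""
    for _ in range(budget):
        # 1. Ask SigOpt for a Suggestion with proposed parameter values.
        suggestion = conn.experiments(experiment.id).suggestions().create()
        # 2. Evaluate your model with the suggested assignments.
        value = evaluate(suggestion.assignments["x"], suggestion.assignments["y"])
        # 3. Report the result back to SigOpt as an Observation.
        conn.experiments(experiment.id).observations().create(
            suggestion=suggestion.id,
            value=value,
        )

if os.environ.get("SIGOPT_API_TOKEN"):
    from sigopt import Connection  # requires `pip install sigopt`
    conn = Connection(client_token=os.environ["SIGOPT_API_TOKEN"])
    # `experiment` would come from conn.experiments().create(...), as described
    # in the Setup step; `evaluate` is your model (the Franke function here).
```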

That’s it.

Let’s visualize this loop. For each step below, read the description then hit the blue button.

(Optionally, to follow along in your development environment, select a language first.)

### Setup: Create an Experiment

First, we'll create a SigOpt Experiment with two tunable parameters, `x` and `y`.

**Name:** Franke Optimization

**Parameters:**

| Name | Type | Range |
|------|---------|--------|
| `x` | Decimal | [0, 1] |
| `y` | Decimal | [0, 1] |
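As a sketch, this experiment maps to a small configuration dict. The `conn.experiments().create` call below follows the classic SigOpt Python client (an assumption worth verifying against your client’s docs) and is only attempted when an API token is set.

```python
import os

# Mirrors the parameter table above: two Decimal (double) parameters on [0, 1].
experiment_definition = dict(
    name="Franke Optimization",
    parameters=[
        dict(name="x", type="double", bounds=dict(min=0.0, max=1.0)),
        dict(name="y", type="double", bounds=dict(min=0.0, max=1.0)),
    ],
)

if os.environ.get("SIGOPT_API_TOKEN"):
    from sigopt import Connection  # requires `pip install sigopt`
    conn = Connection(client_token=os.environ["SIGOPT_API_TOKEN"])
    experiment = conn.experiments().create(**experiment_definition)
```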
### Step 1: Get a Suggestion

To start the optimization loop, we ask SigOpt’s API for a Suggestion object, which contains suggested values for parameters `x` and `y`.

As you iterate, SigOpt intelligently proposes new values, balancing exploration of the parameter space against exploitation of what it has already learned, until it converges on the optimal configuration.
### Step 2: Evaluate our function
Next, we plug these suggested parameter values into our model.
Our model is the Franke function.
The metric we’re ultimately evaluating is the output of this model.
(As a reminder, the Franke function is just a placeholder. When you’re up and running with SigOpt, you’ll be substituting this with a model running on your own system.)
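For reference, here is the standard formulation of Franke’s function in plain Python (constants per the commonly published definition):

```python
import math

def franke(x, y):
    """Franke's function: a standard 2-D test surface with a global maximum
    of roughly 1.22 near (0.2, 0.2), several smaller bumps, and one dip."""
    return (
        0.75 * math.exp(-((9 * x - 2) ** 2) / 4 - ((9 * y - 2) ** 2) / 4)
        + 0.75 * math.exp(-((9 * x + 1) ** 2) / 49 - (9 * y + 1) / 10)
        + 0.5 * math.exp(-((9 * x - 7) ** 2) / 4 - ((9 * y - 3) ** 2) / 4)
        - 0.2 * math.exp(-((9 * x - 4) ** 2) - ((9 * y - 7) ** 2))
    )
```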
### Step 3: Send SigOpt your results
Next, we submit our results to SigOpt: We send an Observation object containing the value we computed in the previous step.
(Typically, you’d report something like the accuracy, AUC, or error of a machine learning model, or the measured output of a physical process: whatever output you would like to optimize.)
### Step 4: Repeat steps 1-3 until fully optimized
After submitting the results, SigOpt determines the next-best configuration for your parameters’ values. This brings us back to Step 1 (except one iteration closer to full optimization!).
In practice, a total iteration count of roughly 10-20x the number of parameters in your model is recommended. So, for our two-parameter Franke function, that’s between 20 and 40 iterations to reach optimal values.
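To make the whole loop concrete without an API token, here is a runnable end-to-end sketch. Uniform random sampling stands in for SigOpt’s Bayesian suggestions purely so the code runs offline; the real service proposes points far more efficiently.

```python
import math
import random

def franke(x, y):
    """Franke's function, our stand-in model (standard formulation)."""
    return (
        0.75 * math.exp(-((9 * x - 2) ** 2) / 4 - ((9 * y - 2) ** 2) / 4)
        + 0.75 * math.exp(-((9 * x + 1) ** 2) / 49 - (9 * y + 1) / 10)
        + 0.5 * math.exp(-((9 * x - 7) ** 2) / 4 - ((9 * y - 3) ** 2) / 4)
        - 0.2 * math.exp(-((9 * x - 4) ** 2) - ((9 * y - 7) ** 2))
    )

random.seed(0)
observations = []
for i in range(20):  # 10x our 2 parameters
    # Stand-in for: suggestion = conn.experiments(experiment.id).suggestions().create()
    x, y = random.random(), random.random()
    value = franke(x, y)
    # Stand-in for: conn.experiments(experiment.id).observations().create(...)
    observations.append((x, y, value))

best_x, best_y, best_value = max(observations, key=lambda obs: obs[2])
print(f"best after 20 observations: x={best_x:.3f}, y={best_y:.3f}, value={best_value:.3f}")
```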
Let’s go ahead and run our optimization loop:

### Reported Observations

| # | `x` | `y` | `Value` |
|---|-----|-----|---------|

(In the live demo, this table fills in with the 20 suggested `x`, `y` pairs and their resulting values as the loop runs.)
Here’s context on what we’ve done in this demo: We used an ensemble of state-of-the-art Bayesian optimization methods to find an optimal parameter configuration for this optimization problem 100x faster than random search, and 10-15x faster than tuned grid search.
We also outperformed traditional and alternative Bayesian techniques on a collection of benchmarks and real-world problems, including MOE, Spearmint, SMAC, BayesOpt, and Hyperopt.
(In fact, we’re the creators of MOE and BayesOpt!)
Read more in our peer-reviewed paper presented at ICML ’16.