The fast-paced initial design of our v0 API left us with a few outstanding issues related to data representation, authentication, and resource management. When building the new and improved v1 API, we stepped back and chose to adhere more closely to RESTful architectural principles. Structured resource management improved the clarity of our core endpoints, but it also brought an unexpected benefit to a feature we call parallelism. The v0 implementation required additional parameters, endpoints, and resources; in v1, parallelism is handled with ordinary RESTful resource management.
Background of our original API
The most important resource in SigOpt is an experiment. An experiment encapsulates an objective that SigOpt is optimizing, the relevant parameters, and the underlying data. After creating an experiment, you run an optimization loop (shown in the code snippet below): receive a suggestion for the experiment, evaluate your metric on the suggested parameters, and report back an observation.
The v0 API was organized around the experiment. The HTTP verbs were restricted to GET and POST, and most endpoints were structured as:
GET /v0/experiments/id/FUNCTION
POST /v0/experiments/id/FUNCTION
Thus, the v0 optimization loop was as follows:
v0 Optimization Loop
Step 1: receive a suggestion
curl -X POST "https://api.sigopt.com/v0/experiments/ID/suggest" \
-d 'client_token=client_token'
Step 2: Evaluate your metric (implement a function)
Step 3: report an observation
curl -X POST "https://api.sigopt.com/v0/experiments/ID/report" \
-d 'client_token=client_token' \
-d 'data={
"assignments": {
"param2": "XXXX",
"param1": "XXXX"
},
"value": "YYY"
}'
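The v0 loop above can be sketched in Python. The suggest, evaluate_metric, and report functions below are hypothetical local stand-ins for the HTTP calls, not part of any SigOpt client:

```python
# A minimal sketch of the v0 optimization loop. The suggest/report
# functions are local placeholders for the curl requests shown above.

def suggest():
    # Stand-in for POST /v0/experiments/ID/suggest: returns assignments.
    return {"param1": 0.5, "param2": 0.25}

def evaluate_metric(assignments):
    # User-implemented: run the real workload on the suggested
    # parameters and return the measured metric value.
    return assignments["param1"] + assignments["param2"]

def report(assignments, value):
    # Stand-in for POST /v0/experiments/ID/report.
    return {"assignments": assignments, "value": value}

# One iteration of the loop: suggest -> evaluate -> report.
assignments = suggest()
value = evaluate_metric(assignments)
observation = report(assignments, value)
```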
In the basic use case, the user runs one iteration of the optimization loop after another, evaluating their metric sequentially. When clients have the necessary computational resources, evaluating the metric multiple times in parallel can be advantageous. Organizing this work (running several optimization loops at once) follows the standard thread pool design pattern.
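A minimal sketch of that thread-pool pattern in Python, assuming hypothetical suggest/evaluate/report stand-ins in place of the real API calls:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the API calls; each worker runs its own
# suggest -> evaluate -> report loop.
def suggest(worker_id):
    return {"param1": worker_id * 0.1, "param2": 0.5}

def evaluate_metric(assignments):
    return assignments["param1"] + assignments["param2"]

def report(assignments, value):
    return value

def optimization_loop(worker_id, iterations=3):
    # One worker's sequential loop; the pool runs several at once.
    results = []
    for _ in range(iterations):
        assignments = suggest(worker_id)
        value = evaluate_metric(assignments)
        results.append(report(assignments, value))
    return results

# A pool of workers evaluates the metric in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    all_results = list(pool.map(optimization_loop, range(4)))
```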
The report endpoint creates an observation and extends naturally to the parallel case.
The suggest endpoint was trickier. Although it was a POST request, this endpoint didn't necessarily create a new suggestion: depending on availability, it might return a stored suggestion or create a new one. There was also no way to re-request a specific suggestion that you had previously seen. To support parallelism, we introduced a system to "claim" suggestions, which in turn required several new endpoints to manage its infrastructure. Between resource administration, the optimization loop, and parallelism, we had almost 20 function-endpoints for parallel experiments.
Switchover
We chose REST for our public API because we wanted an architectural style that was clear, easy to follow, and familiar to developers [1]. REST is structured, meaning that knowledge of how one resource is manipulated transfers easily to other resources.
Our first goal was to improve the optimization loop. To be truly RESTful, we needed to stop thinking of observations and suggestions as objects returned by functions, and start thinking of them as resources. We focused on keeping experiments, observations and suggestions at predictable URIs based on unique identifiers, and using the set of HTTP verbs POST, GET, PUT, and DELETE for performing creates, fetches, updates and deletes. An experiment now lives at /v1/experiments/id, while observations and suggestions are nested beneath their respective experiments.
Overall, we went from around 20 experiment-function endpoints to 6 supported endpoints for each of these resources (suggestions currently do not support an update, because they have no updatable fields). In the table below, you can see how resources are manipulated with different HTTP verbs.

POST    /v1/experiments/ID/observations       Create an observation
GET     /v1/experiments/ID/observations       List observations
GET     /v1/experiments/ID/observations/ID    Fetch an observation
PUT     /v1/experiments/ID/observations/ID    Update an observation
DELETE  /v1/experiments/ID/observations/ID    Delete an observation
DELETE  /v1/experiments/ID/observations       Delete all observations
Table 1: v1 API endpoints organized around the observation resource
Parallelism
Extending to parallelism, we scrapped the claiming system entirely. We needed to support multiple workers, each running its own optimization loop, at the same time for a given experiment.
The RESTful redesign makes suggestions resources that are manipulated by the user. Similar to experiments and observations, the initial GET /v1/experiments/id/suggestions returns an empty list, because no resources have been created yet. Unlike the old POST /v0/experiments/id/suggest, each call to POST /v1/experiments/id/suggestions creates a brand new suggestion resource for the user.
As it turns out, since resource management was built into the RESTful redesign, we did not need to build any additional API infrastructure to handle parallelism. Because each POST creates an independent resource, individual threads, processes, or machines can create their own suggestions without any additional API calls or parameters. Should one or more threads fail, all open suggestions can be viewed or deleted with one of the following two requests:
GET /v1/experiments/id/suggestions?state=open
DELETE /v1/experiments/id/suggestions?state=open
allowing the user to easily recover from the failure. The v1 optimization loop reflects this resource management:
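These resource semantics can be illustrated with a small in-memory model. This is purely illustrative (the real state lives behind the API, and the SuggestionStore class below is invented for the sketch): each POST creates an independent suggestion, and open suggestions can be listed or deleted by filtering on state.

```python
import itertools

# A toy in-memory model of the v1 suggestion resource, to illustrate
# the create/list/delete semantics described above.
class SuggestionStore:
    def __init__(self):
        self._ids = itertools.count(1)
        self._suggestions = {}

    def create(self):
        # POST /v1/experiments/ID/suggestions: always a new resource.
        sid = next(self._ids)
        self._suggestions[sid] = {"id": sid, "state": "open"}
        return self._suggestions[sid]

    def list(self, state=None):
        # GET /v1/experiments/ID/suggestions?state=open
        return [s for s in self._suggestions.values()
                if state is None or s["state"] == state]

    def delete(self, state=None):
        # DELETE /v1/experiments/ID/suggestions?state=open
        for s in self.list(state):
            del self._suggestions[s["id"]]

store = SuggestionStore()
a = store.create()          # worker 1's suggestion
b = store.create()          # worker 2's suggestion: an independent resource
a["state"] = "closed"       # worker 1 reported an observation
store.delete(state="open")  # clean up after worker 2 failed
```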
v1 Optimization Loop
Suggestions, observations, and experiments are all resources.
Step 1: Receive a suggestion
curl -X POST "https://api.sigopt.com/v1/experiments/ID/suggestions" \
-u $CLIENT_TOKEN:
Step 2: Evaluate your metric (implement a function)
Step 3: Report an observation
curl -X POST "https://api.sigopt.com/v1/experiments/ID/observations" \
-u $CLIENT_TOKEN: \
-H "Content-Type: application/json" \
-d '{
"suggestion":"SUGGESTION_ID",
"value":"YYY"
}'
Conclusion
Imagine you have a cluster of machines on which you can conduct experiments. An effective use of that pool of resources would be to have machines examining suggested configurations in parallel. Our v1 API has, through its RESTful design, the clarity and structure to facilitate these simultaneous optimization loops. Resource management is now under the user’s control, which gives more freedom but also the responsibility of cleaning up the created resources. Endpoints and resources are more easily explained, and parallelism, an important feature for advanced customers, no longer requires additional API infrastructure to manage.
To learn more about our API, check out our API Docs. We also have an official python client!
Use SigOpt free. Sign up today.