This is the first of three blog posts in which we explore the concept of uncertainty – or noise – and its implications for Bayesian optimization. This is part of our series of blog content on research that informs our product and methodologies.
- Uncertainty 1: Modeling with Uncertainty
- Uncertainty 2: Bayesian Optimization with Uncertainty
- Uncertainty 3: Balancing Multiple Metrics with Uncertainty
A Stroll to Work
In everyday life we encounter uncertainty; for example, the trip to work might take between 15 and 30 minutes instead of a consistent 22 minutes each day. The exact time depends on a number of unpredictable factors: unexpected rain could cause a traffic jam, your train might be delayed because of a system failure, or you could simply walk at an inconsistent pace.
Of course, the idea that uncertainty exists is neither surprising nor intrinsically problematic. The complication appears when trying to predict the transit time in order to plan your morning. If a 15-minute commute were predicted, scheduling a meeting in 15 minutes might seem reasonable; should the actual commute take 30 minutes, the meeting may be over before you arrive.
In this blog post, we briefly consider why uncertainty requires attention during modeling, and how predictions from such models will vary. In particular, when uncertainty is present, reliable predictions are those that follow the average, or expected, behavior of the process under observation, without chasing the uncertainty present in the observations. We also mention some ML-specific issues and common practices for building trustworthy models.
Sources of Uncertainty
As suggested earlier, one goal of better understanding the commute time is to be able to make predictions using that understanding. In that context, it can be beneficial to consider two possible sources of uncertainty.
- Intrinsic uncertainty – The uncertainty fundamentally present in the situation. This may result from uncertainty in the traffic density or the weather, potential car crashes, police activity, etc.1 Another name for this might be process uncertainty.
- Observation uncertainty – The uncertainty which occurs during the measurement of the situation. For the commute time situation, looking at a mobile phone at the beginning and end of a trip would yield accuracy on the order of 1 minute; using a stopwatch might improve the accuracy to 1 second. Another name for this might be noise.
When recording outcomes, it is generally impossible2 to distinguish between these two (though appropriate calibration of measurement equipment can help understand observation uncertainty). For modeling purposes, decomposing uncertainty into components can be beneficial.
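To make this concrete, here is a minimal sketch (using made-up numbers, not data from this post) of how the two sources might combine in the commute example: the process itself varies from day to day, and the recorded value adds measurement noise on top of that.

```python
# Hypothetical illustration of intrinsic (process) vs. observation uncertainty.
import numpy as np

rng = np.random.default_rng(0)
n_days = 18

true_mean_minutes = 22.0
# Intrinsic (process) uncertainty: traffic, weather, and pace vary each day.
process_noise = rng.normal(0.0, 4.0, size=n_days)
actual_times = true_mean_minutes + process_noise

# Observation uncertainty: a phone clock only resolves to about 1 minute.
observed_times = np.round(actual_times + rng.normal(0.0, 0.5, size=n_days))

# From the recorded data alone, the two sources are blended together;
# only the combined variability is visible.
print(observed_times)
print("observed std (process + measurement):", observed_times.std().round(2))
```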
Uncertainty in Approximation/Regression
In one of our previous blog posts, we talked about approximation of data, both with and without uncertainty. We revisit this now to see the potential impact of modeling the sources of uncertainty distinctly.
Imagine that each day for 18 days we measure the time required to walk to work, leaving 12 minutes later than on the previous day, starting at 6:30am. The results could be plotted in a graph like the one below.

Figure 1: We have walked to work several consecutive days and recorded the transit times as a function of when we departed. This data is interpolated by the simplest strategy possible, connect-the-dots.3

Figure 2: We have fit a selection of possible models of the observed transit times, one in each plot. The blue lines represent possible predicted outcomes, to be used for estimating the time of a future trip. Because these blue lines have some transparency, the more heavily blue regions are predicted to be more likely to occur.
Clearly, the models presented above have a wide variety of possible behaviors. In the top left, the model very closely follows the connect-the-dots plot from earlier; in the bottom right, the model is nearly flat and apathetic to the observations. Of course, none of these models is “right”, but some are more likely than others, given certain assumptions about the process of walking to work. The desire for good predictions from such a statistical model is often formalized through the bias-variance tradeoff, which guards against overfitting to randomness.
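As a rough illustration of that tradeoff (a sketch with simulated data, not the models behind the figure above), one can fit polynomials of increasing degree and watch the training residuals shrink as the fit starts chasing noise:

```python
# Sketch: polynomial fits of increasing degree to noisy commute-time data.
import numpy as np

rng = np.random.default_rng(1)
departure = np.linspace(0.0, 3.4, 18)            # hours after 6:30am
times = 22 + 3 * np.sin(departure) + rng.normal(0, 1.5, size=departure.size)

for degree in (0, 1, 3, 9):
    coeffs = np.polyfit(departure, times, degree)
    fitted = np.polyval(coeffs, departure)
    residual_std = (times - fitted).std()
    # Higher degrees drive the training residuals down by chasing the noise,
    # which is exactly the overfitting the bias-variance tradeoff warns about.
    print(f"degree {degree}: residual std = {residual_std:.2f} minutes")
```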
When creating models of appropriate complexity for a given purpose, the scale at which predictions are desired is relevant. The figure below shows data for which different goals yield different models. In particular, stating the certainty with which measurements can be made (or with which predictions must be made) can help inform the modeling (and data collection) process, and potentially any subsequent statistical analysis (discussed here in a case study in an adjacent context).

Figure 3: Data is observed at random times from a voltmeter. Left: if the goal is to generate a model for this data on the order of 1 V, then the simplest strategy is to fit a constant model and treat the variations as observation uncertainty alone. Right: if the goal is to generate a model on the order of 0.01 V, then a more interesting model may be warranted.
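A small sketch of that decision process, again with made-up voltmeter readings: compare the scatter around the simplest (constant) model to the precision the prediction needs to meet.

```python
# Sketch: does a constant model suffice at the required precision?
import numpy as np

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 10, 40))
voltage = 5.0 + 0.004 * np.sin(2 * np.pi * t) + rng.normal(0, 0.001, size=t.size)

constant_fit = voltage.mean()
residual_std = (voltage - constant_fit).std()

for tolerance in (1.0, 0.01):
    ok = residual_std < tolerance / 10           # crude rule of thumb
    print(f"target precision {tolerance} V -> constant model sufficient: {ok}")
```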
Uncertainty in Machine Learning
In the context of discriminative4 machine learning for classification, uncertainty plays a significant role. The bias-variance tradeoff must still be considered, but it can be less easily analyzed (see this article for a discussion on the topic). The sense of accuracy or fidelity of the model is often complicated by three sources of uncertainty.
- How representative are the individuals comprising the dataset of the actual population of interest?
- Any finite dataset is only an approximation of the true population on which we hope to make predictions. An appropriate model should generalize well beyond that finite sample.
- How much uncertainty was present in measuring their features?
- Much as was described in the approximation discussion above, actual measurements are subject to uncertainty (someone’s height, the temperature, etc.).
- With what certainty was the desired model found?
- Even the approximation setting above will have inaccuracy in the computed solution on the order of at least machine precision, and often much more. If a method such as stochastic gradient descent is used to fit the model, the result depends on the randomness of the procedure and thus carries some amount of uncertainty (see the sketch after this list).
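The following sketch (assuming scikit-learn is available; the data is synthetic) illustrates that last point: re-running the same stochastic training procedure with different seeds yields slightly different models, and therefore potentially different predictions.

```python
# Sketch: the model found by SGD depends on the random seed.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, size=200) > 0).astype(int)

query = np.array([[0.1, -0.2]])
for seed in range(5):
    clf = SGDClassifier(random_state=seed, max_iter=1000)
    clf.fit(X, y)
    # The learned coefficients (and hence predictions) vary with the seed.
    print(f"seed {seed}: coef = {np.round(clf.coef_[0], 3)}, "
          f"prediction at query = {clf.predict(query)[0]}")
```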
Numerous strategies exist to combat overfitting in classification models. Some of them have a theoretical genesis (such as penalizing the residual/loss function when the model is too complex5, e.g., Tikhonov regularization). Others have very practical motivation (such as using dropout to randomly perturb the computation during training and eventually learn a more robust model). A method such as cross-validation may have both practical motivations and theoretical support. For probabilistic models, Bayesian decision theory might suggest the model evidence be used to prefer simpler models when possible. This website provides a good write-up and literature review for mining noisy data.
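As a small, hypothetical example of the first strategy (assuming scikit-learn is available), a Tikhonov/ridge penalty can be combined with cross-validation to choose how strongly complexity is discouraged:

```python
# Sketch: ridge (Tikhonov) regularization of a high-degree polynomial fit,
# with cross-validation used to compare penalty strengths.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(4)
x = np.linspace(0, 3, 30).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(0, 0.2, size=30)

for alpha in (1e-6, 1e-2, 1.0):
    model = make_pipeline(PolynomialFeatures(degree=10), Ridge(alpha=alpha))
    # Comparing cross-validated scores across penalty strengths shows how
    # regularization trades off fit against complexity.
    score = cross_val_score(model, x, y, cv=5).mean()
    print(f"alpha = {alpha:g}: mean CV R^2 = {score:.3f}")
```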
In an interesting twist, adding noise/mistakes to a dataset (data augmentation) can actually improve the performance of models trained on it. On one hand, it increases the size of the training data (which now includes both the original observations and several copies with perturbed features). On the other hand, as shown notably in analyzing ImageNet, data augmentation can also produce models which are less prone to overfitting the actual data present in the dataset.
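A bare-bones sketch of this idea for generic feature vectors (image-specific augmentations such as crops and flips follow the same principle); the data and noise scale here are made up for illustration:

```python
# Sketch: enlarge a training set by duplicating examples with perturbed features.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 4))          # original features
y = rng.integers(0, 2, size=100)       # original labels

n_copies, noise_scale = 3, 0.05
X_aug = np.concatenate([X] + [X + rng.normal(0, noise_scale, X.shape)
                              for _ in range(n_copies)])
y_aug = np.concatenate([y] * (n_copies + 1))

print(X_aug.shape, y_aug.shape)        # (400, 4) (400,)
```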
Conclusion
Here at SigOpt, our customers live in the real world. Data in the real world has uncertainty: surveys have uncertainty, user opinions have uncertainty, political polls have uncertainty, even an adult’s height could have uncertainty. Rather than being crippled by this uncertainty, we want to help our customers produce models which are robust enough that they can be used to make effective decisions based on reliable predictions.
We have discussed the need to consider the uncertainty present in data when producing models. In both approximation/regression and classification, building statistical models that are accurate and well behaved will yield better predictions. Our following post, the second in the uncertainty series, focuses on the implications of uncertainty in Bayesian optimization, a key engine powering SigOpt’s enterprise solution. We hope that you will continue on next week to learn how we benefit from effective modeling in the presence of uncertainty.
Use SigOpt free. Sign up today.