More precisely, the purpose of our OO study is to quantify, make explicit, and
further exploit the implications of "softening" and of the replacement
of value by order. To do this, let us introduce some definitions and notation.
Let G denote the good enough subset of a search space
Q. To fix ideas, let us say G is the top-n% of the
search space, with |G| = g the size of G. Let S be
the selected set, which again could be the observed top-m% of the
samples taken from the search space, with |S| = s, the size of
S. Note that S and G are generalizations of the concepts of the
optimum and the estimated optimum in traditional optimization, where G
and S are singletons. In our case, the softened and analogous
requirement is that |G ∩ S| ≠ 0, i.e., that the selected set contains
at least one truly good-enough design. Again, the metaphors
in the transparency emphasize the advantages of softening, which we shall
quantify shortly.
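The definitions above can be made concrete in a short sketch. Everything numerical here is our own illustrative assumption, not a value from the text: a toy search space of 1000 designs whose true performance equals their index, Gaussian estimation noise, and 5% thresholds for both G and S.

```python
import random

random.seed(0)  # for reproducibility of this toy run

# Hypothetical search space Q of 1000 designs; true performance = index
# (smaller is better). These numbers are assumptions for illustration.
Q = list(range(1000))

g = 50                 # good-enough set G = true top-5% of the space
G = set(Q[:g])         # known only in this toy setup, never in practice

# Observed performance = true performance + estimation noise.
observed = {x: x + random.gauss(0.0, 100.0) for x in Q}

s = 50                 # selected set S = observed top-5%
S = set(sorted(Q, key=lambda x: observed[x])[:s])

# Softened requirement: S contains at least one truly good-enough
# design, i.e. |G intersect S| != 0.
aligned = len(G & S) != 0
print(f"|G ∩ S| = {len(G & S)}, softened requirement met: {aligned}")
```

Note that even with noise far larger than the spacing between designs, the order information usually survives well enough for S and G to overlap, which is the point the softening metaphor is making.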
Now, for a given problem, a given parameter search space Q, a
given sample size N, a given model for estimating the performance of
the system (which implies a certain estimation error σ²), a
given "good enough" set G, and a given procedure for
selecting S, we define the alignment probability as ...
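Although the formal definition is elided above, the quantity being set up can be estimated empirically: over repeated noisy evaluations, how often does the selected set S overlap the good-enough set G? The sketch below is our own Monte Carlo illustration, using the softened requirement |G ∩ S| ≠ 0; all parameter names and default values are assumptions, not the paper's.

```python
import random

def alignment_probability(N=1000, g=50, s=50, sigma=100.0,
                          trials=2000, seed=0):
    """Monte Carlo estimate of P(|G ∩ S| != 0): the chance that the
    observed top-s designs contain at least one of the true top-g.
    Illustrative sketch only; the true performance of design x is
    taken to be x itself, and estimates carry N(0, sigma^2) noise."""
    rng = random.Random(seed)
    Q = list(range(N))       # search space; smaller index = better design
    G = set(Q[:g])           # true good-enough set
    hits = 0
    for _ in range(trials):
        # One noisy performance estimate per design.
        obs = {x: x + rng.gauss(0.0, sigma) for x in Q}
        S = set(sorted(Q, key=obs.__getitem__)[:s])
        if G & S:            # softened requirement met this trial
            hits += 1
    return hits / trials

print(alignment_probability())
```

With zero noise the selected set coincides with G and the estimate is exactly 1; as sigma grows the probability degrades gracefully rather than collapsing, which is the quantitative payoff of softening that the text promises to develop.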