realworld — Examples of Real-world Problems

Some real-world problems that can be easily defined and quickly evaluated.

We do not really care about the meaningfulness of the actual application here, but are rather interested in the landscapes of the generated objective functions.

Algorithm Configuration

class optproblems.realworld.GradientMethodConfiguration(num_objectives=1, noisy=False, algo_max_it=100, params_to_optimize=None, internal_problem=None, start_point=None, phenome_preprocessor=None, **kwargs)

An algorithm configuration problem.

This problem interprets the parameters of a simple gradient method as the search space of an optimization problem. If we fix the starting point, the resulting optimization problem is deterministic thanks to the deterministic behavior of the gradient method. Alternatively, we can generate a noisy problem by drawing a random starting point for each evaluation. This problem can act either as a single-objective problem (only the objective value) or as a bi-objective problem (objective value and consumed function evaluations). Both quantities must be minimized. The search space dimension can also be varied by selecting between one and four algorithm parameters to adjust.

__init__(num_objectives=1, noisy=False, algo_max_it=100, params_to_optimize=None, internal_problem=None, start_point=None, phenome_preprocessor=None, **kwargs)

Constructor.

Note

If the starting point is not given, this constructor creates a randomly initialized problem instance by selecting the starting point randomly.

Parameters:
  • num_objectives (int, optional) – The number of objectives to consider. Must be one (default) or two. The first objective is the best objective value obtained by the algorithm, the second one is the number of function evaluations consumed.
  • noisy (bool, optional) – If True, a new starting point is drawn for each algorithm run. Otherwise, the same starting point is used for all runs.
  • algo_max_it (int, optional) – The maximal number of iterations to execute for the algorithm. This should be kept relatively low to keep the function evaluation of this problem cheap.
  • params_to_optimize (list, optional) – This problem has at most four real-valued decision variables. By providing this argument, a subset and the order can be selected. The object provided here must have the format described in get_default_params.
  • internal_problem (optproblems.Problem, optional) – The problem the gradient method has to optimize. By default, the Himmelblau problem is chosen.
  • start_point (list, optional) – The starting point for the gradient method. If none is provided, the starting point is drawn with the method create_start_point. If noisy == True, this argument is ignored.
  • phenome_preprocessor (callable, optional) – A callable potentially applying transformations or checks to the phenome. Modifications should only be applied to a copy of the input. The (modified) phenome must be returned. When this pre-processing raises an exception, no function evaluations are counted. By default, no pre-processing is applied.
  • kwargs – Arbitrary keyword arguments, passed through to the constructor of the super class.
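
A minimal usage sketch; the configuration values below are illustrative assumptions, not recommended settings:

    from optproblems.realworld import GradientMethodConfiguration

    # Deterministic, bi-objective instance with a fixed starting point for the
    # default internal Himmelblau problem (which is two-dimensional).
    problem = GradientMethodConfiguration(num_objectives=2,
                                          noisy=False,
                                          algo_max_it=50,
                                          start_point=[1.0, 1.0])

    # One candidate configuration in the default parameter order
    # (step_size, shrink_factor, grad_step, stop_norm).
    config = [1.0, 0.5, 1e-6, 1e-8]
    opt_obj_value, evaluations = problem.obj_function(config)
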
create_start_point()

Create a random starting point.

This method draws points uniformly at random from [-6, 6]^n. If a different internal problem is provided and this setting does not make sense for it, this method should be overridden.

static get_default_params()

Return the list of default decision variables.

The returned list contains, for each decision variable, a tuple of the variable name and its lower and upper bounds. The default setting is [("step_size", [0.0, None]), ("shrink_factor", [0.0, 1.0]), ("grad_step", [0.0, None]), ("stop_norm", [0.0, None])].
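
For instance, to restrict the search space to a subset of these parameters (a sketch; the particular subset is an arbitrary choice):

    from optproblems.realworld import GradientMethodConfiguration

    defaults = GradientMethodConfiguration.get_default_params()
    # Keep only step_size and shrink_factor as decision variables, in this
    # order, so the resulting problem has a two-dimensional search space.
    subset = [entry for entry in defaults if entry[0] in ("step_size", "shrink_factor")]
    problem = GradientMethodConfiguration(params_to_optimize=subset)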

obj_function(phenome)

The objective function of this configuration problem.

This method runs gradient_method on the chosen internal problem and returns some statistics on the run.

Parameters: phenome (list) – The configuration setting for the gradient method.
Returns:
  • opt_obj_value (float) – The best found objective value.
  • evaluations (float, optional) – If two objectives were requested, the number of consumed objective function evaluations is also returned.
optproblems.realworld.gradient_method(obj_function, start_point, init_step_size, shrink_factor, grad_function=approx_gradient, step_function=descent_step, max_iterations=inf, stop_norm=1e-08)

A simple gradient method.

Parameters:
  • obj_function (callable) – The objective function.
  • start_point (list) – The starting point for the optimization.
  • init_step_size (float) – Indicates how far we move in the descent direction.
  • shrink_factor (float) – In case of an unsuccessful move, the step size is reduced by multiplying it with this value.
  • grad_function (callable, optional) – A function returning the gradient.
  • step_function (callable, optional) – A function carrying out one optimization step.
  • max_iterations (int, optional) – The maximal number of iterations to carry out.
  • stop_norm (float, optional) – Optimization stops when the norm of the gradient goes below this value.
Returns:
  • current_pos (list) – The current incumbent solution.
  • current_obj (float) – The objective value of the current solution.
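
A sketch of a direct call, assuming that any callable taking a list of floats and returning a float can serve as objective function:

    from optproblems.realworld import gradient_method

    # A simple quadratic test function.
    def sphere(phenome):
        return sum(x ** 2 for x in phenome)

    best_pos, best_obj = gradient_method(sphere,
                                         start_point=[3.0, -4.0],
                                         init_step_size=1.0,
                                         shrink_factor=0.5,
                                         max_iterations=100,
                                         stop_norm=1e-6)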

optproblems.realworld.descent_step(position, obj_at_pos, function, grad_function, step_size)

Do a small step in the direction of steepest descent.

Parameters:
  • position (list) – The current incumbent solution.
  • obj_at_pos (float) – The objective value at this position (to save one evaluation).
  • function (callable) – The objective function.
  • grad_function (callable) – A function returning the gradient.
  • step_size (float) – Step size for each dimension.
Returns:
  • proposed_point (list) – A point which is hopefully an improvement.
  • norm (float) – The length of the gradient vector (useful as stopping criterion).

optproblems.realworld.approx_gradient(position, obj_at_pos, function, step_size)

Approximate gradient with forward differences.

Parameters:
  • position (list) – The position where the gradient is approximated.
  • obj_at_pos (float) – The objective value at this position (to save one evaluation).
  • function (callable) – The objective function.
  • step_size (float) – Step size for each dimension.
Returns: grad – Approximated gradient.
Return type: list
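
A sketch of a direct call, assuming the scalar step size is applied to every dimension:

    from optproblems.realworld import approx_gradient

    def sphere(phenome):
        return sum(x ** 2 for x in phenome)

    position = [1.0, 2.0]
    obj_at_pos = sphere(position)  # reuse the known value to save one evaluation
    grad = approx_gradient(position, obj_at_pos, sphere, step_size=1e-6)
    # The true gradient of the sphere function at (1, 2) is (2, 4), so the
    # forward-difference approximation should be close to [2.0, 4.0].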

Uniformity Optimization

class optproblems.realworld.UniformityOptimization(num_points=10, dimension=2, existing_points=None, noise_strength=0.0, dist_matrix_function=None, phenome_preprocessor=None, **kwargs)

Optimize the uniformity of a point set in the unit hypercube.

To make this problem interesting, it was given some unconventional properties: 1) existing points can be incorporated into the distance computations, to make several different instances possible, and 2) the objective function does not (in general) return a single scalar, but a list of sorted nearest-neighbor distances. So it can be treated as a multiobjective problem or, with lexicographic ordering of the objective values, as a single-objective problem. Random noise can be added by perturbing the points before computing distances.

__init__(num_points=10, dimension=2, existing_points=None, noise_strength=0.0, dist_matrix_function=None, phenome_preprocessor=None, **kwargs)

Constructor.

Note

If existing points are not given, this constructor creates a randomly initialized problem instance by drawing num_points existing points at random. (Note that, in general, the number of existing points does not have to equal the number of optimized points.)

Parameters:
  • num_points (int, optional) – The number of points whose uniformity is to be optimized.
  • dimension (int, optional) – The dimension of the unit cube.
  • existing_points (list, optional) – A set of existing points influencing the optimized points. By default, num_points points are drawn randomly.
  • noise_strength (float, optional) – If this value is larger than zero, points are perturbed by a multivariate Gaussian distribution with this standard deviation, before the distances are measured.
  • dist_matrix_function (callable, optional) – A callable that returns a distance matrix for two sequences of points as input. Default is Euclidean distance.
  • phenome_preprocessor (callable, optional) – A callable potentially applying transformations or checks to the phenome. Modifications should only be applied to a copy of the input. The (modified) phenome must be returned. When this pre-processing raises an exception, no function evaluations are counted. By default, no pre-processing is applied.
  • kwargs – Arbitrary keyword arguments, passed through to the constructor of the super class.
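
A minimal construction sketch; the existing points below are arbitrary illustrative values:

    from optproblems.realworld import UniformityOptimization

    # Five new points in the unit square, placed relative to two fixed
    # existing points; evaluations are deterministic because noise_strength is 0.
    problem = UniformityOptimization(num_points=5,
                                     dimension=2,
                                     existing_points=[[0.25, 0.25], [0.75, 0.75]],
                                     noise_strength=0.0)
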
create_existing_points()

Create random existing points in the unit hypercube.

obj_function(phenome)

The objective function of this problem.

Parameters: phenome (list) – The concatenated list of points.
Returns: nn_dists – A list of negated nearest-neighbor distances, sorted from absolute largest to absolute smallest. These can be interpreted as objective values by applying a lexicographic order.
Return type: list
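
A sketch of one evaluation, assuming the phenome is simply the flat concatenation of all point coordinates:

    import random

    from optproblems.realworld import UniformityOptimization

    problem = UniformityOptimization(num_points=3, dimension=2)

    # Three 2-D points flattened into one list: [x1, y1, x2, y2, x3, y3].
    phenome = [random.random() for _ in range(3 * 2)]
    nn_dists = problem.obj_function(phenome)

    # nn_dists holds negated nearest-neighbor distances, largest magnitude first.
    # Because Python compares lists lexicographically, two candidate phenomes can
    # be ranked single-objectively by comparing their nn_dists lists directly.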