base — Base Classes and Helpers¶
Infrastructure to define optimization problems.
Foundation¶
class optproblems.base.Problem(objective_functions, num_objectives=None, max_evaluations=inf, worker_pool=None, mp_module=None, phenome_preprocessor=None, name=None)¶
The base class for problems to be solved.
In the simplest case you can use this class directly by wrapping a single objective function or a list of objective functions. For more sophisticated cases, creating a subclass may be necessary.
__call__(phenome)¶
Evaluate a solution and return objective values.
Also checks budget and counts the evaluation.
Raises: ResourcesExhausted – If the budget of function evaluations is exhausted.
__init__(objective_functions, num_objectives=None, max_evaluations=inf, worker_pool=None, mp_module=None, phenome_preprocessor=None, name=None)¶
Constructor.
Parameters:
- objective_functions (callable or sequence of callables) – If this argument is simply a function, it is taken ‘as-is’. If a sequence of callables is provided, these are wrapped in a BundledObjectives helper object, so that a single function call returns a list of objective values.
- num_objectives (int, optional) – The number of objectives. If omitted, this number is guessed from the number of objective functions.
- max_evaluations (int, optional) – The maximum budget of function evaluations. By default there is no restriction.
- worker_pool (Pool, optional) – A pool of worker processes. Default is None (no parallelization).
- mp_module (module, optional) – Either multiprocessing, multiprocessing.dummy (default), or a MockMultiProcessing instance. This is only used to create an internal lock around bookkeeping code in various places. The lock is only required for asynchronous parallelization, but not for the parallelization with a worker pool in batch_evaluate().
- phenome_preprocessor (callable, optional) – A callable potentially applying transformations or checks to the phenome. Modifications should only be applied to a copy of the input. The (modified) phenome must be returned. When this pre-processing raises an exception, no function evaluations are counted. By default, no pre-processing is applied.
- name (str, optional) – A nice name for humans to read.
__str__()¶
Return the name of this problem.
batch_evaluate(individuals)¶
Evaluate a batch of individuals.
Objective values are written directly into the individuals’ corresponding attributes.
Raises: ResourcesExhausted – If the budget is not sufficient to evaluate all individuals, this exception is thrown.
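A minimal usage sketch of Problem for direct use; the sphere function, its name, and the chosen budget are only illustrative.
from optproblems import Problem

# an illustrative objective function
def sphere(phenome):
    return sum(x ** 2 for x in phenome)

# wrap a single function and limit the budget to 100 evaluations
problem = Problem(sphere, max_evaluations=100, name="Sphere example")
print(problem)  # __str__ returns the name
print(problem([1.0, 2.0, 3.0]))  # 14.0, counted against the budget
print(problem.remaining_evaluations)  # 99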
class optproblems.base.Individual(phenome=None, objective_values=None)¶
A data structure to store objective values together with the solution.
Some methods of the problem classes expect objects with the two attributes phenome and objective_values. The exact type of these objects is irrelevant, but this class would be the obvious fit. The term ‘phenome’ stems from biology and means the whole set of phenotypic entities, in other words the form of appearance, of an animate being. So, it matches quite well as a description of what is evaluated by the objective function.
__init__(phenome=None, objective_values=None)¶
Constructor.
Parameters:
- phenome (object, optional) – An arbitrary object which somehow can be evaluated by an objective function.
- objective_values (float or list, optional) – A single float or a list of objective values.
class optproblems.base.MockMultiProcessing¶
Mocks part of the interface of the multiprocessing module.
This class provides no functionality except for calls to Pool.map, which are directed to the built-in map() function. Factory methods Manager(), Lock(), and Pool() all simply return this instance. The implemented interface is the subset required by the class Problem. The motivation is to provide very lightweight code for problem instances that do not need parallelism or concurrency, and thus no synchronization.
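A small sketch of the intended use: an instance can be passed as mp_module when neither parallelism nor concurrency is needed (the objective function below is illustrative).
from optproblems import Problem
from optproblems.base import MockMultiProcessing

def sphere(phenome):
    return sum(x ** 2 for x in phenome)

# purely sequential setup, so skip real locks entirely
problem = Problem(sphere, mp_module=MockMultiProcessing())
print(problem([0.5, 0.5]))  # 0.5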
class optproblems.base.BundledObjectives(objective_functions)¶
Helper class to let several distinct functions appear as one.
__call__(phenome)¶
Collect objective values from the objective functions.
Objective values are returned as a flattened list.
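A short sketch of the flattening behavior (the two functions are illustrative); Problem applies the same wrapping automatically when it receives a sequence of callables.
from optproblems.base import BundledObjectives

def total(phenome):
    return sum(phenome)

def spread(phenome):
    return max(phenome) - min(phenome)

bundled = BundledObjectives([total, spread])
print(bundled([1.0, 2.0, 3.0]))  # [6.0, 2.0]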
Parallelism Example¶
In this example, solutions are evaluated with four processes in parallel. Unfortunately, multiprocessing in Python is very brittle and incurs considerable overhead, so it is not activated by default. If you have a real-world problem and the objective function simply calls a subprocess anyway, multiprocessing.dummy would be sufficient to save time.
import math
import random
import multiprocessing as mp
from optproblems import *
# define an objective function
def example_function(phenome):
    # some expensive calculations
    for i in range(10000000):
        math.sqrt(i)
    return sum(x ** 2 for x in phenome)
# use multiprocessing with four worker processes
pool = mp.Pool(processes=4)
problem = Problem(example_function, worker_pool=pool, mp_module=mp)
# generate random solutions
solutions = [Individual([random.random() * 5 for _ in range(5)]) for _ in range(50)]
# evaluate solutions in parallel
problem.batch_evaluate(solutions)
# objective values were stored together with decision variables
for solution in solutions:
    print(solution.phenome, solution.objective_values)
# show counted evaluations
print(problem.consumed_evaluations, problem.remaining_evaluations)
pool.close()
pool.join()
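As noted above, if the objective function mostly waits on a subprocess or other I/O, a thread pool from multiprocessing.dummy may already be enough. A minimal variant of the setup above, reusing example_function, as a sketch:
import multiprocessing.dummy as mp_dummy

# threads instead of processes: less overhead, but no speedup for pure Python math
pool = mp_dummy.Pool(processes=4)
problem = Problem(example_function, worker_pool=pool, mp_module=mp_dummy)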
Caching¶
class optproblems.base.Cache(problem, hash_function=<class 'tuple'>)¶
Wrapper to save objective function evaluations of duplicates.
This wrapper stores evaluations in an archive to reuse them for potentially occurring duplicates of previously seen solutions. This is especially useful in discrete search spaces.
Note
This wrapper lets you limit the number of true evaluations, but not the number of ‘virtual’ ones. This may lead to infinite loops if your optimization algorithm contains bugs. For limiting the number of virtual evaluations, a construct such as Problem(Cache(Problem(...)), max_evaluations=x) would work.
__call__(phenome)¶
Cached evaluation of one solution.
Look up the solution in the archive; if it is not found, delegate to the original problem and store the result in the archive, if valid.
__init__(problem, hash_function=<class 'tuple'>)¶
Constructor.
Parameters:
- problem (Problem) – A problem instance to be wrapped by this object.
- hash_function (callable, optional) – A function that gets the phenome as input and returns a hashable key for storing the objective values in a dictionary. By default we convert the phenome into a tuple to obtain this functionality.
__str__()¶
Return the name of this problem.
static are_objectives_valid(objective_values)¶
Guess if a variable contains valid objective values.
Objective values are only stored in the archive if this method returns True. In particular, None, strings, and empty lists are considered invalid. A list of arbitrary objects except for None and the empty list is considered valid, however. So, you may have to override this method if you have very exotic objective values.
batch_evaluate(individuals)¶
Evaluate a batch of individuals, trying to avoid re-evaluations.
If there is an entry for an individual in the archive, the values from there are reused as objective values. Else, the evaluation is delegated to the original problem. Finally, the new evaluations are stored in the archive, if they are valid.
Warning
If worker_pool is not None, duplicates in a batch will not be recognized, leading to an expensive function evaluation for each one.
Example¶
Illustration of how to save expensive evaluations by identifying duplicates.
import random
from optproblems import *
# define an objective function
def example_function(phenome):
    return sum(x ** 2 for x in phenome)
problem = Problem(example_function)
print(str(problem))
# try to save function evaluations by caching
problem = Cache(problem)
print(str(problem))
# generate random solutions
solutions = [Individual([random.random() * 5 for _ in range(5)]) for _ in range(10)]
# evaluate batch of solutions (by default in sequential order)
problem.batch_evaluate(solutions)
# objective values were stored together with decision variables
for solution in solutions:
    print(solution.phenome, solution.objective_values)
    # delete objective values
    solution.objective_values = None
# show counted evaluations
print(problem.consumed_evaluations, problem.remaining_evaluations)
# evaluating again is free thanks to the cache
problem.batch_evaluate(solutions)
for solution in solutions:
    print(solution.phenome, solution.objective_values)
print(problem.consumed_evaluations, problem.remaining_evaluations)
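A variant sketch with a custom hash_function; the rounding below is an assumption chosen for illustration and makes near-identical continuous phenomes share one archive entry.
def rounded_key(phenome):
    # phenomes that agree to three decimals map to the same archive key
    return tuple(round(x, 3) for x in phenome)

cached_problem = Cache(Problem(example_function), hash_function=rounded_key)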
Treatment of Bound Constraints¶
class optproblems.base.ScalingPreprocessor(from_cuboid, to_cuboid, previous_preprocessor=None)¶
Translation and linear transformation between arbitrary cuboids.
This class is useful to normalize the search space, while the problem can keep using its “native” units.
Note
This class makes use of the decorator design pattern for potential chaining of pre-processors, see https://en.wikipedia.org/wiki/Decorator_pattern
__call__(phenome)¶
Transform from one space to the other.
This function does not check if the phenome is actually inside from_cuboid.
__init__(from_cuboid, to_cuboid, previous_preprocessor=None)¶
Constructor.
Parameters:
- from_cuboid (tuple of sequence) – This is the space an optimization algorithm would use. The first sequence contains the lower bounds, the second sequence contains the upper bounds for each variable.
- to_cuboid (tuple of sequence) – This is the domain of the optimization problem. The first sequence contains the lower bounds, the second sequence contains the upper bounds for each variable.
- previous_preprocessor (callable, optional) – Another callable that processes the phenome before this one does.
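A sketch of normalizing the search space (the bounds and function below are illustrative): the optimization algorithm proposes points in the unit cube, while the objective function receives values in its native domain.
from optproblems import Problem
from optproblems.base import ScalingPreprocessor

def native_function(phenome):
    # expects decision variables in [-5, 5]
    return sum(x ** 2 for x in phenome)

dim = 3
unit_cube = ([0.0] * dim, [1.0] * dim)
native_cuboid = ([-5.0] * dim, [5.0] * dim)
scaling = ScalingPreprocessor(from_cuboid=unit_cube, to_cuboid=native_cuboid)
problem = Problem(native_function, phenome_preprocessor=scaling)
# the midpoint of the unit cube maps to the origin of the native domain
print(problem([0.5] * dim))  # expected: 0.0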
class optproblems.base.BoundConstraintError(value, min_bound, max_bound, variable_name=None)¶
Used to report violations of bound constraints. Inherits from ValueError.
optproblems.base.min_bound_violated(value, min_bound)¶
Return True if min_bound != None and value < min_bound.
optproblems.base.max_bound_violated(value, max_bound)¶
Return True if max_bound != None and value > max_bound.
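For illustration (the values are arbitrary):
from optproblems.base import min_bound_violated, max_bound_violated

print(min_bound_violated(-0.1, 0.0))  # True, below the lower bound
print(max_bound_violated(0.5, None))  # False, no upper bound given
print(max_bound_violated(1.2, 1.0))   # True, above the upper bound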
optproblems.base.project(value, min_bound, max_bound)¶
Clip the value to the feasible range.
Returns: value – The repaired decision variable.
optproblems.base.reflect(value, min_bound, max_bound)¶
Reflect the value between the two bounds until it lies inside them.
Returns: value – The repaired decision variable.
optproblems.base.wrap(value, min_bound, max_bound)¶
Treats the feasible space as a torus.
Returns: value – The repaired decision variable.
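A quick comparison of the three repair helpers on a value that violates the upper bound of [0.0, 1.0]; the expected outputs follow from the descriptions above.
from optproblems.base import project, reflect, wrap

value, min_bound, max_bound = 1.3, 0.0, 1.0
print(project(value, min_bound, max_bound))  # expected 1.0 (clipped to the bound)
print(reflect(value, min_bound, max_bound))  # expected 0.7 (mirrored at the upper bound)
print(wrap(value, min_bound, max_bound))     # expected about 0.3 (wrapped around)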
class optproblems.base.BoundConstraintsChecker(bounds, previous_preprocessor=None)¶
A pre-processor raising exceptions if bound constraints are violated.
Note
This class makes use of the decorator design pattern for potential chaining of pre-processors, see https://en.wikipedia.org/wiki/Decorator_pattern
__call__(phenome)¶
Check the bound constraints and raise an exception if necessary.
Raises: BoundConstraintError – If any bound constraint is violated.
__init__(bounds, previous_preprocessor=None)¶
Constructor.
Parameters:
- bounds (tuple of sequence) – The first sequence contains the lower bounds, the second sequence contains the upper bounds for each variable.
- previous_preprocessor (callable, optional) – Another callable that processes the phenome before this one does.
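A minimal sketch (bounds chosen for illustration): a feasible phenome should simply be passed through, while an infeasible one raises BoundConstraintError.
from optproblems.base import BoundConstraintsChecker, BoundConstraintError

bounds = ([0.0] * 2, [1.0] * 2)
checker = BoundConstraintsChecker(bounds)
print(checker([0.2, 0.9]))  # feasible, returned unchanged
try:
    checker([0.2, 1.5])  # the second variable violates the upper bound
except BoundConstraintError as error:
    print("violation detected:", error)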
class optproblems.base.BoundConstraintsRepair(bounds, repair_modes, previous_preprocessor=None)¶
A pre-processor that repairs violations of bound constraints.
More information about the available repair methods can be found in [Wessing2013].
Note
This class makes use of the decorator design pattern for potential chaining of pre-processors, see https://en.wikipedia.org/wiki/Decorator_pattern
References
[Wessing2013] S. Wessing (2013). Repair Methods for Box Constraints Revisited. In: Applications of Evolutionary Computation. Vol. 7835 of Lecture Notes in Computer Science, pp. 469-478, Springer. https://dx.doi.org/10.1007/978-3-642-37192-9_47
__call__
(phenome)¶ Return a repaired version of this phenome.
Does not modify the original.
-
__init__(bounds, repair_modes, previous_preprocessor=None)¶
Constructor.
Parameters:
- bounds (tuple of sequence) – The first sequence contains the lower bounds, the second sequence contains the upper bounds for each variable.
- repair_modes (sequence) – Contains an individual repair mode for each variable. The methods projection, reflection, and wrapping are supported. Values of None indicate that repair is not possible/desired. If a constraint violation is detected in this case, an exception is raised.
- previous_preprocessor (callable, optional) – Another callable that processes the phenome before this one does.
Example¶
The following example shows the repair of bound constraint violations by reflection.
import random
from optproblems import *
# define an objective function
def example_function(phenome):
    return sum(x ** 2 for x in phenome)
# assume the following bounds
bounds = ([0.0] * 5, [1.0] * 5)
# before evaluations, possible constraint violations will be repaired
repair = BoundConstraintsRepair(bounds, ["reflection"] * 5)
problem = Problem(example_function, phenome_preprocessor=repair)
# generate random solutions with violated constraints
solutions = [Individual([random.random() * 5 for _ in range(5)]) for _ in range(10)]
# evaluate batch of solutions (by default in sequential order)
problem.batch_evaluate(solutions)
# objective values were stored together with decision variables
# repaired phenome was not stored!
for solution in solutions:
    print(solution.phenome, solution.objective_values)
# use repair explicitly
for solution in solutions:
    solution.phenome = repair(solution.phenome)
    print(solution.phenome, solution.objective_values)
# show counted evaluations
print(problem.consumed_evaluations, problem.remaining_evaluations)
Benchmarking¶
For comparing optimization algorithms experimentally, it is common to use “artificial” problems, whose true optima are known.
class optproblems.base.TestProblem(objective_functions, num_objectives=None, max_evaluations=inf, worker_pool=None, mp_module=None, phenome_preprocessor=None, name=None)¶
Abstract base class for artificial test problems.
get_locally_optimal_solutions(max_number=None)¶
Return locally optimal solutions.
This is an abstract method. Implementations must be deterministic. This method should be most useful for single-objective, continuous problems.
get_optimal_solutions(max_number=None)¶
Return globally optimal or Pareto-optimal solutions.
This is an abstract method. Implementations must be deterministic. In the multi-objective case, the generated solutions should be evenly distributed over the whole Pareto-set.
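A hypothetical subclass sketch; the Sphere class, its constructor arguments, and the stated optimum are assumptions chosen for illustration, not part of the library.
from optproblems import Individual
from optproblems.base import TestProblem

class Sphere(TestProblem):
    # hypothetical test problem: f(x) = sum of squares, optimum at the origin

    def __init__(self, num_variables=2, **kwargs):
        TestProblem.__init__(self, self.objective_function, num_objectives=1, **kwargs)
        self.num_variables = num_variables

    def objective_function(self, phenome):
        return sum(x ** 2 for x in phenome)

    def get_optimal_solutions(self, max_number=None):
        # deterministic: the single global optimum is the origin
        return [Individual([0.0] * self.num_variables)]

    def get_locally_optimal_solutions(self, max_number=None):
        # the problem is unimodal, so the only local optimum is the global one
        return self.get_optimal_solutions(max_number)

problem = Sphere(num_variables=3)
optima = problem.get_optimal_solutions()
problem.batch_evaluate(optima)
print(optima[0].phenome, optima[0].objective_values)  # [0.0, 0.0, 0.0] 0.0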