:mod:`base` --- Base Classes and Helpers
========================================

.. automodule:: optproblems.base

Optimization-related Exceptions
-------------------------------

.. autoclass:: optproblems.base.Aborted
    :members:

.. autoclass:: optproblems.base.ResourcesExhausted
    :members:

.. autoclass:: optproblems.base.Solved
    :members:

.. autoclass:: optproblems.base.Stalled
    :members:

Foundation
----------

.. autoclass:: optproblems.base.Problem
    :special-members: __init__
    :members:

.. autoclass:: optproblems.base.Individual
    :special-members: __init__
    :members:

.. autoclass:: optproblems.base.MockMultiProcessing
    :members:

.. autoclass:: optproblems.base.BundledObjectives
    :members:
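Resource Limit Example
^^^^^^^^^^^^^^^^^^^^^^

The following sketch illustrates how the exceptions above relate to
:class:`Problem`. It assumes that :class:`ResourcesExhausted` is re-exported
at package level and is raised once the budget given as ``max_evaluations``
is used up; consult the generated API reference above for the authoritative
behavior.

.. code:: python

    from optproblems import *

    # define an objective function
    def example_function(phenome):
        return sum(x ** 2 for x in phenome)

    # allow at most three evaluations (assumed keyword, see above)
    problem = Problem(example_function, max_evaluations=3)

    try:
        for _ in range(5):
            problem.evaluate(Individual([1.0, 2.0]))
    except ResourcesExhausted:
        # the fourth call exceeds the budget
        print("budget exhausted after",
              problem.consumed_evaluations, "evaluations")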
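Bundling Example
^^^^^^^^^^^^^^^^

A minimal sketch of combining several single-objective functions into one
multi-objective function with :class:`BundledObjectives`. It assumes that
the bundle simply collects the return values of the given functions in a
list and that :class:`Problem` accepts a ``num_objectives`` keyword; check
the generated API reference above if in doubt.

.. code:: python

    from optproblems import *

    # two single-objective functions
    def f1(phenome):
        return sum(x ** 2 for x in phenome)

    def f2(phenome):
        return sum((x - 2.0) ** 2 for x in phenome)

    # let both functions appear as one bi-objective function
    bundled = BundledObjectives([f1, f2])
    problem = Problem(bundled, num_objectives=2)

    solution = Individual([1.0, 1.0, 1.0])
    problem.evaluate(solution)
    # a list containing one value per bundled objective
    print(solution.objective_values)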
Parallelism Example
^^^^^^^^^^^^^^^^^^^

In this example, solutions are evaluated with four processes in parallel.
Unfortunately, multiprocessing in Python is very brittle and incurs
considerable overhead, so it is not activated by default. If you have a
real-world problem whose objective function simply calls an external
subprocess anyway, the thread-based :mod:`multiprocessing.dummy` module
would be sufficient to save time.

.. code:: python

    import math
    import random
    import multiprocessing as mp

    from optproblems import *

    # define an objective function
    def example_function(phenome):
        # some expensive calculations
        for i in range(10000000):
            math.sqrt(i)
        return sum(x ** 2 for x in phenome)

    # the guard is required on platforms that spawn worker processes
    if __name__ == "__main__":
        # use multiprocessing with four worker processes
        pool = mp.Pool(processes=4)
        problem = Problem(example_function, worker_pool=pool, mp_module=mp)

        # generate random solutions
        solutions = [Individual([random.random() * 5 for _ in range(5)])
                     for _ in range(50)]

        # evaluate solutions in parallel
        problem.batch_evaluate(solutions)

        # objective values were stored together with decision variables
        for solution in solutions:
            print(solution.phenome, solution.objective_values)

        # show counted evaluations
        print(problem.consumed_evaluations, problem.remaining_evaluations)

        pool.close()
        pool.join()

Caching
-------

.. autoclass:: optproblems.base.Cache
    :special-members: __init__
    :inherited-members: evaluate
    :members:

Example
^^^^^^^

This example illustrates how to save expensive evaluations by identifying
duplicate solutions.

.. code:: python

    import random

    from optproblems import *

    # define an objective function
    def example_function(phenome):
        return sum(x ** 2 for x in phenome)

    problem = Problem(example_function)
    print(str(problem))

    # try to save function evaluations by caching
    problem = Cache(problem)
    print(str(problem))

    # generate random solutions
    solutions = [Individual([random.random() * 5 for _ in range(5)])
                 for _ in range(10)]

    # evaluate batch of solutions (by default in sequential order)
    problem.batch_evaluate(solutions)

    # objective values were stored together with decision variables
    for solution in solutions:
        print(solution.phenome, solution.objective_values)
        # delete objective values again
        solution.objective_values = None

    # show counted evaluations
    print(problem.consumed_evaluations, problem.remaining_evaluations)

    # evaluating again is free thanks to the cache
    problem.batch_evaluate(solutions)
    for solution in solutions:
        print(solution.phenome, solution.objective_values)
    print(problem.consumed_evaluations, problem.remaining_evaluations)

Treatment of Bound Constraints
------------------------------

.. autoclass:: optproblems.base.ScalingPreprocessor
    :special-members: __init__
    :members:

.. autoclass:: optproblems.base.BoundConstraintError
    :members:

.. autofunction:: optproblems.base.min_bound_violated

.. autofunction:: optproblems.base.max_bound_violated

.. autofunction:: optproblems.base.project

.. autofunction:: optproblems.base.reflect

.. autofunction:: optproblems.base.wrap

.. autoclass:: optproblems.base.BoundConstraintsChecker
    :special-members: __init__
    :members:

.. autoclass:: optproblems.base.BoundConstraintsRepair
    :special-members: __init__
    :members:

Example
^^^^^^^

The following example shows how bound constraint violations are repaired by
reflection.

.. code:: python

    import random

    from optproblems import *

    # define an objective function
    def example_function(phenome):
        return sum(x ** 2 for x in phenome)

    # assume the following bounds
    bounds = ([0.0] * 5, [1.0] * 5)

    # before evaluation, possible constraint violations will be repaired
    repair = BoundConstraintsRepair(bounds, ["reflect"] * 5)
    problem = Problem(example_function, phenome_preprocessor=repair)

    # generate random solutions that violate the constraints
    solutions = [Individual([random.random() * 5 for _ in range(5)])
                 for _ in range(10)]

    # evaluate batch of solutions (by default in sequential order)
    problem.batch_evaluate(solutions)

    # objective values were stored together with decision variables,
    # but the repaired phenome was not stored!
    for solution in solutions:
        print(solution.phenome, solution.objective_values)

    # use the repair explicitly to obtain the repaired phenomes
    for solution in solutions:
        solution.phenome = repair(solution.phenome)
        print(solution.phenome, solution.objective_values)

    # show counted evaluations
    print(problem.consumed_evaluations, problem.remaining_evaluations)

Benchmarking
------------

For comparing optimization algorithms experimentally, it is common to use
"artificial" test problems whose true optima are known.

.. autoclass:: optproblems.base.TestProblem
    :members:
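Example
^^^^^^^

A minimal sketch of how such a test problem might be used. It assumes the
ZDT1 problem shipped in :mod:`optproblems.zdt` and the
``get_optimal_solutions`` interface of :class:`TestProblem` documented
above; the exact signatures may differ, so treat this as an illustration
rather than a reference.

.. code:: python

    from optproblems.zdt import ZDT1

    # a bi-objective test problem with known Pareto-optimal solutions
    problem = ZDT1()

    # sample a few optima (the argument caps their number, because the
    # true Pareto set contains infinitely many solutions)
    optima = problem.get_optimal_solutions(5)
    problem.batch_evaluate(optima)

    # the objective values of these solutions lie on the true Pareto front
    for opt in optima:
        print(opt.phenome, opt.objective_values)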