:mod:`algo` --- Evolutionary Algorithms
=======================================

.. automodule:: evoalgos.algo

.. autoclass:: evoalgos.algo.EvolutionaryAlgorithm
    :special-members: __init__
    :members:

.. autoclass:: evoalgos.algo.CommaEA
    :special-members: __init__
    :members:

.. autoclass:: evoalgos.algo.PlusEA
    :special-members: __init__
    :members:

.. autoclass:: evoalgos.algo.CMSAES
    :special-members: __init__
    :members:

.. autoclass:: evoalgos.algo.NSGA2b
    :special-members: __init__
    :members:

.. autoclass:: evoalgos.algo.SMSEMOA
    :special-members: __init__
    :members:


Examples
--------

In this section we show how to set up some of the provided algorithms.

CMSA-ES
^^^^^^^

The :class:`CommaEA` together with the :class:`CMSAIndividual
<evoalgos.individual.CMSAIndividual>` yields the covariance matrix
self-adaptation evolution strategy (CMSA-ES). Note that CMSA-ES is not to be
confused with CMA-ES.

.. code:: python

    import array
    import random

    from optproblems.continuous import DoubleSum

    from evoalgos.algo import CommaEA
    from evoalgos.individual import CMSAIndividual

    dim = 10
    problem = DoubleSum(dim, max_evaluations=5000)
    popsize = 5
    num_offspring = 4 * popsize
    population = []
    for _ in range(popsize):
        ind = CMSAIndividual(num_parents=popsize,
                             num_offspring=num_offspring)
        ind.genome = [random.uniform(-20.0, 20.0) for _ in range(dim)]
        # for lowest CPU time, array.array is recommended
        ind.genome = array.array("d", ind.genome)
        population.append(ind)

    ea = CommaEA(problem, population, popsize, num_offspring)
    ea.run()
    print(ea.population[0].objective_values, problem.consumed_evaluations)

However, there is also the convenience class :class:`CMSAES`, which sets
default values for several arguments. The easiest way to use it is via the
:func:`minimize` classmethod, which mimics the interface of SciPy optimizers.

.. code:: python

    from evoalgos.algo import CMSAES

    def sphere(phenome):
        return sum(x * x for x in phenome)

    result = CMSAES.minimize(sphere, [1.333, 1.999])

SMS-EMOA
^^^^^^^^

In this example, we create an instance of the SMS-EMOA and let it run on a toy
problem.

.. code:: python

    import math
    import random

    from optproblems import Problem

    from evoalgos.algo import SMSEMOA
    from evoalgos.individual import ESIndividual

    def obj_function(phenome):
        return sum(x ** 2 for x in phenome), sum((x - 2) ** 2 for x in phenome)

    problem = Problem(obj_function,
                      num_objectives=2,
                      max_evaluations=1000,
                      name="Example")
    dim = 10
    popsize = 10
    population = []
    init_step_sizes = [0.25]
    for _ in range(popsize):
        population.append(ESIndividual(genome=[random.random() * 5 for _ in range(dim)],
                                       learning_param1=1.0 / math.sqrt(dim),
                                       learning_param2=0.0,
                                       strategy_params=init_step_sizes,
                                       recombination_type="none",
                                       num_parents=1))

    ea = SMSEMOA(problem, population, popsize, num_offspring=40)
    ea.run()
    for individual in ea.population:
        print(individual)

.. _parallelization:

Parallelization
^^^^^^^^^^^^^^^

All EAs in this package are prepared for parallel execution. The example below
contains both synchronous and asynchronous parallelization of the SMS-EMOA: a
problem with two objectives is optimized with a parallelization degree of four
and 1000 evaluations. The code works the same way for the steady-state NSGA-II
by simply substituting the appearances of :class:`SMSEMOA
<evoalgos.algo.SMSEMOA>` with :class:`NSGA2b <evoalgos.algo.NSGA2b>`. The
adaptation to the single-objective case should also be straightforward. Note
that the asynchronous implementation uses the :mod:`threading` API, so a
real-world objective function must be a wrapper around a blocking system call
to obtain truly parallel execution of evaluations. The parallelization in the
synchronous case is provided by :class:`optproblems.base.Problem`, which
alternatively supports :mod:`multiprocessing`.

.. code:: python

    import time
    from threading import Lock, Thread
    from multiprocessing.dummy import Pool
    from random import random

    from optproblems import Problem

    from evoalgos.algo import SMSEMOA
    from evoalgos.individual import SBXIndividual

    def obj_function(p):
        time.sleep(0.01)  # stands in for a blocking system call
        return sum(x ** 2 for x in p), sum((x - 2) ** 2 for x in p)

    # initialization
    parallelism = 4
    popsize = 100
    population = []
    pool = Pool(processes=parallelism)
    problem = Problem(obj_function, num_objectives=2, worker_pool=pool)
    for _ in range(popsize):
        population.append(SBXIndividual(genome=[random() * 5 for _ in range(8)]))
    problem.batch_evaluate(population)

    # asynchronous optimization
    problem.remaining_evaluations = 1000
    lock = Lock()
    eas = []
    for _ in range(parallelism):
        eas.append(SMSEMOA(problem, population, len(population), 1, lock=lock))
    threads = [Thread(target=ea.run) for ea in eas]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()

    # synchronous optimization
    problem.remaining_evaluations = 1000
    ea = SMSEMOA(problem, population, len(population), parallelism)
    ea.run()
    pool.close()
    pool.join()
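Both SMS-EMOA and NSGA2b rank candidates by Pareto dominance. As a quick,
standard-library-only illustration of that relation on the toy bi-objective
problem from the SMS-EMOA example (a minimal sketch; the ``dominates`` helper
and the sample points are ours for illustration, not part of evoalgos):

.. code:: python

    def obj_function(phenome):
        # the toy problem from the SMS-EMOA example; both objectives are minimized
        return sum(x ** 2 for x in phenome), sum((x - 2) ** 2 for x in phenome)

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (minimization):
        a is no worse in every objective and strictly better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    points = [[0.0, 0.0], [2.0, 2.0], [1.0, 1.0], [3.0, 3.0]]
    objs = [obj_function(p) for p in points]
    # [0, 0] and [2, 2] are the extremes of the Pareto set and [1, 1] lies
    # between them, so none of these three dominates another; [3, 3] is worse
    # than [2, 2] in both objectives and is therefore dominated.
    non_dominated = [p for p, f in zip(points, objs)
                     if not any(dominates(g, f) for g in objs if g != f)]
    print(non_dominated)  # [[0.0, 0.0], [2.0, 2.0], [1.0, 1.0]]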
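The remark that threads only pay off when evaluations block can also be
demonstrated with the standard library alone (a minimal sketch; the
``slow_objective`` function and pool sizes are our assumptions, not evoalgos
API). :func:`time.sleep` releases the GIL, just like I/O or a blocking system
call, so a thread-based pool overlaps the waits:

.. code:: python

    import time
    from multiprocessing.dummy import Pool  # thread-based pool, same API as multiprocessing.Pool

    def slow_objective(x):
        # stand-in for a blocking evaluation (e.g. an external simulator);
        # time.sleep releases the GIL, so threads can overlap these waits
        time.sleep(0.05)
        return x * x

    genomes = list(range(8))

    start = time.perf_counter()
    with Pool(processes=4) as pool:
        results = pool.map(slow_objective, genomes)
    elapsed = time.perf_counter() - start

    print(results)  # objective values in input order
    print(elapsed)  # roughly 8 * 0.05 / 4 = 0.1 s instead of 0.4 s sequentially

A CPU-bound pure-Python objective would see no such speedup from threads; in
that case :mod:`multiprocessing` (supported by the synchronous path via
``worker_pool``) is the appropriate choice.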