Model-based hypervolume maximization: this was a rather greedy approach that avoids the expected hypervolume improvement criterion.
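A minimal sketch of what such a greedy criterion could look like for a bi-objective minimization problem (the helper names and the surrogate model interface are illustrative assumptions, not the actual implementation): instead of integrating over the model's predictive distribution, as the expected hypervolume improvement does, the candidate whose model-predicted objective vector yields the largest hypervolume improvement is chosen.

    import numpy as np

    def hypervolume_2d(points, ref):
        """Dominated hypervolume of a set of points (bi-objective minimization)
        w.r.t. a reference point. Dominated or out-of-bounds points contribute nothing."""
        pts = sorted(points, key=lambda p: p[0])   # sort by first objective
        hv, best_f2 = 0.0, ref[1]
        for f1, f2 in pts:
            hv += max(0.0, ref[0] - f1) * max(0.0, best_f2 - f2)
            best_f2 = min(best_f2, f2)
        return hv

    def greedy_hv_candidate(candidates, model_predict, front, ref):
        """Pick the candidate whose *predicted* objective vector maximizes the
        hypervolume improvement, ignoring model uncertainty (unlike EHVI)."""
        base_hv = hypervolume_2d(front, ref)
        improvements = [hypervolume_2d(front + [tuple(model_predict(x))], ref) - base_hv
                        for x in candidates]
        return candidates[int(np.argmax(improvements))]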
All algorithms are implemented in Python. The source code is provided here. Additionally, the packages numpy, scipy, cma, optproblems, dfoalgos, and evoalgos must be installed.
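For reference, the dependencies could be installed with pip (assuming the PyPI package names match the module names), e.g. pip install numpy scipy cma optproblems dfoalgos evoalgos.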
This algorithm is a further development of last year's entry. Based on data from a preliminary experiment on MPM2 test problems, a decision tree was built to decide which two-stage variant to apply for which search space dimension. The two-stage options were restarted local search and a clustering method; the local search algorithms were Nelder-Mead and CMA-ES. Before the two-stage algorithm starts, a single L-BFGS-B run is tried.
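A rough sketch of this control flow is shown below. The dimension threshold and the function names are hypothetical placeholders for illustration, not the decision tree actually learned from the MPM2 data.

    import numpy as np
    from scipy.optimize import minimize

    def choose_two_stage_variant(dim):
        """Hypothetical stand-in for the learned decision tree: maps the search
        space dimension to a two-stage variant and a local search algorithm."""
        if dim <= 4:                      # threshold is made up for illustration
            return "clustering", "nelder-mead"
        return "restarted-local-search", "cma-es"

    def initial_lbfgsb_run(objective, lower, upper):
        """The single L-BFGS-B run from the centroid of the search space."""
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        x0 = (lower + upper) / 2.0
        return minimize(objective, x0, method="L-BFGS-B",
                        bounds=list(zip(lower, upper)))

    # Usage on a toy 5-dimensional problem:
    result = initial_lbfgsb_run(lambda x: float(np.sum((x - 0.3) ** 2)),
                                [0.0] * 5, [1.0] * 5)
    variant, local_search = choose_two_stage_variant(dim=5)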
The main changes compared to last year are:
This entry uses the same code as for BBComp2016-1OBJ, but with a slightly different configuration. The local search algorithms are the same, but only restarted local search is used. The second difference is that all points ever sampled are stored in an archive and considered in the global stage.
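The archive idea can be illustrated with a small wrapper around the objective function that records every evaluated point, so that the global stage can take all of them into account; this is only a sketch of the concept, not the actual code.

    class ArchivingObjective:
        """Wraps an objective function and records every evaluated point."""

        def __init__(self, objective):
            self.objective = objective
            self.archive = []          # all points ever sampled, with their values

        def __call__(self, x):
            value = self.objective(x)
            self.archive.append((tuple(x), value))
            return value

The global stage would then choose new starting points far away from all points in the wrapper's archive (see the maximin sampling sketch further below).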
Standing 2OBJ: 1/7, Standing 3OBJ: 2/7
This approach recycles the existing code for the single-objective tracks. The first 75% of the budget is spent on single-objective restarted local search: an objective of the problem is chosen at random and a local search is run on it; then another objective is chosen at random and the previously found optimum is used as the starting point for this objective. This strategy is iterated until the allocated budget for this stage is exhausted. The motivation is to find some hopefully Pareto-optimal local optima, under the assumption that the problem is multimodal. The last 25% of the budget is spent on a (100 + 5)-SMS-EMOA with self-adaptive Gaussian mutation and no recombination, started from the non-dominated points found so far. This algorithm explicitly optimizes hypervolume and is thus hopefully able to provide some refinement.
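The first stage could look roughly like the following sketch; the function and parameter names are illustrative assumptions, and the actual entry uses its own restarted local search code rather than plain Nelder-Mead.

    import random
    import numpy as np
    from scipy.optimize import minimize

    def single_objective_stage(objectives, lower, upper, budget_evals):
        """Cycle through randomly chosen objectives, using the previously found
        optimum as the starting point for the next local search."""
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        x = lower + np.random.rand(len(lower)) * (upper - lower)  # first start point
        archive, used, prev = [], 0, None
        while used < budget_evals:
            # pick another objective at random (different from the previous one)
            f = random.choice([g for g in objectives if g is not prev])
            prev = f
            res = minimize(f, x, method="Nelder-Mead",
                           options={"maxfev": budget_evals - used})
            used += res.nfev
            x = np.clip(res.x, lower, upper)   # start point for the next objective
            archive.append(x.copy())
        return archive

The remaining 25% of the budget would then be handed to an SMS-EMOA (e.g. the implementation in the evoalgos package), seeded with the non-dominated points found among the archived solutions.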
The core concept of the submitted entry is to use a sequence of local searches. Every optimization begins with an L-BFGS-B (SciPy) run starting at the centroid of the search space. Starting points and obtained optima are generally recorded in an archive. Starting points for the remaining local searches are not chosen randomly, but with an adaptive sampling algorithm maximizing the minimal distance of the starting point to points in the archive. This sampling algorithm is called maximin reconstruction (MmR, Python implementation). As a distance metric we consider the Manhattan distance on a torus.
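The following sketch illustrates the principle with a simple candidate-based approximation, assuming a unit hypercube as the search space: among a set of uniformly drawn candidates, the point with the largest minimal torus Manhattan distance to the archive is chosen as the next starting point. The actual entry uses the maximin reconstruction algorithm from the diversipy package, which is more sophisticated than this sketch.

    import numpy as np

    def torus_manhattan(a, b):
        """Manhattan distance on the unit torus: each coordinate wraps around."""
        d = np.abs(np.asarray(a) - np.asarray(b))
        return float(np.sum(np.minimum(d, 1.0 - d)))

    def next_start_point(archive, dim, num_candidates=1000, rng=None):
        """Crude maximin selection: pick the candidate that maximizes the minimal
        distance to all archived points (starting points and found optima)."""
        rng = np.random.default_rng() if rng is None else rng
        candidates = rng.random((num_candidates, dim))
        scores = [min(torus_manhattan(c, a) for a in archive) for c in candidates]
        return candidates[int(np.argmax(scores))]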
The search space dimension determines exactly how the budget is spent:
In any case, the initial step size is chosen as min(0.1, max(0.005, 0.5 * minDistance)), where minDistance is the distance from the starting point to its nearest neighbor in the archive. The parameter tolfun of CMA-ES is set to 1e-6. All other algorithm parameters are kept at their default values.
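With the cma package, this configuration corresponds roughly to the following call; the function below is only an illustrative sketch, and run_cmaes_local_search, objective and start_point are placeholder names.

    import cma

    def run_cmaes_local_search(objective, start_point, min_distance):
        """One CMA-ES run with the step size rule described above."""
        sigma0 = min(0.1, max(0.005, 0.5 * min_distance))   # initial step size
        es = cma.CMAEvolutionStrategy(start_point, sigma0, {"tolfun": 1e-6})
        es.optimize(objective)          # all other parameters at their defaults
        return es.result.xbest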
Details about the sampling algorithm MmR and more information about two-stage algorithms in general can be found in my dissertation. A Python implementation of MmR is available in the package diversipy.