====== BBComp ======

===== GECCO 2017 =====

==== 1OBJ ====

==== 1OBJ-expensive ====

==== 2OBJ/3OBJ ====

==== 2OBJ-expensive ====

===== EMO 2017 =====

[[https://bbcomp.ini.rub.de/results/BBComp2017EMO/summary.html|Standing: 1/10]]

Model-based hypervolume maximization: a rather greedy approach that avoids computing the expected hypervolume improvement.
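The entry's code is not shown here; as a rough, hedged illustration of the greedy idea only (all names below are hypothetical), the dominated hypervolume of a two-objective front can be computed by sorting the front, and a greedy step then picks the candidate with the largest hypervolume gain:

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Hypervolume (minimization) of 2-D objective vectors w.r.t. a reference point."""
    ref = np.asarray(ref, dtype=float)
    pts = np.asarray(front, dtype=float)
    pts = pts[np.all(pts < ref, axis=1)]        # drop points outside the box
    if len(pts) == 0:
        return 0.0
    pts = pts[np.argsort(pts[:, 0])]            # ascending in the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                        # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def greedy_pick(front, candidates, ref):
    """Greedy step: choose the candidate whose addition increases hypervolume most."""
    base = hypervolume_2d(front, ref)
    gains = [hypervolume_2d(list(front) + [c], ref) - base for c in candidates]
    return candidates[int(np.argmax(gains))]
```

This is a sketch of plain greedy hypervolume maximization; the actual entry is model-based and its details are not documented on this page.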
  
===== GECCO 2016 =====
==== 1OBJ ====
  
[[https://bbcomp.ini.rub.de/results/BBComp2016-1OBJ/summary.html|Standing: 1/14]]
  
This algorithm is a further development of last year's entry. Based on data from a preliminary experiment on [[https://ls11-www.cs.tu-dortmund.de/people/swessing/optproblems/doc/mpm.html|MPM2]] test problems, a decision tree was built to decide which two-stage variant to apply for which dimension. The two-stage options were restarted local search and a clustering method; the candidate local search algorithms were Nelder-Mead and CMA-ES. Before the two-stage algorithm starts, a single L-BFGS-B run is tried.
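A much-simplified sketch of the skeleton described above, assuming SciPy's `minimize` (the decision tree, the clustering method, and CMA-ES are omitted; `optimize` and its parameters are illustrative names, not the entry's API):

```python
import numpy as np
from scipy.optimize import minimize

def optimize(f, bounds, budget, rng=None):
    """Sketch: one L-BFGS-B run from the centroid, then restarted
    Nelder-Mead from random points until the budget is exhausted."""
    rng = rng or np.random.default_rng()
    lo, hi = np.asarray(bounds, dtype=float).T
    evals = [0]

    def g(x):                          # budget-counting wrapper
        evals[0] += 1
        return f(np.clip(x, lo, hi))

    best = minimize(g, (lo + hi) / 2, method="L-BFGS-B",
                    bounds=list(zip(lo, hi)))
    while evals[0] < budget:
        x0 = rng.uniform(lo, hi)       # the real entry chooses restarts adaptively
        res = minimize(g, x0, method="Nelder-Mead",
                       options={"maxfev": budget - evals[0]})
        if res.fun < best.fun:
            best = res
    return best
```

The actual entry replaces the random restarts with the two-stage variant selected by the decision tree.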
==== 1OBJ-expensive ====
  
[[https://bbcomp.ini.rub.de/results/BBComp2016-1OBJ-expensive/summary.html|Standing: 3/15]]
  
The same code as for BBComp2016-1OBJ, but with a slightly different configuration. The local search algorithms are the same, but only restarted local search is used. The second difference is that every point sampled so far is put into an archive and considered in the global stage.
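One way such an archive can steer the global stage is to place each restart far from everything evaluated so far. A minimal sketch (a cheap random-candidate stand-in, not the entry's actual sampling code; all names are illustrative):

```python
import numpy as np

def next_start(archive, num_candidates=100, rng=None):
    """Pick a restart point in the unit cube maximizing the minimal
    Euclidean distance to the archived points."""
    rng = rng or np.random.default_rng()
    archive = np.asarray(archive, dtype=float)
    cands = rng.random((num_candidates, archive.shape[1]))
    # pairwise distances: (candidates, archive points)
    dists = np.linalg.norm(cands[:, None, :] - archive[None, :, :], axis=2)
    return cands[np.argmax(dists.min(axis=1))]
```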
==== 2OBJ/3OBJ ====
  
[[https://bbcomp.ini.rub.de/results/BBComp2016-2OBJ/summary.html|Standing 2OBJ: 1/7]],
[[https://bbcomp.ini.rub.de/results/BBComp2016-3OBJ/summary.html|Standing 3OBJ: 2/7]]
  
This approach recycles the existing code from the single-objective tracks. The first 75% of the budget is spent on single-objective restarted local search: an objective of the problem is chosen at random and a local search is run on it. Then another objective is chosen at random and the previously found optimum is used as the starting point for this objective. This strategy is iterated until the allocated budget is exhausted. The motivation is to find some hopefully Pareto-optimal local optima, under the assumption that the problem is multimodal.
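The alternation loop described above can be sketched as follows, assuming SciPy's `minimize` as the local search (the entry uses its own restarted-local-search machinery; `alternating_local_search` is an illustrative name):

```python
import numpy as np
from scipy.optimize import minimize

def alternating_local_search(objectives, x0, budget, rng=None):
    """Sketch: repeatedly pick a random objective and run a local search
    on it, warm-started at the previously found optimum."""
    rng = rng or np.random.default_rng()
    evals = [0]
    found = []                        # candidate Pareto-optimal points
    x = np.asarray(x0, dtype=float)
    while evals[0] < budget:
        f = objectives[rng.integers(len(objectives))]

        def g(z):                     # budget-counting wrapper
            evals[0] += 1
            return f(z)

        res = minimize(g, x, method="Nelder-Mead",
                       options={"maxfev": budget - evals[0]})
        x = res.x                     # warm start for the next objective
        found.append(x.copy())
    return found
```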
===== CEC 2015 =====
  
[[https://bbcomp.ini.rub.de/results/BBComp2015CEC/summary.html|Standing: 2/25]]
  
The core concept of the submitted entry is to use a sequence of local searches. Every optimization begins with an [[http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_l_bfgs_b.html|L-BFGS-B (SciPy)]] run starting at the centroid of the search space. Starting points and obtained optima are generally recorded in an archive. Starting points for the remaining local searches are not chosen randomly, but with an adaptive sampling algorithm that maximizes the minimal distance of the starting point to the points in the archive. This sampling algorithm is called maximin reconstruction (MmR, [[https://ls11-www.cs.tu-dortmund.de/people/swessing/diversipy/doc/hycusampling.html#diversipy.hycusampling.maximin_reconstruction|Python implementation]]). As distance metric, we use the Manhattan distance on a torus.
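The torus metric can be computed per coordinate by taking the shorter of the direct and the wrap-around distance; a minimal sketch for the unit hypercube (the function name is illustrative, not taken from diversipy):

```python
import numpy as np

def torus_manhattan(a, b):
    """Manhattan distance on the unit torus: in each coordinate, take the
    shorter way around, then sum over coordinates."""
    d = np.abs(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))
    return float(np.sum(np.minimum(d, 1.0 - d)))
```

Wrapping around avoids the bias of ordinary distances, which would otherwise push new starting points toward the boundary of the search space.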
 
Last modified: 2017-06-23 16:51 (external edit)