
Density estimation for selecting leaders and maintaining archive in MOPSO

Wang Hu
School of Computer Science & Engineering
University of Electronic Science and Technology of China
Chengdu, Sichuan 610054, China
[email protected]

Gary G. Yen
School of Electrical and Computer Engineering
Oklahoma State University
Stillwater, OK 74078, U.S.A.
[email protected]

Abstract—Leader selection and archive maintenance are the two key issues, with an important impact on the quality of the obtained approximate Pareto front, to be tackled when extending single-objective Particle Swarm Optimization to Multi-Objective Particle Swarm Optimization (MOPSO). In this paper, a new density estimation method is proposed for selecting leaders and maintaining the archive in MOPSO. The density of a nondominated solution in the archive is calculated according to the Parallel Cell Distance after the archive is mapped from the Cartesian coordinate system into the Parallel Cell Coordinate System. A new MOPSO based on this density estimation method for selecting leaders and maintaining the archive is proposed to improve convergence and diversity. The experimental results show that the proposed algorithm is significantly superior to five chosen state-of-the-art MOPSOs on 12 test problems in terms of the hypervolume performance indicator.

Keywords—particle swarm optimization; multiobjective optimization problem; evolutionary computation; multiobjective particle swarm optimization

I. INTRODUCTION

Multiobjective optimization problems (MOPs) consist of several objectives that are to be optimized simultaneously. In MOPs, the objectives to be optimized are normally in conflict with one another, which means that an improvement in one objective incurs a degradation in one or more of the remaining objectives. As a result, there is no single ideal optimal solution but rather a set of good tradeoff solutions, known as Pareto-optimal solutions, which represent the best possible compromises among the objectives.

Evolutionary algorithms (EAs), a class of population-based stochastic optimization methods, are especially suitable and already popular for solving MOPs because they can deal simultaneously with a set of possible solutions in a single run, instead of the series of separate runs required by traditional optimization techniques.

Particle Swarm Optimization (PSO), a meta-heuristic inspired by the social behavior of bird flocking or fish schooling, is a population-based stochastic optimization technique developed by Eberhart and Kennedy [1]. In particular, PSO is simple in concept, easy to implement, and computationally efficient compared with other heuristic techniques such as the Genetic Algorithm [2]. This relative simplicity and its success as a single-objective optimizer have motivated researchers to extend PSO from single-objective problems (SOPs) to MOPs.

However, there are two key issues, both with an important impact on the quality of the approximate Pareto front, to be tackled when applying PSO to MOPs. The first issue is archive maintenance. The external elitist archive, which stores the nondominated solutions found by an algorithm so far, filtered by a certain quality measure such as density, is an important feature in Multiobjective Particle Swarm Optimizers (MOPSOs). Although there are several MOPSO proposals in which the archive size is unconstrained [3], [4], a pre-fixed maximum archive size is widely applied because the number of nondominated solutions can grow very fast, which quickly increases the computational cost of updating the archive; besides, physical memory is always finite. So, an appropriate archiving strategy for obtaining an accurate and well-distributed approximate Pareto front is required to filter out nondominated solutions with lower quality measures when there is no more room in the archive to host a new qualified solution. The second issue is the selection of the global best solution, referred to as leader selection. There is no single absolute best solution but rather a set of nondominated solutions in MOPSO. In Single-objective Particle Swarm Optimization (SOPSO), the gBest for the whole population and the pBest for a particle are both unique solutions, namely those with the smallest objective value in minimization problems. Yet in MOPSO, the diverse candidates for the gBest and the pBest, which can be selected from the nondominated set according to different strategies, result in different flight directions for a particle, and these have an important effect on the convergence and diversity of a MOPSO algorithm.

Although several strategies, such as Crowding Distance [5], [6] and Adaptive Grid [7], have been proposed for selecting leaders and maintaining the archive in existing MOPSOs, there is still considerable room to improve the performance of MOPSO. In this paper, we propose a new method to estimate the density of the nondominated solutions in the archive for selecting leaders and maintaining the archive. Based on this density estimation method, a new MOPSO is proposed to improve performance in terms of convergence and diversity.


The remainder of this paper is organized as follows. The related works are surveyed in Section II. The new algorithm for MOPSO is presented in Section III. The experimental results are analyzed in Section IV. The conclusions are summarized in the last section.

II. THE RELATED WORKS

Since the first MOPSO was proposed by Moore and Chapman in 1999 [8], many versions of MOPSOs have been published over the last decade. A comprehensive survey of the existing MOPSOs up to 2006 was carried out by Reyes-Sierra and Coello in [9]. In addition, specific issues, such as leader selection and archive updating, have been empirically investigated in recent comparative studies [8]-[10]. Here, we focus on two well-regarded methods, Crowding Distance and Adaptive Grid, which are widely used as density estimators not only for maintaining the archive but also for selecting leaders in MOPSOs.

A. Crowding Distance

Density is the most popular quality measure of the nondominated solutions in an archive. Crowding Distance [5], [6] is a density estimation method based on nearest neighbors which gives an idea of how crowded the closest neighbors of a given particle are in objective space. This measure estimates the perimeter of the hyper-cube formed by the nearest neighbors as vertices. Usually the Crowding Distance is calculated after normalization of the objective values. In detail, the solutions are first sorted in ascending order of each objective function value. The Crowding Distance contribution of a particular solution along an objective is the distance between its two neighboring solutions in that objective. The boundary solutions, which have the lowest and highest objective function values, are assigned an infinite Crowding Distance so that they are always selected. This process is repeated for each objective function, and the final Crowding Distance of a solution is computed by summing its individual Crowding Distance values over all objective functions.
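For concreteness, the following is a minimal sketch of the crowding-distance computation described above; the function name, the NumPy-based array layout, and the normalization guard are our own illustrative choices, not taken from the cited papers.

```python
import numpy as np

def crowding_distance(objs):
    """Crowding distance of each solution; objs has shape (K, M) for K solutions and M objectives."""
    K, M = objs.shape
    dist = np.zeros(K)
    for m in range(M):
        order = np.argsort(objs[:, m])             # sort by the m-th objective
        f = objs[order, m]
        span = (f[-1] - f[0]) or 1.0               # normalization; guard against a flat objective
        dist[order[0]] = dist[order[-1]] = np.inf  # boundary solutions are always kept
        # interior solutions accumulate the normalized gap between their two neighbors
        dist[order[1:-1]] += (f[2:] - f[:-2]) / span
    return dist
```

In the usage described below, an overflowing archive would discard the member with the smallest value, while leader selection would prefer members with the largest values.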

The Crowding Distance technique in MOPSO can be applied not only to update the archive [11]-[13] but also to select the gBest [14], [15]. When maintaining the archive, it is straightforward to discard the nondominated solution with the worst density, as measured by Crowding Distance, when the archive has no more room to host new qualified solutions. Conversely, the global best of the particles is selected from the nondominated solutions with the highest Crowding Distance values: a different leader for each particle is selected randomly from a specified top portion of the repository sorted by decreasing Crowding Distance. However, Crowding Distance may lead to premature convergence because solutions close to the extremes are retained in the archive [10]. At the same time, the extreme nondominated solutions with infinite Crowding Distance values are given the highest priority to become gBests, which decreases the diversity in the swarm.

B. Adaptive Grid

Adaptive Grid was proposed in [7] to estimate crowded regions. In an archive, the objective space is divided into K^M hypercubes, where M is the number of objectives and K, defined by the user, is the number of divisions along each objective dimension. If a new solution inserted into the archive lies outside the current bounds of the grid, the grid has to be recalculated and each member in it has to be relocated. Each hypercube can be interpreted as a geographical region with a label number. Adaptive Grid is used to distribute the solutions over the hypercubes in as uniform a way as possible.

Adaptive Grid was used for updating the archive in MOPSO [12]. When updating the archive, those nondominated solutions located in less crowded regions of objective space are given priority to remain in the archive over those lying in highly crowded regions. Adaptive Grid was also used for selecting the gBest of MOPSO in [11], [12]. A fitness value is assigned to each hypercube in inverse proportion to the number of elite particles lying in it; one of the hypercubes is selected by roulette-wheel selection, and a particle in the chosen hypercube is selected randomly to serve as the gBest for a given particle. This method therefore biases the selection toward under-represented areas of the estimated Pareto front.
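The bookkeeping just described can be sketched as follows; the function names, the default number of divisions, and the flattening of cell coordinates into a single label are illustrative assumptions on our part, not the exact procedures of [7], [11], or [12].

```python
import numpy as np

def grid_labels(objs, divisions):
    """Assign each archive member (rows of objs, shape (K, M)) to a hypercube label."""
    lo, hi = objs.min(axis=0), objs.max(axis=0)
    width = np.where(hi > lo, hi - lo, 1.0)                  # guard degenerate objectives
    cell = np.minimum(((objs - lo) / width * divisions).astype(int), divisions - 1)
    # flatten the M-dimensional cell coordinates into one label per member
    return np.ravel_multi_index(cell.T, (divisions,) * objs.shape[1])

def roulette_leader(objs, divisions=10, rng=np.random.default_rng()):
    """Pick a gBest index: a hypercube is chosen with probability inverse to its occupancy."""
    labels = grid_labels(objs, divisions)
    boxes, counts = np.unique(labels, return_counts=True)
    fitness = 1.0 / counts                                   # under-populated boxes are favored
    box = rng.choice(boxes, p=fitness / fitness.sum())
    members = np.flatnonzero(labels == box)
    return int(members[rng.integers(len(members))])          # random member of the chosen box
```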

However, the data storage and computational time required by Adaptive Grid grow exponentially with the number of objectives. At the same time, it is necessary to provide certain problem-dependent information, such as the number of grid subdivisions.

As discussed above, the two most popular methods, Crowding Distance and Adaptive Grid, both have drawbacks. In this paper, we propose a new density estimation method for selecting leaders and updating the archive in MOPSO.

III. THE PROPOSED ALGORITHM

In order to obtain good performance, in terms of convergence and diversity, of the approximate Pareto front found by an algorithm, a mechanism is needed to assess the environmental fitness of each nondominated solution in the external archive and then to decide which solution is more suitable to be selected as a leader, or to be discarded when the archive has no more room to host a new qualified solution.

Parallel coordinates [16] is a popular way of visualizing high-dimensional geometry and analyzing multivariate data. Inspired by this technique, we propose a new mechanism named Parallel Cell Coordinate System (PCCS for short) to assess the density of each nondominated solution in an archive. Then, the density estimation method is integrated into a new MOPSO for selecting leaders and updating the archive.

A. A New Method for Estimating Density

The m-th objective of the k-th nondominated solution in the archive, $f_{k,m}$, $k=1,2,\ldots,K$, $m=1,2,\ldots,M$, is mapped to an integer label within a 2-dimensional grid of $K \times M$ cells according to (1), where K is the current size of the archive and M is the number of objectives of the MOP:

$$ L_{k,m} = \left\lceil K \, \frac{f_{k,m} - f_m^{\min}}{f_m^{\max} - f_m^{\min}} \right\rceil . \qquad (1) $$

Here, $\lceil x \rceil$ is the ceiling operator, which returns the smallest integer not less than $x$; $f_m^{\max} = \max_k f_{k,m}$ and $f_m^{\min} = \min_k f_{k,m}$ are the maximum and minimum, respectively, of the m-th objective in the archive. $L_{k,m} \in \{1,2,\ldots,K\}$ is the integer label obtained from the real value $f_{k,m}$ after normalization. $L_{k,m}$ is set to one if $f_{k,m} = f_m^{\min}$ to avoid a zero label and a zero denominator in special cases.

It is noted that K, which changes dynamically with the size of the archive, is not a user-defined parameter. In a nondominated archive, each solution is expected to occupy a cell of its own in each dimension if all the nondominated solutions are perfectly well distributed over the approximate Pareto front. So, in this method, the length of a cell is automatically adjusted whenever $f_m^{\max}$, $f_m^{\min}$, or the size of the archive changes.
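A minimal sketch of the mapping in (1) follows; the function name and the NumPy-based array layout are our own illustrative choices.

```python
import numpy as np

def parallel_cell_coordinates(archive_objs):
    """Map each objective value of the archive (shape (K, M)) to an integer cell label in {1, ..., K}, as in (1)."""
    K = archive_objs.shape[0]
    fmin = archive_objs.min(axis=0)
    fmax = archive_objs.max(axis=0)
    width = np.where(fmax > fmin, fmax - fmin, 1.0)   # guard the degenerate case fmax == fmin
    L = np.ceil(K * (archive_objs - fmin) / width).astype(int)
    return np.maximum(L, 1)                           # f = fmin maps to cell 1 rather than 0
```

Applied to the seven DTLZ2 points on the left of Fig. 1, this mapping reproduces the cell coordinates listed in Table I below.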

Any set of solutions in the Cartesian coordinate system can thus be represented by cell coordinates in a 2-D grid, which can be visualized intuitively in the style of parallel axes. So we call it the Parallel Cell Coordinate System (PCCS). An example of mapping the nondominated solutions in an archive from the Cartesian coordinate system into PCCS is illustrated in Fig. 1. On the left side of Fig. 1, there are seven nondominated solutions sampled arbitrarily from the Pareto front of DTLZ2 with three objectives, shown in the Cartesian coordinate system. On the right side of Fig. 1, each objective of a solution on the left is mapped into a unique cell within a 2-D grid with seven rows and three columns, corresponding to the seven solutions in the archive and the three objectives, respectively. The components of each point, represented by the label numbers of the corresponding cell coordinates, are linked by a dash-dotted line for clarity. For example, the cell coordinate of P7 is (6, 2, 4). For comparison, the Cartesian coordinates and Parallel Cell Coordinates of these seven solutions in Fig. 1 are both listed in Table I.

The distance between two vectors in this method, named the Parallel Cell Distance, is measured by the total number of cells by which they are separated over all objectives. The Parallel Cell Distance between two nondominated solutions Pi and Pj, PCD(Pi, Pj), is calculated according to (2) after they are mapped to $L_{i,m}$ and $L_{j,m}$, respectively, by (1):

$$ PCD(P_i, P_j) = \begin{cases} 0.5, & \text{if } L_{i,m} = L_{j,m} \ \forall m, \\[4pt] \displaystyle\sum_{m=1}^{M} \left| L_{i,m} - L_{j,m} \right|, & \text{otherwise.} \end{cases} \qquad (2) $$

In (2), if Pi and Pj are mapped into the same cell in every dimension, their Parallel Cell Distance is set to a penalty value of 0.5, since they share all of their cells with each other.

The density of Pi in the hyper-space formed by the archive can be measured from the Parallel Cell Distances between Pi and all other members Pj (j = 1, 2, …, K, j ≠ i) of the archive according to (3):

$$ Density(P_i) = \sum_{\substack{j=1 \\ j \neq i}}^{K} \frac{1}{PCD(P_i, P_j)^2} . \qquad (3) $$

Fig. 1. An example for mapping nondominated solutions in archive from Cartesian Coordinate System into PCCS. Left side: seven nondominated solutions sampled arbitrarily from the Pareto front of DTLZ2 with three objectives. Right side: the solutions with Parallel Cell Coordinates in PCCS.

TABLE I
MAPPING NONDOMINATED SOLUTIONS IN ARCHIVE FROM THE CARTESIAN COORDINATE SYSTEM INTO THE PARALLEL CELL COORDINATE SYSTEM

Code | Cartesian Coordinate System | Parallel Cell Coordinate System
     |   f1      f2      f3        |   f1    f2    f3
P1   |   1.00    0.00    0.00      |   7     1     1
P2   |   0.00    1.00    0.00      |   1     7     1
P3   |   0.00    0.00    1.00      |   1     1     7
P4   |   0.50    0.50    0.71      |   4     4     5
P5   |   0.40    0.80    0.48      |   3     6     4
P6   |   0.20    0.40    0.89      |   2     3     7
P7   |   0.80    0.20    0.57      |   6     2     4


The smaller the distance, in units of cells, between two solutions in the archive, the larger their contribution to each other's density, and vice versa.

The complexity of this density estimation method is O(MK^2), where K is the number of solutions in the archive and M is the number of objectives.
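A sketch of (2) and (3), reusing the `parallel_cell_coordinates` helper sketched above, is given below; the function names and loop structure are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def pcd(L, i, j):
    """Parallel Cell Distance between archive members i and j given cell labels L (shape (K, M)), as in (2)."""
    d = np.abs(L[i] - L[j]).sum()
    return 0.5 if d == 0 else float(d)   # 0.5 penalty when both share every cell

def pcd_density(L):
    """Density of every archive member as in (3): sum of 1 / PCD^2 over all other members."""
    K = L.shape[0]
    dens = np.zeros(K)
    for i in range(K):
        dens[i] = sum(1.0 / pcd(L, i, j) ** 2 for j in range(K) if j != i)
    return dens                          # overall cost is O(M K^2), matching the text
```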

B. The Proposed Algorithm

For solving a MOP, a PSO population with N particles searches for a set of nondominated solutions to be stored in an archive with a pre-defined maximal size, which represents the approximate Pareto front. The top M (the number of objectives) nondominated solutions with the minimal PCD density in the archive are selected as the candidate leaders. The global best (gBest) for each particle at a generation is randomly selected from these M candidates. The nondominated solution with the largest PCD density is discarded when the archive is too full to host a new qualified solution found by the population. The complete MOPSO algorithm based on PCD density is described in detail as follows.

Step 1 (Initialize population)
Step 1.1 (Initialize position and velocity) Each particle, Pi, in a population of size N is randomly initialized with a position xi=[xi1,…,xid,…,xiD]T within [Xd,min, Xd,max] in the search space, i=1,2,…,N, d=1,2,…,D, where D is the number of decision variables, and a velocity vi=[vi1,…,vid,…,viD]T within [Vd,min, Vd,max].

Step 1.2 (Evaluate objective values) The objective function values of xi, F(xi)=[fi1(xi),…,fim(xi),…,fiM(xi)]T, m=1, 2,…,M, M is the number of objective functions, is evaluated for each particle Pi in the initial population.

Step 1.3 (Initialize personal best) For particle Pi, the position xi and objective values F(xi) are set to pBesti as its initial personal best solution.

Step 1.4 (Initialize archive) Firstly, set the archive A = Ø. Secondly, for each particle Pi, form a new solution s from the position xi and the objective values F(xi); if F(xi) is not dominated by any member of A, then A = A − {aj} ∪ {s}, where {aj} denotes the members of A that are dominated by s, if any.

Step 2 (Update time step) Set t = t + 1.
Step 3 (Select candidates for leaders)
Step 3.1 (Evaluate density) For each member ak of A, k = 1, 2, …, K, where K is the size of the archive, calculate its density according to (3).

Step 3.2 (Select candidates) Sort A in ascending order by the densities of its members, then select the top M (the number of objectives) solutions as candidates for leaders. Here, the purpose of the M selected candidates is to increase the diversity and to decrease the selection pressure of gBest.

Step 4 (Update population) For each particle Pi, do:
Step 4.1 (Select gBest) Randomly select a member from the candidates as the gBest for Pi.

Step 4.2 (Update position and velocity) Update vi and xi by:

$$ \begin{cases} v_i(t+1) = \omega\, v_i(t) + c_1 r_1 \big(pBest_i - x_i(t)\big) + c_2 r_2 \big(gBest_i - x_i(t)\big), \\[4pt] x_i(t+1) = x_i(t) + v_i(t+1). \end{cases} \qquad (4) $$

Step 4.3 (Perturb particle) Randomly select a nondominated solution from the archive as an elitist to perturb the particle with a Gaussian mutation of variable range at a random dimension, following the Elitism Learning Strategy [17], if a random value is less than the learning rate, which decreases linearly from 1.0 to 0.

Step 4.4 (Evaluate objective values) Evaluate F(xi)= [fi1(xi),…fim(xi),…fiM(xi)]T.

Step 4.5 (Update pBest) Replace pBesti and F(pBesti) with xi and F(xi), respectively, if F(xi) dominates F(pBesti), or if F(xi) and F(pBesti) are mutually nondominated and a random value is less than 0.5.

Step 4.6 (Update archive) Form a new solution s from xi and F(xi); if F(xi) is not dominated by any member of A, then A = A − {aj} ∪ {s}, where {aj} denotes those members of A that are dominated by s. Then discard the member of A with the maximal density calculated by (3) if the size of the current A exceeds the pre-defined maximal size of A.

Step 5 (Check exit condition) Report the contents in A if t is larger than the maximal generation T, otherwise, go to Step 2.
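The steps above can be condensed into the following skeleton. It is an illustrative sketch under our own naming that reuses the `parallel_cell_coordinates` and `pcd_density` helpers sketched earlier; it omits the Gaussian-mutation perturbation of Step 4.3 and any velocity clamping, and is not the authors' Matlab implementation.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return bool(np.all(a <= b) and np.any(a < b))

def update_archive(archive, x, fx, max_size):
    """Steps 1.4 / 4.6: insert (x, fx) if nondominated, drop members it dominates,
    then remove the densest member when the archive overflows."""
    if any(dominates(f, fx) for _, f in archive):
        return archive
    archive = [(y, f) for y, f in archive if not dominates(fx, f)] + [(x, fx)]
    if len(archive) > max_size:
        L = parallel_cell_coordinates(np.array([f for _, f in archive]))
        archive.pop(int(np.argmax(pcd_density(L))))   # largest density = most crowded
    return archive

def pccs_mopso(evaluate, D, M, lo, hi, N=100, T=300, w=0.4, c1=1.429, c2=1.429,
               max_archive=100, rng=np.random.default_rng()):
    x = rng.uniform(lo, hi, (N, D))
    v = np.zeros((N, D))
    fx = np.array([evaluate(p) for p in x])
    pbest, fpbest = x.copy(), fx.copy()
    archive = []
    for i in range(N):                                   # Step 1.4
        archive = update_archive(archive, x[i].copy(), fx[i].copy(), max_archive)
    for t in range(T):                                   # Steps 2-5
        L = parallel_cell_coordinates(np.array([f for _, f in archive]))
        candidates = np.argsort(pcd_density(L))[:M]      # Step 3: M least dense members
        for i in range(N):
            g = archive[int(rng.choice(candidates))][0]  # Step 4.1
            r1, r2 = rng.random(D), rng.random(D)
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (g - x[i])
            x[i] = np.clip(x[i] + v[i], lo, hi)          # Step 4.2 plus bound handling
            fx[i] = evaluate(x[i])
            # Step 4.5: replace pBest if dominated, or at random when mutually nondominated
            if dominates(fx[i], fpbest[i]) or (
                    not dominates(fpbest[i], fx[i]) and rng.random() < 0.5):
                pbest[i], fpbest[i] = x[i].copy(), fx[i].copy()
            archive = update_archive(archive, x[i].copy(), fx[i].copy(), max_archive)
    return archive
```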

IV. EXPERIMENTAL RESULTS AND DISCUSSIONS

A. Benchmark Problems

To compare the performance with that of some chosen state-of-the-art MOPSOs and to assess the effectiveness of the proposed algorithm, a group of benchmark test problems is adopted, including five widely used bi-objective test instances from the ZDT series [18] and seven 3-objective instances from the DTLZ series [19]. The test problem ZDT5 is omitted because it is a Boolean function and requires binary encoding. All these test instances are minimization problems. The number of decision variables is set to 30 for ZDT1~3, to 20 for DTLZ7, and to 10 for ZDT4, ZDT6, and DTLZ1~6.

Among these test problems, ZDT3, ZDT4, DTLZ1, DTLZ3, and DTLZ7 are multi-modal problems with many local Pareto fronts in their objective space. For ZDT3, ZDT6, and DTLZ7, the Pareto fronts contain disconnected segments. Moreover, the Pareto fronts of ZDT6, DTLZ4, and DTLZ6 are nonuniform. The two series of test problems therefore stand for different kinds of multiobjective optimization problems with which to examine the abilities of a MOPSO.

B. Peer Algorithms

In order to validate the proposed algorithm, five other state-of-the-art MOPSOs are selected as competitors in the experiments: sigmaMOPSO [20], agMOPSO [12], cdMOPSO [13], clusterMOPSO [21], and pdMOPSO [4].


Fig. 2. Box-plots of hypervolume indicator for comparing the performance of the proposed algorithm against the other five MOPSOs. 1-the proposed algorithm; 2-sigmaMOPSO; 3-agMOPSO; 4-cdMOPSO; 5-clusterMOPSO; and 6-pdMOPSO


TABLE II
COMPARISONS OF HYPERVOLUME (IH) BETWEEN THE PROPOSED ALGORITHM AND THE OTHER MOPSOS BASED ON T-TESTS

              Proposed     sigmaMOPSO    agMOPSO       cdMOPSO       clusterMOPSO  pdMOPSO
              MOPSO        [20]          [12]          [13]          [21]          [4]
ZDT1   Mean   0.99721*     0.99634       0.98391       0.99711       0.99582       0.93562
       Std.   0.00013      0.00051       0.00914       0.00015       0.00089       0.02122
       t      --           8.26 (+)      7.27 (+)      2.30 (+)      7.69 (+)      14.50 (+)
ZDT2   Mean   0.99440*     0.94299       0.99176       0.99434       0.98993       0.97684
       Std.   0.00030      0.04197       0.00227       0.00032       0.00370       0.01036
       t      --           6.12 (+)      5.76 (+)      0.611 (=)     6.02 (+)      8.47 (+)
ZDT3   Mean   0.99831      0.99830       0.97678       0.99835*      0.99200       0.98641
       Std.   0.00015      0.00013       0.01953       0.00014       0.00998       0.01275
       t      --           0.394 (=)     5.51 (+)      -0.918 (=)    3.16 (+)      4.67 (+)
ZDT4   Mean   0.99644*     0.87032       0.45967       0.47831       0.61511       0.60018
       Std.   0.00033      0.24400       0.23410       0.30610       0.22140       0.15190
       t      --           2.58 (+)      11.50 (+)     8.46 (+)      8.61 (+)      13.00 (+)
ZDT6   Mean   0.96912      0.96923*      0.91411       0.96902       0.91039       0.71737
       Std.   0.00053      0.00065       0.07576       0.00045       0.00112       0.07311
       t      --           -0.637 (=)    3.63 (+)      0.720 (=)     238.00 (+)    17.20 (+)
DTLZ1  Mean   0.99995*     0.04327       0.17689       0.06707       0.05165       0.84026
       Std.   0.00004      0.07545       0.19290       0.10440       0.11960       0.17200
       t      --           63.40 (+)     21.30 (+)     44.70 (+)     39.60 (+)     4.64 (+)
DTLZ2  Mean   0.99956*     0.99939       0.99919       0.99947       0.99732       0.98596
       Std.   0.00007      0.00008       0.00027       0.00008       0.00159       0.00642
       t      --           7.70 (+)      6.68 (+)      4.37 (+)      7.02 (+)      10.60 (+)
DTLZ3  Mean   0.99891*     0.00043       0.01310       0.00000       0.00000       0.04301
       Std.   0.00142      0.00214       0.04937       0.00000       0.00000       0.13500
       t      --           1940.00 (+)   99.80 (+)     3520.00 (+)   3520.00 (+)   35.40 (+)
DTLZ4  Mean   0.99953*     0.99915       0.99873       0.99901       0.98934       0.99577
       Std.   0.00007      0.00023       0.00124       0.00034       0.01665       0.00397
       t      --           7.91 (+)      3.20 (+)      7.54 (+)      3.06 (+)      4.74 (+)
DTLZ5  Mean   0.99098*     0.99079       0.99013       0.99088       0.98902       0.97957
       Std.   0.00032      0.00038       0.00141       0.00028       0.00186       0.00831
       t      --           1.88 (=)      2.94 (+)      1.12 (=)      5.19 (+)      6.86 (+)
DTLZ6  Mean   0.99109*     0.98403       0.98758       0.99108       0.97851       0.96596
       Std.   0.00024      0.00561       0.00470       0.00026       0.02206       0.01084
       t      --           6.29 (+)      3.73 (+)      0.13 (=)      2.85 (+)      11.60 (+)
DTLZ7  Mean   0.74397*     0.68818       0.61468       0.74372       0.63500       0.68033
       Std.   0.00156      0.08904       0.05224       0.00206       0.07256       0.03016
       t      --           3.13 (+)      12.40 (+)     0.484 (=)     7.51 (+)      10.50 (+)

Better (+)    --           9             12            6             12            12
Same (=)      --           3             0             6             0             0
Worse (-)     --           0             0             0             0             0
Score         --           9             12            6             12            12

* Best (maximum) IH value on the test problem. The performance of the proposed algorithm is better (“+”), the same (“=”), or worse (“-”) than/as that of a competitor according to whether the t-value computed from their samples exceeds the tabulated critical value of the t-test, lies within it in magnitude, or falls below its negative, at a significance level of 0.05 for a two-tailed test. “Score” is the difference between the number of “+” and the number of “-”, which gives an overall comparison between the proposed algorithm and each competitor over all test problems.


We implemented the proposed algorithm and the five comparative algorithms in Matlab according to their original papers. The inertia weight ω is set to 0.4 according to [12] for all test algorithms except pdMOPSO, for which ω = 0.5 according to [4]; the acceleration coefficients c1 and c2 are both set to 1.429 according to [12] for all test algorithms except pdMOPSO, for which c1 = c2 = 1.0 according to [4]. The number of divisions per objective for Adaptive Grid in agMOPSO is set to 10 according to [12]. The number of sub-swarms in clusterMOPSO is set to 5 according to [21]. Other parameters are set according to their original papers.

C. Simulation Settings

The size of the population and the maximal size of the external archive are set to 100 for all test algorithms. The number of function evaluations is fixed at 30,000. Since the test algorithms are stochastic, their performance on each test instance is obtained from 30 independent runs. All simulations are performed on a notebook PC with a 1.2 GHz CPU and 4 GB memory.

D. Performance Metrics

The performance metric used here is the hypervolume indicator, IH [22], [23], which measures the volume of the dominated portion of the objective space. Considering two Pareto sets A and B, the hypervolume indicator value of A is higher than that of B if the Pareto set A dominates the Pareto set B. This property makes the indicator well suited for comparing the performance of peer algorithms on MOPs. The hypervolume indicator evaluates how well an algorithm converges and produces nondominated solutions that are well distributed and well extended along the Pareto front. A higher IH value indicates that the solutions found by an algorithm dominate a larger region of the objective space. The reference points used to calculate IH are set to (11, 11) for the bi-objective problems and (11, 11, 11) for the 3-objective problems.
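For the bi-objective case the indicator reduces to a sum of rectangles against the reference point; the following is a minimal sketch with illustrative names (a general M-objective hypervolume computation requires a more involved algorithm).

```python
def hypervolume_2d(front, ref=(11.0, 11.0)):
    """Hypervolume dominated by a bi-objective nondominated front (minimization) w.r.t. a reference point."""
    pts = sorted(front, key=lambda p: p[0])   # ascending f1 implies descending f2 on a nondominated front
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # rectangle between consecutive f2 levels
        prev_f2 = f2
    return hv
```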

E. Experimental Results

The experimental results are shown in Fig. 2 in the form of box-plots. From Fig. 2, the proposed algorithm obtains relative hypervolume values close to 1 on all test problems except DTLZ7, on which the IH value is 0.7439. On all test problems, the proposed algorithm outperforms clusterMOPSO and pdMOPSO, as its IH is apparently higher than theirs.

Comparing the proposed algorithm with cdMOPSO, which uses Crowding Distance as the density estimator for selecting leaders and updating the archive, the two are almost on par on ZDT1~3, ZDT6, DTLZ2, and DTLZ4~7, yet the proposed algorithm produces much better results than cdMOPSO on the problems with many local Pareto fronts, such as ZDT4, DTLZ1, and DTLZ3. Also, comparing the proposed algorithm with agMOPSO, which uses Adaptive Grid to indirectly assess density, the former is better than the latter on all test problems.

To measure the performance of an algorithm in a more general manner, the means and standard deviations of IH generated by the six peer algorithms over the 12 test problems are listed in Table II, together with a two-sample t-test [24] to show the significance of the differences between the proposed algorithm and the chosen competitors. The two-sample t-test is a hypothesis testing method for determining the statistical significance of the difference between two independent samples of equal size. We assume that the performance metric of each peer algorithm over 30 independent trials follows a normal distribution because these algorithms are stochastic optimization methods driven by uniformly distributed random operators. The best (maximum) value of IH among the test algorithms on each test problem is marked with an asterisk in Table II. From Table II, the proposed algorithm is superior to the other MOPSOs in terms of the number of best IH values: it obtains 10 of the 12 best values, while sigmaMOPSO and cdMOPSO each obtain only one.

The performance of the proposed algorithm with respect to IH on each test problem is judged better (“+”), the same (“=”), or worse (“-”) than/as that of a competitor according to whether the t-value computed for the two algorithms exceeds the tabulated critical value of the t-test, lies within it in magnitude, or falls below its negative, at a significance level of 0.05 for a two-tailed test. The “Score” row in Table II shows the difference between the number of “+” and the number of “-”, which gives an overall comparison between the proposed algorithm and each competitor over all test problems. For example, comparing the proposed algorithm with sigmaMOPSO, the former significantly outperforms the latter on nine test problems (ZDT1~2, ZDT4, DTLZ1~4, and DTLZ6~7), performs about the same on three instances (ZDT3, ZDT6, and DTLZ5), and is worse on none, so the score listed in the last row of the sigmaMOPSO column is 9, computed as 9 - 0. From Table II, the proposed algorithm obtains the full score (12) against three competitors (agMOPSO, clusterMOPSO, and pdMOPSO).
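The scoring rule can be reproduced with a standard two-sample t-test; the sketch below uses SciPy's ttest_ind and is an illustrative reconstruction, not the authors' script.

```python
from scipy import stats

def compare_runs(ih_proposed, ih_competitor, alpha=0.05):
    """Label the proposed algorithm '+', '=', or '-' against a competitor from 30-run IH samples."""
    t, p = stats.ttest_ind(ih_proposed, ih_competitor)   # two-tailed, equal sample sizes
    if p >= alpha:
        return "="                      # difference not significant at the 0.05 level
    return "+" if t > 0 else "-"        # proposed significantly better / worse
```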

Fig. 3. Plot of mean and variance of ranks in terms of IH. Accuracy and stability of an algorithm are indicated by the mean and variance, respectively, of its ranks among all peer algorithms on all test instances. The proposed MOPSO is significantly superior to all other algorithms in terms of accuracy and stability because it remarkably dominates all other algorithms in the mean and variance of ranks on the 12 test instances.


The minimal score is six, obtained in the comparison with cdMOPSO, which indicates that the proposed algorithm generally outperforms all peer algorithms on most test problems.

Furthermore, a comparative experiment between the proposed algorithm and two multiobjective genetic algorithms, NSGA-II and SPEA2, was also performed in this study. The scores of the proposed algorithm are ten and five when competing with NSGA-II and SPEA2, respectively. Therefore, the proposed algorithm is also more effective than these state-of-the-art MOEAs.

To measure the performance of an algorithm in a general manner, accuracy and stability should be considered simultaneously [25]. Given the rank set $R=\{r_1, r_2, \ldots, r_{|R|}\}$ of an algorithm A, where $|R|$ is the number of test instances, the accuracy of A can be represented by the mean rank

$$ \mu_A = \frac{1}{|R|} \sum_{r \in R} r , $$

whilst the stability of A can be expressed by the variance of its ranks

$$ \sigma_A = \frac{1}{|R|} \sum_{r \in R} (r - \mu_A)^2 . $$

An algorithm A is said to be better if its $\mu_A$ and $\sigma_A$ are both lower than those of its competitors. Fig. 3 plots $\sigma_A$ against $\mu_A$ for the six MOPSOs considered on the 12 test instances. From Fig. 3, the proposed MOPSO is significantly superior to the other five algorithms in terms of accuracy and stability because its mean and variance of ranks are the lowest. In other words, the proposed MOPSO, located at the lower left corner of Fig. 3, dominates all other algorithms in accuracy and stability with respect to IH.
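A small sketch of this accuracy/stability summary, with illustrative names, follows.

```python
import numpy as np

def rank_summary(ranks):
    """Mean (accuracy) and variance (stability) of an algorithm's ranks over all test instances."""
    r = np.asarray(ranks, dtype=float)
    mu = r.mean()
    return mu, float(((r - mu) ** 2).mean())
```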

V. CONCLUSIONS

A new method of density estimation has been proposed for selecting leaders and maintaining the archive in MOPSO. The density of a nondominated solution is calculated according to the Parallel Cell Distances between this solution and all other solutions in the archive after the archive is mapped from the Cartesian coordinate system into the Parallel Cell Coordinate System. The gBest of a particle is randomly selected from M (the number of objectives) candidates, which are the top M solutions with the minimal density. Those nondominated solutions with the worst (largest) density are discarded from the archive when there is no more room to host a new qualified solution. The experiments show that the proposed algorithm is significantly superior to the other five state-of-the-art MOPSOs on 12 test problems in terms of the hypervolume performance metric.

ACKNOWLEDGMENT

The first author, as a visiting scholar at Oklahoma State University, would like to thank the China Scholarship Council for financial support.

REFERENCES

[1] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proc. 4th IEEE Int. Conf. Neural Networks, 1995, pp. 1942-1948.

[2] J. Kennedy and R. Eberhart, Swarm Intelligence, San Francisco: Morgan Kaufmann Publishers, 2001.

[3] J. E. Fieldsend and S. Singh, “A multiobjective algorithm based upon particle swarm optimisation, an efficient data structure and turbulence,” in Proc. Comput. Intell., 2002, pp. 37-44.

[4] J. E. Alvarez-Benítez, R. M. Everson, and J. E. Fieldsend, “A MOPSO algorithm based exclusively on Pareto dominance concepts,” in Proc. Evol. Multi-Criterion Optimization, 2005, pp. 459-473.

[5] K. Deb, Multiobjective Optimization Using Evolutionary Algorithms, New York: Wiley, 2001.

[6] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Trans. Evol. Comput., vol. 6, no. 2, pp. 182-197, Apr. 2002.

[7] J. D. Knowles and D.W. Corne, “Approximating the nondominated front using the Pareto archived evolution strategy,” Evol. Comput., vol. 8, pp. 149-172, Jun. 2000.

[8] J. Moore and R. Chapman, “Application of particle swarm to multiobjective optimization,” Department of Computer Science and Software Engineering, Auburn University, 1999.

[9] M. Reyes-Sierra and C. A. Coello Coello, “Multi-objective particle swarm optimizers: a survey of the state-of-the-art,” Int. J. Comput. Intell. Res. vol. 2, no.3, pp. 287-308, 2006.

[10] J. E. Fieldsend, “Multi-objective particle swarm optimization methods,” Department of Computer Science, University of Exeter, Technical Report 418, 2004.

[11] N. Padhye, J. Branke, S. Mostaghim, “Empirical comparison of MOPSO methods - Guide selection and diversity preservation,” in Proc. Congr. Evol. Comput., 2009, pp. 2516-2523.

[12] N. Padhye, “Comparison of archiving methods in multi-objective particle swarm optimization (MOPSO): empirical study,” in Proc. Genetic Evol. Comput., 2009, pp. 1755-1756.

[13] C. A. Coello Coello, G. T. Pulido, and M. S. Lechuga, “Handling multiple objectives with particle swarm optimization,” IEEE Trans. Evol. Comput., vol. 8, no. 3, pp. 256-279, Jun. 2004.

[14] C. R. Raquel and P. C. Nava, “An Effective Use of Crowding Distance in Multiobjective Particle Swarm Optimization,” in Proc. Genetic Evol. Comput., 2005, pp. 257-264.

[15] M. Reyes Sierra and C. A. Coello Coello, “Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance,” in Proc. Evol. Multi-Criterion Optimization, 2005, pp. 505-519.

[16] A. Inselberg, “The plane with parallel coordinates,” Visual Computer. vol. 1, no. 2, pp. 69-91, Aug. 1985.

[17] Z. H. Zhan, J. Zhang, Y. Li, and H. S. Chung, “Adaptive particle swarm optimization,” IEEE Trans. Syst. Man, Cybern. B, Cybern., vol. 39, no. 6, pp.1362-1381, Dec. 2009.

[18] E. Zitzler, K. Deb, and L. Thiele, “Comparison of multiobjective evolutionary algorithms: Empirical results,” Evol. Comput., vol. 8, no. 2, pp. 173-195, 2000.

[19] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, “Scalable multiobjective optimization test problems,” in Proc. Congr. Evol. Comput., 2002, pp. 825-830.

[20] S. Mostaghim and J. Teich, “Strategies for finding good local guides in multi-objective particle swarm optimization (MOPSO),” in Proc. IEEE Swarm Intell. Symp., 2003, pp. 26-33.

[21] G. T. Pulido and C. A. Coello Coello, “Using clustering techniques to improve the performance of a particle swarm optimizer,” in Proc. Genetic Evol. Comput., 2004, pp. 225-237.

[22] E. Zitzler, ‘‘Evolutionary algorithms for multiobjective optimization: Methods and applications,’’ Ph.D. dissertation, Swiss Federal Inst. Technol., Zurich, Switzerland, 1999.

[23] E. Zitzler and L. Thiele, “Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach,” IEEE Trans. Evol. Comput., vol. 3, no. 4, pp. 257-271, 1999.

[24] R. E. Walpole and R. H. Myers, Probability and statistics for engineers and scientists. New York: Macmillan, 1978.

[25] C. K. Chow and S. Y. Yuen, “A multiobjective evolutionary algorithm that diversifies population by its density,” IEEE Trans. Evol. Comput., vol. 16, no. 2, pp. 149-172, Apr. 2012.
