Journal of Central South University (English Edition)

J. Cent. South Univ. (2018) 25: 107-120

DOI: https://doi.org/10.1007/s11771-018-3721-z

An enhanced artificial bee colony optimizer and its application to multi-level threshold image segmentation

GAO Yang(高扬)1, LI Xu(李旭)2, DONG Ming(董明)1, LI He-peng(李鹤鹏)3

1. Academy of Information Technology, Northeastern University, Shenyang 110018, China;

2. Benedictine University, Lisle, IL, US;

3. Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China

Central South University Press and Springer-Verlag GmbH Germany, part of Springer Nature 2018

Abstract:

A modified artificial bee colony optimizer (MABC) is proposed for image segmentation, using a pool of optimal foraging strategies to balance the exploration and exploitation tradeoff. The main idea of MABC is to enrich the artificial bees' foraging behaviors by combining local search and comprehensive learning based on a multi-dimensional PSO-type search equation. With comprehensive learning, the bees incorporate the information of the global best solution into the solution search equation to improve exploration, while the local search enables the bees to exploit the promising area more deeply, which provides a proper balance between exploration and exploitation. Experimental results comparing MABC with several successful EA and SI algorithms on a set of benchmarks demonstrate the effectiveness of the proposed algorithm. Furthermore, the MABC algorithm is applied to the image segmentation problem, and the experimental results verify its effectiveness.

Key words:

artificial bee colony; local search; swarm intelligence; image segmentation

Cite this article as:

GAO Yang, LI Xu, DONG Ming, LI He-peng. An enhanced artificial bee colony optimizer and its application to multi-level threshold image segmentation [J]. Journal of Central South University, 2018, 25(1): 107–120.

DOI: https://doi.org/10.1007/s11771-018-3721-z

1 Introduction

Image segmentation is considered a useful method for separating objects that have distinct gray levels from the background. Among existing segmentation techniques, multi-level thresholding is a simple but effective tool that uses multiple threshold values to accomplish segmentation. This approach can be classified into optimal threshold methods [1–4] and property-based threshold methods [5–7]. The first category searches for the optimal thresholds that make the thresholded classes of the histogram reach the desired characteristics. The second category detects the thresholds by measuring some property of the histogram. Property-based threshold methods are fast and suitable for multilevel thresholding, although the number of thresholds is hard to determine.

Several algorithms have been proposed in the literature for optimal thresholding [8–12]. In Refs. [4, 8, 9], some novel methods, derived from optimizing an objective function for bi-level and multi-level thresholding, were proposed. These methods suffer from a common drawback: the computational complexity rises exponentially when the problem is extended to multi-level thresholding. Recently, swarm intelligence (SI) algorithms have been introduced to image segmentation [12–16]. Among them, the artificial bee colony (ABC) algorithm is a popular member of the SI family [17]. Due to its good robustness, ABC has been widely employed to solve many engineering optimization problems [18–21]. In particular, Refs. [19, 20] proposed novel and effective ABC variants that hybridize life-cycle and optimal search strategies and obtained significant performance improvements. However, when tackling complex problems, these ABC variants still suffer from poor exploitation [18].

Aiming to overcome the above drawbacks to some extent, this paper presents a modified artificial bee colony algorithm (MABC) for image segmentation. In the proposed MABC model, a local search operation is activated when a bee finds a promising area, and comprehensive learning is used to facilitate information sharing within the bee colony. With this hybrid mechanism, the proposed MABC balances exploitation and exploration effectively.

2 Standard ABC algorithm

The recently introduced artificial bee colony (ABC) algorithm is motivated by the intelligent social behaviors of three types of bees [17]. In ABC, there are three groups of bees: employed bees, onlookers and scouts. The employed bees explore the food sources and transmit the related information to the onlooker bees. The onlooker bees select good food sources; food sources with higher quality have a larger probability of being chosen. If a food source found by an employed bee is exhausted, the corresponding employed bee is transformed into a scout, which searches randomly. The detailed procedures are given as follows.

Step 1: Initialization

In initialization phase, a group of food sources representing possible solutions are generated randomly by the following equation:

$x_{i,j} = x_j^{\min} + \mathrm{rand}(0,1)\,(x_j^{\max} - x_j^{\min})$                (1)

where i=1, 2, …, SN; j=1, 2, …, D; SN is the population size (the number of solutions); D denotes the number of variables, i.e., the problem dimension; and $x_j^{\min}$ and $x_j^{\max}$ represent the lower and upper bounds of the jth variable, respectively.
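For illustration, a minimal NumPy sketch of the initialization in Eq. (1) is given below; the function name and the example bounds are our own choices, not part of the original algorithm description.

```python
import numpy as np

def init_population(sn, dim, lb, ub, seed=0):
    """Eq. (1): x_ij = lb_j + rand(0,1) * (ub_j - lb_j) for each food source."""
    rng = np.random.default_rng(seed)
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    return lb + rng.random((sn, dim)) * (ub - lb)

# Example: SN = 20 food sources in a 30-dimensional space bounded by [-5, 5].
population = init_population(sn=20, dim=30, lb=[-5.0] * 30, ub=[5.0] * 30)
```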

Step 2: Sending employed bees

In this phase, the neighbor food source (candidate solution) can be generated from the old food source of each employed bee in its memory using the following expression:

$v_{i,j} = x_{i,j} + \phi_{i,j}\,(x_{i,j} - x_{k,j})$                        (2)

where $x_k$ is a randomly selected neighbor individual that is different from the current bee $x_i$; j is a randomly chosen index denoting a random dimension; and $\phi_{i,j}$ is a random number uniformly distributed in [–1, 1].
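The following sketch shows one possible NumPy reading of the single-dimension perturbation in Eq. (2); the helper name and the toy example are illustrative assumptions.

```python
import numpy as np

def employed_bee_move(population, i, rng):
    """Eq. (2): perturb one randomly chosen dimension j of bee i relative to a
    randomly chosen neighbour k (k != i), with phi drawn uniformly from [-1, 1]."""
    sn, dim = population.shape
    k = rng.choice([n for n in range(sn) if n != i])  # neighbour index, k != i
    j = rng.integers(dim)                             # random dimension
    phi = rng.uniform(-1.0, 1.0)
    candidate = population[i].copy()
    candidate[j] = population[i, j] + phi * (population[i, j] - population[k, j])
    return candidate

# Example usage on a small dummy population.
rng = np.random.default_rng(1)
v = employed_bee_move(np.ones((20, 30)), i=0, rng=rng)
```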

Step 3: Sending onlooker bees

In this phase, an onlooker bee selects a food source depending on the probability value associated with that food source, $P_i$, which is defined by the following expression:

$P_i = \dfrac{\mathrm{fitness}_i}{\sum_{n=1}^{SN} \mathrm{fitness}_n}$                            (3)

where $\mathrm{fitness}_i$ denotes the fitness value of the ith solution.
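A minimal sketch of the roulette-wheel selection implied by Eq. (3) could look as follows, assuming non-negative fitness values; the helper name is ours.

```python
import numpy as np

def selection_probabilities(fitness):
    """Eq. (3): P_i = fitness_i / sum_n fitness_n, assuming non-negative fitness."""
    fitness = np.asarray(fitness, dtype=float)
    return fitness / fitness.sum()

# An onlooker bee then picks a food source in proportion to these probabilities.
rng = np.random.default_rng(2)
probs = selection_probabilities([0.9, 0.3, 0.6, 0.2])
chosen = rng.choice(len(probs), p=probs)
```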

Step 4: Sending scout bees

In the scout bees’ phase, once a food source cannot be improved further within a predetermined number of cycles (defined as “limit” in ABC), the food source is replaced with a new one and the employed bee associated with it becomes a scout. The new food source is generated randomly according to Eq. (1).

The procedures from Step 2 to Step 4 are carried out repeatedly until the termination condition is met.

3 Modified artificial bee colony algorithm

3.1 Local search

Powell’s local search algorithm is an extension of the basic pattern search method and has the merit of handling non-differentiable objective functions without derivatives [22]. The algorithm searches for the objective optimum bi-directionally along each search vector in turn. The new point is then expressed as a linear combination of the search vectors and added to the search vector list, while the search vector that contributed most to the new direction is removed from the list. This process is iterated until no significant improvement is achieved. The detailed implementation of this algorithm can be found in Ref. [22].
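As a hedged sketch, SciPy exposes Powell's derivative-free direction-set method through scipy.optimize.minimize, which could be used to realize this local refinement step; the wrapper name and the iteration cap below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def powell_refine(objective, start_point, max_iter=50):
    """Refine a promising solution with Powell's derivative-free direction-set
    search as provided by SciPy; the iteration cap is an illustrative choice."""
    result = minimize(objective, np.asarray(start_point, dtype=float),
                      method="Powell", options={"maxiter": max_iter})
    return result.x, result.fun

# Example: polishing a point on the sphere function.
x_refined, f_refined = powell_refine(lambda x: float(np.sum(x ** 2)), [1.0, -2.0, 0.5])
```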

3.2 Comprehensive learning based on multi- dimensional best-solution information

In the original ABC, the search equation (i.e., Eq. (2)) generates a new position by perturbing a single random dimension, which is similar to a blind mutation operator. That is, the equation drives the old individual towards (or away from) a randomly selected neighbor along a random dimension. This inevitably causes inefficient information exchange at both the individual level and the population level, because the useful information of elite individuals is not fully utilized and only one dimension is learned at a time.

Inspired by social learning in the PSO model [23], a new learning strategy is employed in the search equation of ABC (i.e., Eq. (2)). To learn fully from the best individual in the current bee population, individuals are assumed to exchange information with other individuals across all dimensions. Specifically, in the employed or onlooker stage, the foraging direction of a bee is governed by the combined information of its randomly selected neighbor and the best individual in the population (i.e., gbest). The search equation is modified as follows:

$v_{i,j} = x_{i,j} + l_1\,(x_{i,j} - x_{k,j}) + l_2\,(x_{\mathrm{gbest},j} - x_{i,j}), \quad j = 1, 2, \ldots, D$           (4)

where $x_{\mathrm{gbest}}$ is the best member of the current population; $x_k$ is a randomly chosen neighbor individual (note that k is different from i); and $l_1$ and $l_2$ are random numbers within the range [–1, 1].

According to Eq. (4), the gbest term drives the new candidate solution towards the global best solution, while the full-dimension learning enhances the efficiency of information exchange.
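A minimal NumPy sketch of our reading of Eq. (4), in which the whole position vector is updated using the neighbor and gbest terms, is given below; the helper name is an illustrative assumption.

```python
import numpy as np

def comprehensive_learning_move(population, i, gbest, rng):
    """Eq. (4): full-dimension update combining a random neighbour x_k and the
    global best x_gbest, with l1, l2 drawn uniformly from [-1, 1]."""
    sn, _ = population.shape
    k = rng.choice([n for n in range(sn) if n != i])   # neighbour k != i
    l1, l2 = rng.uniform(-1.0, 1.0, size=2)
    xi = population[i]
    return xi + l1 * (xi - population[k]) + l2 * (gbest - xi)
```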

3.3 Proposed algorithm

The balance between exploration of the search space and exploitation of potentially good solutions is considered a fundamental issue in population-based optimization algorithms. In practice, exploration and exploitation contradict each other. By using local search and comprehensive learning, MABC acts as the main optimizer searching for near-optimal positions, while the local search fine-tunes the best solutions obtained by MABC in each iteration. The main steps of the proposed MABC algorithm are given as follows; a compact code sketch is provided after the step list.

Step 1) Initialization.

Step 1.1) Randomly generate SN food sources in the search space to form an initial population by Eq. (1).

Step 1.2) Evaluate the fitness of each bee.

Step 1.3) Set the maximum cycle (LimitC).

Step 2) Iteration=0.

Step 3) Employed bee phase. Loop over each food source.

Step 3.1) Generate a candidate solution Vi by Eq. (4) and evaluate f (Vi).

Step 3.2) Greedy selection and memorize the better solution.

Step 4) Calculate the probability value Pi by Eq. (3).

Step 5) Onlooker bee phase.

Step 5.1) Generate a candidate solution Vi by Eq. (4) and evaluate f (Vi).

Step 5.2) Greedy selection and memorize the better solution.

Step 6) Powell’s search phase.

If mod(Iteration, Tp) == 0, randomly choose m∈{1, …, SN} different from the index of the best solution Xbest, and generate a new solution Vs by Eq. (4). Use Vs as a starting point and generate a new solution Vm by Powell’s method as described in Section 3.1.

Step 7) Iteration = Iteration + 1.

Step 8) If the iteration is greater than LimitC, stop the procedure; otherwise, go to Step 3).

Step 9) Output the best solution achieved.
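To make the control flow concrete, the following is a compact, self-contained Python sketch of how Steps 1) to 9) might be wired together, written for function minimization; the fitness mapping used for Eq. (3), the bound clipping and all identifiers are our own illustrative choices rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def mabc(objective, lb, ub, sn=20, limit_c=1000, tp=90, seed=0):
    """Illustrative skeleton of the MABC loop (Steps 1-9), minimizing `objective`."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    dim = lb.size
    pop = lb + rng.random((sn, dim)) * (ub - lb)              # Step 1: Eq. (1)
    cost = np.array([objective(x) for x in pop])

    def move(i, gbest):                                       # Eq. (4) + greedy selection
        k = rng.choice([n for n in range(sn) if n != i])
        l1, l2 = rng.uniform(-1.0, 1.0, size=2)
        v = np.clip(pop[i] + l1 * (pop[i] - pop[k]) + l2 * (gbest - pop[i]), lb, ub)
        fv = objective(v)
        if fv < cost[i]:
            pop[i], cost[i] = v, fv

    for iteration in range(limit_c):                          # Steps 2, 7, 8
        best_idx = int(np.argmin(cost))
        gbest = pop[best_idx].copy()
        for i in range(sn):                                   # Step 3: employed bees
            move(i, gbest)
        fitness = 1.0 / (1.0 + cost - cost.min())             # one common fitness mapping
        prob = fitness / fitness.sum()                        # Step 4: Eq. (3)
        for _ in range(sn):                                   # Step 5: onlooker bees
            move(int(rng.choice(sn, p=prob)), gbest)
        if iteration % tp == 0:                               # Step 6: Powell's phase
            m = rng.choice([n for n in range(sn) if n != best_idx])
            res = minimize(objective, pop[m], method="Powell", options={"maxiter": 20})
            if res.fun < cost[m]:
                pop[m], cost[m] = res.x, res.fun
    best_idx = int(np.argmin(cost))                           # Step 9
    return pop[best_idx], cost[best_idx]

# Example: minimize the 10-dimensional sphere function.
x_best, f_best = mabc(lambda x: float(np.sum(x ** 2)), lb=[-5.0] * 10, ub=[5.0] * 10,
                      sn=20, limit_c=200, tp=50)
```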

4 Benchmark test

In the experimental studies, following the no free lunch (NFL) theorem [24], a suite of 15 benchmark functions is employed to evaluate the performance of the MABC algorithm fully without biasing the conclusion towards particular problems [25–28]. The benchmark functions can be classified into basic continuous benchmarks (f1–f8) and CEC2005 benchmarks (f9–f15). The formulas of the basic benchmarks and the CEC2005 test functions are shown in Tables 1 and 2, respectively. To compare the different algorithms fairly, the number of function evaluations (FEs) is adopted as the time measure instead of the number of iterations, because the algorithms do differing amounts of work in their inner loops.

Table 1 Classical test suite

Table 2 CEC 2005 test suite

4.1 Parameters settings for involved algorithms

Experiments were conducted to compare MABC with the original artificial bee colony algorithm (ABC) [18], canonical PSO with constriction factor (PSO) [23], a genetic algorithm with elitism (EGA) [29] and the covariance matrix adaptation evolution strategy (CMA-ES) [30]. All algorithms were run 30 times on each benchmark and the maximum number of function evaluations (FEs) was set to 100000. The dimensions of all benchmarks were set to 30. All the control parameters of the EA and SI algorithms were set to the defaults of their original literature: the initialization conditions of CMA-ES are the same as in Ref. [30], and the number of offspring candidate solutions generated per time step is λ=4μ, where μ is an adjustable parameter defined in Ref. [30]; the limit parameter of ABC is set to SN×D, where D is the dimension of the problem and SN is the number of employed bees [18]. For canonical PSO, the learning rates c1 and c2 are both set to 2.05 and the constriction factor is set to 0.729 [23]. For EGA, a crossover rate of 0.8, a mutation rate of 0.01 and a global elite rate of 0.06 are adopted [29]. For the proposed MABC, the control parameter Tp is empirically set to 90, and the other parameters follow the settings of the original ABC [18].

4.2 Numerical results and comparison

4.2.1 Results on classical benchmarks

The means and standard deviations over 30 runs of the compared algorithms on the classical test functions are listed in Table 3, where the best results are highlighted in bold. From Table 3, MABC and ABC obtain satisfactory results on the unimodal functions f1, f2 and f3 in terms of accuracy and convergence. MABC performs slightly worse than ABC on these functions, but significantly better than the other algorithms. f5–f8 are commonly used multimodal test functions on which an algorithm can easily be trapped in a local minimum. As expected, MABC obtains more favorable results than the compared algorithms on all these cases. The superior performance of MABC on these multimodal functions suggests that MABC is good at fine-grained search. The performance improvement is mainly due to the Powell’s search and the improved search equation in MABC. That is, the ABC guided by the so-far-best information acts as the main optimizer for exploration, while Powell’s method provides fine exploitation. Overall, on these classical functions, MABC performs best on most test cases thanks to the proposed foraging strategies.

4.2.2 Results on CEC2005 benchmarks

Benchmarks f9–f15 from the CEC 2005 test bed are employed in this section, and the corresponding computation results are presented in Table 4. From these results, it can be observed that MABC performs best on five of the seven functions. ABC and CMA-ES achieve similar rankings, only worse than MABC. MABC performs even more strongly on the CEC 2005 benchmarks than on the basic benchmarks, which means that MABC, with the proposed strategies, is more competent in tackling complex problems.

Table 3 Results obtained by all algorithms on classical test suite

Table 4 Results obtained by all algorithms on CEC05 benchmarks

5 Multilevel threshold for image segmentation by MABC

5.1 Kapur criterion

Kapur’s multi-threshold entropy measure [31] has been widely employed in optimal threshold methods to provide satisfactory image segmentation. It aims to find the optimal thresholds that yield the maximum entropy. For multilevel thresholding, Kapur’s entropy may be described as follows.

Consider an image containing N pixels with gray levels from 0 to L. H(i) represents the number of pixels with the ith gray level and P(i) represents the probability of gray level i. Then, we obtain:

$P(i) = \dfrac{H(i)}{N}, \quad i = 0, 1, \ldots, L$                          (5)

Assuming that there are M–1 thresholds {t1, t2, …, tM–1} dividing the original image into M classes (C1 for [0, t1], C2 for [t1, t2], …, and CM for [tM–1, L]), the optimal thresholds {t1*, t2*, …, tM–1*} selected by the Kapur method are given as follows:

$\{t_1^*, t_2^*, \ldots, t_{M-1}^*\} = \arg\max \sum_{j=1}^{M} H_j, \quad H_j = -\sum_{i \in C_j} \dfrac{P(i)}{w_j} \ln \dfrac{P(i)}{w_j}, \quad w_j = \sum_{i \in C_j} P(i)$             (6)

Equation (6) is used as the objective function to be optimized (maximized) by the proposed MABC-based procedure. A close look at this equation shows that it is very similar to the expression for the uniformity measure.
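As a hedged illustration, the Kapur objective of Eqs. (5) and (6) can be sketched in NumPy as follows; rounding continuous bee positions into sorted integer thresholds before evaluation is our assumption about the encoding, not a detail stated in the paper.

```python
import numpy as np

def kapur_objective(hist, thresholds, levels=256):
    """Sum of class entropies H_j for thresholds t_1 < ... < t_{M-1} (Eq. (6));
    `hist` is the gray-level histogram H(i), normalized internally to P(i) (Eq. (5))."""
    p = np.asarray(hist, dtype=float)
    p = p / p.sum()                                    # Eq. (5): P(i) = H(i) / N
    cuts = [0] + sorted(int(round(t)) for t in thresholds) + [levels]
    total = 0.0
    for lo, hi in zip(cuts[:-1], cuts[1:]):            # one class per gray-level interval
        w = p[lo:hi].sum()
        if w <= 0.0:
            continue                                   # an empty class contributes nothing
        q = p[lo:hi] / w
        q = q[q > 0.0]
        total += -np.sum(q * np.log(q))
    return total                                       # Kapur entropy, to be maximized

# Example: evaluate two candidate thresholds on a random 8-bit histogram.
rng = np.random.default_rng(3)
hist = rng.integers(1, 100, size=256)
score = kapur_objective(hist, thresholds=[85.2, 170.7])
```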

5.2 Experiment setup

The datasets involve a set of widely tested images used in previous studies [32], including avion.ppm, house.ppm, lena.ppm, peppers.ppm, safari04.ppm and hunter.pgm. The size of each image is 512×512. The proposed and compared algorithms are evaluated based on the Kapur criterion. The parameters of these algorithms, including MABC, ABC, PSO, EGA and CMA-ES, are set as described in Section 4.1. We aim to use the proposed algorithm to obtain multiple thresholds with larger fitness values at low computational cost. The numbers of thresholds M–1 investigated in the experiments are 2, 3, 4, 5, 7 and 9. The population size is set to 20 and the maximum number of FEs is set to 2000. All the experiments are repeated 30 times.

5.3 Experimental results of multilevel threshold

Table 5 gives the fitness values, mean computation times and optimal thresholds with M–1=2, 3 and 4 obtained by the exhaustive Kapur method. From Table 5, we can see that the exhaustive Kapur method takes a long computation time on these cases. From the computation results in Table 6, it can be observed that the population-based methods consume similar CPU time, which shows their superiority over the pure Kapur method. As can be seen from Table 7, the proposed MABC algorithm generally yields satisfactory fitness values with M–1=2, 3 and 4 and consumes less time than the Kapur method. This is mainly because the comprehensive learning strategy using the improved PSO-based search equation gives the proposed algorithm a faster convergence speed. Furthermore, the MABC-based algorithm achieves the best results among the population-based methods in most cases. Moreover, the differences between MABC and the other algorithms become more evident as the segmentation level increases.

To further investigate the population-based methods on high-dimensional segmentation, we run these algorithms on image segmentation with M–1=5, 7 and 9. Table 8 gives the average fitness and standard deviation obtained by each population-based algorithm. From Table 8, it can be observed that MABC demonstrates the best performance and stability on these high-dimensional cases and is more efficient than the conventional ABC and the other classical population-based algorithms, which indicates that the MABC-based algorithm is more suitable for solving multilevel image segmentation problems.

6 Conclusions

To apply the artificial bee colony algorithm to solve complex optimization problems efficiently, this paper proposes a modified artificial bee colony algorithm, namely MABC. The ability of the proposed MABC to balance the exploration and exploitation tradeoff is achieved by combining local search and comprehensive learning strategies. In MABC, each individual performs focused and deep exploitation of promising regions as well as wide exploration of other regions of the search space. The algorithm achieves this by employing local search to encourage fine exploitation when an individual enters a promising region with high fitness, while enhancing information sharing between excellent bees to improve exploration when the individual encounters difficulties during exploitation.

Table 5 Objective values and thresholds by Kapur method

Table 6 Mean CPU time of compared population-based methods on Kapur algorithm

Table 7 Objective value and standard deviation by compared population-based methods on Kapur algorithm


Table 8 Objective value and standard deviation by compared population-based methods on Kapur algorithm (M–1=5, 7 and 9)



Finally, the MABC algorithm is applied to real-world image segmentation problems. The results obtained by the MABC-based method on each image indicate a significant improvement over several other popular population-based methods. As an effective population-based method, the MABC algorithm can also be incorporated into other popular threshold segmentation methods that optimize a fitness function.

References

[1] KITTLER J, ILLINGWORTH J. Minimum error threshold [J]. Pattern Recognition, 1986, 19: 41–47.

[2] PUN T. Entropic thresholding, a new approach [J]. Computer Graphics & Image Processing, 1981, 16(3): 210–239.

[3] OTSU N. A threshold selection method from gray-level histograms [J]. IEEE Transactions on Systems, Man, and Cybernetics, 1979, 9(1): 62–66.

[4] KAPUR J N, SAHOO P K, WONG A K C. A new method for gray-level picture thresholding using the entropy of the histogram [J]. Computer Vision Graphics & Image Processing, 1985, 29(3): 273–285.

[5] LIM Y W, SANG U L. On the color image segmentation algorithm based on the thresholding and the fuzzy c-means techniques [J]. Pattern Recognition, 1990, 23(9): 935–952.

[6] TSAI D M. A fast thresholding selection procedure for multimodal and unimodal histograms [J]. Pattern Recognition Letters, 1995, 16(6): 653–666.

[7] YIN P Y, CHEN L H. A new method for multilevel thresholding using symmetry and duality of the histogram [J]. Journal of Electronic Imaging, 1993, 2(4): 337–345.

[8] BRINK A D. Minimum spatial entropy threshold selection [J]. IEE Proceedings-Vision, Image and Signal Processing, 1995, 142(3): 128–132.

[9] CHENG H D, CHEN J R, LI J. Threshold selection based on fuzzy c-partition entropy approach [J]. Pattern Recognition, 1998, 31(7): 857–870.

[10] HUANG L K, WANG M J J. Image thresholding by minimizing the measures of fuzziness [J]. Pattern Recognition, 1995, 28(1): 41–51.

[11] CHANDER A, CHATTERJEE A, SIARRY P. A new social and momentum component adaptive PSO algorithm for image segmentation [J]. Expert Systems with Applications, 2011, 38(5): 4998–5004.

[12] MA L, HU K, ZHU Y, et al. A hybrid artificial bee colony optimizer by combining with life-cycle, Powell’s search and crossover [J]. Applied Mathematics & Computation, 2015, 252: 133–154.

[13] GAO H, XU W, SUN J, et al. Multilevel thresholding for image segmentation through an improved quantum-behaved particle swarm algorithm [J]. IEEE Transactions on Instrumentation & Measurement, 2010, 59(4): 934–946.

[14] GHAMISI P, COUCEIRO M S, MARTINS F M L, et al. Multilevel image segmentation based on fractional-order Darwinian particle swarm optimization [J]. IEEE Transactions on Geoscience & Remote Sensing, 2014, 52(5): 2382–2394.

[15] CUEVAS E, ZALDIVAR D, PREZ-CISNEROS M. A novel multi-threshold segmentation approach based on differential evolution optimization [J]. Expert Systems with Applications, 2010, 37(7): 5265–5271.

[16] GAO H, KWONG S, YANG J, et al. Particle swarm optimization based on intermediate disturbance strategy algorithm and its application in multi-threshold image segmentation [J]. Information Sciences, 2013, 250(11): 82–112.

[17] KARABOGA D. An idea based on honey bee swarm for numerical optimization [R]. Technical report-tr06. Erciyes University, Computer Engineering Department, 2005.

[18] KARABOGA D, BASTURK B. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm [J]. Journal of Global Optimization, 2007, 39(3): 459–471.

[19] MA L, HU K, ZHU Y, et al. A hybrid artificial bee colony optimizer by combining with life-cycle, Powell’s search and crossover [J]. Applied Mathematics & Computation, 2015, 252: 133–154.

[20] MA L, HU K, ZHU Y, et al. Cooperative artificial bee colony algorithm for multi-objective RFID network planning [J]. Journal of Network & Computer Applications, 2014, 42: 143–162.

[21] MA L, ZHU Y, ZHANG D, et al. A hybrid approach to artificial bee colony algorithm [J]. Neural Computing & Applications, 2016, 27(2): 387–409.

[22] POWELL M J D. Restart procedures for the conjugate gradient method [J]. Mathematical Programming, 1977, 12(1): 241–254.

[23] SUMATHI S, HAMSAPRIYA T, SUREKHA P. Evolutionary intelligence: An introduction to theory and applications with Matlab [M]. Springer Science & Business Media, 2008.

[24] WOLPERT D H, MACREADY W G. No free lunch theorems for optimization [J]. IEEE Transactions on Evolutionary Computation, 1997, 1(1): 67–82.

[25] MA L, HU K, ZHU Y, et al. Discrete and continuous optimization based on hierarchical artificial bee colony optimizer [J]. Journal of Applied Mathematics, 2014, 2014(1): 1–20.

[26] MA L, ZHU Y, LIU Y, et al. A novel bionic algorithm inspired by plant root foraging behaviors [J]. Applied Soft Computing, 2015, 37(C): 95–113.

[27] LIANG J J, QIN A K, SUGANTHAN P N, et al. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions [J]. IEEE Transactions on Evolutionary Computation, 2006, 10(3): 281–295.

[28] CLERC M, KENNEDY J. The particle swarm - explosion, stability, and convergence in a multidimensional complex space [J]. IEEE Transactions on Evolutionary Computation, 2002, 6(1): 58–73.

[29] HANSEN N, OSTERMEIER A. Completely derandomized self-adaptation in evolution strategies [J]. Evolutionary Computation, 2001, 9(2): 159–195.

[30] KAPUR J N, SAHOO P K, WONG A K C. A new method for gray- level picture thresholding using the entropy of the histogram [J]. Computer Vision Graphics & Image Processing, 1985, 29(3): 273–285.

[31] YIN P. Multilevel minimum cross entropy threshold selection based on particle swarm optimization [J]. Applied Mathematics & Computation, 2007, 184(2): 503–513.

[32] CAO L, BAO P, SHI Z. The strongest schema learning GA and its application to multilevel thresholding [J]. Image & Vision Computing, 2008, 26(5): 716–724.

(Edited by YANG Hua)

Summary in Chinese

An enhanced artificial bee colony algorithm and its application to multi-threshold image segmentation

Abstract: A modified artificial bee colony algorithm is proposed to handle the image segmentation problem, using a pool of swarm-based optimal foraging strategies to balance the exploitation and exploration search modes. The main idea of the algorithm is to combine a local search strategy with a comprehensive learning strategy based on a multi-dimensional particle swarm equation, which enriches the foraging behavior patterns of the artificial bee colony. Through global learning, the colony integrates the global best information into the search equation to improve its exploration ability, while local search allows the colony to exploit promising regions more deeply, finally achieving a balance between exploitation and exploration. Experimental results comparing the improved bee colony algorithm with evolutionary and swarm intelligence algorithms on a set of benchmark functions demonstrate the effectiveness of the proposed algorithm. The improved algorithm is further applied to the image segmentation problem, and the experimental results also verify its effectiveness.

Key words: artificial bee colony; local search; swarm intelligence; image segmentation

Foundation item: Projects(6177021519, 61503373) supported by the National Natural Science Foundation of China; Project(N161705001) supported by the Fundamental Research Funds for the Central Universities, China

Received date: 2016-04-25; Accepted date: 2017-12-01

Corresponding author: GAO Yang, Master; Tel: +86-13842023316; E-mail: gaoy@mail.neu.edu.cn; ORCID: 0000-0002-1858-6324

