Ant colony optimization algorithms

In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs.

This algorithm is a member of the ant colony algorithms family, within swarm intelligence methods, and it constitutes a metaheuristic optimization. Initially proposed by Marco Dorigo in 1992 in his PhD thesis, the first algorithm aimed to search for an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. The original idea has since diversified to solve a wider class of numerical problems, and as a result several variants have emerged, drawing on various aspects of the behavior of ants. From a broader perspective, ACO performs a model-based search and shares some similarities with estimation of distribution algorithms.

Overview

In the natural world, ants of some species (initially) wander randomly, and upon finding food return to their colony while laying down pheromone trails. If other ants find such a path, they are likely not to keep travelling at random, but instead to follow the trail, returning and reinforcing it if they eventually find food (see Ant communication).

Over time, however, the pheromone trail starts to evaporate, thus reducing its attractive strength. The more time it takes for an ant to travel down the path and back again, the more time the pheromones have to evaporate. A short path, by comparison, gets marched over more frequently, and thus the pheromone density becomes higher on shorter paths than on longer ones. Pheromone evaporation also has the advantage of avoiding convergence to a locally optimal solution. If there were no evaporation at all, the paths chosen by the first ants would tend to be excessively attractive to the following ones, and the exploration of the solution space would be constrained. The influence of pheromone evaporation in real ant systems is unclear, but it is very important in artificial systems.

The overall result is that when one ant finds a good (i.e., short) path from the colony to a food source, other ants are more likely to follow that path, and positive feedback eventually leads to all the ants following a single path. The idea of the ant colony algorithm is to mimic this behavior with "simulated ants" walking around the graph representing the problem to solve.

Common extensions

Here are some of the most popular variations of ACO algorithms.

Elitist ant system

The global best solution deposits pheromone on every iteration along with all the other ants.

Max-min ant system (MMAS)

Maximum and minimum pheromone amounts [τ_max, τ_min] are added. Only the global-best or iteration-best tour deposits pheromone. All edges are initialized to τ_max and reinitialized to τ_max when nearing stagnation.
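As a minimal sketch (the function name is illustrative, not from any ACO library), the MMAS bound rule simply clamps every pheromone value into the interval [τ_min, τ_max]:

```python
def clamp_pheromone(tau, tau_min, tau_max):
    """Clamp every pheromone value into [tau_min, tau_max], as MMAS requires."""
    return [[min(max(t, tau_min), tau_max) for t in row] for row in tau]

# Values below tau_min are raised; values above tau_max are lowered.
print(clamp_pheromone([[0.001, 5.0], [0.2, 12.0]], tau_min=0.01, tau_max=10.0))
```

Bounding the pheromone this way keeps every edge selectable with at least a small probability, which is what prevents premature stagnation.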

Ant colony system

In the ant colony system, ants choose the next move with a pseudorandom proportional rule, and a local pheromone update is applied while ants construct their tours, in addition to the global update performed at the end of each iteration.

Rank-based ant system (ASrank)

All solutions are ranked according to their length. The amount of pheromone deposited is then weighted for each solution, such that solutions with shorter paths deposit more pheromone than the solutions with longer paths.
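A hedged sketch of the rank-based weighting (the function name is illustrative, and the elitist best-so-far deposit used in the full AS_rank is omitted): with w ranks, the ant ranked r deposits pheromone weighted by w − r, so shorter tours contribute more:

```python
def rank_weights(lengths, w=6):
    """AS_rank-style deposit weights: the ant ranked r (1 = shortest tour)
    gets weight max(w - r, 0); ants ranked w or worse deposit nothing."""
    order = sorted(range(len(lengths)), key=lambda k: lengths[k])
    return {k: max(w - r, 0) for r, k in enumerate(order, start=1)}

# Three ants with tour lengths 10, 5 and 20: the shortest gets the largest weight.
print(rank_weights([10.0, 5.0, 20.0], w=3))
```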

Continuous orthogonal ant colony (COAC)

The pheromone deposit mechanism of COAC enables ants to search for solutions collaboratively and effectively. By using an orthogonal design method, ants in the feasible domain can explore their chosen regions rapidly and efficiently, with enhanced global search capability and accuracy.

The orthogonal design method and the adaptive radius adjustment method can also be extended to other optimization algorithms for delivering wider advantages in solving practical problems.

Recursive ant colony optimization

It is a recursive form of ant system which divides the whole search domain into several sub-domains and solves the objective on these subdomains. The results from all the subdomains are compared and the best few of them are promoted for the next level. The subdomains corresponding to the selected results are further subdivided and the process is repeated until an output of desired precision is obtained. This method has been tested on ill-posed geophysical inversion problems and works well.

Convergence

For some versions of the algorithm, it is possible to prove that it is convergent (i.e., it is able to find the global optimum in finite time). The first evidence of convergence for an ant colony algorithm was given in 2000 for the graph-based ant system, and later for the ACS and MMAS algorithms. Like most metaheuristics, it is very difficult to estimate the theoretical speed of convergence. In 2004, Zlochin and his colleagues showed that ACO-type algorithms could be assimilated to methods of stochastic gradient descent, the cross-entropy method and estimation of distribution algorithms. They proposed unifying these metaheuristics as "model-based search". A performance analysis of a continuous ant colony algorithm with respect to its various parameters suggests that its convergence is sensitive to parameter tuning.

Edge selection

An ant is a simple computational agent in the ant colony optimization algorithm. It iteratively constructs a solution for the problem at hand. The intermediate solutions are referred to as solution states. At each iteration of the algorithm, each ant moves from a state x to a state y, corresponding to a more complete intermediate solution. Thus, each ant k computes a set A_k(x) of feasible expansions to its current state in each iteration, and moves to one of these in probability. For ant k, the probability p_{xy}^k of moving from state x to state y depends on the combination of two values: the attractiveness η_{xy} of the move, as computed by some heuristic indicating the a priori desirability of that move, and the trail level τ_{xy} of the move, indicating how proficient it has been in the past to make that particular move.

The trail level represents a posteriori indication of the desirability of that move. Trails are updated usually when all ants have completed their solution, increasing or decreasing the level of trails corresponding to moves that were part of "good" or "bad" solutions, respectively.

In general, the kth ant moves from state x to state y with probability

p_{xy}^k = ( τ_{xy}^α · η_{xy}^β ) / Σ_{z ∈ allowed_x} ( τ_{xz}^α · η_{xz}^β )

where

τ_{xy} is the amount of pheromone deposited for the transition from state x to y, α ≥ 0 is a parameter to control the influence of τ_{xy}, η_{xy} is the desirability of the state transition xy (a priori knowledge, typically 1/d_{xy}, where d is the distance) and β ≥ 1 is a parameter to control the influence of η_{xy}. τ_{xz} and η_{xz} represent the trail level and attractiveness for the other possible state transitions.
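The selection rule above can be sketched in Python (a minimal illustration; tau and eta are assumed to be nested lists indexed by state, and the names mirror the symbols in the formula rather than any library API):

```python
def transition_probabilities(x, allowed, tau, eta, alpha=1.0, beta=1.0):
    """P(x -> y) is proportional to tau[x][y]**alpha * eta[x][y]**beta,
    normalized over the allowed moves from state x."""
    weights = {y: (tau[x][y] ** alpha) * (eta[x][y] ** beta) for y in allowed}
    total = sum(weights.values())
    return {y: w / total for y, w in weights.items()}

# From state 0, a move with twice the pheromone is twice as likely (alpha = beta = 1).
tau = [[0.0, 1.0, 2.0], [1.0, 0.0, 1.0], [2.0, 1.0, 0.0]]
eta = [[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]]
print(transition_probabilities(0, [1, 2], tau, eta))
```

An ant would then sample its next state y from this distribution, e.g. with `random.choices`.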

Pheromone update

When all the ants have completed a solution, the trails are updated by

τ_{xy} ← (1 − ρ) τ_{xy} + Σ_k Δτ_{xy}^k

where τ_{xy} is the amount of pheromone deposited for a state transition xy, ρ is the pheromone evaporation coefficient and Δτ_{xy}^k is the amount of pheromone deposited by the kth ant, typically given for a TSP problem (with moves corresponding to arcs of the graph) by

Δτ_{xy}^k = Q / L_k, if ant k uses curve xy in its tour; 0 otherwise

where L_k is the cost of the kth ant's tour (typically its length) and Q is a constant.
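The update can be sketched as follows (illustrative names; a symmetric TSP is assumed, so the deposit is applied to both directions of each arc):

```python
def update_pheromone(tau, tours, lengths, rho=0.5, Q=1.0):
    """Apply tau <- (1 - rho) * tau, then add Q / L_k on every arc of each
    ant's closed tour (both directions, assuming a symmetric problem)."""
    n = len(tau)
    for x in range(n):
        for y in range(n):
            tau[x][y] *= (1.0 - rho)
    for tour, L in zip(tours, lengths):
        deposit = Q / L
        for x, y in zip(tour, tour[1:] + tour[:1]):  # arcs of the closed tour
            tau[x][y] += deposit
            tau[y][x] += deposit
    return tau
```

Note that evaporation is applied to every edge, while deposits touch only the edges the ants actually used, which is what steers later ants toward short tours.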

Applications

Ant colony optimization algorithms have been applied to many combinatorial optimization problems, ranging from quadratic assignment to protein folding and vehicle routing, and many derived methods have been adapted to dynamic problems in real variables, stochastic problems, multi-target problems and parallel implementations. It has also been used to produce near-optimal solutions to the travelling salesman problem. They have an advantage over simulated annealing and genetic algorithm approaches on similar problems when the graph may change dynamically: the ant colony algorithm can be run continuously and adapt to changes in real time. This is of interest in network routing and urban transportation systems.

The first ACO algorithm was called the ant system, and it aimed to solve the travelling salesman problem, in which the goal is to find the shortest round-trip linking a series of cities. The general algorithm is relatively simple and based on a set of ants, each making one of the possible round-trips along the cities. At each stage, the ant chooses to move from one city to another according to some rules:

  1. It must visit each city exactly once;
  2. A distant city has less chance of being chosen (the visibility);
  3. The more intense the pheromone trail laid out on an edge between two cities, the greater the probability that that edge will be chosen;
  4. Having completed its journey, the ant deposits more pheromones on all edges it traversed, if the journey is short;
  5. After each iteration, trails of pheromones evaporate.
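The five rules above can be combined into a toy ant-system solver (a sketch under simplifying assumptions: all names are illustrative, and the parameter values are arbitrary defaults, not the ones from the original papers):

```python
import math
import random

def ant_system_tsp(coords, n_ants=10, n_iter=50, alpha=1.0, beta=3.0,
                   rho=0.5, Q=1.0, seed=0):
    """Toy ant-system solver for the symmetric TSP on 2-D points."""
    rng = random.Random(seed)
    n = len(coords)
    dist = [[math.dist(a, b) for b in coords] for a in coords]
    # Visibility: closer cities are more desirable (rule 2).
    eta = [[0.0 if i == j else 1.0 / dist[i][j] for j in range(n)] for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:          # visit each city exactly once (rule 1)
                x = tour[-1]
                allowed = [y for y in range(n) if y not in tour]
                # Stronger trails are more likely to be chosen (rule 3).
                w = [(tau[x][y] ** alpha) * (eta[x][y] ** beta) for y in allowed]
                tour.append(rng.choices(allowed, weights=w)[0])
            tours.append(tour)
        # Evaporation (rule 5), then deposit Q / L_k on each closed tour (rule 4).
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour in tours:
            L = sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))
            if L < best_len:
                best_tour, best_len = tour, L
            for a, b in zip(tour, tour[1:] + tour[:1]):
                tau[a][b] += Q / L
                tau[b][a] += Q / L
    return best_tour, best_len

# On the four corners of a unit square the perimeter tour (length 4) is
# typically rediscovered within a few iterations.
best_tour, best_len = ant_system_tsp([(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)])
```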

Scheduling problem

  • Job-shop scheduling problem (JSP)
  • Open-shop scheduling problem (OSP)
  • Permutation flow shop problem (PFSP)
  • Single machine total tardiness problem (SMTTP)
  • Single machine total weighted tardiness problem (SMTWTP)
  • Resource-constrained project scheduling problem (RCPSP)
  • Group-shop scheduling problem (GSP)
  • Single-machine total tardiness problem with sequence dependent setup times (SMTTPDST)
  • Multistage flowshop scheduling problem (MFSP) with sequence dependent setup/changeover times

Vehicle routing problem

  • Capacitated vehicle routing problem (CVRP)
  • Multi-depot vehicle routing problem (MDVRP)
  • Period vehicle routing problem (PVRP)
  • Split delivery vehicle routing problem (SDVRP)
  • Stochastic vehicle routing problem (SVRP)
  • Vehicle routing problem with pick-up and delivery (VRPPD)
  • Vehicle routing problem with time windows (VRPTW)
  • Time dependent vehicle routing problem with time windows (TDVRPTW)
  • Vehicle routing problem with time windows and multiple service workers (VRPTWMS)

Assignment problem

  • Quadratic assignment problem (QAP)
  • Generalized assignment problem (GAP)
  • Frequency assignment problem (FAP)
  • Redundancy allocation problem (RAP)

Set problem

  • Set cover problem (SCP)
  • Partition problem (SPP)
  • Weight constrained graph tree partition problem (WCGTPP)
  • Arc-weighted l-cardinality tree problem (AWlCTP)
  • Multiple knapsack problem (MKP)
  • Maximum independent set problem (MIS)

Device sizing problem in nanoelectronics physical design

  • Ant colony optimization (ACO) based optimization of a 45 nm CMOS-based sense amplifier circuit could converge to optimal solutions in minimal time.
  • Ant colony optimization (ACO) based reversible circuit synthesis could improve efficiency significantly.

Image processing

    The ACO algorithm is used in image processing for image edge detection and edge linking.

  • Edge detection: The graph here is the 2-D image, and the ants traverse from one pixel to another, depositing pheromone. The movement of ants from one pixel to another is directed by the local variation of the image's intensity values. This movement causes the highest density of pheromone to be deposited at the edges.

    The following are the steps involved in edge detection using ACO:

    Step 1: Initialization:
    Randomly place K ants on the image I of size M_1 × M_2, where K = (M_1 · M_2)^{1/2}. The pheromone matrix τ(i, j) is initialized with random values. The major challenge in the initialization process is determining the heuristic matrix.

    There are various methods to determine the heuristic matrix. For the example below, the heuristic matrix was calculated based on the local statistics at the pixel position (i, j):

    η(i, j) = (1/Z) · Vc(I_{i,j})

    where I is the image of size M_1 × M_2, and

    Z = Σ_{i=1}^{M_1} Σ_{j=1}^{M_2} Vc(I_{i,j})

    is a normalization factor. The local intensity variation is

    Vc(I_{i,j}) = f( |I(i−2, j−1) − I(i+2, j+1)| + |I(i−2, j+1) − I(i+2, j−1)| + |I(i−1, j−2) − I(i+1, j+2)| + |I(i−1, j−1) − I(i+1, j+1)| + |I(i−1, j) − I(i+1, j)| + |I(i−1, j+1) − I(i−1, j−1)| + |I(i−1, j+2) − I(i−1, j−2)| + |I(i, j−1) − I(i, j+1)| )

    f(·) can be calculated using any of the following functions:

    f(x) = λx, for x ≥ 0 (1)
    f(x) = λx², for x ≥ 0 (2)
    f(x) = sin(πx / 2λ), for 0 ≤ x ≤ λ; 0 otherwise (3)
    f(x) = πx · sin(πx / 2λ), for 0 ≤ x ≤ λ; 0 otherwise (4)

    The parameter λ in each of the above functions adjusts the functions' respective shapes.
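A sketch of Step 1's heuristic matrix, using f(x) = λx from equation (1) (the function name and the choice to leave a 2-pixel border at zero are assumptions of this illustration, not part of the original method):

```python
def heuristic_matrix(I, lam=10.0):
    """Build eta(i, j) = Vc(i, j) / Z from the local intensity variation Vc,
    using f(x) = lam * x; pixels within 2 of the border are left at zero."""
    M1, M2 = len(I), len(I[0])
    # Pixel-offset pairs from the Vc formula: |I(i+a, j+b) - I(i+c, j+d)|.
    offsets = [((-2, -1), (2, 1)), ((-2, 1), (2, -1)), ((-1, -2), (1, 2)),
               ((-1, -1), (1, 1)), ((-1, 0), (1, 0)), ((-1, 1), (-1, -1)),
               ((-1, 2), (-1, -2)), ((0, -1), (0, 1))]
    Vc = [[0.0] * M2 for _ in range(M1)]
    for i in range(2, M1 - 2):
        for j in range(2, M2 - 2):
            x = sum(abs(I[i + a][j + b] - I[i + c][j + d])
                    for (a, b), (c, d) in offsets)
            Vc[i][j] = lam * x  # f(x) = lam * x, equation (1)
    Z = sum(map(sum, Vc)) or 1.0  # guard against an all-flat image
    return [[v / Z for v in row] for row in Vc]
```

On a perfectly uniform image every Vc term is zero, so η is all zeros; on an image with an intensity step, η concentrates along the step and sums to 1.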
    Step 2: Construction process:
    The ant's movement is based on 4-connected or 8-connected pixels. The probability with which an ant moves is given by the probability equation for p_{xy} above.
    Steps 3 and 5: Update process:
    The pheromone matrix is updated twice: in step 3 the trail of each ant (given by τ(x, y)) is updated, whereas in step 5 the trail evaporates according to
    τ_new ← (1 − ψ) τ_old + ψ τ_0, where ψ is the pheromone decay coefficient, 0 < ψ < 1.

    Step 7: Decision process:
    Once the K ants have moved a fixed distance L for N iterations, the decision whether each pixel is an edge or not is based on a threshold T applied to the pheromone matrix τ. The threshold for the example below is calculated based on Otsu's method.

    [Figure: edge images detected using ACO, generated with the different functions given in equations (1) to (4).]

  • Edge linking: ACO has also proven effective in edge linking algorithms.

Others

  • Classification
  • Connection-oriented network routing
  • Connectionless network routing
  • Data mining
  • Discounted cash flows in project scheduling
  • Distributed information retrieval
  • Grid workflow scheduling problem
  • Intelligent testing system
  • System identification
  • Protein folding
  • Power electronic circuit design
  • Bankruptcy prediction
  • Inhibitory peptide design for protein-protein interactions

Definition difficulty

With an ACO algorithm, the shortest path in a graph between two points A and B is built from a combination of several paths. It is not easy to give a precise definition of which algorithms are or are not ant colonies, because the definition may vary according to authors and uses. Broadly speaking, ant colony algorithms are regarded as populated metaheuristics in which each solution is represented by an ant moving in the search space. Ants mark the best solutions and take account of previous markings to optimize their search. They can be seen as probabilistic multi-agent algorithms using a probability distribution to make the transition between iterations. In their versions for combinatorial problems, they use an iterative construction of solutions.

According to some authors, what distinguishes ACO algorithms from other relatives (such as estimation of distribution algorithms or particle swarm optimization) is precisely their constructive aspect. In combinatorial problems, the best solution may eventually be found even though no single ant proves effective. Thus, in the example of the travelling salesman problem, it is not necessary that an ant actually travels the shortest route: the shortest route can be built from the strongest segments of the best solutions. However, this definition can be problematic in the case of problems in real variables, where no structure of 'neighbours' exists.

The collective behaviour of social insects remains a source of inspiration for researchers. The wide variety of algorithms (for optimization or not) seeking self-organization in biological systems has led to the concept of "swarm intelligence", a very general framework into which ant colony algorithms fit.

Stigmergy algorithms

In practice, a large number of algorithms claim to be "ant colonies" without always sharing the general framework of optimization by canonical ant colonies. In practice, the use of an exchange of information between ants via the environment (a principle called "stigmergy") is deemed enough for an algorithm to belong to the class of ant colony algorithms. This principle has led some authors to create the term "value" to organize methods and behaviors based on the search for food, the sorting of larvae, the division of labour and cooperative transportation.

  • Genetic algorithms (GA) maintain a pool of solutions rather than just one. The process of finding superior solutions mimics that of evolution, with solutions being combined or mutated to alter the pool of solutions, with solutions of inferior quality being discarded.
  • Estimation of Distribution Algorithm (EDA) is an Evolutionary Algorithm that substitutes traditional reproduction operators by model-guided operators. Such models are learned from the population by employing machine learning techniques and represented as Probabilistic Graphical Models, from which new solutions can be sampled or generated from guided-crossover.
  • Simulated annealing (SA) is a related global optimization technique which traverses the search space by generating neighboring solutions of the current solution. A superior neighbor is always accepted. An inferior neighbor is accepted probabilistically based on the difference in quality and a temperature parameter. The temperature parameter is modified as the algorithm progresses to alter the nature of the search.
  • Reactive search optimization focuses on combining machine learning with optimization, by adding an internal feedback loop to self-tune the free parameters of an algorithm to the characteristics of the problem, of the instance, and of the local situation around the current solution.
  • Tabu search (TS) is similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. While simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the solution with the lowest fitness of those generated. To prevent cycling and encourage greater movement through the solution space, a tabu list is maintained of partial or complete solutions. It is forbidden to move to a solution that contains elements of the tabu list, which is updated as the solution traverses the solution space.
  • Artificial immune system (AIS) algorithms are modeled on vertebrate immune systems.
  • Particle swarm optimization (PSO), a swarm intelligence method
  • Intelligent water drops (IWD), a swarm-based optimization algorithm based on natural water drops flowing in rivers
  • Gravitational search algorithm (GSA), a swarm intelligence method
  • Ant colony clustering method (ACCM), a method that makes use of a clustering approach, extending ACO.
  • Stochastic diffusion search (SDS), an agent-based probabilistic global search and optimization technique best suited to problems where the objective function can be decomposed into multiple independent partial-functions

History

    Chronology of ant colony optimization algorithms.

  • 1959, Pierre-Paul Grassé invented the theory of stigmergy to explain the behavior of nest building in termites;
  • 1983, Deneubourg and his colleagues studied the collective behavior of ants;
  • 1988, Moyson and Manderick publish an article on self-organization among ants;
  • 1989, the work of Goss, Aron, Deneubourg and Pasteels on the collective behavior of Argentine ants, which would give rise to the idea of ant colony optimization algorithms;
  • 1989, implementation of a model of behavior for food by Ebling and his colleagues;
  • 1991, M. Dorigo proposed the ant system in his doctoral thesis (which was published in 1992). A technical report extracted from the thesis and co-authored by V. Maniezzo and A. Colorni was published five years later;
  • 1994, Appleby and Steward of British Telecommunications Plc published the first application to telecommunications networks
  • 1996, publication of the article on ant system;
  • 1996, Hoos and Stützle invent the max-min ant system;
  • 1997, Dorigo and Gambardella publish the ant colony system;
  • 1997, Schoonderwoerd and his colleagues published an improved application to telecommunication networks;
  • 1998, Dorigo launches first conference dedicated to the ACO algorithms;
  • 1998, Stützle proposes initial parallel implementations;
  • 1999, Bonabeau, Dorigo and Theraulaz publish a book dealing mainly with artificial ants
  • 2000, special issue of the Future Generation Computer Systems journal on ant algorithms
  • 2000, first applications to the scheduling, scheduling sequence and the satisfaction of constraints;
  • 2000, Gutjahr provides the first evidence of convergence for an algorithm of ant colonies
  • 2001, the first use of ACO algorithms by companies (Eurobios and AntOptima);
  • 2001, Iredi and his colleagues published the first multi-objective algorithm;
  • 2002, first applications in the design of schedule, Bayesian networks;
  • 2002, Bianchi and her colleagues suggested the first algorithm for stochastic problems;
  • 2004, Zlochin and Dorigo show that some algorithms are equivalent to the stochastic gradient descent, the cross-entropy method and algorithms to estimate distribution
  • 2005, first applications to protein folding problems.
  • 2012, Prabhakar and colleagues publish research relating to the operation of individual ants communicating in tandem without pheromones, mirroring the principles of computer network organization. The communication model has been compared to the Transmission Control Protocol.
  • 2016, first application to peptide sequence design.

Publications (selected)

  • M. Dorigo, 1992. Optimization, Learning and Natural Algorithms, PhD thesis, Politecnico di Milano, Italy.
  • M. Dorigo, V. Maniezzo & A. Colorni, 1996. "Ant System: Optimization by a Colony of Cooperating Agents", IEEE Transactions on Systems, Man, and Cybernetics–Part B, 26 (1): 29–41.
  • M. Dorigo & L. M. Gambardella, 1997. "Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem". IEEE Transactions on Evolutionary Computation, 1 (1): 53–66.
  • M. Dorigo, G. Di Caro & L. M. Gambardella, 1999. "Ant Algorithms for Discrete Optimization". Artificial Life, 5 (2): 137–172.
  • E. Bonabeau, M. Dorigo et G. Theraulaz, 1999. Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press. ISBN 0-19-513159-2
  • M. Dorigo & T. Stützle, 2004. Ant Colony Optimization, MIT Press. ISBN 0-262-04219-3
  • M. Dorigo, 2007. "Ant Colony Optimization". Scholarpedia.
  • C. Blum, 2005 "Ant colony optimization: Introduction and recent trends". Physics of Life Reviews, 2: 353-373
  • M. Dorigo, M. Birattari & T. Stützle, 2006 Ant Colony Optimization: Artificial Ants as a Computational Intelligence Technique. TR/IRIDIA/2006-023
  • Mohd Murtadha Mohamad,"Articulated Robots Motion Planning Using Foraging Ant Strategy",Journal of Information Technology - Special Issues in Artificial Intelligence, Vol.20, No. 4 pp. 163–181, December 2008, ISSN 0128-3790.
  • N. Monmarché, F. Guinand & P. Siarry (eds), "Artificial Ants", August 2010 Hardback 576 pp. ISBN 978-1-84821-194-0.
  • A. Kazharov, V. Kureichik, 2010. "Ant colony optimization algorithms for solving transportation problems", Journal of Computer and Systems Sciences International, Vol. 49. No. 1. pp. 30–43.
  • K. Saleem, N. Fisal, M. A. Baharudin, A. A. Ahmed, S. Hafizah and S. Kamilah, "Ant colony inspired self-optimized routing protocol based on cross layer architecture for wireless sensor networks", WSEAS Trans. Commun., vol. 9, no. 10, pp. 669–678, 2010. ISBN 978-960-474-200-4
  • K. Saleem and N. Fisal, "Enhanced Ant Colony algorithm for self-optimized data assured routing in wireless sensor networks", Networks (ICON) 2012 18th IEEE International Conference on, pp. 422–427. ISBN 978-1-4673-4523-1