In queueing theory, a discipline within the mathematical theory of probability, the backpressure routing algorithm is a method for directing traffic around a queueing network that achieves maximum network throughput, which is established using concepts of Lyapunov drift. Backpressure routing considers the situation where each job can visit multiple service nodes in the network. It is an extension of max-weight scheduling, where each job visits only a single service node.
Introduction to backpressure routing
Backpressure routing is an algorithm for dynamically routing traffic over a multi-hop network by using congestion gradients. The algorithm can be applied to wireless communication networks, including sensor networks, mobile ad hoc networks (MANETs), and heterogeneous networks with wireless and wireline components. Backpressure principles can also be applied to other areas, such as the study of product assembly systems and processing networks. This article focuses on communication networks, where packets from multiple data streams arrive and must be delivered to appropriate destinations. The backpressure algorithm operates in slotted time. Every time slot it seeks to route data in directions that maximize the differential backlog between neighboring nodes. This is similar to how water flows through a network of pipes via pressure gradients. However, the backpressure algorithm can be applied to multi-commodity networks (where different packets may have different destinations), and to networks where transmission rates can be selected from a set of (possibly time-varying) options. Attractive features of the backpressure algorithm are: (i) it leads to maximum network throughput, (ii) it is provably robust to time-varying network conditions, (iii) it can be implemented without knowing traffic arrival rates or channel state probabilities. However, the algorithm may introduce large delays, and may be difficult to implement exactly in networks with interference. Modifications of backpressure that reduce delay and simplify implementation are described below under Improving delay and Distributed backpressure.
Backpressure routing has mainly been studied in a theoretical context. In practice, ad hoc wireless networks have typically implemented alternative routing methods based on shortest path computations or network flooding, such as Ad Hoc on-Demand Distance Vector Routing (AODV), geographic routing, and extremely opportunistic routing (ExOR). However, the mathematical optimality properties of backpressure have motivated recent experimental demonstrations of its use on wireless testbeds at the University of Southern California and at North Carolina State University.
Origins
The original backpressure algorithm was developed by Tassiulas and Ephremides. They considered a multi-hop packet radio network with random packet arrivals and a fixed set of link selection options. Their algorithm consisted of a max-weight link selection stage and a differential backlog routing stage. An algorithm related to backpressure, designed for computing multi-commodity network flows, was developed by Awerbuch and Leighton. The backpressure algorithm was later extended by Neely, Modiano, and Rohrs to treat scheduling for mobile networks. Backpressure is mathematically analyzed via the theory of Lyapunov drift, and can be used jointly with flow control mechanisms to provide network utility maximization (see Backpressure with utility optimization and penalty minimization below).
How it works
Backpressure routing is designed to make decisions that (roughly) minimize the sum of squares of queue backlogs in the network from one timeslot to the next. The precise mathematical development of this technique is described in later sections. This section describes the general network model and the operation of backpressure routing with respect to this model.
The multi-hop queueing network model
Consider a multi-hop network with N nodes (see Fig. 1 for an example with N = 6). The network operates in slotted time t ∈ {0, 1, 2, ...}. On each slot, new data can arrive to the network, and routing and transmission scheduling decisions are made in an effort to deliver all data to its proper destination.
Let data that is destined for node c (where c ∈ {1, ..., N}) be labeled as commodity c data, and let data in each node be stored according to its commodity. Let Qn(c)(t) denote the backlog of commodity c data currently queued at node n, and let μab(t) denote the transmission rate offered over link (a,b) on slot t, chosen from a set of options that depends on the current topology state S(t).
This time-varying network model was first developed for the case when transmission rates every slot t were determined by general functions of a channel state matrix and a power allocation matrix. The model can also be used when rates are determined by other control decisions, such as server allocation, sub-band selection, coding type, and so on. It assumes the supportable transmission rates are known and there are no transmission errors. Extended formulations of backpressure routing can be used for networks with probabilistic channel errors, including networks that exploit the wireless broadcast advantage via multi-receiver diversity.
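For concreteness, the following sketch shows one way such a rate function might look, with link rates computed from a hypothetical channel gain matrix and power allocation via a log(1 + SINR) formula. The function `rate_matrix`, the interference model, and all numeric values are illustrative assumptions, not part of the original formulation.

```python
import math

def rate_matrix(channel_gain, power, noise=1.0):
    """Illustrative rate function: mu_ab(t) = log2(1 + SINR_ab), where the
    interference seen at receiver b is the power received from all other
    active transmitters. This is just one possible instance of the general
    mapping from (channel state, power allocation) to supportable rates."""
    N = len(channel_gain)
    mu = [[0.0] * N for _ in range(N)]
    for a in range(N):
        for b in range(N):
            if a == b or power[a] == 0.0:
                continue
            signal = channel_gain[a][b] * power[a]
            interference = sum(channel_gain[k][b] * power[k]
                               for k in range(N) if k not in (a, b))
            mu[a][b] = math.log2(1.0 + signal / (noise + interference))
    return mu

# Example: 3 nodes, node 2 silent on this slot (all values hypothetical).
gains = [[0.0, 1.0, 0.5],
         [1.0, 0.0, 0.8],
         [0.5, 0.8, 0.0]]
print(rate_matrix(gains, power=[1.0, 1.0, 0.0]))
```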
The backpressure control decisions
Every slot t the backpressure controller observes S(t) and performs the following 3 steps:
Choosing the optimal commodity
Each node a observes its own queue backlogs and the backlogs in its current neighbors. A current neighbor of node a is a node b such that it is possible to choose a non-zero transmission rate μab(t) over the link (a,b) on the current slot.
The set of neighbors of a given node determines the set of outgoing links it can use for transmission on the current slot. For each outgoing link (a,b), the optimal commodity c*ab(t) is defined as the commodity that maximizes the differential backlog Qa(c)(t) − Qb(c)(t) over all commodities c.
Any ties in choosing the optimal commodity are broken arbitrarily.
An example is shown in Fig. 2. The example assumes each queue currently has only 3 commodities: red, green, and blue, and these are measured in integer units of packets. Focusing on the directed link (1,2), a differential backlog is computed for each of the three commodities.
Hence, the optimal commodity to send over link (1,2) on slot t is the green commodity. On the other hand, the optimal commodity to send over the reverse link (2,1) on slot t is the blue commodity.
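The commodity selection step can be sketched in a few lines of Python. The data layout (per-node dictionaries of backlogs) and the numeric values below are illustrative assumptions, not the actual values of Fig. 2:

```python
def optimal_commodity(backlog, a, b):
    """Return the commodity maximizing the differential backlog
    Qa(c)(t) - Qb(c)(t) on link (a, b), together with that maximum.
    Ties are broken arbitrarily (here, by whichever max() returns)."""
    best = max(backlog[a],
               key=lambda c: backlog[a][c] - backlog[b].get(c, 0))
    return best, backlog[a][best] - backlog[b].get(best, 0)

# Hypothetical backlogs (in packets) for nodes 1 and 2, three commodities:
Q = {1: {"red": 1, "green": 5, "blue": 2},
     2: {"red": 2, "green": 1, "blue": 4}}
print(optimal_commodity(Q, 1, 2))   # ('green', 4): green wins on link (1,2)
print(optimal_commodity(Q, 2, 1))   # ('blue', 2):  blue wins on link (2,1)
```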
Choosing the μab(t) matrix
Once the optimal commodities have been determined for each link (a,b), the network controller computes the following weights Wab(t): the weight Wab(t) is the differential backlog of the optimal commodity on link (a,b), maxed with zero, so that Wab(t) = max[Qa(c*)(t) − Qb(c*)(t), 0], where c* = c*ab(t).
The transmission rates are then selected by solving the following max-weight problem (breaking ties arbitrarily):
Maximize: Σ_{a,b} μab(t) Wab(t)   (Eq. (1))
Subject to: (μab(t)) being a transmission rate matrix supportable under the current topology state S(t)   (Eq. (2))
As an example of the max-weight decision, suppose that on the current slot t, the differential backlogs on each link of the 6 node network lead to a particular set of link weights Wab(t).
While the set of supportable transmission rate matrices under the current topology state S(t) may in general be large, in this example it is assumed to contain only 4 options. These four possibilities are illustrated in Fig. 3, each represented in matrix form by a transmission rate matrix (μab(t)); option (a), for instance, activates only the single link (1,5).
Observe that node 6 can neither send nor receive under any of these possibilities. This might arise because node 6 is currently out of communication range. The weighted sum of rates Σ_{a,b} Wab(t) μab(t) is then computed for each of the 4 possibilities.
Because there is a tie for the maximum weight of 12, the network controller can break the tie arbitrarily by choosing any of the maximizing options.
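When the rate options form a finite list, the max-weight selection of Eqs. (1)-(2) reduces to evaluating one weighted sum per option and keeping a maximizer. A minimal sketch, with illustrative weights and options (not the values from Fig. 3):

```python
def max_weight_rates(weights, rate_options):
    """Pick the rate matrix maximizing sum_ab Wab(t) * mu_ab(t) over a finite
    list of candidate matrices; ties are broken arbitrarily (first maximizer)."""
    def weighted_sum(mu):
        return sum(weights.get(link, 0) * rate for link, rate in mu.items())
    return max(rate_options, key=weighted_sum)

# Hypothetical link weights and two candidate rate selections:
W = {(1, 2): 4, (1, 5): 3, (3, 4): 2}
options = [
    {(1, 5): 2},                 # option (a): only link (1,5) active, rate 2
    {(1, 2): 2, (3, 4): 2},      # option (b): links (1,2) and (3,4) active
]
print(max_weight_rates(W, options))   # option (b): 4*2 + 2*2 = 12 beats 3*2 = 6
```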
Finalizing the routing variables
Suppose now that the optimal commodities c*ab(t) and the transmission rates μab(t) have been determined for each link (a,b). If the weight Wab(t) of a link is zero (that is, the differential backlog of its optimal commodity is not positive), then no data is sent over that link. Otherwise, the network offers to send μab(t) units of commodity c*ab(t) data over link (a,b), and offers zero rate to all other commodities on that link. This defines the routing variables μab(c)(t) for each link (a,b) and each commodity c.
The value of μab(c)(t) represents the transmission rate offered to commodity c data over link (a,b) on slot t. A node may not have enough data of a given commodity to support transmission at the full offered rates on all of its outgoing links.
In this case, all of the data of that commodity held by the node is sent, distributed arbitrarily over the corresponding outgoing links, and null (idle fill) data is used to make up any unused portion of the offered rates.
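A minimal sketch of this final step, assuming the optimal commodities, weights, and rates from the previous steps are available as dictionaries; the data structures and values are illustrative:

```python
def finalize_routing(backlog, rates, opt_commodity, weights):
    """Offer the full rate mu_ab(t) to the optimal commodity on every link
    whose weight Wab(t) is positive, and nothing otherwise. Returns the real
    data actually sent per (link, commodity); any shortfall in the offered
    rate is made up with null (idle fill) data."""
    remaining = {n: dict(q) for n, q in backlog.items()}   # working copy
    sent = {}
    for (a, b), mu in rates.items():
        if mu <= 0 or weights.get((a, b), 0) <= 0:
            continue                      # zero weight: send no data on (a,b)
        c = opt_commodity[(a, b)]
        real = min(mu, remaining[a].get(c, 0))   # rest of mu is idle fill
        remaining[a][c] = remaining[a].get(c, 0) - real
        sent[(a, b, c)] = real
    return sent

# Hypothetical single-link example: rate 5 offered, only 3 green packets held.
Q = {1: {"green": 3}, 2: {"green": 0}}
print(finalize_routing(Q, {(1, 2): 5}, {(1, 2): "green"}, {(1, 2): 3}))
# -> {(1, 2, 'green'): 3}: 3 real packets plus 2 units of idle fill
```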
Improving delay
It is important to note that the backpressure algorithm does not use any pre-specified paths. Paths are learned dynamically, and may be different for different packets. Delay can be very large, particularly when the system is lightly loaded so that there is not enough pressure to push data towards the destination. As an example, suppose one packet enters the network, and nothing else ever enters. This packet may take a loopy walk through the network and never arrive at its destination because no pressure gradients build up. This does not contradict the throughput optimality or stability properties of backpressure because the network has at most one packet at any time and hence is trivially stable (achieving a delivery rate of 0, equal to the arrival rate).
It is also possible to implement backpressure on a set of pre-specified paths. This can restrict the capacity region, but might improve in-order delivery and delay. Another way to improve delay, without affecting the capacity region, is to use an enhanced version that biases link weights towards desirable directions. Simulations of such biasing have shown significant delay improvements. Note that backpressure does not require First-in-First-Out (FIFO) service at the queues. It has been observed that Last-in-First-Out (LIFO) service can dramatically improve delay for the vast majority of packets, without affecting throughput.
Distributed backpressure
Note that once the transmission rates μab(t) have been selected, the routing decision variables μab(c)(t) can be computed in a simple distributed manner: each node needs only the queue backlog differentials between itself and its current neighbors. However, selecting the transmission rates themselves requires solving the max-weight problem in Eqs. (1)-(2). In networks where the links do not interfere with one another, this problem decomposes so that each node can select its own rates independently. In networks with inter-channel interference, the max-weight selection couples the decisions of different nodes and can be difficult to solve exactly in a distributed way.
A distributed approach for interference networks with link rates that are determined by the signal-to-interference-plus-noise ratio (SINR) can be carried out using randomization. Each node randomly decides to transmit every slot t (transmitting a "null" packet if it currently does not have a packet to send). The actual transmission rates, and the corresponding actual packets to send, are determined by a 2-step handshake: On the first step, the randomly selected transmitter nodes send a pilot signal with signal strength proportional to that of an actual transmission. On the second step, all potential receiver nodes measure the resulting interference and send that information back to the transmitters. The SINR levels for all outgoing links (n,b) are then known to all nodes n, and each node n can decide its transmission rates μnb(t) and routing variables μnb(c)(t) based on this information.
Alternative distributed implementations can roughly be grouped into two classes: The first class of algorithms considers constant multiplicative factor approximations to the max-weight problem, and yields constant-factor throughput results. The second class considers additive approximations to the max-weight problem, based on updating solutions to the max-weight problem over time. Algorithms in this second class seem to require static channel conditions and longer (often non-polynomial) convergence times, although they can provably achieve maximum throughput under appropriate assumptions. Additive approximations are often useful for proving optimality of backpressure when implemented with out-of-date queue backlog information (see Exercise 4.10 of the Neely text).
Mathematical construction via Lyapunov drift
This section shows how the backpressure algorithm arises as a natural consequence of greedily minimizing a bound on the change in the sum of squares of queue backlogs from one slot to the next.
Control decision constraints and the queue update equation
Consider a multi-hop network with N nodes, as described in the above section. Every slot t, the network controller observes the topology state S(t) and chooses transmission rates μab(t) and routing variables μab(c)(t) subject to the constraints that (μab(t)) is a rate matrix supportable under S(t), that μab(c)(t) ≥ 0 for every commodity c, and that Σ_c μab(c)(t) ≤ μab(t) for every link (a,b).
Once these routing variables are determined, transmissions are made (using idle fill if necessary), and the resulting queue backlogs satisfy the following:
Qn(c)(t+1) ≤ max[Qn(c)(t) − Σ_b μnb(c)(t), 0] + Σ_a μan(c)(t) + An(c)(t)   (Eq. (6))
where An(c)(t) is the amount of new commodity c data that exogenously arrives to node n on slot t. The relation holds with inequality rather than equality because the actual endogenous arrivals from other nodes may be less than the offered rates μan(c)(t) when those nodes have little or no data of commodity c to send.
It is assumed that data is removed from the network once it reaches its destination, so that Qn(c)(t) = 0 for all t whenever n = c.
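A minimal sketch of the queue update in Eq. (6) for a single slot, assuming backlogs keyed by (node, commodity) and routing variables keyed by (link, commodity); all names and values are illustrative:

```python
def update_queues(Q, routing, arrivals):
    """One-slot queue update in the spirit of Eq. (6):
        Qn(c)(t+1) = max[Qn(c)(t) - sum_b mu_nb(c)(t), 0]
                     + sum_a mu_an(c)(t) + An(c)(t)
    Eq. (6) is an inequality because neighbors may send less real data than
    the offered rates; equality is used here purely for simplicity."""
    new_Q = {}
    for (n, c), q in Q.items():
        out_rate = sum(r for (a, b, cc), r in routing.items() if a == n and cc == c)
        in_rate = sum(r for (a, b, cc), r in routing.items() if b == n and cc == c)
        new_Q[(n, c)] = max(q - out_rate, 0) + in_rate + arrivals.get((n, c), 0)
    return new_Q

# Hypothetical slot: 3 green units routed over (1,2), 2 new green arrivals at node 1.
Q = {(1, "green"): 4, (2, "green"): 1}
print(update_queues(Q, {(1, 2, "green"): 3}, {(1, "green"): 2}))
# -> {(1, 'green'): 3, (2, 'green'): 4}
```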
Lyapunov drift
Define Q(t) = (Qn(c)(t)) as the matrix of current queue backlogs, and define the following Lyapunov function:
L(t) = (1/2) Σ_{n,c} (Qn(c)(t))^2
This is a sum of the squares of queue backlogs (multiplied by 1/2 only for convenience in later analysis). The above sum is the same as summing over all n, c such that n ≠ c, because Qn(c)(t) = 0 whenever n = c.
The conditional Lyapunov drift Δ(t) is the expected change in L(t) over one slot, given the current backlogs:
Δ(t) = E[L(t+1) − L(t) | Q(t)]
Note that the following inequality holds for all non-negative real numbers x, μ, λ:
(max[x − μ, 0] + λ)^2 ≤ x^2 + μ^2 + λ^2 + 2x(λ − μ)
By squaring the queue update equation (Eq. (6)) and using the above inequality, it is not difficult to show that for all slots t and under any algorithm for choosing the transmission and routing variables μab(t) and μab(c)(t):
Δ(t) ≤ B − Σ_{n,c} Qn(c)(t) E[Σ_b μnb(c)(t) − Σ_a μan(c)(t) − An(c)(t) | Q(t)]
where B is a finite constant that depends on the second moments of arrivals and the maximum possible second moments of transmission rates.
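A sketch of the squaring step behind this bound, in the notation above (the constant B collects the squared rate and arrival terms):

```latex
% Sketch, per queue, writing Q = Qn(c)(t), mu_out = sum_b mu_nb(c)(t),
% mu_in = sum_a mu_an(c)(t), A = An(c)(t):
\begin{align*}
Q_n^{(c)}(t+1)^2
  &\le \bigl(\max[Q - \mu_{\mathrm{out}},\,0] + \mu_{\mathrm{in}} + A\bigr)^2 \\
  &\le Q^2 + \mu_{\mathrm{out}}^2 + (\mu_{\mathrm{in}} + A)^2
       + 2Q\,(\mu_{\mathrm{in}} + A - \mu_{\mathrm{out}}).
\end{align*}
% Summing over all (n,c), multiplying by 1/2, and taking conditional
% expectations given Q(t) yields
\begin{equation*}
\Delta(t) \le B
  - \sum_{n,c} Q_n^{(c)}(t)\,
    \mathbb{E}\Bigl[\sum_b \mu_{nb}^{(c)}(t)
      - \sum_a \mu_{an}^{(c)}(t) - A_n^{(c)}(t) \,\Bigm|\, Q(t)\Bigr].
\end{equation*}
```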
Minimizing the drift bound by switching the sums
The backpressure algorithm is designed to observe Q(t) and S(t) every slot t and to choose the transmission and routing variables μab(t) and μab(c)(t) to minimize the right-hand-side of the above drift bound. Since B is a constant and the backlogs Qn(c)(t) are known constants on slot t, this amounts to maximizing:
E[Σ_{n,c} Qn(c)(t) (Σ_b μnb(c)(t) − Σ_a μan(c)(t)) | Q(t)]
where the finite sums have been pushed through the expectations to illuminate the maximizing decision. By the principle of opportunistically maximizing an expectation, the above expectation is maximized by maximizing the function inside of it (given the observed Q(t) and S(t)). That is, the controller chooses its variables to maximize:
Σ_{n,c} Qn(c)(t) (Σ_b μnb(c)(t) − Σ_a μan(c)(t))
It is not immediately obvious what decisions maximize the above. This can be illuminated by switching the sums. Indeed, the above expression is the same as below:
Σ_{a,b} Σ_c μab(c)(t) (Qa(c)(t) − Qb(c)(t))
The weight Qa(c)(t) − Qb(c)(t) multiplying each routing variable μab(c)(t) is the differential backlog of commodity c over link (a,b).
Clearly one should choose μab(c)(t) = 0 for every commodity c other than the one with the largest differential backlog on link (a,b), and allocate the full rate μab(t) to that optimal commodity c*ab(t) whenever its differential backlog is positive (allocating zero rate on the link otherwise). With this choice, the above expression becomes:
Σ_{a,b} μab(t) Wab(t)
where Wab(t) = max[Qa(c*)(t) − Qb(c*)(t), 0] with c* = c*ab(t), as defined in the above section.
It remains only to choose the transmission rate matrix (μab(t)), from the set of options supportable under the current topology state S(t), to maximize this weighted sum of rates.
The above problem is identical to the max-weight problem in Eqs. (1)-(2). The backpressure algorithm uses the max-weight decisions for the transmission rates (μab(t)), and then chooses the routing variables μab(c)(t) according to the optimal-commodity rule described above.
A remarkable property of the backpressure algorithm is that it acts greedily every slot t based only on the observed topology state S(t) and queue backlogs Q(t) for that slot. Thus, it does not require knowledge of the traffic arrival rates or of the topology state probabilities; nevertheless, as shown in the next section, it is throughput optimal.
Performance analysis
This section proves throughput optimality of the backpressure algorithm. For simplicity, the scenario where events are independent and identically distributed (i.i.d.) over slots is considered, although the same algorithm can be shown to work in non-i.i.d. scenarios (see below under Non-i.i.d. operation and universal scheduling).
Dynamic arrivals
Let An(c)(t) denote the amount of new commodity c data that exogenously arrives to node n on slot t, and let λn(c) = E[An(c)(t)] denote the arrival rates.
It is assumed that the arrival processes are i.i.d. over slots and have finite second moments.
Network capacity region
Assume the topology state S(t) is i.i.d. over slots with probabilities πS. The network capacity region Λ is the set of all arrival rate matrices (λn(c)) for which there exists some algorithm, conforming to the network structure described above, that stabilizes all queues. It can be shown that (λn(c)) is in Λ if and only if the queues can be stabilized by a stationary and randomized algorithm that chooses its transmission and routing variables every slot as a (possibly random) function of the current topology state S(t) only.
Such a stationary and randomized algorithm that bases decisions only on S(t) is called an S-only algorithm. It is often useful to assume that the arrival rate matrix lies in the interior of Λ, so that there is an ε > 0 and an S-only algorithm whose service rate to every queue exceeds that queue's arrival rate by at least ε.
As a technical requirement, it is assumed that the second moments of transmission rates μab(t) are finite under any algorithm for choosing these rates. This trivially holds if the largest possible transmission rate is bounded by a finite constant.
Comparing to S-only algorithms
Because the backpressure algorithm observes S(t) and Q(t) every slot t and chooses its transmission and routing variables to minimize the right-hand-side of the drift bound, its drift is no larger than the drift bound evaluated with the decisions of any alternative algorithm:
Δ(t) ≤ B − Σ_{n,c} Qn(c)(t) E[Σ_b μ*nb(c)(t) − Σ_a μ*an(c)(t) − An(c)(t) | Q(t)]
where the μ*ab(c)(t) are the decisions of any alternative algorithm, in particular of any S-only algorithm.
Now assume the arrival rate matrix (λn(c)) lies in the capacity region Λ. Then there exists an S-only algorithm whose expected service rate to every queue is at least that queue's arrival rate. Plugging the decisions of this S-only algorithm into the right-hand-side of the above bound, and noting that its conditional expectations do not depend on Q(t), gives:
Δ(t) ≤ B
Thus, the drift of a quadratic Lyapunov function is less than or equal to a constant B for all slots t. This fact, together with the assumption that queue arrivals have bounded second moments, implies the following mean rate stability result for all network queues:
lim_{t→∞} E[Qn(c)(t)]/t = 0
For a stronger understanding of average queue size, one can assume the arrival rates (λn(c)) lie in the interior of Λ, so that there is an ε > 0 for which an S-only algorithm serves every queue at a rate exceeding its arrival rate by at least ε. Plugging the decisions of that S-only algorithm into the drift bound gives:
Δ(t) ≤ B − ε Σ_{n,c} Qn(c)(t)
from which one immediately obtains the average backlog bound:
lim sup_{t→∞} (1/t) Σ_{τ=0}^{t−1} Σ_{n,c} E[Qn(c)(τ)] ≤ B/ε
It is interesting to note that this average queue size bound increases as the distance ε between the arrival rate matrix and the boundary of the capacity region Λ goes to zero, the same qualitative behavior as the average backlog of an M/M/1 queue whose service rate exceeds its arrival rate by ε.
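A sketch of the telescoping argument behind this bound, under the interior assumption above:

```latex
% Sketch: take expectations of the drift bound Delta(tau) <= B - eps * sum Q(tau)
% and sum over tau = 0, ..., t-1; the left side telescopes:
\begin{equation*}
\mathbb{E}[L(t)] - \mathbb{E}[L(0)]
  \;\le\; Bt \;-\; \epsilon \sum_{\tau=0}^{t-1} \sum_{n,c}
          \mathbb{E}\bigl[Q_n^{(c)}(\tau)\bigr].
\end{equation*}
% Since L(t) >= 0 and E[L(0)] is finite, dividing by eps*t and taking a
% lim sup gives the stated average backlog bound:
\begin{equation*}
\limsup_{t\to\infty} \frac{1}{t} \sum_{\tau=0}^{t-1} \sum_{n,c}
  \mathbb{E}\bigl[Q_n^{(c)}(\tau)\bigr] \;\le\; \frac{B}{\epsilon}.
\end{equation*}
```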
Non-i.i.d. operation and universal scheduling
The above analysis assumes i.i.d. properties for simplicity. However, the same backpressure algorithm can be shown to operate robustly in non-i.i.d. situations. When arrival processes and topology states are ergodic but not necessarily i.i.d., backpressure still stabilizes the system whenever the arrival rate matrix lies within the network capacity region. More generally, backpressure has been shown to offer stability and utility guarantees for arbitrary (possibly non-ergodic) sample paths via a universal scheduling analysis.
Backpressure with utility optimization and penalty minimization
Backpressure has been shown to work in conjunction with flow control via a drift-plus-penalty technique. This technique greedily maximizes a sum of drift and a weighted penalty expression. The penalty is weighted by a parameter V that determines a performance tradeoff. This technique ensures throughput utility is within O(1/V) of optimality while average delay is O(V). Thus, utility can be pushed arbitrarily close to optimality, with a corresponding tradeoff in average delay. Similar properties can be shown for average power minimization and for optimization of more general network attributes.
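In outline, the drift-plus-penalty rule chooses control actions every slot to greedily minimize a bound on the expression sketched below; the penalty process p(t) and the exact form of the bound depend on the problem being solved:

```latex
% Sketch: every slot t, after observing Q(t) and S(t), the controller takes
% the action that minimizes a bound on the drift-plus-penalty expression
\begin{equation*}
\Delta(t) \;+\; V\,\mathbb{E}\bigl[\,p(t)\,\bigm|\,Q(t)\bigr],
\end{equation*}
% where p(t) is the slot-t penalty (for example, minus the admitted network
% utility, or the power expenditure) and V >= 0 is the tradeoff parameter.
% Larger V weights the penalty more heavily: the achieved time-average
% penalty comes within O(1/V) of optimal while average backlog, and hence
% average delay, grows as O(V).
```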
Alternative algorithms for stabilizing queues while maximizing a network utility have been developed using fluid model analysis, joint fluid analysis and Lagrange multiplier analysis, convex optimization, and stochastic gradients. These approaches do not provide the O(1/V), O(V) utility-delay results.