Tuesday, 15 January 2013

5 Capacity of wireless channels
In the previous two chapters, we studied specific techniques for communication
over wireless channels. In particular, Chapter 3 is centered on the
point-to-point communication scenario and there the focus is on diversity as
a way to mitigate the adverse effect of fading. Chapter 4 looks at cellular
wireless networks as a whole and introduces several multiple access and
interference management techniques.
The present chapter takes a more fundamental look at the problem of
communication over wireless fading channels. We ask: what is the optimal
performance achievable on a given channel and what are the techniques to
achieve such optimal performance? We focus on the point-to-point scenario in
this chapter and defer the multiuser case until Chapter 6. The material covered
in this chapter lays down the theoretical basis of the modern development in
wireless communication to be covered in the rest of the book.
The framework for studying performance limits in communication is information
theory. The basic measure of performance is the capacity of a channel:
the maximum rate of communication for which arbitrarily small error
probability can be achieved. Section 5.1 starts with the important example
of the AWGN (additive white Gaussian noise) channel and introduces
the notion of capacity through a heuristic argument. The AWGN channel
is then used as a building block to study the capacity of wireless
fading channels. Unlike the AWGN channel, there is no single definition
of capacity for fading channels that is applicable in all scenarios. Several
notions of capacity are developed, and together they form a systematic
study of performance limits of fading channels. The various capacity
measures allow us to see clearly the different types of resources available
in fading channels: power, diversity and degrees of freedom. We will see
how the diversity techniques studied in Chapter 3 fit into this big picture.
More importantly, the capacity results suggest an alternative technique,
opportunistic communication, which will be explored further in the later
chapters.
5.1 AWGN channel capacity
Information theory was invented by Claude Shannon in 1948 to characterize
the limits of reliable communication. Before Shannon, it was widely believed
that the only way to achieve reliable communication over a noisy channel,
i.e., to make the error probability as small as desired, was to reduce the data
rate (by, say, repetition coding). Shannon showed the surprising result that
this belief is incorrect: by more intelligent coding of the information, one
can in fact communicate at a strictly positive rate but at the same time with
as small an error probability as desired. However, there is a maximal rate,
called the capacity of the channel, for which this can be done: if one attempts
to communicate at rates above the channel capacity, then it is impossible to
drive the error probability to zero.
In this section, the focus is on the familiar (real) AWGN channel:
$$y[m] = x[m] + w[m] \tag{5.1}$$
where $x[m]$ and $y[m]$ are the real input and output at time $m$ respectively and $w[m]$ is $\mathcal{N}(0, \sigma^2)$ noise, independent over time. The importance of this channel is two-fold:
• It is a building block of all of the wireless channels studied in this book.
• It serves as a motivating example of what capacity means operationally and
gives some sense as to why arbitrarily reliable communication is possible
at a strictly positive data rate.
5.1.1 Repetition coding
Using uncoded BPSK symbols $x[m] = \pm\sqrt{P}$, the error probability is $Q\bigl(\sqrt{P/\sigma^2}\bigr)$. To reduce the error probability, one can repeat the same symbol $N$ times to transmit the one bit of information. This is a repetition code of block length $N$, with codewords $\mathbf{x}_A = \sqrt{P}\,[1, \ldots, 1]^t$ and $\mathbf{x}_B = \sqrt{P}\,[-1, \ldots, -1]^t$. The codewords meet a power constraint of $P$ joules/symbol. If $\mathbf{x}_A$ is transmitted, the received vector is
$$\mathbf{y} = \mathbf{x}_A + \mathbf{w} \tag{5.2}$$
where $\mathbf{w} = [w[1], \ldots, w[N]]^t$. An error occurs when $\mathbf{y}$ is closer to $\mathbf{x}_B$ than to $\mathbf{x}_A$, and the error probability is given by
$$Q\left(\frac{\|\mathbf{x}_A - \mathbf{x}_B\|}{2\sigma}\right) = Q\left(\sqrt{\frac{NP}{\sigma^2}}\right) \tag{5.3}$$
which decays exponentially with the block length N. The good news is that
communication can now be done with arbitrary reliability by choosing a large
enough N. The bad news is that the data rate is only 1/N bits per symbol
time and with increasing N the data rate goes to zero.
The reliably communicated data rate with repetition coding can be marginally improved by using multilevel PAM (generalizing the two-level BPSK scheme from earlier). By repeating an $M$-level PAM symbol, with the levels equally spaced between $\pm\sqrt{P}$, the rate is $(\log M)/N$ bits per symbol time¹ and the error probability for the inner levels is equal to
$$Q\left(\frac{\sqrt{NP}}{(M-1)\sigma}\right) \tag{5.4}$$
As long as the number of levels $M$ grows at a rate less than $\sqrt{N}$, reliable communication is guaranteed at large block lengths. But the data rate is bounded by $(\log \sqrt{N})/N$ and this still goes to zero as the block length increases. Is that the price one must pay to achieve reliable communication?
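The trade-off above can be checked numerically. The following is a minimal sketch (the function names and the 0 dB operating point are illustrative assumptions, not from the text): the repetition-code error probability (5.3) decays exponentially in $N$, while the rate $1/N$ bits per symbol goes to zero.

```python
# Illustrative sketch: the repetition-code trade-off of Section 5.1.1.
# Error probability Q(sqrt(N*P/sigma^2)) per (5.3) vs. rate 1/N.
import math

def q_function(x: float) -> float:
    """Tail of the standard normal: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def repetition_error_prob(N: int, P: float, sigma2: float) -> float:
    """Error probability of a length-N BPSK repetition code, per (5.3)."""
    return q_function(math.sqrt(N * P / sigma2))

P, sigma2 = 1.0, 1.0  # assumed example values: 0 dB SNR
for N in (1, 10, 100):
    pe = repetition_error_prob(N, P, sigma2)
    rate = 1.0 / N  # bits per symbol time
    print(f"N={N:4d}  rate={rate:.3f} bits/symbol  Pe={pe:.3e}")
```

The error probability plummets with $N$, but so does the rate — exactly the dilemma the sphere-packing argument of the next section resolves.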
5.1.2 Packing spheres
Geometrically, repetition coding puts all the codewords (the M levels) in just
one dimension (Figure 5.1 provides an illustration; here, all the codewords
are on the same line). On the other hand, the signal space has a large number
of dimensions N. We have already seen in Chapter 3 that this is a very
inefficient way of packing codewords. To communicate more efficiently, the
codewords should be spread in all the N dimensions.
We can get an estimate on the maximum number of codewords that can
be packed in for the given power constraint P, by appealing to the classic
sphere-packing picture (Figure 5.2). By the law of large numbers, the
N-dimensional received vector y=x+w will, with high probability, lie within
Figure 5.1 Repetition coding packs points inefficiently in the high-dimensional signal space.
1 In this chapter, all logarithms are taken to be to the base 2 unless specified otherwise.
Figure 5.2 The number of noise spheres that can be packed into the y-sphere yields the maximum number of codewords that can be reliably distinguished. (The noise spheres have radius $\sqrt{N\sigma^2}$ and the y-sphere has radius $\sqrt{N(P+\sigma^2)}$.)
a y-sphere of radius $\sqrt{N(P+\sigma^2)}$; so without loss of generality we need only focus on what happens inside this y-sphere. On the other hand,
$$\frac{1}{N}\sum_{m=1}^{N} w[m]^2 \to \sigma^2 \tag{5.5}$$
as $N \to \infty$, by the law of large numbers again. So, for $N$ large, the received vector $\mathbf{y}$ lies, with high probability, near the surface of a noise sphere of radius $\sqrt{N\sigma^2}$ around the transmitted codeword (this is sometimes called the sphere-hardening effect). Reliable communication occurs as long as the noise spheres around the codewords do not overlap. The maximum number of codewords that can be packed with non-overlapping noise spheres is the ratio of the volume of the y-sphere to the volume of a noise sphere:²
$$\frac{\bigl(\sqrt{N(P+\sigma^2)}\bigr)^N}{\bigl(\sqrt{N\sigma^2}\bigr)^N} \tag{5.6}$$
This implies that the maximum number of bits per symbol that can be reliably communicated is
$$\frac{1}{N}\log\left[\frac{\bigl(\sqrt{N(P+\sigma^2)}\bigr)^N}{\bigl(\sqrt{N\sigma^2}\bigr)^N}\right] = \frac{1}{2}\log\left(1+\frac{P}{\sigma^2}\right) \tag{5.7}$$
This is indeed the capacity of the AWGN channel. (The argument might sound very heuristic; Appendix B.5 takes a more careful look.)
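The law of large numbers behind the sphere-hardening step, and the resulting capacity (5.7), can both be sketched numerically (the sample values below are assumptions for illustration only):

```python
# Illustrative sketch: (1/N) * sum(w[m]^2) concentrates around sigma^2 as N
# grows (the sphere-hardening effect of (5.5)), and the capacity (5.7).
import math, random

def awgn_capacity_real(P: float, sigma2: float) -> float:
    """Capacity (5.7) of the real AWGN channel, in bits per symbol."""
    return 0.5 * math.log2(1.0 + P / sigma2)

random.seed(0)
sigma2 = 2.0
for N in (10, 1000, 100000):
    w = [random.gauss(0.0, math.sqrt(sigma2)) for _ in range(N)]
    energy = sum(x * x for x in w) / N  # normalized noise energy
    print(f"N={N:6d}  (1/N)*sum(w^2) = {energy:.3f}  (sigma^2 = {sigma2})")

print(f"C_awgn at P/sigma^2 = 1: {awgn_capacity_real(1.0, 1.0)} bits/symbol")
```

For large $N$ the normalized noise energy hugs $\sigma^2$, which is why the noise sphere has a well-defined radius and the packing count (5.6) is meaningful.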
The sphere-packing argument only yields the maximum number of codewords
that can be packed while ensuring reliable communication. How to construct
codes to achieve the promised rate is another story. In fact, in Shannon’s
argument, he never explicitly constructed codes. What he showed is that if
² The volume of an $N$-dimensional sphere of radius $r$ is proportional to $r^N$; an exact expression is evaluated in Exercise B.10.
one picks the codewords randomly and independently, with the components of each codeword i.i.d. $\mathcal{N}(0, P)$, then with very high probability the randomly chosen code will do the job at any rate $R < C$. This is the so-called i.i.d.
Gaussian code. A sketch of this random coding argument can be found in
Appendix B.5.
From an engineering standpoint, the essential problem is to identify easily
encodable and decodable codes that have performance close to the capacity.
The study of this problem is a separate field in itself and Discussion 5.1
briefly chronicles the success story: codes that operate very close to capacity
have been found and can be implemented in a relatively straightforward way
using current technology. In the rest of the book, these codes are referred to
as “capacity-achieving AWGN codes”.
Discussion 5.1 Capacity-achieving AWGN channel codes
Consider a code for communication over the real AWGN channel in (5.1).
The ML decoder chooses the nearest codeword to the received vector as
the most likely transmitted codeword. The closer two codewords are to
each other, the higher the probability of confusing one for the other: this
yields a geometric design criterion for the set of codewords, i.e., place
the codewords as far apart from each other as possible. While such a set
of maximally spaced codewords are likely to perform very well, this in
itself does not constitute an engineering solution to the problem of code
construction: what is required is an arrangement that is “easy” to describe
and “simple” to decode. In other words, the computational complexity of
encoding and decoding should be practical.
Many of the early solutions centered around the theme of ensuring efficient ML decoding. The search for codes that have this property leads to a rich class of codes with nice algebraic properties, but their performance is quite far from capacity. A significant breakthrough occurred when the stringent ML decoding requirement was relaxed to an approximate one. Iterative decoding algorithms with near-ML performance have led to turbo and low-density parity check (LDPC) codes.
A large ensemble of linear parity check codes can be considered in conjunction with the iterative decoding algorithm. Codes with good performance can be found offline and they have been verified to perform very close to capacity. To get a feel for their performance, we consider some sample performance numbers. The capacity of the AWGN channel at 0 dB SNR is 0.5 bits per symbol. The error probability of a carefully designed LDPC code in these operating conditions (rate 0.5 bits per symbol, and the signal-to-noise ratio is equal to 0.1 dB) with a block length of 8000 bits is approximately $10^{-4}$. With a larger block length, much smaller error probabilities have been achieved. These modern developments are well surveyed in [100].
The capacity of the AWGN channel is probably the most well-known
result of information theory, but it is in fact only a special case of Shannon’s
general theory applied to a specific channel. This general theory is outlined
in Appendix B. All the capacity results used in the book can be derived from
this general framework. To focus more on the implications of the results in the main text, the derivation of these results is relegated to Appendix B. In the main text, the capacities of the channels looked at are justified by either transforming the channels back to the AWGN channel, or by using the type of heuristic sphere-packing arguments we have just seen.

Figure 5.3 The three communication schemes when viewed in $N$-dimensional space: (a) uncoded signaling: error probability is poor since large noise in any dimension is enough to confuse the receiver; (b) repetition code: codewords are now separated in all dimensions, but there are only a few codewords packed in a single dimension; (c) capacity-achieving code: codewords are separated in all dimensions and there are many of them spread out in the space.

Summary 5.1 Reliable rate of communication and capacity
• Reliable communication at rate $R$ bits/symbol means that one can design codes at that rate with arbitrarily small error probability.
• To get reliable communication, one must code over a long block; this is to exploit the law of large numbers to average out the randomness of the noise.
• Repetition coding over a long block can achieve reliable communication, but the corresponding data rate goes to zero with increasing block length.
• Repetition coding does not pack the codewords in the available degrees of freedom in an efficient manner. One can pack a number of codewords that is exponential in the block length and still communicate reliably. This means the data rate can be strictly positive even as reliability is increased arbitrarily by increasing the block length.
• The maximum data rate at which reliable communication is possible is called the capacity $C$ of the channel.
• The capacity of the (real) AWGN channel with power constraint $P$ and noise variance $\sigma^2$ is
$$C_{\text{awgn}} = \frac{1}{2}\log\left(1+\frac{P}{\sigma^2}\right) \tag{5.8}$$
and the engineering problem of constructing codes close to this performance has been successfully addressed.

Figure 5.3 summarizes the three communication schemes discussed.
5.2 Resources of the AWGN channel
The AWGN capacity formula (5.8) can be used to identify the roles of the
key resources of power and bandwidth.
5.2.1 Continuous-time AWGN channel
Consider a continuous-time AWGN channel with bandwidth W Hz, power
constraint ¯P watts, and additive white Gaussian noise with power spectral
density $N_0/2$. Following the passband–baseband conversion and sampling at a rate of $W$ complex samples per second (as described in Chapter 2), this can be represented by a discrete-time complex baseband channel:
$$y[m] = x[m] + w[m] \tag{5.9}$$
where $w[m]$ is $\mathcal{CN}(0, N_0)$ and is i.i.d. over time. Note that since the noise is independent in the I and Q components, each use of the complex channel can be thought of as two independent uses of a real AWGN channel. The noise variance and the power constraint per real symbol are $N_0/2$ and $\bar{P}/(2W)$ respectively. Hence, the capacity of the channel is
$$\frac{1}{2}\log\left(1+\frac{\bar{P}}{N_0 W}\right) \text{ bits per real dimension} \tag{5.10}$$
or
$$\log\left(1+\frac{\bar{P}}{N_0 W}\right) \text{ bits per complex dimension} \tag{5.11}$$
This is the capacity in bits per complex dimension or degree of freedom. Since there are $W$ complex samples per second, the capacity of the continuous-time AWGN channel is
$$C_{\text{awgn}}(\bar{P}, W) = W\log\left(1+\frac{\bar{P}}{N_0 W}\right) \text{ bits/s} \tag{5.12}$$
Note that $\mathsf{SNR} := \bar{P}/(N_0 W)$ is the SNR per (complex) degree of freedom. Hence, AWGN capacity can be rewritten as
$$C_{\text{awgn}} = \log(1+\mathsf{SNR}) \text{ bits/s/Hz} \tag{5.13}$$
This formula measures the maximum achievable spectral efficiency through the AWGN channel as a function of the SNR.
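A short sketch of the formulas (5.12)–(5.13); the bandwidth, noise density and power below are assumed example values, not taken from the text:

```python
# Illustrative sketch of the continuous-time AWGN capacity (5.12) and the
# spectral efficiency (5.13).
import math

def awgn_capacity_bps(P_bar: float, W: float, N0: float) -> float:
    """C = W * log2(1 + P_bar / (N0 * W)) in bits/s, per (5.12)."""
    return W * math.log2(1.0 + P_bar / (N0 * W))

def spectral_efficiency(snr: float) -> float:
    """log2(1 + SNR) in bits/s/Hz, per (5.13)."""
    return math.log2(1.0 + snr)

W = 1e6        # 1 MHz bandwidth (assumed)
N0 = 1e-9      # noise power spectral density in W/Hz (assumed)
P_bar = 1.0    # received power in W (assumed), so SNR = 1000 (30 dB)
snr = P_bar / (N0 * W)
C = awgn_capacity_bps(P_bar, W, N0)
print(f"SNR = {snr:.0f}, C = {C/1e6:.2f} Mbps, "
      f"spectral efficiency = {spectral_efficiency(snr):.2f} bits/s/Hz")
```

The capacity in bits/s is just the spectral efficiency scaled by the bandwidth, which is the content of (5.13).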
5.2.2 Power and bandwidth
Let us ponder the significance of the capacity formula (5.12) to a communication
engineer. One way of using this formula is as a benchmark for evaluating
the performance of channel codes. For a system engineer, however, the main
significance of this formula is that it provides a high-level way of thinking
about how the performance of a communication system depends on the basic
resources available in the channel, without going into the details of specific
modulation and coding schemes used. It will also help identify the bottleneck
that limits performance.
The basic resources of the AWGN channel are the received power $\bar{P}$ and the bandwidth $W$. Let us first see how the capacity depends on the received power. To this end, a key observation is that the function
$$f(\mathsf{SNR}) := \log(1+\mathsf{SNR}) \tag{5.14}$$
is concave, i.e., $f''(x) \le 0$ for all $x \ge 0$ (Figure 5.4). This means that increasing the power $\bar{P}$ suffers from a law of diminishing marginal returns: the higher the SNR, the smaller the effect on capacity. In particular, let us look at the low and the high SNR regimes. Observe that
$$\log_2(1+x) \approx x\log_2 e \quad \text{when } x \approx 0, \tag{5.15}$$
$$\log_2(1+x) \approx \log_2 x \quad \text{when } x \gg 1. \tag{5.16}$$
Thus, when the SNR is low, the capacity increases linearly with the received power $\bar{P}$: every 3 dB increase in (or doubling of) the power doubles the capacity. When the SNR is high, the capacity increases logarithmically with $\bar{P}$: every 3 dB increase in the power yields only one additional bit per dimension.
This phenomenon should not come as a surprise. We have already seen in
Figure 5.4 Spectral efficiency $\log(1+\mathsf{SNR})$ of the AWGN channel.
Chapter 3 that packing many bits per dimension is very power-inefficient.
The capacity result says that this phenomenon not only holds for specific
schemes but is in fact fundamental to all communication schemes. In fact,
for a fixed error probability, the data rate of uncoded QAM also increases
logarithmically with the SNR (Exercise 5.7).
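The two regimes in (5.15)–(5.16) can be verified with a few lines of arithmetic (the SNR values chosen here are illustrative assumptions):

```python
# Numerical check of the approximations (5.15)-(5.16):
# log2(1+x) ~ x*log2(e) at low SNR, and ~ log2(x) at high SNR.
import math

def exact(x: float) -> float:
    return math.log2(1.0 + x)

low, high = 0.01, 1000.0  # assumed low-SNR and high-SNR operating points
print(exact(low), low * math.log2(math.e))   # nearly equal at low SNR
print(exact(high), math.log2(high))          # nearly equal at high SNR
# Doubling the power: at low SNR the capacity roughly doubles,
# at high SNR it gains about one bit per dimension.
print(exact(2 * low) / exact(low))           # close to 2
print(exact(2 * high) - exact(high))         # close to 1
```

The last two lines are the "3 dB doubles capacity" and "3 dB buys one bit" rules of thumb from the text.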
The dependency of the capacity on the bandwidth W is somewhat more
complicated. From the formula, the capacity depends on the bandwidth in two
ways. First, it increases the degrees of freedom available for communication.
This can be seen in the linear dependency on $W$ for a fixed $\mathsf{SNR} = \bar{P}/(N_0 W)$. On the other hand, for a given received power $\bar{P}$, the SNR per dimension
decreases with the bandwidth as the energy is spread more thinly across the
degrees of freedom. In fact, it can be directly calculated that the capacity is
an increasing, concave function of the bandwidth W (Figure 5.5). When the
bandwidth is small, the SNR per degree of freedom is high, and then the
capacity is insensitive to small changes in SNR. Increasing W yields a rapid
increase in capacity because the increase in degrees of freedom more than
compensates for the decrease in SNR. The system is in the bandwidth-limited
regime. When the bandwidth is large such that the SNR per degree of freedom is small,
$$W\log\left(1+\frac{\bar{P}}{N_0 W}\right) \approx W\cdot\frac{\bar{P}}{N_0 W}\log_2 e = \frac{\bar{P}}{N_0}\log_2 e \tag{5.17}$$
In this regime, the capacity is proportional to the total received power across
the entire band. It is insensitive to the bandwidth, and increasing the bandwidth
has a small impact on capacity. On the other hand, the capacity is now linear
in the received power and increasing power has a significant effect. This is
the power-limited regime.
Figure 5.5 Capacity $C(W)$ as a function of the bandwidth $W$; here $\bar{P}/N_0 = 10^6$. The capacity grows from the bandwidth-limited region, through the power-limited region, toward the limit $(\bar{P}/N_0)\log_2 e$ as $W \to \infty$.
As $W$ increases, the capacity increases monotonically (why must it?) and reaches the asymptotic limit
$$C_\infty = \frac{\bar{P}}{N_0}\log_2 e \text{ bits/s} \tag{5.18}$$
This is the infinite bandwidth limit, i.e., the capacity of the AWGN channel with only a power constraint but no limitation on bandwidth. It is seen that even if there is no bandwidth constraint, the capacity is finite.
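The saturation toward the limit (5.18) is easy to see numerically. The sketch below uses the same $\bar{P}/N_0 = 10^6$ as Figure 5.5; the sweep of bandwidths is an illustrative assumption:

```python
# Illustrative sweep: capacity (5.12) grows with W but saturates at the
# infinite-bandwidth limit (P_bar/N0)*log2(e) of (5.18).
import math

P_over_N0 = 1e6  # P_bar / N0, matching Figure 5.5

def capacity(W: float) -> float:
    """C(W) = W * log2(1 + (P_bar/N0) / W) in bits/s."""
    return W * math.log2(1.0 + P_over_N0 / W)

limit = P_over_N0 * math.log2(math.e)
for W in (1e5, 1e6, 1e7, 1e8):
    print(f"W = {W:9.0f} Hz: C = {capacity(W)/1e6:.3f} Mbps "
          f"({100*capacity(W)/limit:.1f}% of the {limit/1e6:.3f} Mbps limit)")
```

By 100 MHz the capacity is already within a fraction of a percent of the limit: the system is deep in the power-limited regime.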
In some communication applications, the main objective is to minimize the required energy per bit $\mathcal{E}_b$ rather than to maximize the spectral efficiency. At a given power level $\bar{P}$, the required energy per bit is $\mathcal{E}_b = \bar{P}/C_{\text{awgn}}(\bar{P}, W)$. To minimize this, we should be operating in the most power-efficient regime, i.e., $\bar{P} \to 0$. Hence, the minimum $\mathcal{E}_b/N_0$ is given by
$$\left(\frac{\mathcal{E}_b}{N_0}\right)_{\min} = \lim_{\bar{P}\to 0} \frac{\bar{P}}{C_{\text{awgn}}(\bar{P}, W)\,N_0} = \frac{1}{\log_2 e} = -1.59\ \text{dB} \tag{5.19}$$
To achieve this, the SNR per degree of freedom goes to zero. The price to pay for the energy efficiency is delay: if the bandwidth $W$ is fixed, the communication rate (in bits/s) goes to zero. This essentially mimics the infinite bandwidth regime by spreading the total energy over a long time interval, instead of spreading the total power over a large bandwidth.
It was already mentioned that the success story of designing capacity-achieving AWGN codes is a relatively recent one. In the infinite bandwidth regime, however, it has long been known that orthogonal codes³ achieve the capacity (or, equivalently, achieve the minimum $\mathcal{E}_b/N_0$ of $-1.59$ dB). This is explored in Exercises 5.8 and 5.9.
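The limit in (5.19) can be approached numerically. In this sketch (the helper name and SNR values are assumptions for illustration), $\mathcal{E}_b/N_0 = \mathsf{SNR}/\log_2(1+\mathsf{SNR})$, which tends to $1/\log_2 e = \ln 2 \approx -1.59$ dB as the SNR per degree of freedom goes to zero:

```python
# Illustrative sketch: Eb/N0 = SNR / log2(1 + SNR) approaches
# 1/log2(e) = ln(2), i.e. about -1.59 dB, as SNR -> 0, per (5.19).
import math

def eb_n0(snr: float) -> float:
    """Energy per bit over N0 at SNR per degree of freedom snr."""
    return snr / math.log2(1.0 + snr)

for snr in (1.0, 0.1, 0.001):
    print(f"SNR = {snr:6.3f}: Eb/N0 = {10*math.log10(eb_n0(snr)):.3f} dB")

limit_db = 10 * math.log10(math.log(2))
print(f"limit as SNR -> 0: {limit_db:.2f} dB")
```

Note the monotonicity: lowering the SNR per degree of freedom always improves the energy efficiency, at the cost of rate (or delay).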
Example 5.2 Bandwidth reuse in cellular systems
The capacity formula for the AWGN channel can be used to conduct
a simple comparison of the two orthogonal cellular systems discussed
in Chapter 4: the narrowband system with frequency reuse versus the
wideband system with universal reuse. In both systems, users within a cell
are orthogonal and do not interfere with each other. The main parameter of interest is the reuse ratio $\rho \le 1$. If $W$ denotes the bandwidth per user within a cell, then each user transmission occurs over a bandwidth of $\rho W$. The parameter $\rho = 1$ yields the full reuse of the wideband OFDM system and $\rho < 1$ yields the narrowband system.
3 One example of orthogonal coding is the Hadamard sequences used in the IS-95 system
(Section 4.3.1). Pulse position modulation (PPM), where the position of the on–off pulse
(with large duty cycle) conveys the information, is another example.
Here we consider the uplink of this cellular system; the study of the
downlink in orthogonal systems is similar. A user at a distance $r$ is heard at the base-station with an attenuation of a factor $r^{-\alpha}$ in power; in free space the decay rate $\alpha$ is equal to 2, and the decay rate is 4 in the model of a single reflected path off the ground plane, cf. Section 2.1.5.
The uplink user transmissions in a neighboring cell that reuses the same
frequency band are averaged and this constitutes the interference (this
averaging is an important feature of the wideband OFDM system; in the
narrowband system in Chapter 4, there is no interference averaging but that
effect is ignored here). Let us denote by f
 the amount of total out-of-cell
interference at a base-station as a fraction of the received signal power of
a user at the edge of the cell. Since the amount of interference depends
on the number of neighboring cells that reuse the same frequency band,
the fraction f
 depends on the reuse ratio and also on the topology of the
cellular system.
For example, in a one-dimensional linear array of base-stations
(Figure 5.6), a reuse ratio of
 corresponds to one in every 1/
 cells using
the same frequency band. Thus the fraction f
 decays roughly as
. On
the other hand, in a two-dimensional hexagonal array of base-stations, a
reuse ratio of
 corresponds to the nearest reusing base-station roughly a
distance of

1/
 away: this means that the fraction f
 decays roughly as

/2. The exact fraction f
 takes into account geographical features of the
cellular system (such as shadowing) and the geographic averaging of the
interfering uplink transmissions; it is usually arrived at using numerical
simulations (Table 6.2 in [140] has one such enumeration for a full reuse
system). In a simple model where the interference is considered to come
from the center of the cell reusing the same frequency band, f
 can be
taken to be 2
/2 for the linear cellular system and 6
/4 /2 for the
hexagonal planar cellular system (see Exercises 5.2 and 5.3).
The received SINR at the base-station for a cell edge user is
$$\mathsf{SINR} = \frac{\mathsf{SNR}}{\rho + f_\rho\,\mathsf{SNR}} \tag{5.20}$$
where the SNR for the cell edge user is
$$\mathsf{SNR} := \frac{P}{N_0 W d^\alpha} \tag{5.21}$$
with $d$ the distance of the user to the base-station and $P$ the uplink transmit power.

Figure 5.6 A linear cellular system with base-stations along a line (representing a highway).

The operating value of the parameter SNR is decided by the
coverage of a cell: a user at the edge of a cell has to have a minimum SNR to be able to communicate reliably (at least at a fixed minimum rate) with the nearest base-station. Each base-station comes with a capital installation cost and recurring operation costs, and to minimize the number of base-stations, the cell size $d$ is usually made as large as possible; depending on the uplink transmit power capability, coverage decides the cell size $d$.
Using the AWGN capacity formula (cf. (5.14)), the rate of reliable communication for a user at the edge of the cell, as a function of the reuse ratio $\rho$, is
$$R(\rho) = \rho W \log_2(1+\mathsf{SINR}) = \rho W \log_2\left(1 + \frac{\mathsf{SNR}}{\rho + f_\rho\,\mathsf{SNR}}\right) \text{ bits/s} \tag{5.22}$$
The rate depends on the reuse ratio through the available degrees of freedom and the amount of out-of-cell interference. A large $\rho$ increases the available bandwidth per cell but also increases the amount of out-of-cell interference. The formula (5.22) allows us to study the optimal reuse factor. At low SNR, the system is not degree-of-freedom limited and the interference is small relative to the noise; thus the rate is insensitive to the reuse factor, and this can be verified directly from (5.22). On the other hand, at large SNR the interference grows as well and the SINR peaks at $1/f_\rho$. (A general rule of thumb in practice is to set SNR such that the interference is of the same order as the background noise; this will guarantee that the operating SINR is close to its largest value.) The largest rate is
$$\rho W \log_2\left(1 + \frac{1}{f_\rho}\right) \tag{5.23}$$
This rate goes to zero for small values of $\rho$; thus sparse reuse is not favored. It can be verified that universal reuse yields the largest rate in (5.23) for the hexagonal cellular system (Exercise 5.3). For the linear cellular model, the corresponding optimal reuse is $\rho = 1/2$, i.e., reusing the frequency every other cell (Exercise 5.5). The reduction in interference due to less reuse is more dramatic in the linear cellular system when compared to the hexagonal cellular system. This difference is highlighted in the optimal reuse ratios for the two systems at high SNR: universal reuse is preferred for the hexagonal cellular system while a reuse ratio of 1/2 is preferred for the linear cellular system.
This comparison also holds for a range of SNR between the small and
the large values: Figures 5.7 and 5.8 plot the rates in (5.22) for different
reuse ratios for the linear and hexagonal cellular systems respectively.
Here the power decay rate is fixed at $\alpha = 3$ and the rates are plotted as a
function of the SNR for a user at the edge of the cell, cf. (5.21). In the
Figure 5.7 Rates in bits/s/Hz as a function of the SNR for a user at the edge of the cell, for universal reuse and reuse ratios of 1/2 and 1/3 for the linear cellular system. The power decay rate $\alpha$ is set to 3.
Figure 5.8 Rates in bits/s/Hz as a function of the SNR for a user at the edge of the cell, for universal reuse and reuse ratios of 1/2 and 1/7 for the hexagonal cellular system. The power decay rate $\alpha$ is set to 3.
hexagonal cellular system, universal reuse is clearly preferred at all ranges of SNR. On the other hand, in a linear cellular system, universal reuse and a reuse of 1/2 have comparable performance, and if the operating SNR value is larger than a threshold (10 dB in Figure 5.7), then it pays to reuse, i.e., $R(1/2) > R(1)$. Otherwise, universal reuse is optimal. If this SNR threshold is within the rule of thumb setting mentioned earlier (i.e., the gain in rate is worth operating at this SNR), then reuse is preferred. This preference has to be traded off with the size of the cell dictated by (5.21) due to a transmit power constraint on the mobile device.
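The comparison in this example can be reproduced with a few lines of arithmetic. The sketch below uses the text's simple interference model ($f_\rho = 2(\rho/2)^\alpha$ linear, $6(\rho/4)^{\alpha/2}$ hexagonal) with $\alpha = 3$; the 20 dB operating point and function names are illustrative assumptions:

```python
# Illustrative sketch of Example 5.2: rate (5.22)
# R(rho) = rho * W * log2(1 + SNR / (rho + f_rho * SNR)), with W = 1.
import math

alpha = 3.0  # power decay rate, as in Figures 5.7-5.8

def f_linear(rho: float) -> float:
    return 2.0 * (rho / 2.0) ** alpha

def f_hex(rho: float) -> float:
    return 6.0 * (rho / 4.0) ** (alpha / 2.0)

def rate(rho: float, snr: float, f) -> float:
    """Cell-edge rate in bits/s/Hz for reuse ratio rho."""
    return rho * math.log2(1.0 + snr / (rho + f(rho) * snr))

snr = 10 ** (20.0 / 10)  # assumed cell-edge SNR of 20 dB
for rho in (1.0, 0.5, 1.0 / 3.0):
    print(f"linear, rho={rho:.3f}: R = {rate(rho, snr, f_linear):.3f} bits/s/Hz")
for rho in (1.0, 0.5, 1.0 / 7.0):
    print(f"hex,    rho={rho:.3f}: R = {rate(rho, snr, f_hex):.3f} bits/s/Hz")
```

At this high-SNR operating point the sketch reproduces the text's conclusion: reuse 1/2 beats universal reuse in the linear system, while universal reuse wins in the hexagonal system.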
5.3 Linear time-invariant Gaussian channels
We give three examples of channels which are closely related to the simple
AWGN channel and whose capacities can be easily computed. Moreover,
optimal codes for these channels can be constructed directly from an optimal
code for the basic AWGN channel. These channels are time-invariant, known
to both the transmitter and the receiver, and they form a bridge to the fading
channels which will be studied in the next section.
5.3.1 Single input multiple output (SIMO) channel
Consider a SIMO channel with one transmit antenna and $L$ receive antennas:
$$y_\ell[m] = h_\ell x[m] + w_\ell[m], \qquad \ell = 1, \ldots, L \tag{5.24}$$
where $h_\ell$ is the fixed complex channel gain from the transmit antenna to the $\ell$th receive antenna, and $w_\ell[m]$ is $\mathcal{CN}(0, N_0)$ additive Gaussian noise, independent across antennas. A sufficient statistic for detecting $x[m]$ from $\mathbf{y}[m] := [y_1[m], \ldots, y_L[m]]^t$ is
$$\tilde{y}[m] := \mathbf{h}^* \mathbf{y}[m] = \|\mathbf{h}\|^2 x[m] + \mathbf{h}^* \mathbf{w}[m] \tag{5.25}$$
where $\mathbf{h} := [h_1, \ldots, h_L]^t$ and $\mathbf{w}[m] := [w_1[m], \ldots, w_L[m]]^t$. This is an AWGN channel with received SNR $P\|\mathbf{h}\|^2/N_0$ if $P$ is the average energy per transmit symbol. The capacity of this channel is therefore
$$C = \log\left(1 + \frac{P\|\mathbf{h}\|^2}{N_0}\right) \text{ bits/s/Hz} \tag{5.26}$$
Multiple receive antennas increase the effective SNR and provide a power gain. For example, for $L = 2$ and $|h_1| = |h_2| = 1$, dual receive antennas provide a 3 dB power gain over a single antenna system. The linear combining (5.25) maximizes the output SNR and is sometimes called receive beamforming.
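The power gain of (5.25)–(5.26) can be sketched directly; the channel gains below are assumed example values with $|h_1| = |h_2| = 1$:

```python
# Illustrative sketch of receive beamforming: projecting y onto h yields an
# AWGN channel with SNR = P * ||h||^2 / N0, per (5.25)-(5.26).
import math

h = [1.0 + 0.0j, 0.6 + 0.8j]  # assumed channel gains, |h_l| = 1 each
P, N0 = 1.0, 1.0              # assumed symbol energy and noise level

h_norm2 = sum(abs(g) ** 2 for g in h)   # ||h||^2 = 2 here
snr_simo = P * h_norm2 / N0
snr_single = P * abs(h[0]) ** 2 / N0

gain_db = 10 * math.log10(snr_simo / snr_single)
print(f"||h||^2 = {h_norm2}, beamforming SNR gain = {gain_db:.2f} dB")
print(f"C = {math.log2(1 + snr_simo):.3f} bits/s/Hz "
      f"vs single-antenna {math.log2(1 + snr_single):.3f}")
```

With two unit-gain antennas the effective SNR doubles, i.e., the 3 dB power gain mentioned in the text.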
5.3.2 Multiple input single output (MISO) channel
Consider a MISO channel with $L$ transmit antennas and a single receive antenna:
$$y[m] = \mathbf{h}^* \mathbf{x}[m] + w[m] \tag{5.27}$$
where $\mathbf{h} = [h_1, \ldots, h_L]^t$ and $h_\ell$ is the (fixed) channel gain from transmit antenna $\ell$ to the receive antenna. There is a total power constraint of $P$ across the transmit antennas.
In the SIMO channel above, the sufficient statistic is the projection of the $L$-dimensional received signal onto $\mathbf{h}$: the projections in orthogonal directions contain noise that is not helpful to the detection of the transmit signal. A natural reciprocal transmission strategy for the MISO channel would send information only in the direction of the channel vector $\mathbf{h}$; information sent in any orthogonal direction will be nulled out by the channel anyway. Therefore, by setting
$$\mathbf{x}[m] = \frac{\mathbf{h}}{\|\mathbf{h}\|}\,\tilde{x}[m] \tag{5.28}$$
the MISO channel is reduced to the scalar AWGN channel:
$$y[m] = \|\mathbf{h}\|\,\tilde{x}[m] + w[m] \tag{5.29}$$
with a power constraint $P$ on the scalar input. The capacity of this scalar channel is
$$\log\left(1 + \frac{P\|\mathbf{h}\|^2}{N_0}\right) \text{ bits/s/Hz} \tag{5.30}$$
Can one do better than this scheme? Any reliable code for the MISO channel can be used as a reliable code for the scalar AWGN channel $y[m] = x[m] + w[m]$: if $\mathbf{X}_i$ are the transmitted $L \times N$ (space-time) code matrices for the MISO channel, then the received $1 \times N$ vectors $\mathbf{h}^* \mathbf{X}_i$ form a code for the scalar AWGN channel. Hence, the rate achievable by a reliable code for the MISO channel must be at most the capacity of a scalar AWGN channel with the same received SNR. Exercise 5.11 shows that the received SNR $P\|\mathbf{h}\|^2/N_0$ of the transmission strategy above is in fact the largest possible SNR given the transmit power constraint of $P$. Any other scheme has a lower received SNR and hence its reliable rate must be less than (5.30), the rate achieved by the proposed transmission strategy. We conclude that the capacity of the MISO channel is indeed
$$C = \log\left(1 + \frac{P\|\mathbf{h}\|^2}{N_0}\right) \text{ bits/s/Hz} \tag{5.31}$$
Intuitively, the transmission strategy maximizes the received SNR by having the received signals from the various transmit antennas add up in-phase (coherently) and by allocating more power to the transmit antenna with the better gain. This strategy, "aligning the transmit signal in the direction of the transmit antenna array pattern", is called transmit beamforming. Through beamforming, the MISO channel is converted into a scalar AWGN channel and thus any code which is optimal for the AWGN channel can be used directly.
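Transmit beamforming (5.28)–(5.29) can be sketched in a few lines; the channel gains are assumed example values:

```python
# Illustrative sketch of transmit beamforming: sending x = (h/||h||) * x_tilde
# makes the antenna signals add coherently, giving the scalar channel (5.29)
# with gain ||h|| while preserving the total transmit power.
import math

h = [0.6 + 0.8j, 1.0 + 0.0j]  # assumed channel gains
h_norm = math.sqrt(sum(abs(g) ** 2 for g in h))

def transmit_beamform(x_tilde: complex) -> list:
    """Per-antenna transmit signals; total energy equals |x_tilde|^2."""
    return [g / h_norm * x_tilde for g in h]

def channel_output(x: list) -> complex:
    """Noiseless MISO output y = h^* x = sum(conj(h_l) * x_l), per (5.27)."""
    return sum(g.conjugate() * xl for g, xl in zip(h, x))

x_tilde = 1.0 + 0.0j
y = channel_output(transmit_beamform(x_tilde))
print(f"||h|| = {h_norm:.4f}, noiseless output = {y:.4f} (= ||h|| * x_tilde)")
```

The per-antenna signals carry the conjugate phases of the channel, so they add up coherently at the receiver: the output equals $\|\mathbf{h}\|\tilde{x}$, as in (5.29).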
In both the SIMO and the MISO examples the benefit from having multiple
antennas is a power gain. To get a gain in degrees of freedom, one has to use
both multiple transmit and multiple receive antennas (MIMO). We will study
this in depth in Chapter 7.
5.3.3 Frequency-selective channel
Transformation to a parallel channel
Consider a time-invariant $L$-tap frequency-selective AWGN channel:
$$y[m] = \sum_{\ell=0}^{L-1} h_\ell\, x[m-\ell] + w[m] \tag{5.32}$$
with an average power constraint $P$ on each input symbol. In Section 3.4.4, we saw that the frequency-selective channel can be converted into $N_c$ independent sub-carriers by adding a cyclic prefix of length $L-1$ to a data vector of length $N_c$, cf. (3.137). Suppose this operation is repeated over blocks of data symbols (of length $N_c$ each, along with the corresponding cyclic prefix of length $L-1$); see Figure 5.9. Then communication over the $i$th OFDM block can be written as
$$\tilde{y}_n[i] = \tilde{h}_n \tilde{d}_n[i] + \tilde{w}_n[i], \qquad n = 0, 1, \ldots, N_c - 1 \tag{5.33}$$
Here,
$$\tilde{\mathbf{d}}[i] := [\tilde{d}_0[i], \ldots, \tilde{d}_{N_c-1}[i]]^t \tag{5.34}$$
$$\tilde{\mathbf{w}}[i] := [\tilde{w}_0[i], \ldots, \tilde{w}_{N_c-1}[i]]^t \tag{5.35}$$
$$\tilde{\mathbf{y}}[i] := [\tilde{y}_0[i], \ldots, \tilde{y}_{N_c-1}[i]]^t \tag{5.36}$$
are the DFTs of the input, the noise and the output of the $i$th OFDM block respectively. $\tilde{\mathbf{h}}$ is the DFT of the channel scaled by $\sqrt{N_c}$ (cf. (3.138)). Since the overhead of the cyclic prefix relative to the block length $N_c$ can be made arbitrarily small by choosing $N_c$ large, the capacity of the original frequency-selective channel is the same as the capacity of this transformed channel as $N_c \to \infty$.
The transformed channel (5.33) can be viewed as a collection of sub-channels,
one for each sub-carrier n. Each of the sub-channels is an AWGN channel. The
Figure 5.9 A coded OFDM system. Information bits are coded and then sent
over the frequency-selective channel via OFDM modulation. Each channel use
corresponds to an OFDM block. Coding can be done across different OFDM
blocks as well as over different sub-carriers.
transformed noise w̃[i] is distributed as CN(0, N_0 I), so the noise is CN(0, N_0)
in each of the sub-channels and, moreover, the noise is independent across
sub-channels. The power constraint on the input symbols in time translates
to one on the data symbols on the sub-channels (Parseval theorem for DFTs):

‖d̃[i]‖² ≤ N_c P.   (5.37)
In information theory jargon, a channel which consists of a set of noninterfering
sub-channels, each of which is corrupted by independent noise, is
called a parallel channel. Thus, the transformed channel here is a parallel
AWGN channel, with a total power constraint across the sub-channels. A natural
strategy for reliable communication over a parallel AWGN channel is
illustrated in Figure 5.10. We allocate power to each sub-channel, Pn to the
nth sub-channel, such that the total power constraint is met. Then, a separate
capacity-achieving AWGN code is used to communicate over each of the sub-channels.
The maximum rate of reliable communication using this scheme is

Σ_{n=0}^{N_c−1} log(1 + P_n |h̃_n|²/N_0)  bits/OFDM symbol.   (5.38)

Further, the power allocation can be chosen appropriately, so as to maximize
the rate in (5.38). The optimal power allocation, thus, is the solution to the
optimization problem:

C_{N_c} := max_{P_0, …, P_{N_c−1}} Σ_{n=0}^{N_c−1} log(1 + P_n |h̃_n|²/N_0),   (5.39)
Figure 5.10 Coding independently over each of the sub-carriers. This
architecture, with appropriate power and rate allocations, achieves the
capacity of the frequency-selective channel.
subject to

Σ_{n=0}^{N_c−1} P_n = N_c P,   P_n ≥ 0,   n = 0, …, N_c − 1.   (5.40)
Waterfilling power allocation
The optimal power allocation can be explicitly found. The objective function
in (5.39) is jointly concave in the powers and this optimization problem can
be solved by Lagrangian methods. Consider the Lagrangian

ℒ(λ, P_0, …, P_{N_c−1}) := Σ_{n=0}^{N_c−1} log(1 + P_n |h̃_n|²/N_0) − λ Σ_{n=0}^{N_c−1} P_n,   (5.41)

where λ is the Lagrange multiplier. The Kuhn–Tucker condition for the
optimality of a power allocation is

∂ℒ/∂P_n = 0 if P_n > 0,   ∂ℒ/∂P_n ≤ 0 if P_n = 0.   (5.42)

Define x⁺ := max(x, 0). The power allocation

P*_n = (1/λ − N_0/|h̃_n|²)⁺   (5.43)

satisfies the conditions in (5.42) and is therefore optimal, with the Lagrange
multiplier λ chosen such that the power constraint is met:

(1/N_c) Σ_{n=0}^{N_c−1} (1/λ − N_0/|h̃_n|²)⁺ = P.   (5.44)
Figure 5.11 gives a pictorial view of the optimal power allocation strategy
for the OFDM system. Think of the values N_0/|h̃_n|², plotted as a function
of the sub-carrier index n = 0, …, N_c − 1, as tracing out the bottom of a
vessel. If P units of water per sub-carrier are filled into the vessel, the depth
of the water at sub-carrier n is the power allocated to that sub-carrier, and
1/λ is the height of the water surface. Thus, this optimal strategy is called
waterfilling or waterpouring. Note that there are some sub-carriers where the
bottom of the vessel is above the water and no power is allocated to them. In
these sub-carriers, the channel is too poor for it to be worthwhile to transmit
information. In general, the transmitter allocates more power to the stronger
sub-carriers, taking advantage of the better channel conditions, and less or
even no power to the weaker ones.
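The waterfilling allocation (5.43)–(5.44) is straightforward to compute numerically by bisecting on the water level 1/λ. The sketch below is illustrative only; the function name, sub-channel gains and power budget are our own choices, not values from the text:

```python
import numpy as np

def waterfill(gains_sq, total_power, n0=1.0, iters=100):
    """Waterfilling over parallel AWGN sub-channels, cf. (5.43)-(5.44).

    gains_sq[n] is |h_n|^2 for sub-channel n; total_power is the sum of
    the allocated powers (N_c * P in the text's notation).
    """
    floors = n0 / np.asarray(gains_sq, dtype=float)  # vessel bottom N0/|h_n|^2
    # The water level 1/lambda lies between the lowest floor and the
    # lowest floor plus the total power; bisect until it balances.
    lo, hi = floors.min(), floors.min() + total_power
    for _ in range(iters):
        level = 0.5 * (lo + hi)
        if np.maximum(level - floors, 0.0).sum() > total_power:
            hi = level
        else:
            lo = level
    return np.maximum(level - floors, 0.0)  # P_n^*, zero on weak sub-channels

# Four sub-carriers with N0 = 1: the two weakest sit above the water
# level and receive no power at all, as in Figure 5.11.
P = waterfill([2.0, 1.0, 0.25, 0.05], total_power=4.0)
rate = np.log2(1.0 + P * np.array([2.0, 1.0, 0.25, 0.05])).sum()
```

With these numbers the water level settles at 2.75, so the allocation is (2.25, 1.75, 0, 0) and the stronger sub-carriers carry all the power.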
Figure 5.11 Waterfilling power allocation over the N_c sub-carriers.
Observe that

h̃_n = Σ_{ℓ=0}^{L−1} h_ℓ exp(−j2πnℓ/N_c)   (5.45)
is the discrete-time Fourier transform H(f) evaluated at f = nW/N_c, where
(cf. (2.20))

H(f) := Σ_{ℓ=0}^{L−1} h_ℓ exp(−j2πℓf/W),   f ∈ [0, W].   (5.46)
As the number of sub-carriers Nc grows, the frequency width W/Nc of the
sub-carriers goes to zero and they represent a finer and finer sampling of the
continuous spectrum. So, the optimal power allocation converges to

P*(f) = (1/λ − N_0/|H(f)|²)⁺,   (5.47)

where the constant λ satisfies (cf. (5.44))

∫₀^W P*(f) df = P.   (5.48)
The power allocation can be interpreted as waterfilling over frequency (see
Figure 5.12). With Nc sub-carriers, the largest reliable communication rate
Figure 5.12 Waterfilling power allocation over the frequency spectrum of the
two-tap channel (high-pass filter): h_0 = 1 and h_1 = −0.5.
with independent coding is C_{N_c} bits per OFDM symbol, or C_{N_c}/N_c bits/s/Hz
(C_{N_c} given in (5.39)). So as N_c → ∞, W·C_{N_c}/N_c converges to

C = ∫₀^W log(1 + P*(f)|H(f)|²/N_0) df  bits/s.   (5.49)
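As a numerical sanity check on this limit, one can approximate the integral (5.49) by waterfilling over a dense frequency grid. The sketch below does this for a two-tap channel; the tap values, bandwidth, power and noise level are illustrative choices of ours, not values from the text:

```python
import numpy as np

# Approximate the capacity (5.49) of a two-tap channel by waterfilling
# over a dense frequency grid (a large-N_c approximation).
W, P, n0 = 1.0, 1.0, 1.0            # bandwidth, power, noise density
h = np.array([1.0, -0.5])           # taps h_0, h_1 (illustrative)
f = np.linspace(0.0, W, 4096, endpoint=False)
Hf = h[0] + h[1] * np.exp(-2j * np.pi * f / W)   # H(f), cf. (5.46)
floors = n0 / np.abs(Hf) ** 2                    # N0 / |H(f)|^2
df = W / len(f)

# Bisect on the water level 1/lambda until (5.48) holds.
lo, hi = floors.min(), floors.min() + P / df
for _ in range(100):
    level = 0.5 * (lo + hi)
    if np.maximum(level - floors, 0.0).sum() * df > P:
        hi = level
    else:
        lo = level
Pf = np.maximum(level - floors, 0.0)             # P*(f), cf. (5.47)
C = np.sum(np.log2(1.0 + Pf * np.abs(Hf) ** 2 / n0)) * df    # bits/s

# Waterfilling can only beat a flat allocation of the same power.
C_flat = np.sum(np.log2(1.0 + (P / W) * np.abs(Hf) ** 2 / n0)) * df
```

For this channel part of the band falls above the water level and gets no power, so the waterfilling rate strictly exceeds the flat-allocation rate.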
Does coding across sub-carriers help?
So far we have considered a very simple scheme: coding independently over
each of the sub-carriers. By coding jointly across the sub-carriers, presumably
better performance can be achieved. Indeed, over a finite block length, coding
jointly over the sub-carriers yields a smaller error probability than can be
achieved by coding separately over the sub-carriers at the same rate. However,
somewhat surprisingly, the capacity of the parallel channel is equal to the
largest reliable rate of communication with independent coding within each
sub-carrier. In other words, if the block length is very large then coding jointly
over the sub-carriers cannot increase the rate of reliable communication any
more than what can be achieved simply by allocating power and rate over
the sub-carriers but not coding across the sub-carriers. So indeed (5.49) is the
capacity of the time-invariant frequency-selective channel.
To get some insight into why coding across the sub-carriers with large
block length does not improve capacity, we turn to a geometric view. Consider
a code with block length N_cN symbols, coding over all N_c of the sub-carriers
with N symbols from each sub-carrier. In high dimensions, i.e., N ≫ 1, the
N_cN-dimensional received vector after passing through the parallel channel
(5.33) lives in an ellipsoid, with different axes stretched and shrunk by the
different channel gains h̃_n. The volume of the ellipsoid is proportional to

∏_{n=0}^{N_c−1} (|h̃_n|² P_n + N_0)^N;   (5.50)
see Exercise 5.12. The volume of the noise sphere is, as in Section 5.1.2,
proportional to N_0^{N_cN}. The maximum number of distinguishable codewords
that can be packed in the ellipsoid is therefore

∏_{n=0}^{N_c−1} (1 + P_n |h̃_n|²/N_0)^N.   (5.51)
The maximum reliable rate of communication is

(1/N) log ∏_{n=0}^{N_c−1} (1 + P_n |h̃_n|²/N_0)^N = Σ_{n=0}^{N_c−1} log(1 + P_n |h̃_n|²/N_0)  bits/OFDM symbol.
   (5.52)
This is precisely the rate (5.38) achieved by separate coding and this suggests
that coding across sub-carriers can do no better. While this sphere-packing
argument is heuristic, Appendix B.6 gives a rigorous derivation from information
theoretic first principles.
Even though coding across sub-carriers cannot improve the reliable rate of
communication, it can still improve the error probability for a given data rate.
Thus, coding across sub-carriers can still be useful in practice, particularly
when the block length for each sub-carrier is small, in which case the coding
effectively increases the overall block length.
In this section we have used parallel channels to model a frequency-selective
channel, but parallel channels will be seen to be very useful in
modeling many other wireless communication scenarios as well.
5.4 Capacity of fading channels
The basic capacity results developed in the last few sections are now applied
to analyze the limits to communication over wireless fading channels.
Consider the complex baseband representation of a flat fading channel:

y[m] = h[m]x[m] + w[m],   (5.53)

where {h[m]} is the fading process and {w[m]} is i.i.d. CN(0, N_0) noise.
As before, the symbol rate is W Hz, there is a power constraint of P
joules/symbol, and E[|h[m]|²] = 1 is assumed for normalization. Hence
SNR := P/N_0 is the average received SNR.
In Section 3.1.2, we analyzed the performance of uncoded transmission for
this channel. What is the ultimate performance limit when information can
be coded over a sequence of symbols? To answer this question, we make
the simplifying assumption that the receiver can perfectly track the fading
process, i.e., coherent reception. As we discussed in Chapter 2, the coherence
time of typical wireless channels is of the order of hundreds of symbols and
so the channel varies slowly relative to the symbol rate and can be estimated
by say a pilot signal. For now, the transmitter is not assumed to have any
knowledge of the channel realization other than the statistical characterization.
The situation when the transmitter has access to the channel realizations will
be studied in Section 5.4.6.
5.4.1 Slow fading channel
Let us first look at the situation when the channel gain is random but remains
constant for all time, i.e., h[m] = h for all m. This models the slow fading
situation where the delay requirement is short compared to the channel
coherence time (cf. Table 2.2). This is also called the quasi-static scenario.
Conditional on a realization of the channel h, this is an AWGN channel
with received signal-to-noise ratio |h|²SNR. The maximum rate of reliable
communication supported by this channel is log(1 + |h|²SNR) bits/s/Hz. This
quantity is a function of the random channel gain h and is therefore random
(Figure 5.13). Now suppose the transmitter encodes data at a rate R bits/s/Hz.
If the channel realization h is such that log(1 + |h|²SNR) < R, then whatever
the code used by the transmitter, the decoding error probability cannot be
made arbitrarily small. The system is said to be in outage, and the outage
probability is

p_out(R) := Pr{log(1 + |h|²SNR) < R}.   (5.54)
Thus, the best the transmitter can do is to encode the data assuming that
the channel gain is strong enough to support the desired rate R. Reliable
communication can be achieved whenever that happens, and outage occurs
otherwise.
A more suggestive interpretation is to think of the channel as allowing
log(1 + |h|²SNR) bits/s/Hz of information through when the fading gain is h.

Figure 5.13 Density of log(1 + |h|²SNR), for Rayleigh fading and SNR = 0 dB.
For any target rate R, there is a non-zero outage probability: the area under
the density to the left of R equals p_out(R).
Reliable decoding is possible as long as this amount of information exceeds
the target rate.
For Rayleigh fading (i.e., h is CN(0, 1)), the outage probability is

p_out(R) = 1 − exp(−(2^R − 1)/SNR).   (5.55)

At high SNR,

p_out(R) ≈ (2^R − 1)/SNR,   (5.56)
and the outage probability decays as 1/SNR. Recall that when we discussed
uncoded transmission in Section 3.1.2, the detection error probability also
decays like 1/SNR. Thus, we see that coding cannot significantly improve the
error probability in a slow fading scenario. The reason is that while coding
can average out the Gaussian white noise, it cannot average out the channel
fade, which affects all the coded symbols. Thus, a deep fade, which is the
typical error event in the uncoded case, is also the typical error event in the
coded case.
There is a conceptual difference between the AWGN channel and the slow
fading channel. In the former, one can send data at a positive rate (in fact, any
rate less than C) while making the error probability as small as desired. This
cannot be done for the slow fading channel as long as the probability that
the channel is in deep fade is non-zero. Thus, the capacity of the slow fading
channel in the strict sense is zero. An alternative performance measure is the
ε-outage capacity C_ε. This is the largest rate of transmission R such that the
outage probability p_out(R) is less than ε. Solving p_out(R) = ε in (5.54) yields

C_ε = log(1 + F⁻¹(1 − ε) SNR)  bits/s/Hz,   (5.57)

where F is the complementary cumulative distribution function of |h|², i.e.,
F(x) := Pr{|h|² > x}.
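For Rayleigh fading these expressions are fully explicit: since Pr{|h|² > x} = e^(−x), we have F⁻¹(1 − ε) = −ln(1 − ε). A minimal sketch (the function names and the 0 dB operating point are our own choices):

```python
import math

def pout_rayleigh(R, snr):
    """Outage probability (5.55) of the slow Rayleigh fading channel
    at target rate R bits/s/Hz and average SNR (linear scale)."""
    return 1.0 - math.exp(-(2.0 ** R - 1.0) / snr)

def eps_outage_capacity(eps, snr):
    """epsilon-outage capacity (5.57) under Rayleigh fading, using
    F^{-1}(1 - eps) = -ln(1 - eps)."""
    return math.log2(1.0 + (-math.log(1.0 - eps)) * snr)

snr = 10.0 ** (0.0 / 10.0)          # 0 dB, as in Figure 5.13
c_eps = eps_outage_capacity(0.01, snr)
# Transmitting exactly at C_eps incurs outage probability exactly eps;
# at 0 dB the 1%-outage capacity is a tiny fraction of
# C_awgn = log2(1 + SNR) = 1 bit/s/Hz.
```

The round trip p_out(C_ε) = ε is a useful consistency check on the two formulas.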
In Section 3.1.2, we looked at uncoded transmission and there it was natural
to focus only on the high SNR regime; at low SNR, the error probability of
uncoded transmission is very poor. On the other hand, for coded systems,
it makes sense to consider both the high and the low SNR regimes. For
example, the CDMA system in Chapter 4 operates at very low SINR and
uses very low-rate orthogonal coding. A natural question is: in which regime
does fading have a more significant impact on outage performance? One can
answer this question in two ways. Eqn (5.57) says that, to achieve the same
rate as the AWGN channel, an extra 10 log₁₀ 1/F⁻¹(1 − ε) dB of power is
needed. This is true regardless of the operating SNR of the environment. Thus
the fade margin is the same at all SNRs. If we look at the outage capacity
at a given SNR, however, the impact of fading depends very much on the
operating regime. To get a sense, Figure 5.14 plots the ε-outage capacity as
a function of SNR for the Rayleigh fading channel. To assess the impact of
fading, the ε-outage capacity is plotted as a fraction of the AWGN capacity
at the same SNR.

Figure 5.14 ε-outage capacity C_ε as a fraction of the AWGN capacity C_awgn
under Rayleigh fading, for ε = 0.1 and ε = 0.01.

It is clear that the impact is much more significant in the
low SNR regime. Indeed, at high SNR,

C_ε ≈ log SNR + log F⁻¹(1 − ε)   (5.58)
    ≈ C_awgn − log(1/F⁻¹(1 − ε)),   (5.59)

a constant difference irrespective of the SNR. Thus, the relative loss gets
smaller at high SNR. At low SNR, on the other hand,

C_ε ≈ F⁻¹(1 − ε) SNR log₂ e   (5.60)
    ≈ F⁻¹(1 − ε) C_awgn.   (5.61)
For reasonably small outage probabilities, the outage capacity is only a
small fraction of the AWGN capacity at low SNR. For Rayleigh fading,
F⁻¹(1 − ε) ≈ ε for small ε and the impact of fading is very significant. At
an outage probability of 0.01, the outage capacity is only 1% of the AWGN
capacity! Diversity has a significant effect at high SNR (as already seen in
Chapter 3), but can be more important at low SNR. Intuitively, the impact
of the randomness of the channel is in the received SNR, and the reliable
rate supported by the AWGN channel is much more sensitive to the received
SNR at low SNR than at high SNR. Exercise 5.10 elaborates on this point.
5.4.2 Receive diversity
Let us increase the diversity of the channel by having L receive antennas
instead of one. For given channel gains h := [h_1, …, h_L]ᵗ, the capacity was
calculated in Section 5.3.1 to be log(1 + ‖h‖²SNR). Outage occurs whenever
this is below the target rate R:

p_out^rx(R) := Pr{log(1 + ‖h‖²SNR) < R}.   (5.62)

This can be rewritten as

p_out(R) = Pr{‖h‖² < (2^R − 1)/SNR}.   (5.63)
Under independent Rayleigh fading, ‖h‖² is a sum of the squares of 2L
independent Gaussian random variables and is distributed as Chi-square with
2L degrees of freedom. Its density is

f(x) = (1/(L − 1)!) x^{L−1} e^{−x},   x ≥ 0.   (5.64)

Approximating e^{−x} by 1 for x small, we have (cf. (3.44))

Pr{‖h‖² < δ} ≈ δ^L/L!   (5.65)

for δ small. Hence at high SNR the outage probability is given by

p_out(R) ≈ (2^R − 1)^L/(L! SNR^L).   (5.66)
Comparing with (5.55), we see a diversity gain of L: the outage probability
now decays like 1/SNRL. This parallels the performance of uncoded transmission
discussed in Section 3.3.1: thus, coding cannot increase the diversity
gain.
The impact of receive diversity on the ε-outage capacity is plotted in
Figure 5.15. The ε-outage capacity is given by (5.57), with F now the
complementary cumulative distribution function of ‖h‖². Receive antennas
yield a diversity gain and an L-fold power gain. To emphasize the impact of
the diversity gain, let us normalize the outage capacity C_ε by
C_awgn := log(1 + L·SNR). The dramatic salutary effect of diversity on outage
capacity can now be seen. At low SNR and small ε, (5.61) and (5.65) yield

C_ε ≈ F⁻¹(1 − ε) SNR log₂ e   (5.67)
    ≈ (L! ε)^{1/L} SNR log₂ e  bits/s/Hz,   (5.68)

and the loss with respect to the AWGN capacity is by a factor of (L!ε)^{1/L}
rather than by ε when there is no diversity. At ε = 0.01 and L = 2, the outage
capacity is increased to 14% of the AWGN capacity (as opposed to 1% for
L = 1).
Figure 5.15 ε-outage capacity C_ε with L-fold receive diversity, as a fraction
of the AWGN capacity log(1 + L·SNR), for ε = 0.01 and different L.
5.4.3 Transmit diversity
Now suppose there are L transmit antennas but only one receive antenna, with
a total power constraint of P. From Section 5.3.2, the capacity of the channel
conditioned on the channel gains h = [h_1, …, h_L]ᵗ is log(1 + ‖h‖²SNR).
Following the approach taken in the SISO and the SIMO cases, one is tempted
to say that the outage probability for a fixed rate R is

p_out^full−csi(R) = Pr{log(1 + ‖h‖²SNR) < R},   (5.69)
which would have been exactly the same as the corresponding SIMO system
with 1 transmit and L receive antennas. However, this outage performance
is achievable only if the transmitter knows the phases and magnitudes of the
gains h so that it can perform transmit beamforming, i.e., allocate more power
to the stronger antennas and arrange the signals from the different antennas to
align in phase at the receiver. When the transmitter does not know the channel
gains h, it has to use a fixed transmission strategy that does not depend on h.
(This subtlety does not arise in either the SISO or the SIMO case because the
transmitter need not know the channel realization to achieve the capacity for
those channels.) How much performance loss does not knowing the channel
entail?
Alamouti scheme revisited
For concreteness, let us focus on L = 2 (dual transmit antennas). In this
situation, we can use the Alamouti scheme, which extracts transmit diversity
without transmitter channel knowledge (introduced in Section 3.3.2). Recall
from (3.76) that, under this scheme, both the transmitted symbols u_1, u_2 over a
block of 2 symbol times see an equivalent scalar fading channel with gain ‖h‖
and additive noise CN(0, N_0) (Figure 5.16(b)). The energy in the symbols
u_1 and u_2 is P/2. Conditioned on h_1, h_2, the capacity of the equivalent scalar
channel is

log(1 + ‖h‖² SNR/2)  bits/s/Hz.   (5.70)

Figure 5.16 A space-time coding scheme combined with the MISO channel can be
viewed as an equivalent scalar channel: (a) repetition coding; (b) the
Alamouti scheme. The outage probability of the scheme is the outage
probability of the equivalent channel.
Thus, if we now consider successive blocks and use an AWGN capacity-achieving
code of rate R over each of the streams {u_1[m]} and {u_2[m]}
separately, then the outage probability of each stream is

p_out^Ala(R) = Pr{log(1 + ‖h‖² SNR/2) < R}.   (5.71)
Compared to (5.69) when the transmitter knows the channel, the Alamouti
scheme performs strictly worse: the loss is 3 dB in the received SNR. This
can be explained in terms of the efficiency with which energy is transferred
to the receiver. In the Alamouti scheme, the symbols sent at the two transmit
antennas in each time are independent since they come from two separately
coded streams. Each of them has power P/2. Hence, the total SNR at the
receive antenna at any given time is

(|h_1|² + |h_2|²) SNR/2.   (5.72)

In contrast, when the transmitter knows the channel, the symbols transmitted
at the two antennas are completely correlated in such a way that the
signals add up in phase at the receive antenna and the SNR is now

(|h_1|² + |h_2|²) SNR,
a 3-dB power gain over the independent case.4 Intuitively, there is a power
loss because, without channel knowledge, the transmitter is sending signals
that have energy in all directions instead of focusing the energy in a specific
direction. In fact, the Alamouti scheme radiates energy in a perfectly isotropic
manner: the signal transmitted from the two antennas has the same energy
when projected in any direction (Exercise 5.14).
A scheme radiates energy isotropically whenever the signals transmitted from
the antennas are uncorrelated and have equal power (Exercise 5.14). Although
the Alamouti scheme does not perform as well as transmit beamforming, it
is optimal in one important sense: it has the best outage probability among
all schemes that radiate energy isotropically. Indeed, any such scheme must
have a received SNR equal to (5.72) and hence its outage performance must be
no better than that of a scalar slow fading AWGN channel with that received
SNR. But this is precisely the performance achieved by the Alamouti scheme.
Can one do even better by radiating energy in a non-isotropic manner (but
in a way that does not depend on the random channel gains)? In other words,
can one improve the outage probability by correlating the signals from the
transmit antennas and/or allocating unequal powers on the antennas? The
answer depends of course on the distribution of the gains h1h2. If h1h2
are i.i.d. Rayleigh, Exercise 5.15 shows, using symmetry considerations, that
correlation never improves the outage performance, but it is not necessarily
optimal to use all the transmit antennas. Exercise 5.16 shows that uniform
power allocation across antennas is always optimal, but the number of antennas
used depends on the operating SNR. For reasonable values of target outage
probabilities, it is optimal to use all the antennas. This implies that in most
cases of interest, the Alamouti scheme has the optimal outage performance
for the i.i.d. Rayleigh fading channel.
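The 3-dB gap between (5.69) and (5.71), and more generally the factor-of-L gap of (5.73) below, can be checked by simulation. A sketch under the i.i.d. Rayleigh model (the function name and the operating point are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def pout_miso(R, snr_db, L=2, know_csi=False, trials=200_000):
    """Monte Carlo outage of an L-antenna MISO channel with i.i.d.
    Rayleigh gains.  With transmit CSI, beamforming yields received
    SNR ||h||^2 SNR (5.69); without it, isotropic transmission
    (the Alamouti scheme for L = 2) yields ||h||^2 SNR / L (5.73)."""
    snr = 10.0 ** (snr_db / 10.0)
    h2 = rng.exponential(size=(trials, L)).sum(axis=1)   # ||h||^2
    eff_snr = snr if know_csi else snr / L
    return float(np.mean(np.log2(1.0 + h2 * eff_snr) < R))

# Same diversity order, but isotropic transmission pays the
# factor-of-L (3 dB for L = 2) SNR penalty.
p_csi = pout_miso(1.0, 10.0, know_csi=True)
p_iso = pout_miso(1.0, 10.0, know_csi=False)
```

Both curves decay with the same slope (diversity order 2); the isotropic curve is simply shifted right by 3 dB.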
What about for L > 2 transmit antennas? An information theoretic argument
in Appendix B.8 shows (in a more general framework) that

p_out(R) = Pr{log(1 + ‖h‖² SNR/L) < R}   (5.73)
is achievable. This is the natural generalization of (5.71) and corresponds again
to isotropic transmission of energy from the antennas. Again, Exercises 5.15
and 5.16 show that this strategy is optimal for the i.i.d. Rayleigh fading
channel and for most target outage probabilities of interest. However, there
is no natural generalization of the Alamouti scheme for a larger number
of transmit antennas (cf. Exercise 3.17). We will return to the problem of
outage-optimal code design for L>2 in Chapter 9.
4 The addition of two in-phase signals of equal power yields a sum signal that has double the
amplitude and four times the power of each of the signals. In contrast, the addition of two
independent signals of equal power only doubles the power.
The outage performances of the SIMO and the MISO channels with i.i.d.
Rayleigh gains are plotted in Figure 5.17 for different numbers of transmit
antennas. The difference in outage performance clearly outlines the asymmetry
between receive and transmit antennas caused by the transmitter lacking
knowledge of the channel.

Figure 5.17 Comparison of outage performance between SIMO and MISO channels
for different L: (a) outage probability as a function of SNR, for fixed
R = 1; (b) outage capacity as a function of SNR, for a fixed outage
probability of 10⁻².
Suboptimal schemes: repetition coding
In the above, the Alamouti scheme is viewed as an inner code that converts
the MISO channel into a scalar channel. The outage performance (5.71) is
achieved when the Alamouti scheme is used in conjunction with an outer code
that is capacity-achieving for the scalar AWGN channel. Other space-time
schemes can be similarly used as inner codes and their outage probability
analyzed and compared to the channel outage performance.
Here we consider the simplest example, the repetition scheme: the same
symbol is transmitted over the L different antennas over L symbol periods,
using only one antenna at a time to transmit. The receiver does maximal
ratio combining to demodulate each symbol. As a result, each symbol sees
an equivalent scalar fading channel with gain ‖h‖ and noise variance N_0
(Figure 5.16(a)). Since only one symbol is transmitted every L symbol periods,
a rate of LR bits/symbol is required on this scalar channel to achieve a target
rate of R bits/symbol on the original channel. The outage probability of this
scheme, when combined with an outer capacity-achieving code, is therefore:

p_out^rep(R) = Pr{(1/L) log(1 + ‖h‖²SNR) < R}.   (5.74)

Compared to the outage probability (5.73) of the channel, this scheme is
suboptimal: the SNR has to be increased by a factor of

(2^{LR} − 1)/(L(2^R − 1))   (5.75)

to achieve the same outage probability for the same target rate R. Equivalently,
this ratio can be interpreted as the maximum achievable
coding gain over the simple repetition scheme. For a fixed R, the performance
loss increases with L: the repetition scheme becomes increasingly inefficient
in using the degrees of freedom of the channel. For a fixed L, the performance
loss increases with the target rate R. On the other hand, for R small,
2^R − 1 ≈ R ln 2 and 2^{LR} − 1 ≈ LR ln 2, so

(2^{LR} − 1)/(L(2^R − 1)) ≈ (LR ln 2)/(LR ln 2) = 1,   (5.76)
and there is hardly any loss in performance. Thus, while the repetition scheme
is very suboptimal in the high SNR regime where the target rate can be high,
it is nearly optimal in the low SNR regime. This is not surprising: the system
is degree-of-freedom limited in the high SNR regime and the inefficiency of
the repetition scheme is felt more there.
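The penalty factor (5.75) is easy to tabulate. The sketch below writes the factor oriented as an SNR increase (≥ 1); the function name and the chosen R, L values are our own:

```python
import math

def repetition_penalty_db(R, L):
    """SNR penalty (dB) of L-fold repetition relative to the
    outage-optimal isotropic scheme, i.e. the factor (5.75):
    (2^{LR} - 1) / (L * (2^R - 1))."""
    factor = (2.0 ** (L * R) - 1.0) / (L * (2.0 ** R - 1.0))
    return 10.0 * math.log10(factor)

# At a high target rate repetition wastes degrees of freedom...
loss_hi = repetition_penalty_db(R=3.0, L=4)    # tens of dB
# ...while at a low target rate the penalty vanishes, cf. (5.76).
loss_lo = repetition_penalty_db(R=0.01, L=4)   # a fraction of a dB
```

With L = 1 the factor is exactly 1 (0 dB), as it must be: repetition over a single antenna is no scheme at all.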
Summary 5.2 Transmit and receive diversity
With receive diversity, the outage probability is

p_out^rx(R) := Pr{log(1 + ‖h‖²SNR) < R}.   (5.77)

With transmit diversity and isotropic transmission, the outage probability is

p_out^tx(R) := Pr{log(1 + ‖h‖² SNR/L) < R},   (5.78)

a loss of a factor of L in the received SNR because the transmitter has
no knowledge of the channel direction and is unable to beamform in the
specific channel direction.
With two transmit antennas, capacity-achieving AWGN codes in conjunction
with the Alamouti scheme achieve the outage probability.
5.4.4 Time and frequency diversity
Outage performance of parallel channels
Another way to increase channel diversity is to exploit the time-variation
of the channel: in addition to coding over symbols within one coherence
period, one can code over symbols from L such periods. Note that this is
a generalization of the schemes considered in Section 3.2, which take one
symbol from each coherence period. When coding can be performed over
many symbols from each period, as well as between symbols from different
periods, what is the performance limit?
One can model this situation using the idea of parallel channels introduced
in Section 5.3.3: each of the sub-channels, ℓ = 1, …, L, represents
a coherence period of duration T_c symbols:

y_ℓ[m] = h_ℓ x_ℓ[m] + w_ℓ[m],   m = 1, …, T_c.   (5.79)

Here h_ℓ is the (non-varying) channel gain during the ℓth coherence period.
It is assumed that the coherence time T_c is large such that one can code
over many symbols in each of the sub-channels. An average transmit power
constraint of P on the original channel translates into a total power constraint
of LP on the parallel channel.
For a given realization of the channel, we have already seen in Section 5.3.3
that the optimal power allocation across the sub-channels is waterfilling.
However, since the transmitter does not know what the channel gains are, a
reasonable strategy is to allocate equal power P to each of the sub-channels.
In Section 5.3.3, it was mentioned that the maximum rate of reliable
communication given the fading gains {h_ℓ} is

Σ_{ℓ=1}^{L} log(1 + |h_ℓ|²SNR)  bits/s/Hz,   (5.80)

where SNR := P/N_0. Hence, if the target rate is R bits/s/Hz per sub-channel,
then outage occurs when

Σ_{ℓ=1}^{L} log(1 + |h_ℓ|²SNR) < LR.   (5.81)
Can one design a code to communicate reliably whenever

Σ_{ℓ=1}^{L} log(1 + |h_ℓ|²SNR) > LR?   (5.82)

If so, an L-fold diversity is achieved for i.i.d. Rayleigh fading: outage occurs
only if each of the terms in the sum Σ_{ℓ=1}^{L} log(1 + |h_ℓ|²SNR) is small.
The term log(1 + |h_ℓ|²SNR) is the capacity of an AWGN channel with
received SNR equal to |h_ℓ|²SNR. Hence, a seemingly straightforward strategy,
already used in Section 5.3.3, would be to use a capacity-achieving AWGN
code with rate

log(1 + |h_ℓ|²SNR)

for the ℓth coherence period, yielding an average rate of

(1/L) Σ_{ℓ=1}^{L} log(1 + |h_ℓ|²SNR)  bits/s/Hz
and meeting the target rate whenever condition (5.82) holds. The caveat is
that this strategy requires the transmitter to know in advance the channel state
during each of the coherence periods so that it can adapt the rate it allocates to
each period. This knowledge is not available. However, it turns out that such
transmitter adaptation is unnecessary: information theory guarantees that
one can design a single code that communicates reliably at rate R whenever
the condition (5.82) is met. Hence, the outage probability of the time diversity
channel is precisely
p_out(R) = Pr{(1/L) Σ_{ℓ=1}^{L} log(1 + |h_ℓ|²SNR) < R}.   (5.83)
Even though this outage performance can be achieved with or without
transmitter knowledge of the channel, the coding strategy is vastly different.
With transmitter knowledge of the channel, dynamic rate allocation and separate
coding for each sub-channel suffices. Without transmitter knowledge,
separate coding would mean using a fixed-rate code for each sub-channel and
poor diversity results: errors occur whenever one of the sub-channels is bad.
Indeed, coding across the different coherence periods is now necessary: if the
channel is in deep fade during one of the coherence periods, the information
bits can still be protected if the channel is strong in other periods.
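The outage probability (5.83) is easy to estimate by Monte Carlo under i.i.d. Rayleigh fading across coherence periods; in the sketch below the function name and the operating point are our own choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def pout_time_div(R, snr_db, L, trials=200_000):
    """Monte Carlo estimate of the parallel-channel outage (5.83):
    outage occurs iff the average of log2(1 + |h_l|^2 SNR) over the
    L coherence periods falls below the target rate R."""
    snr = 10.0 ** (snr_db / 10.0)
    h2 = rng.exponential(size=(trials, L))        # i.i.d. |h_l|^2
    avg_rate = np.log2(1.0 + h2 * snr).mean(axis=1)
    return float(np.mean(avg_rate < R))

# Coding across more coherence periods averages out the fades:
p_L1 = pout_time_div(1.0, 10.0, L=1)
p_L4 = pout_time_div(1.0, 10.0, L=4)
```

At 10 dB and R = 1 bit/s/Hz, a single coherence period gives outage near 10%, while coding across four periods drops it by orders of magnitude, reflecting the L-fold diversity.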
A geometric view
Figure 5.18 gives a geometric view of our discussion so far. Consider a code
with rate R, coding over all the sub-channels and over one coherence time
interval; the block length is LT_c symbols. The codewords lie in an
LT_c-dimensional sphere. The received LT_c-dimensional signal lives in an ellipsoid,
with (L groups of) different axes stretched and shrunk by the different sub-channel
gains (cf. Section 5.3.3). The ellipsoid is a function of the sub-channel
gains, and hence random. The no-outage condition (5.82) has a geometric
interpretation: it says that the volume of the ellipsoid is large enough to
contain 2^{LT_cR} noise spheres, one for each codeword. (This was already seen
in the sphere-packing argument in Section 5.3.3.) An outage-optimal code is
one that communicates reliably whenever the random ellipsoid is at least this
large. The subtlety here is that the same code must work for all such ellipsoids.
Since the shrinking can occur in any of the L groups of dimensions, a robust
code needs to have the property that the codewords are simultaneously well-separated
in each of the sub-channels (Figure 5.18(a)). A set of independent
codes, one for each sub-channel, is not robust: errors will be made when even
only one of the sub-channels fades (Figure 5.18(b)).
We have already seen, in the simple context of Section 3.2, codes for
the parallel channel which are designed to be well-separated in all the subchannels.
For example, the repetition code and the rotation code in Figure 3.8
have the property that the codewords are separated in bot the sub-channels
(here Tc = 1 symbol and L = 2 sub-channels).

Figure 5.18 Effect of the fading gains on codes for the parallel channel.
Here there are L = 2 sub-channels and each axis represents Tc dimensions
within a sub-channel. (a) Coding across the sub-channels: reliable
communication. The code works as long as the volume of the ellipsoid is big
enough; this requires good codeword separation in both the sub-channels.
(b) Separate, non-adaptive code for each sub-channel: the noise spheres
overlap. Shrinking of one of the axes is enough to cause confusion between
the codewords.

More generally, the code design
criterion of maximizing the product distance for all pairs of codewords naturally
favors codes that satisfy this property. Coding over long blocks affords
a larger coding gain; information theory guarantees the existence of codes
with large enough coding gain to achieve the outage probability in (5.83).
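The product-distance criterion can be checked directly on a toy code. The sketch below (function names are mine) computes the minimum pairwise product distance of a four-point code on L = 2 sub-channels: the unrotated code has codeword pairs that coincide on one sub-channel (zero product distance), while a rotated version, in the spirit of the rotation code of Figure 3.8, separates every pair in both dimensions.

```python
import itertools
import math

def min_product_distance(codewords):
    """Minimum over codeword pairs of the product of per-sub-channel
    coordinate differences (the design criterion for parallel channels)."""
    best = float("inf")
    for a, b in itertools.combinations(codewords, 2):
        pd = abs((a[0] - b[0]) * (a[1] - b[1]))
        best = min(best, pd)
    return best

def rotated_code(theta):
    """Rotate the four points of {+1, -1}^2 by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * u - s * v, s * u + c * v)
            for u in (1, -1) for v in (1, -1)]

print(min_product_distance(rotated_code(0.0)))          # zero: some pairs differ in one dimension only
print(min_product_distance(rotated_code(math.pi / 6)))  # positive: all pairs separated in both
```

Any nonzero rotation away from the axes gives every codeword pair a nonzero product distance; maximizing the minimum product distance over the rotation angle is the design problem discussed in Section 3.2.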
To achieve the outage probability, one wants to design a code that communicates
reliably over every parallel channel that is not in outage (i.e., parallel
channels that satisfy (5.82)). In information theory jargon, a code that communicates
reliably for a class of channels is said to be universal for that class.
In this language, we are looking for universal codes for parallel channels that
are not in outage. In the slow fading scalar channel without diversity (L = 1),
this problem is the same as the code design problem for a specific channel.
This is because all scalar channels are ordered by their received SNR; hence a
code that works for the channel that is just strong enough to support the target
rate will automatically work for all better channels. For parallel channels,
each channel is described by a vector of channel gains and there is no natural
ordering of channels; the universal code design problem is now non-trivial.
In Chapter 9, a universal code design criterion will be developed to construct
universal codes that come close to achieving the outage probability.
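The lack of a natural ordering can be seen with a tiny numeric check (the gain vectors and SNR are arbitrary illustrations, not from the text): two parallel channels can each be stronger than the other on one sub-channel, so neither dominates, whereas a scalar channel is completely ordered by its received SNR.

```python
import math

snr = 10.0
g_a = (4.0, 0.1)   # strong first sub-channel, weak second
g_b = (1.0, 1.0)   # balanced gains

def cap(gains):
    # total capacity of the parallel channel, bits/symbol
    return sum(math.log2(1 + snr * g) for g in gains)

# Neither gain vector dominates the other component-wise...
print(g_a[0] > g_b[0], g_a[1] < g_b[1])
# ...so a code tuned to one channel need not work on the other,
# even though their total capacities are comparable:
print(cap(g_a), cap(g_b))
```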
Extensions
In the above development, a uniform power allocation across the sub-channels
is assumed. Instead, if we choose to allocate power P_ℓ to sub-channel ℓ, then
the outage probability (5.83) generalizes to

p_out(R) = P{ Σ_{ℓ=1}^{L} log(1 + |h_ℓ|² SNR_ℓ) < LR },   (5.84)

where SNR_ℓ := P_ℓ/N0. Exercise 5.17 shows that for the i.i.d. Rayleigh fading
model, a non-uniform power allocation that does not depend on the channel
gains cannot improve the outage performance.
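The claim of Exercise 5.17 can be probed numerically (a rough Monte Carlo with arbitrary rate and SNR choices, not a proof; the function name is mine): under i.i.d. Rayleigh fading, a fixed lopsided power split across two sub-channels gives no better outage than the uniform split of the same total power.

```python
import math
import random

def p_out(power_split, total_snr=20.0, rate=1.0, L=2, trials=300_000, seed=7):
    """Estimate Pr{ sum_l log2(1 + |h_l|^2 SNR_l) < L*R } under i.i.d.
    Rayleigh fading, for a fixed (channel-independent) power split."""
    rng = random.Random(seed)
    snrs = [total_snr * p for p in power_split]   # SNR_l = P_l / N0
    out = 0
    for _ in range(trials):
        total = sum(math.log2(1 + s * rng.expovariate(1.0)) for s in snrs)
        if total < L * rate:
            out += 1
    return out / trials

print("uniform  :", p_out([0.5, 0.5]))
print("lopsided :", p_out([0.8, 0.2]))
```

With these parameters the uniform split comes out ahead; the exercise shows this is no accident for the i.i.d. Rayleigh model.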
The parallel channel is used to model time diversity, but it can model
frequency diversity as well. By using the usual OFDM transformation, a slow
frequency-selective fading channel can be converted into a set of parallel
sub-channels, one for each sub-carrier. This allows us to characterize the outage
capacity of such channels as well (Exercise 5.22).
We summarize the key idea in this section using more suggestive
language.
Summary 5.3 Outage for parallel channels
Outage probability for a parallel channel with L sub-channels, where the ℓth
sub-channel has random gain h_ℓ:

p_out(R) = P{ (1/L) Σ_{ℓ=1}^{L} log(1 + |h_ℓ|² SNR) < R },   (5.85)

where R is in bits/s/Hz per sub-channel.

The ℓth sub-channel allows log(1 + |h_ℓ|² SNR) bits of information per symbol
through. Reliable decoding can be achieved as long as the total amount
of information allowed through exceeds the target rate.
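As a numeric companion to this summary (SNR, rate, and trial count are arbitrary choices, and the function name is mine), the sketch below estimates how the outage probability falls as the number of i.i.d. Rayleigh sub-channels L grows, for a target rate safely below the ergodic limit: each extra sub-channel adds diversity.

```python
import math
import random

def outage(L, snr=10.0, rate=1.0, trials=100_000, seed=3):
    """Estimate Pr{ (1/L) sum_l log2(1 + |h_l|^2 SNR) < R }, |h_l|^2 ~ Exp(1)."""
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        avg = sum(math.log2(1 + snr * rng.expovariate(1.0))
                  for _ in range(L)) / L
        count += avg < rate
    return count / trials

for L in (1, 2, 4, 8):
    print(L, outage(L))   # outage shrinks rapidly as L grows
```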
5.4.5 Fast fading channel
In the slow fading scenario, the channel remains constant over the transmission
duration of the codeword. If the codeword length spans several coherence
periods, then time diversity is achieved and the outage probability improves.
When the codeword length spans many coherence periods, we are in the
so-called fast fading regime. How does one characterize the performance limit
of such a fast fading channel?
Capacity derivation
Let us first consider a very simple model of a fast fading channel:
y[m] = h[m]x[m] + w[m],   (5.86)

where h[m] = h_ℓ remains constant over the ℓth coherence period of Tc symbols
and is i.i.d. across different coherence periods. This is the so-called
block fading model; see Figure 5.19(a). Suppose coding is done over L such
coherence periods. If Tc ≫ 1, we can effectively model this as L parallel
sub-channels that fade independently. The outage probability from (5.83) is
p_out(R) = P{ (1/L) Σ_{ℓ=1}^{L} log(1 + |h_ℓ|² SNR) < R }.   (5.87)
Figure 5.19 (a) Typical
trajectory of the channel
strength as a function of
symbol time under a block
fading model. (b) Typical
trajectory of the channel
strength after interleaving. One
can equally think of these
plots as rates of flow of
information allowed through
the channel over time.
For finite L, the quantity

(1/L) Σ_{ℓ=1}^{L} log(1 + |h_ℓ|² SNR)
is random and there is a non-zero probability that it will drop below any
target rate R. Thus, there is no meaningful notion of capacity in the sense of
maximum rate of arbitrarily reliable communication and we have to resort to
the notion of outage. However, as L → ∞, the law of large numbers says that

(1/L) Σ_{ℓ=1}^{L} log(1 + |h_ℓ|² SNR) → E[log(1 + |h|² SNR)].   (5.88)
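The convergence in (5.88) can be seen empirically (the SNR, block lengths, and function name below are arbitrary choices): the empirical average of log2(1 + |h_ℓ|² SNR) over L i.i.d. Rayleigh coherence periods approaches the ergodic value E[log2(1 + |h|² SNR)] as L grows.

```python
import math
import random

snr = 10.0
rng = random.Random(0)

def avg_rate(L):
    # empirical mean of log2(1 + |h|^2 SNR) over L i.i.d. Rayleigh fades
    return sum(math.log2(1 + snr * rng.expovariate(1.0)) for _ in range(L)) / L

# reference: the ergodic value estimated from a very long run
ergodic = avg_rate(1_000_000)
for L in (10, 1000, 100_000):
    print(L, abs(avg_rate(L) - ergodic))   # deviation from the ergodic value
```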
Now we can average over many independent fades of the channel by coding