IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 65, NO. 4, FEBRUARY 15, 2017 933
Coarrays, MUSIC, and the Cramér–Rao Bound
Mianzhi Wang, Student Member, IEEE, and Arye Nehorai, Life Fellow, IEEE
Abstract—Sparse linear arrays, such as coprime arrays and
nested arrays, have the attractive capability of providing enhanced
degrees of freedom. By exploiting the coarray structure, an
augmented sample covariance matrix can be constructed and
MUtiple SIgnal Classification (MUSIC) can be applied to identify
more sources than the number of sensors. While such a MUSIC
algorithm works quite well, its performance has not been theoret-
ically analyzed. In this paper, we derive a simplified asymptotic
mean square error (MSE) expression for the MUSIC algorithm
applied to the coarray model, which is applicable even if the source
number exceeds the sensor number. We show that the directly
augmented sample covariance matrix and the spatial smoothed
sample covariance matrix yield the same asymptotic MSE for
MUSIC. We also show that when there are more sources than the
number of sensors, the MSE converges to a positive value instead
of zero when the signal-to-noise ratio (SNR) goes to infinity. This
finding explains the “saturation” behavior of the coarray-based
MUSIC algorithms in the high-SNR region observed in previous
studies. Finally, we derive the Cramér–Rao bound for sparse
linear arrays, and conduct a numerical study of the statistical
efficiency of the coarray-based estimator. Experimental results
verify theoretical derivations and reveal the complex efficiency
pattern of coarray-based MUSIC algorithms.
Index Terms—MUSIC, Cramér–Rao bound, coarray, sparse linear arrays, statistical efficiency.
I. INTRODUCTION
ESTIMATING source directions of arrival (DOAs) using sensor arrays plays an important role in the field of array signal processing. For uniform linear arrays (ULAs), it is widely known that traditional subspace-based methods, such as MUSIC, can resolve up to N − 1 uncorrelated sources with N sensors [1]–[3]. However, for sparse linear arrays, such as minimum redundancy arrays (MRAs) [4], it is possible to construct an augmented covariance matrix by exploiting the coarray structure. We can then apply MUSIC to the augmented covariance matrix, and up to O(N²) sources can be resolved with only N sensors [4].
Recently, the development of co-prime arrays [5]–[8] and nested arrays [9]–[11] has generated renewed interest in sparse linear arrays, whose performance remains to be fully investigated. The performance of the MUSIC estimator
and its variants (e.g., root-MUSIC [12], [13]) was thoroughly
analyzed by Stoica et al. in [2], [14] and [15]. The same authors
also derived the asymptotic MSE expression of the MUSIC
Manuscript received March 31, 2016; revised September 16, 2016; accepted
October 19, 2016. Date of publication November 8, 2016; date of current ver-
sion December 5, 2016. The associate editor coordinating the review of this
manuscript and approving it for publication was Prof. Wee Peng Tay. This work
was supported by the AFOSR under Grant FA9550-11-1-0210 and the ONR
under Grant N000141310050.
The authors are with the Preston M. Green Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130.
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TSP.2016.2626255
estimator, and rigorously studied its statistical efficiency.
In [16], Li et al. derived a unified MSE expression for common
subspace-based estimators (e.g., MUSIC and ESPRIT [17])
via first-order perturbation analysis. However, these results
are based on the physical array model and make use of the
statistical properties of the original sample covariance matrix,
which cannot be applied when the coarray model is utilized.
In [18], Gorokhov et al. first derived a general MSE expression
for the MUSIC algorithm applied to matrix-valued transforms
of the sample covariance matrix. While this expression is
applicable to coarray-based MUSIC, its explicit form is
rather complicated, making it difficult to conduct analytical
performance studies. Therefore, a simpler and more revealing
MSE expression is desired.
In this paper, we first review the coarray signal model
commonly used for sparse linear arrays. We investigate two
common approaches to constructing the augmented sample
covariance matrix, namely, the direct augmentation approach
(DAA) [19], [20] and the spatial smoothing approach [9]. We
show that MUSIC yields the same asymptotic estimation er-
ror for both approaches. We are then able to derive an explicit
MSE expression that is applicable to both approaches. Our MSE
expression has a simpler form, which may facilitate the perfor-
mance analysis of coarray-based MUSIC algorithms. We ob-
serve that the MSE of coarray-based MUSIC depends on both
the physical array geometry and the coarray geometry. We show
that, when there are more sources than the number of sensors,
the MSE does not drop to zero even if the SNR approaches
infinity, which agrees with the experimental results in previous
studies. Next, we derive the CRB of DOAs that is applicable to
sparse linear arrays. We notice that when there are more sources
than the number of sensors, the CRB is strictly nonzero as the
SNR goes to infinity, which is consistent with our observation
on the MSE expression. It should be mentioned that during the
review process of this paper, Liu et al. and Koochakzadeh et al.
also independently derived the CRB for sparse linear arrays in
[21], [22]. In this paper, we provide a more rigorous proof of the CRB's limiting properties in high-SNR regions. We also include various statistical efficiency analyses by utilizing our results on the MSE, which are not present in [21], [22]. Finally, we verify our analytical MSE expression and analyze the statistical efficiency of different sparse linear arrays via numerical simulations. We observe good agreement between the empirical MSE and the analytical MSE, as well as complex efficiency patterns of coarray-based MUSIC.
Throughout this paper, we make use of the following notations. Given a matrix A, we use A^T, A^H, and A^* to denote the transpose, the Hermitian transpose, and the conjugate of A, respectively. We use A_{ij} to denote the (i, j)-th element of A, and a_i to denote the i-th column of A. If A is full column rank, we define its pseudo inverse as A† = (A^H A)^{−1} A^H. We also define the projection matrix onto the null space of A^H as Π⊥_A = I − A A†. Let A = [a_1 a_2 ··· a_N] ∈ C^{M×N}, and we define
1053-587X © 2016 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
the vectorization operation as vec(A) = [a_1^T a_2^T ··· a_N^T]^T, and mat_{M,N}(·) as its inverse operation. We use ⊗ and ⊙ to denote the Kronecker product and the Khatri–Rao product (i.e., the column-wise Kronecker product), respectively. We denote by R(A) and I(A) the real and the imaginary parts of A. If A is a square matrix, we denote its trace by tr(A). In addition, we use T_M to denote an M × M permutation matrix whose anti-diagonal elements are one, and whose remaining elements are zero. We say a complex vector z ∈ C^M is conjugate symmetric if T_M z = z^*. We also use e_i to denote the i-th natural basis vector in Euclidean space. For instance, A e_i yields the i-th column of A, and e_i^T A yields the i-th row of A.
II. COARRAY SIGNAL MODEL
We consider a linear sparse array consisting of M sensors whose locations are given by D = {d_1, d_2, ..., d_M}. Each sensor location d_i is chosen to be an integer multiple of the smallest distance between any two sensors, denoted by d_0. Therefore we can also represent the sensor locations using the integer set D̄ = {d̄_1, d̄_2, ..., d̄_M}, where d̄_i = d_i/d_0 for i = 1, 2, ..., M.
Without loss of generality, we assume that the first sensor is placed at the origin. We consider K narrow-band sources θ_1, θ_2, ..., θ_K impinging on the array from the far field. Denoting λ as the wavelength of the carrier frequency, we can express the steering vector for the k-th source as

a(θ_k) = [1  e^{j d̄_2 φ_k}  ···  e^{j d̄_M φ_k}]^T, (1)

where φ_k = (2π d_0 / λ) sin θ_k. Hence the received signal vectors are given by

y(t) = A(θ) x(t) + n(t),  t = 1, 2, ..., N, (2)
where A = [a(θ_1) a(θ_2) ··· a(θ_K)] denotes the array steering matrix, x(t) denotes the source signal vector, n(t) denotes additive noise, and N denotes the number of snapshots. In the following discussion, we make the following assumptions:
A1 The source signals follow the unconditional model [15] and are uncorrelated white circularly-symmetric Gaussian.
A2 The source DOAs are distinct (i.e., θ_k ≠ θ_l for k ≠ l).
A3 The additive noise is white circularly-symmetric Gaussian and uncorrelated from the sources.
A4 There is no temporal correlation between snapshots.
Under A1–A4, the covariance matrix is given by

R = A P A^H + σ_n² I, (3)

where P = diag(p_1, p_2, ..., p_K) denotes the source covariance matrix, and σ_n² denotes the variance of the additive noise. By vectorizing R, we can obtain the following coarray model:

r = A_d p + σ_n² i, (4)

where A_d = A^* ⊙ A, p = [p_1, p_2, ..., p_K]^T, and i = vec(I).
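As a concrete check of the vectorized model, the following sketch (our illustration, not part of the paper; the array geometry and source parameters are arbitrary choices) builds R from (3) for a small sparse array and verifies (4), i.e., vec(R) = A_d p + σ_n² vec(I) with A_d = A^* ⊙ A:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product."""
    return np.einsum('mk,nk->mnk', A, B).reshape(A.shape[0] * B.shape[0], -1)

dbar = np.array([0, 1, 4, 6])            # normalized sensor positions (example)
theta = np.deg2rad([-20.0, 5.0, 40.0])   # example DOAs
phi = np.pi * np.sin(theta)              # phi_k = (2*pi*d0/lambda) sin(theta_k), d0 = lambda/2
p = np.array([1.0, 2.0, 0.5])            # source powers
sigma2 = 0.3                             # noise power

A = np.exp(1j * np.outer(dbar, phi))     # steering matrix, cf. (1)
R = A @ np.diag(p) @ A.conj().T + sigma2 * np.eye(len(dbar))   # (3)

r = R.reshape(-1, order='F')             # column-major vectorization of R
A_d = khatri_rao(A.conj(), A)
i_vec = np.eye(len(dbar)).reshape(-1, order='F')
assert np.allclose(r, A_d @ p + sigma2 * i_vec)   # (4) holds
```

Note that the column-major (Fortran-order) vectorization matters here: with vec stacking columns, vec(a a^H) = a^* ⊗ a, which is exactly why the Khatri–Rao product pairs A^* with A.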
It has been shown in [9] that A_d corresponds to the steering matrix of the coarray, whose sensor locations are given by D_co = {d_m − d_n | 1 ≤ m, n ≤ M}. By carefully selecting rows of (A^* ⊙ A), we can construct a new steering matrix representing a virtual ULA with enhanced degrees of freedom. Because D_co is symmetric, this virtual ULA is centered at the origin. The sensor locations of the virtual ULA are given by
Fig. 1. A coprime array with sensors located at [0, 2, 3, 4, 6, 9]λ/2 and its
coarray: (a) physical array; (b) coarray; (c) central ULA part of the coarray.
[−M_v + 1, −M_v + 2, ..., 0, ..., M_v − 1] d_0, where M_v is defined such that 2M_v − 1 is the size of the virtual ULA. Fig. 1 provides an illustrative example of the relationship between the physical array and the corresponding virtual ULA. The observation vector of the virtual ULA is given by

z = F r = A_c p + σ_n² F i, (5)
where F is the coarray selection matrix, whose detailed definition is provided in Appendix A, and A_c represents the steering matrix of the virtual array. The virtual ULA can be divided into M_v overlapping uniform subarrays of size M_v. The output of the i-th subarray is given by z_i = Γ_i z for i = 1, 2, ..., M_v, where Γ_i = [0_{M_v × (i−1)}  I_{M_v × M_v}  0_{M_v × (M_v − i)}] represents the selection matrix for the i-th subarray.
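The coarray geometry itself is easy to compute. The sketch below (our illustration) forms the difference coarray of the Fig. 1 co-prime array and recovers the half-size M_v of its central ULA:

```python
# Sensor positions of the Fig. 1 co-prime array, in units of lambda/2
dbar = [0, 2, 3, 4, 6, 9]

# Difference coarray D_co = {d_m - d_n | 1 <= m, n <= M}
D_co = sorted({m - n for m in dbar for n in dbar})

# M_v is defined so that lags -(M_v - 1)..(M_v - 1) are all present (central ULA)
Mv = 1
while Mv in D_co and -Mv in D_co:
    Mv += 1

print(D_co)   # lag 8 is missing, so the central ULA stops at lag 7
print(Mv)     # -> 8, i.e., a virtual ULA of size 2*Mv - 1 = 15
```

This matches the example in the text: for this 6-sensor array, M_v = 8 > M = 6.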
Given the outputs of the M_v subarrays, the augmented covariance matrix of the virtual array R_v is commonly constructed via one of the following methods [9], [20]:

R_v1 = [z_{M_v}  z_{M_v−1}  ···  z_1], (6a)

R_v2 = (1/M_v) Σ_{i=1}^{M_v} z_i z_i^H, (6b)
where method (6a) corresponds to DAA, while method (6b)
corresponds to the spatial smoothing approach.
Following the results in [9] and [20], R_v1 and R_v2 are related via the following equality:

R_v2 = (1/M_v) R_v1² = (1/M_v) (A_v P A_v^H + σ_n² I)², (7)
where A_v corresponds to the steering matrix of a ULA whose sensors are located at [0, 1, ..., M_v − 1] d_0. If we design a sparse linear array such that M_v > M, we immediately gain enhanced degrees of freedom by applying MUSIC to either R_v1 or R_v2 instead of R in (3). For example, in Fig. 1, we have a co-prime array with M_v = 8 > 6 = M. Because MUSIC is applicable only when the number of sources is less than the number of sensors, we assume that K < M_v throughout the paper. This assumption, combined with A2, ensures that A_v is full column rank.
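The identity (7) can be verified numerically from exact statistics. In the sketch below (our illustration; the parameters are arbitrary), the exact (infinite-snapshot) coarray vector z is formed directly from the model, the subarray outputs are stacked into R_v1 and R_v2 per (6a)–(6b), and both sides of (7) are compared:

```python
import numpy as np

Mv = 5                                    # half-size of the virtual ULA (example)
theta = np.deg2rad([-30.0, 10.0, 45.0])
phi = np.pi * np.sin(theta)
p = np.array([1.0, 0.5, 2.0])
sigma2 = 0.2

# Exact coarray observation over lags -(Mv-1)..(Mv-1), cf. (5)
lags = np.arange(-(Mv - 1), Mv)
z = np.exp(1j * np.outer(lags, phi)) @ p
z[lags == 0] += sigma2                    # the sigma_n^2 * F i term only affects lag 0

# Subarray outputs z_i = Gamma_i z; column i-1 of Z holds z_i
Z = np.stack([z[i:i + Mv] for i in range(Mv)], axis=1)

R_v1 = Z[:, ::-1]                         # (6a): [z_Mv  z_{Mv-1}  ...  z_1]
R_v2 = Z @ Z.conj().T / Mv                # (6b): (1/Mv) sum_i z_i z_i^H

# Check against (7): R_v1 = A_v P A_v^H + sigma2 I and R_v2 = R_v1^2 / Mv
A_v = np.exp(1j * np.outer(np.arange(Mv), phi))
assert np.allclose(R_v1, A_v @ np.diag(p) @ A_v.conj().T + sigma2 * np.eye(Mv))
assert np.allclose(R_v2, R_v1 @ R_v1 / Mv)
```

The conjugate symmetry of z makes R_v1 Hermitian, which is what allows R_v1² to play the role of R_v1 R_v1^H in (7).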
It should be noted that the elements in (6a) are obtained via linear operations on the elements in R, while those in (6b) are obtained via quadratic operations. Therefore the statistical properties of R_v1 and R_v2 differ from those of R. Consequently, traditional performance analysis for the MUSIC algorithm based on R cannot be applied to coarray-based MUSIC. For brevity, we use the terms direct augmentation based MUSIC (DA-MUSIC) and spatial smoothing based MUSIC (SS-MUSIC) to denote the MUSIC algorithm
applied to R_v1 and R_v2, respectively. In the following section,
we will derive a unified analytical MSE expression for both
DA-MUSIC and SS-MUSIC.
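To make SS-MUSIC concrete before the analysis, the following end-to-end sketch (our illustration; the redundancy-averaging form of the coarray selection, the source placement, and all simulation parameters are our own assumptions — the paper's exact F is defined in its Appendix A) estimates K = 7 DOAs with the M = 6 sensor co-prime array of Fig. 1:

```python
import numpy as np

rng = np.random.default_rng(0)
dbar = np.array([0, 2, 3, 4, 6, 9])       # Fig. 1 co-prime array (units of lambda/2)
M, Mv = len(dbar), 8                      # central coarray ULA covers lags -7..7
theta_true = np.arcsin(np.arange(-0.9, 0.91, 0.3))   # 7 sources, 7 > M = 6
K, N, sigma2 = len(theta_true), 5000, 0.1

# Unconditional model (2): y(t) = A x(t) + n(t)
A = np.exp(1j * np.outer(dbar, np.pi * np.sin(theta_true)))
X = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
Nse = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) * np.sqrt(sigma2 / 2)
Y = A @ X + Nse
R_hat = Y @ Y.conj().T / N

# Coarray selection: average the entries of R_hat sharing the same lag
z = np.array([np.mean([R_hat[m, n] for m in range(M) for n in range(M)
                       if dbar[m] - dbar[n] == lag])
              for lag in range(-(Mv - 1), Mv)])

# Spatial smoothing (6b), then MUSIC on the Mv x Mv smoothed matrix
Z = np.stack([z[i:i + Mv] for i in range(Mv)], axis=1)
R_v2 = Z @ Z.conj().T / Mv
_, V = np.linalg.eigh(R_v2)
En = V[:, :Mv - K]                        # noise subspace (smallest eigenvalues)

grid = np.deg2rad(np.arange(-90, 90, 0.02))
Av = np.exp(1j * np.outer(np.arange(Mv), np.pi * np.sin(grid)))
spec = 1.0 / np.sum(np.abs(En.conj().T @ Av) ** 2, axis=0)

# The K largest local maxima of the pseudospectrum are the DOA estimates
peaks = [i for i in range(1, len(grid) - 1)
         if spec[i - 1] < spec[i] > spec[i + 1]]
peaks = sorted(peaks, key=lambda i: spec[i])[-K:]
theta_hat = np.sort(grid[peaks])
print(np.rad2deg(theta_hat))              # 7 estimates, near the true DOAs
```

Even with more sources than physical sensors, the smoothed coarray matrix exposes the full signal subspace, so the spectral search recovers all seven directions.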
III. MSE OF COARRAY-BASED MUSIC
In practice, the true covariance matrix R is unobtainable, and its maximum-likelihood estimate R̂ = (1/N) Σ_{t=1}^N y(t) y(t)^H is used. Therefore z, R_v1, and R_v2 are also replaced with their estimated versions ẑ, R̂_v1, and R̂_v2. Due to the estimation error ΔR = R̂ − R, the estimated noise eigenvectors will deviate from the true ones, leading to DOA estimation errors.
In general, the eigenvectors of a perturbed matrix are not well-determined [23]. For instance, in the very low SNR scenario, ΔR may cause a subspace swap, and the estimated noise eigenvectors will deviate drastically from the true ones [24]. Nevertheless, as shown in [16], [18] and [25], given enough samples and sufficient SNR, it is possible to obtain closed-form expressions for the DOA estimation errors via first-order analysis. Following similar ideas, we derive the closed-form error expression for DA-MUSIC and SS-MUSIC, as stated in Theorem 1.
Theorem 1: Let θ̂_k^(1) and θ̂_k^(2) denote the estimated values of the k-th DOA by DA-MUSIC and SS-MUSIC, respectively. Let Δr = vec(R̂ − R). Assume the signal subspace and the noise subspace are well-separated, so that Δr does not cause a subspace swap. Then

θ̂_k^(1) − θ_k ≐ θ̂_k^(2) − θ_k ≐ −(γ_k p_k)^{−1} R(ξ_k^T Δr), (8)

where ≐ denotes asymptotic equality, and

ξ_k = F^T Γ^T (β_k ⊗ α_k), (9a)
α_k^T = e_k^T A_v†, (9b)
β_k = Π⊥_{A_v} ȧ_v(θ_k), (9c)
γ_k = ȧ_v^H(θ_k) Π⊥_{A_v} ȧ_v(θ_k), (9d)
Γ = [Γ_{M_v}^T  Γ_{M_v−1}^T  ···  Γ_1^T]^T, (9e)
ȧ_v(θ_k) = ∂a_v(θ_k)/∂θ_k. (9f)
Proof: See Appendix B.
Theorem 1 can be reinforced by Proposition 1: β_k ≠ 0 ensures that γ_k^{−1} exists and (8) is well-defined, while ξ_k ≠ 0 ensures that (8) depends on Δr and cannot be trivially zero.
Proposition 1: β_k ≠ 0 and ξ_k ≠ 0 for k = 1, 2, ..., K.
Proof: We first show that β_k ≠ 0 by contradiction. Assume β_k = 0. Then Π⊥_{A_v} D a_v(θ_k) = 0, where D = diag(0, 1, ..., M_v − 1). This implies that D a_v(θ_k) lies in the column space of A_v. Let h = e^{−jφ_k} D a_v(θ_k). We immediately obtain that [A_v  h] is not full column rank. We now add M_v − K − 1 distinct DOAs in (−π/2, π/2) that are different from θ_1, ..., θ_K, and construct an extended steering matrix Ā_v of the M_v − 1 distinct DOAs, θ_1, ..., θ_{M_v−1}. Let B = [Ā_v  h]. It follows that B is also not full column rank. Because B is a square matrix, it is also not full row rank. Therefore there exists some non-zero c ∈ C^{M_v} such that c^T B = 0. Let t_l = e^{jφ_l} for l = 1, 2, ..., M_v − 1, where φ_l = (2π d_0 / λ) sin θ_l. We can express B as

    [ 1             1             ···  1                   0
      t_1           t_2           ···  t_{M_v−1}           1
      t_1²          t_2²          ···  t²_{M_v−1}          2 t_k
      ⋮             ⋮                  ⋮                   ⋮
      t_1^{M_v−1}   t_2^{M_v−1}   ···  t_{M_v−1}^{M_v−1}   (M_v − 1) t_k^{M_v−2} ].

We define the complex polynomial f(x) = Σ_{l=1}^{M_v} c_l x^{l−1}. It can be observed that c^T B = 0 is equivalent to f(t_l) = 0 for l = 1, 2, ..., M_v − 1, and f′(t_k) = 0. By construction, the θ_l are distinct, so the t_l are M_v − 1 different roots of f(x). Because c ≠ 0, f(x) is not the constant-zero polynomial, and has at most M_v − 1 roots. Therefore each root t_l has a multiplicity of at most one. However, because t_k is one of the t_l, f(t_k) = 0 and f′(t_k) = 0 together imply that t_k has a multiplicity of at least two, which contradicts the previous conclusion and completes the proof of β_k ≠ 0.
We now show that ξ_k ≠ 0. By the definition of F in Appendix A, each row of F has at least one non-zero element, and each column of F has at most one non-zero element. Hence F^T x = 0 for some x ∈ C^{2M_v−1} if and only if x = 0. It thus suffices to show that Γ^T (β_k ⊗ α_k) ≠ 0. By the definition of Γ, we can rewrite Γ^T (β_k ⊗ α_k) as B̃_k α_k, where

    B̃_k = [ β_{k M_v}       0               ···  0
             β_{k(M_v−1)}    β_{k M_v}       ···  0
             ⋮               ⋮                    ⋮
             β_{k1}          β_{k2}          ···  β_{k M_v}
             0               β_{k1}          ···  β_{k(M_v−1)}
             ⋮               ⋮                    ⋮
             0               0               ···  β_{k1} ],

and β_{kl} is the l-th element of β_k. Because β_k ≠ 0 for all k and K < M_v, B̃_k is full column rank. By the definition of the pseudo inverse, we know that α_k ≠ 0. Therefore B̃_k α_k ≠ 0, which completes the proof of ξ_k ≠ 0.
One important implication of Theorem 1 is that DA-MUSIC and SS-MUSIC share the same first-order error expression, despite the fact that R_v1 is constructed from the second-order statistics, while R_v2 is constructed from the fourth-order statistics. Theorem 1 enables a unified analysis of the MSEs of DA-MUSIC and SS-MUSIC, which we present in Theorem 2.
Theorem 2: Under the same assumptions as in Theorem 1, the asymptotic second-order statistics of the DOA estimation errors by DA-MUSIC and SS-MUSIC share the same form:

E[(θ̂_{k1} − θ_{k1})(θ̂_{k2} − θ_{k2})] ≐ R[ξ_{k1}^H (R ⊗ R^T) ξ_{k2}] / (N p_{k1} p_{k2} γ_{k1} γ_{k2}). (10)
Proof: See Appendix C.
By Theorem 2, it is straightforward to write the unified asymptotic MSE expression as

ε(θ_k) = ξ_k^H (R ⊗ R^T) ξ_k / (N p_k² γ_k²). (11)
Therefore the MSE¹ depends on both the physical array geometry and the coarray geometry. The physical array geometry is captured by A, which appears in R ⊗ R^T. The coarray geometry is captured by A_v, which appears in ξ_k and γ_k. Therefore, even if two arrays share the same coarray geometry, they may not share the same MSE, because their physical array geometries may be different.
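The quantities (9a)–(9f) and the MSE (11) can be assembled mechanically for a given geometry. The sketch below (our illustration; the 3-sensor MRA, the source settings, and the redundancy-averaging construction of F are our own assumptions) computes ε(θ_k) and confirms numerically that it is unchanged when all source powers and the noise power are scaled together, i.e., that only the SNRs matter:

```python
import numpy as np

def coarray_music_mse(dbar, theta, p, sigma2, N):
    """Asymptotic MSE (11) of DA/SS-MUSIC for each DOA."""
    M = len(dbar)
    lags = {a - b for a in dbar for b in dbar}
    Mv = 1
    while Mv in lags and -Mv in lags:
        Mv += 1
    phi = np.pi * np.sin(theta)
    A = np.exp(1j * np.outer(dbar, phi))
    R = A @ np.diag(p) @ A.conj().T + sigma2 * np.eye(M)

    # F: one row per lag in -(Mv-1)..(Mv-1), averaging redundant entries of vec(R)
    F = np.zeros((2 * Mv - 1, M * M))
    for row, lag in enumerate(range(-(Mv - 1), Mv)):
        idx = [n * M + m for m in range(M) for n in range(M) if dbar[m] - dbar[n] == lag]
        F[row, idx] = 1.0 / len(idx)

    # Gamma = [Gamma_Mv^T ... Gamma_1^T]^T, cf. (9e), so that Gamma z = vec(R_v1)
    G = np.zeros((Mv * Mv, 2 * Mv - 1))
    for c, i in enumerate(range(Mv, 0, -1)):
        G[c * Mv:(c + 1) * Mv, i - 1:i - 1 + Mv] = np.eye(Mv)

    m = np.arange(Mv)
    Av = np.exp(1j * np.outer(m, phi))
    dAv = 1j * np.pi * np.cos(theta) * m[:, None] * Av   # derivative of a_v, cf. (9f)
    Av_pinv = np.linalg.pinv(Av)
    Proj = np.eye(Mv) - Av @ Av_pinv                     # orthogonal projector, cf. (9c)

    eps = np.empty(len(theta))
    for k in range(len(theta)):
        alpha = Av_pinv[k, :]                            # (9b): alpha_k^T = e_k^T Av^dagger
        beta = Proj @ dAv[:, k]                          # (9c)
        gamma = (dAv[:, k].conj() @ Proj @ dAv[:, k]).real   # (9d)
        xi = F.T @ G.T @ np.kron(beta, alpha)            # (9a)
        eps[k] = (xi.conj() @ np.kron(R, R.T) @ xi).real / (N * p[k] ** 2 * gamma ** 2)
    return eps

dbar = [0, 1, 3]                            # 3-sensor MRA; coarray lags -3..3, Mv = 4
theta = np.deg2rad([-40.0, 0.0, 35.0])      # K = M = 3 sources
eps1 = coarray_music_mse(dbar, theta, np.ones(3), sigma2=0.5, N=1000)
eps2 = coarray_music_mse(dbar, theta, 10 * np.ones(3), sigma2=5.0, N=1000)
assert np.all(eps1 > 0)
assert np.allclose(eps1, eps2)              # equal SNRs give equal MSEs
```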
It can be easily observed from (11) that ε(θ_k) → 0 as N → ∞. However, because p_k appears in both the denominator and the numerator of (11), it is not obvious how the MSE varies with respect to the source power p_k and the noise power σ_n². Let p̄_k = p_k/σ_n² denote the signal-to-noise ratio of the k-th source. Let P̄ = diag(p̄_1, p̄_2, ..., p̄_K), and R̄ = A P̄ A^H + I. We can then rewrite ε(θ_k) as

ε(θ_k) = ξ_k^H (R̄ ⊗ R̄^T) ξ_k / (N p̄_k² γ_k²). (12)
Hence the MSE depends on the SNRs instead of the absolute values of p_k or σ_n². To provide an intuitive understanding of how the SNR affects the MSE, we consider the case when all sources have the same power. In this case, we show in Corollary 1 that the MSE asymptotically decreases as the SNR increases.
Corollary 1: Assume all sources have the same power p. Let p̄ = p/σ_n² denote the common SNR. Given sufficiently large N, the MSE ε(θ_k) decreases monotonically as p̄ increases, and

lim_{p̄→∞} ε(θ_k) = (1/(N γ_k²)) ‖ξ_k^H (A ⊗ A^*)‖₂². (13)
Proof: The limiting expression can be derived straightforwardly from (12). For monotonicity, without loss of generality, let p = 1, so p̄ = 1/σ_n². Because f(x) = 1/x is monotonically decreasing on (0, ∞), it suffices to show that ε(θ_k) increases monotonically as σ_n² increases. Assume 0 < s_1 < s_2, and we have

ε(θ_k)|_{σ_n² = s_2} − ε(θ_k)|_{σ_n² = s_1} = (1/(N γ_k²)) ξ_k^H Q ξ_k,

where Q = (s_2 − s_1)[(A A^H) ⊗ I + I ⊗ (A A^H)^T + (s_2 + s_1) I]. Because A A^H is positive semidefinite, both (A A^H) ⊗ I and I ⊗ (A A^H)^T are positive semidefinite. Combined with our assumption that 0 < s_1 < s_2, we conclude that Q is positive definite. By Proposition 1 we know that ξ_k ≠ 0. Therefore ξ_k^H Q ξ_k is strictly greater than zero, which implies that the MSE monotonically increases as σ_n² increases.
Because both DA-MUSIC and SS-MUSIC also work in cases when the number of sources exceeds the number of sensors, we are particularly interested in their limiting performance in such cases. As shown in Corollary 2, when K ≥ M, the corresponding MSE is strictly greater than zero, even as the SNR approaches infinity. This corollary explains the "saturation" behavior of SS-MUSIC in the high SNR region observed in [8] and [9]. Another interesting implication of Corollary 2 is that when 2 ≤ K < M, the limiting MSE is not necessarily zero. Recall that in [2], it was shown that the MSE of the traditional MUSIC algorithm converges to zero as the SNR approaches infinity. We thus know that both DA-MUSIC and SS-MUSIC will be outperformed by traditional MUSIC in high SNR regions when 2 ≤ K < M. Therefore, we suggest using DA-MUSIC or SS-MUSIC only when K ≥ M.

¹ For brevity, when we use the acronym "MSE" in the following discussion, we refer to the asymptotic MSE, ε(θ_k), unless explicitly stated.
Corollary 2: Following the same assumptions as in Corollary 1,
1) When K = 1, lim_{p̄→∞} ε(θ_k) = 0;
2) When 2 ≤ K < M, lim_{p̄→∞} ε(θ_k) ≥ 0;
3) When K ≥ M, lim_{p̄→∞} ε(θ_k) > 0.
Proof: The right-hand side of (13) can be expanded into

(1/(N γ_k²)) Σ_{m=1}^K Σ_{n=1}^K |ξ_k^H [a(θ_m) ⊗ a^*(θ_n)]|².

By the definition of F, F [a(θ_m) ⊗ a^*(θ_m)] becomes

[e^{−j(M_v−1)φ_m}, e^{−j(M_v−2)φ_m}, ..., e^{j(M_v−1)φ_m}]^T.

Hence Γ F [a(θ_m) ⊗ a^*(θ_m)] = a_v(θ_m) ⊗ a_v^*(θ_m). Observe that

ξ_k^H [a(θ_m) ⊗ a^*(θ_m)] = (β_k ⊗ α_k)^H (a_v(θ_m) ⊗ a_v^*(θ_m))
  = (β_k^H a_v(θ_m)) (α_k^H a_v^*(θ_m))
  = (ȧ_v^H(θ_k) Π⊥_{A_v} a_v(θ_m)) (α_k^H a_v^*(θ_m))
  = 0,

because a_v(θ_m) lies in the column space of A_v. We can therefore reduce the right-hand side of (13) to

(1/(N γ_k²)) Σ_{1 ≤ m, n ≤ K, m ≠ n} |ξ_k^H [a(θ_m) ⊗ a^*(θ_n)]|².

Therefore when K = 1, the limiting expression is exactly zero. When 2 ≤ K < M, the limiting expression is not necessarily zero, because when m ≠ n, ξ_k^H [a(θ_m) ⊗ a^*(θ_n)] is not necessarily zero. When K ≥ M, A is full row rank. Hence A ⊗ A^* is also full row rank. By Proposition 1 we know that ξ_k ≠ 0, which implies that ε(θ_k) is strictly greater than zero in the limit.
IV. CRAMÉR–RAO BOUND
The CRB for the unconditional model (2) has been well studied in [15], but only when the number of sources is less than the number of sensors and no prior knowledge of P is given. For the coarray model, the number of sources can exceed the number of sensors, and P is assumed to be diagonal. Therefore, the CRB derived in [15] cannot be directly applied. Based on [26, Appendix 15C], we provide an alternative CRB based on the signal model (2), under assumptions A1–A4.

For the signal model (2), the parameter vector is defined by

η = [θ_1, ..., θ_K, p_1, ..., p_K, σ_n²]^T, (14)
and the (m, n)-th element of the Fisher information matrix (FIM) is given by [15], [26]

FIM_{mn} = N tr[ (∂R/∂η_m) R^{−1} (∂R/∂η_n) R^{−1} ]. (15)
Observe that tr(AB) = vec(A^T)^T vec(B), and that vec(AXB) = (B^T ⊗ A) vec(X). We can rewrite (15) as

FIM_{mn} = N (∂r/∂η_m)^H (R^T ⊗ R)^{−1} (∂r/∂η_n).
Denote the derivatives of r with respect to η as

∂r/∂η = [ ∂r/∂θ_1 ··· ∂r/∂θ_K  ∂r/∂p_1 ··· ∂r/∂p_K  ∂r/∂σ_n² ]. (16)
The FIM can then be compactly expressed by

FIM = N (∂r/∂η)^H (R^T ⊗ R)^{−1} (∂r/∂η). (17)
According to (4), we can compute the derivatives in (16) and obtain

∂r/∂η = [ Ȧ_d P  A_d  i ], (18)

where Ȧ_d = Ȧ^* ⊙ A + A^* ⊙ Ȧ, A_d and i follow the same definitions as in (4), and

Ȧ = [ ∂a(θ_1)/∂θ_1  ∂a(θ_2)/∂θ_2  ···  ∂a(θ_K)/∂θ_K ].
Note that (18) can be partitioned into two parts, specifically, the part corresponding to the DOAs and the part corresponding to the source and noise powers. We can partition the FIM accordingly. Because R is positive definite, (R^T ⊗ R)^{−1} is also positive definite, and its square root (R^T ⊗ R)^{−1/2} also exists. Let

M_θ = (R^T ⊗ R)^{−1/2} Ȧ_d P,
M_s = (R^T ⊗ R)^{−1/2} [ A_d  i ].
We can write the partitioned FIM as

FIM = N [ M_θ^H M_θ   M_θ^H M_s
          M_s^H M_θ   M_s^H M_s ].
The CRB matrix for the DOAs is then obtained by block-wise inversion:

CRB_θ = (1/N) (M_θ^H Π⊥_{M_s} M_θ)^{−1}, (19)

where Π⊥_{M_s} = I − M_s (M_s^H M_s)^{−1} M_s^H. It is worth noting that, unlike the classical CRB for the unconditional model introduced in [15, Remark 1], expression (19) is applicable even if the number of sources exceeds the number of sensors.
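The bound (19) is straightforward to evaluate numerically. The sketch below (our illustration; the array and source settings are arbitrary) builds ∂r/∂η per (18), forms M_θ and M_s, and checks that (19) matches the DOA block of the full FIM inverse from (17):

```python
import numpy as np

def khatri_rao(A, B):
    return np.einsum('mk,nk->mnk', A, B).reshape(A.shape[0] * B.shape[0], -1)

dbar = np.array([0, 1, 3])                # example sparse array (units of lambda/2)
theta = np.deg2rad([-30.0, 20.0])         # K = 2 sources
p = np.array([1.0, 2.0])
sigma2, N = 0.5, 500
M, K = len(dbar), len(theta)

phi = np.pi * np.sin(theta)
A = np.exp(1j * np.outer(dbar, phi))
dA = 1j * np.pi * np.cos(theta) * dbar[:, None] * A   # columns: da(theta_k)/d theta_k
R = A @ np.diag(p) @ A.conj().T + sigma2 * np.eye(M)

# (18): dr/d eta = [Ad_dot P, A_d, i]
A_d = khatri_rao(A.conj(), A)
Ad_dot = khatri_rao(dA.conj(), A) + khatri_rao(A.conj(), dA)
i_vec = np.eye(M).reshape(-1, 1, order='F')
Dr = np.hstack([Ad_dot @ np.diag(p), A_d, i_vec])

# (17): FIM = N (dr/d eta)^H (R^T kron R)^{-1} (dr/d eta)
W = np.kron(R.T, R)
FIM = N * Dr.conj().T @ np.linalg.solve(W, Dr)

# (19): CRB_theta = (1/N) (M_theta^H Pi_perp M_theta)^{-1}
w, V = np.linalg.eigh(W)                  # W is Hermitian positive definite
W_inv_sqrt = V @ np.diag(w ** -0.5) @ V.conj().T
M_theta = W_inv_sqrt @ Ad_dot @ np.diag(p)
M_s = W_inv_sqrt @ np.hstack([A_d, i_vec])
Pi = np.eye(M * M) - M_s @ np.linalg.pinv(M_s)
CRB = np.linalg.inv(M_theta.conj().T @ Pi @ M_theta).real / N

assert np.allclose(CRB, np.linalg.inv(FIM)[:K, :K].real)
assert np.all(np.diag(CRB) > 0)
```

The Schur-complement form (19) and the direct inverse of the partitioned FIM agree, as block-wise inversion guarantees.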
Remark 1: Similar to (11), CRB_θ depends on the SNRs instead of the absolute values of p_k or σ_n². Let p̄_k = p_k/σ_n², and P̄ = diag(p̄_1, p̄_2, ..., p̄_K). We have

M_θ = (R̄^T ⊗ R̄)^{−1/2} Ȧ_d P̄, (20)
M_s = σ_n^{−2} (R̄^T ⊗ R̄)^{−1/2} [ A_d  i ]. (21)

Substituting (20) and (21) into (19), the term σ_n^{−2} gets canceled, and the resulting CRB_θ depends on the ratios p̄_k instead of the absolute values of p_k or σ_n².
Remark 2: The invertibility of the FIM depends on the coarray structure. In the noisy case, (R^T ⊗ R)^{−1} is always full rank, so the FIM is invertible if and only if ∂r/∂η is full column rank. By (18) we know that the rank of ∂r/∂η is closely related to A_d, the coarray steering matrix. Therefore CRB_θ is not valid for an arbitrary number of sources, because A_d may not be full column rank when too many sources are present.
Proposition 2: Assume all sources have the same power p, and ∂r/∂η is full column rank. Let p̄ = p/σ_n².
1) If K < M, and lim_{p̄→∞} CRB_θ exists, it is zero under mild conditions.
2) If K ≥ M, and lim_{p̄→∞} CRB_θ exists, it is positive definite.
Proof: See Appendix D.
While infinite SNR is unachievable from a practical standpoint, Proposition 2 has some useful theoretical implications. When K < M, the limiting MSE (13) in Corollary 1 is not necessarily zero. However, Proposition 2 reveals that the CRB may approach zero as the SNR goes to infinity. This observation implies that both DA-MUSIC and SS-MUSIC may have poor statistical efficiency in high SNR regions. When K ≥ M, Proposition 2 implies that the CRB of each DOA will converge to a positive constant, which is consistent with Corollary 2.
V. NUMERICAL ANALYSIS
In this section, we numerically analyze DA-MUSIC and SS-MUSIC by utilizing (11) and (19). We first verify the MSE expression (10) introduced in Theorem 2 through Monte Carlo simulations. We then examine the application of (8) in predicting the resolvability of two closely placed sources, and analyze the asymptotic efficiency of both estimators from various aspects. Finally, we investigate how the number of sensors affects the asymptotic MSE.
In all experiments, we define the signal-to-noise ratio (SNR) as

SNR = 10 log₁₀ ( min_{k=1,2,...,K} p_k / σ_n² ),

where K is the number of sources.
Throughout Sections V-A, V-B, and V-C, we consider the following three types of linear arrays with the following sensor configurations:
• Co-prime array [5]: [0, 3, 5, 6, 9, 10, 12, 15, 20, 25] λ/2
• Nested array [9]: [1, 2, 3, 4, 5, 10, 15, 20, 25, 30] λ/2
• MRA [27]: [0, 1, 4, 10, 16, 22, 28, 30, 33, 35] λ/2
All three arrays share the same number of sensors, but different apertures.
A. Numerical Verification

We first verify (11) via numerical simulations. We consider 11 sources with equal power, evenly placed between −67.50° and 56.25°, which is more than the number of sensors. We compare the difference between the analytical MSE and the empirical MSE under different combinations of SNRs and snapshot numbers. The analytical MSE is defined by

MSE_an = (1/K) Σ_{k=1}^K ε(θ_k),
Fig. 2. |MSE_an − MSE_em| / MSE_em for different types of arrays under different numbers of snapshots and different SNRs.
and the empirical MSE is defined by

MSE_em = (1/(KL)) Σ_{l=1}^L Σ_{k=1}^K (θ̂_k^(l) − θ_k^(l))²,

where θ_k^(l) is the k-th DOA in the l-th trial, and θ̂_k^(l) is the corresponding estimate.
Fig. 2 illustrates the relative errors between MSE_an and MSE_em obtained from 10,000 trials under various scenarios. It can be observed that MSE_em and MSE_an agree very well given enough snapshots and a sufficiently high SNR. It should be noted that at 0 dB SNR, (8) is quite accurate when 250 snapshots are available. In addition, there is no significant difference between the relative errors obtained from DA-MUSIC and those from SS-MUSIC. These observations are consistent with our assumptions, and verify Theorem 1 and Theorem 2.

We observe that in some of the low SNR regions, |MSE_an − MSE_em| / MSE_em appears to be smaller even if the number of snapshots is limited. In such regions, MSE_em actually "saturates", and MSE_an happens to be close to the saturated value. Therefore, this observation does not imply that (11) is valid in such regions.
B. Prediction of Resolvability

One direct application of Theorem 2 is predicting the resolvability of two closely located sources. We consider two sources with equal power, located at θ_1 = 30° − Δθ/2 and θ_2 = 30° + Δθ/2, where Δθ varies from 0.3° to 3.0°. We say the two sources are correctly resolved if the MUSIC algorithm is able to identify two sources, and the two estimated DOAs satisfy |θ̂_i − θ_i| < Δθ/2 for i ∈ {1, 2}. The probability of resolution is computed from 500 trials. For all trials, the number of
Fig. 3. Probability of resolution vs. source separation, obtained from 500
trials. The number of snapshots is fixed at 500, and the SNR is set to 0 dB.
snapshots is fixed at 500, the SNR is set to 0 dB, and SS-MUSIC is used.

For illustration purposes, we analytically predict the resolvability of the two sources via the following simple criterion:

√(ε(θ_1) + ε(θ_2)) ≷ Δθ, (22)

where the sources are deemed unresolvable if the left-hand side exceeds Δθ, and resolvable otherwise.
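Given analytical MSEs from (11), the rule (22) is a one-line check; a sketch follows (with hypothetical MSE values, since the actual ε(θ_k) depend on the array at hand):

```python
import math

def resolvable(eps1, eps2, delta_theta):
    """Criterion (22): resolvable iff sqrt(eps(theta_1) + eps(theta_2)) <= delta_theta."""
    return math.sqrt(eps1 + eps2) <= delta_theta

# Hypothetical per-source MSEs of 0.04 deg^2, tested at two separations
print(resolvable(0.04, 0.04, 0.5))   # True: sqrt(0.08) ~= 0.283 < 0.5
print(resolvable(0.04, 0.04, 0.2))   # False: 0.283 > 0.2
```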
Readers are directed to [28] for a more comprehensive criterion. Fig. 3 illustrates the resolution performance of the three arrays under different Δθ, as well as the thresholds predicted by (22). The MRA shows the best resolution performance of the three arrays, which can be explained by the fact that the MRA has the largest aperture. The co-prime array, with the smallest aperture, shows the worst resolution performance. Despite the differences in resolution performance, the probability of resolution of each array drops to nearly zero at the predicted threshold. This confirms that (11) provides a convenient way of predicting the resolvability of two close sources.
C. Asymptotic Efficiency Study

In this section, we utilize (11) and (19) to study the asymptotic statistical efficiency of DA-MUSIC and SS-MUSIC under different array geometries and parameter settings. We define their average efficiency as

κ = tr(CRB_θ) / Σ_{k=1}^K ε(θ_k). (23)

For efficient estimators we expect κ = 1, while for inefficient estimators we expect 0 ≤ κ < 1.
We first compare the κ values under different SNRs for the three different arrays. We consider three cases: K = 1, K = 6, and K = 12. The K sources are located at {−60° + [120(k − 1)/(K − 1)]° | k = 1, 2, ..., K}, and all sources have the same power. As shown in Fig. 4(a), when only one source is present, κ increases as the SNR increases for all three arrays. However, none of the arrays leads to efficient DOA estimation. Interestingly, despite being the least efficient geometry in the low SNR region, the co-prime array achieves higher efficiency than the nested array in the high SNR region. When K = 6, we can observe in Fig. 4(b) that κ decreases to zero as the SNR increases. This rather surprising behavior suggests that both DA-MUSIC and SS-MUSIC are not statistically efficient methods for DOA estimation when the number of sources is greater than one and less than the number of sensors. It is consistent
Fig. 4. Average efficiency vs. SNR: (a) K = 1, (b) K = 6, (c) K = 12.
with the implication of Proposition 2 when K < M. When K = 12, the number of sources exceeds the number of sensors. We can observe in Fig. 4(c) that κ also decreases as the SNR increases. However, unlike the case when K = 6, κ converges to a positive value instead of zero.

The above observations imply that DA-MUSIC and SS-MUSIC achieve higher degrees of freedom at the cost of decreased statistical efficiency. When statistical efficiency is a concern and the number of sources is less than the number of sensors, one might consider applying MUSIC directly to the original covariance matrix R defined in (3) [29].
Next, we analyze how κ is affected by angular separation. Two sources located at −Δθ and Δθ are considered. We compute the κ values under different choices of Δθ for all three arrays. For reference, we also include the empirical results obtained from 1000 trials. To satisfy the asymptotic assumption, the number of snapshots is fixed at 1000 for each trial. As shown in Fig. 5(a)–(c), the overall statistical efficiency decreases as the SNR increases from 0 dB to 10 dB for all three arrays, which is consistent with our previous observation in Fig. 4(b). We can also observe that the relationship between κ and the normalized angular separation Δθ/π is rather complex, as opposed to the
traditional MUSIC algorithm (cf. [2]).

Fig. 5. Average efficiency vs. angular separation: (a) MRA, (b) nested array, (c) co-prime array. The solid lines and dashed lines are analytical values obtained from (23). The circles and crosses are empirical results averaged from 1000 trials.

The statistical efficiency of DA-MUSIC and SS-MUSIC is highly dependent on array geometry and angular separation.
D. MSE vs. Number of Sensors
In this section, we investigate how the number of sensors affects the asymptotic MSE of the DOA estimates. We consider three types of
sparse linear arrays: co-prime arrays, nested arrays, and MRAs.
In this experiment, the co-prime arrays are generated by co-prime pairs (q, q + 1) for q = 2, 3, . . . , 12. The nested arrays are generated by parameter pairs (q + 1, q) for q = 2, 3, . . . , 12.
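One way to make these geometries concrete is a small sketch. The code below generates a nested array under one common convention (an assumption on our part; the paper's exact definition follows [9]) and checks that its difference coarray is hole-free; the function names are ours:

```python
import numpy as np

def nested_array(n1, n2):
    """Nested array: inner ULA of n1 sensors at unit spacing, outer ULA of
    n2 sensors at spacing (n1 + 1). Positions are in units of d0."""
    inner = np.arange(1, n1 + 1)                 # 1, 2, ..., n1
    outer = (n1 + 1) * np.arange(1, n2 + 1)      # (n1+1), 2(n1+1), ..., n2(n1+1)
    return np.concatenate([inner, outer])

def difference_coarray(pos):
    """All pairwise differences d_m - d_n."""
    return np.unique(pos[:, None] - pos[None, :])

q = 3
pos = nested_array(q + 1, q)        # parameter pair (q+1, q), as in the text
diffs = difference_coarray(pos)
max_lag = q * (q + 2) - 1           # n2*(n1 + 1) - 1 with n1 = q+1, n2 = q
# The nested array has 2q + 1 sensors and a hole-free difference coarray:
print(len(pos), np.array_equal(diffs, np.arange(-max_lag, max_lag + 1)))
```

Under this convention, a 7-sensor nested array already yields 29 consecutive coarray lags, which illustrates the enhanced degrees of freedom discussed throughout the paper.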
The MRAs are constructed according to [27]. We consider two cases: the one-source case where K = 1, and the underdetermined case where K = M. For the former case, we placed the only source at 0°. For the latter case, we placed the sources uniformly between −60° and 60°. We set SNR = 0 dB and N = 1000. The empirical MSEs were obtained from 500 trials. SS-MUSIC was used in all the trials.
In Fig. 6(a), we observe that when K = 1, the MSE decreases at a rate of approximately O(M^−4.5) for all three arrays.
Fig. 6. MSE vs. M: (a) K = 1, (b) K = M. The solid lines are analytical results; the markers denote empirical results obtained from 500 trials. The dashed lines are trend lines used for comparison.
In Fig. 6(b), we observe that when K = M, the MSE only decreases at a rate of approximately O(M^−3.5). In both cases, the MRAs and the nested arrays achieve lower MSE than the co-prime arrays. Another interesting observation is that for all three arrays, the MSE decreases faster than O(M^−3). Recall that for an M-sensor ULA, the asymptotic MSE of traditional MUSIC decreases at a rate of O(M^−3) as M → ∞ [2]. This observation suggests that, given the same number of sensors, these sparse linear arrays can achieve higher estimation accuracy than ULAs when the number of sensors is large.
VI. CONCLUSION
In this paper, we reviewed the coarray signal model and derived the asymptotic MSE expression for two coarray-based MUSIC algorithms, namely DA-MUSIC and SS-MUSIC. We theoretically proved that the two MUSIC algorithms share the same asymptotic MSE expression. Our analytical MSE expression is more revealing and can be applied to various types of sparse linear arrays, such as co-prime arrays, nested arrays, and MRAs. In addition, our MSE expression is also valid when the number of sources exceeds the number of sensors. We also derived the CRB for sparse linear arrays, and analyzed the statistical efficiency of typical sparse linear arrays. Our results will benefit future research on performance analysis and optimal design of sparse linear arrays. Throughout our derivations, we assume the array is perfectly calibrated. In the future, it will be interesting to extend the results in this paper to cases where model errors are present. Additionally, we will further investigate how the number of sensors affects the MSE and the CRB for sparse linear arrays, as well as the possibility of deriving closed-form expressions in the case of a large number of sensors.
APPENDIX A
DEFINITION AND PROPERTIES OF THE COARRAY SELECTION MATRIX
According to (3),

R_mn = Σ_{k=1}^{K} p_k exp[j(d̄_m − d̄_n)φ_k] + δ_mn σ_n²,
where δ_mn denotes Kronecker's delta. This equation implies that the (m, n)-th element of R is associated with the difference (d̄_m − d̄_n). To capture this property, we introduce the difference matrix Δ such that Δ_mn = d̄_m − d̄_n. We also define the weight function ω(l): Z → Z as (see [9] for details)

ω(l) = |{(m, n) | Δ_mn = l}|,
where |A| denotes the cardinality of the set A. Intuitively, ω(l) counts the number of all possible pairs (d̄_m, d̄_n) such that d̄_m − d̄_n = l. Clearly, ω(−l) = ω(l).
Definition 1: The coarray selection matrix F is a (2M_v − 1) × M² matrix satisfying

F_{m, p+(q−1)M} = { 1/ω(m − M_v),   Δ_pq = m − M_v,
                  { 0,              otherwise,          (24)

for m = 1, 2, . . . , 2M_v − 1, p = 1, 2, . . . , M, q = 1, 2, . . . , M.
To better illustrate the construction of F, we consider a toy array whose sensor locations are given by {0, d_0, 4d_0}. The corresponding difference matrix of this array is

Δ = [  0  −1  −4
       1   0  −3
       4   3   0 ].
The ULA part of the difference coarray consists of three sensors located at −d_0, 0, and d_0. The weight function satisfies ω(−1) = ω(1) = 1 and ω(0) = 3, so M_v = 2. We can write the coarray selection matrix as

F = [ 0    0  0  1  0    0  0  0  0
      1/3  0  0  0  1/3  0  0  0  1/3
      0    1  0  0  0    0  0  0  0 ].
If we pre-multiply the vectorized sample covariance matrix r by F, we obtain the observation vector of the virtual ULA (defined in (5)):

z = [z_1, z_2, z_3]^T = [R_12, (R_11 + R_22 + R_33)/3, R_21]^T.
It can be seen that z_m is obtained by averaging all the elements of R that correspond to the difference m − M_v, for m = 1, 2, . . . , 2M_v − 1.
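The toy example above can be reproduced numerically. The following sketch builds Δ, ω, and F from Definition 1 for the array {0, d_0, 4d_0} and applies F to vec(R); the variable names are ours:

```python
import numpy as np

d = np.array([0, 1, 4])                  # sensor positions in units of d0
M = len(d)
Delta = d[:, None] - d[None, :]          # difference matrix, Delta[m, n] = d_m - d_n
Mv = 2                                   # ULA part of the coarray is {-1, 0, 1}

# Coarray selection matrix F of size (2*Mv - 1) x M^2, per (24).
F = np.zeros((2 * Mv - 1, M * M))
for m in range(1, 2 * Mv):               # m = 1, ..., 2*Mv - 1
    lag = m - Mv
    omega = np.count_nonzero(Delta == lag)        # weight function omega(lag)
    for p in range(1, M + 1):
        for q in range(1, M + 1):
            if Delta[p - 1, q - 1] == lag:
                F[m - 1, (p - 1) + (q - 1) * M] = 1.0 / omega

# Applying F to vec(R) averages the entries of R sharing a common difference.
R = np.arange(1, 10, dtype=float).reshape(M, M)
z = F @ R.flatten(order="F")             # [R_12, (R_11 + R_22 + R_33)/3, R_21]
print(z)
```

For this particular R, the output is [2, 5, 4]: R_12 = 2, the diagonal average is 5, and R_21 = 4, matching the displayed z.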
Based on Definition 1, we now derive several useful properties of F.
Lemma 1: F_{m, p+(q−1)M} = F_{2M_v−m, q+(p−1)M} for m = 1, 2, . . . , 2M_v − 1, p = 1, 2, . . . , M, q = 1, 2, . . . , M.
Proof: If F_{m, p+(q−1)M} = 0, then Δ_pq ≠ m − M_v. Because Δ_qp = −Δ_pq, we have Δ_qp ≠ −(m − M_v). Hence (2M_v − m) − M_v = −(m − M_v) ≠ Δ_qp, which implies that F_{2M_v−m, q+(p−1)M} is also zero.
If F_{m, p+(q−1)M} ≠ 0, then Δ_pq = m − M_v and F_{m, p+(q−1)M} = 1/ω(m − M_v). Note that (2M_v − m) − M_v = −(m − M_v) = −Δ_pq = Δ_qp. We thus have F_{2M_v−m, q+(p−1)M} = 1/ω(−(m − M_v)) = 1/ω(m − M_v) = F_{m, p+(q−1)M}.
Lemma 2: Let R ∈ C^{M×M} be Hermitian. Then z = F vec(R) is conjugate symmetric.
Proof: By Lemma 1 and R = R^H,

z_m = Σ_{p=1}^{M} Σ_{q=1}^{M} F_{m, p+(q−1)M} R_pq
    = Σ_{q=1}^{M} Σ_{p=1}^{M} F_{2M_v−m, q+(p−1)M} R*_qp
    = z*_{2M_v−m}.
Lemma 3: Let z ∈ C^{2M_v−1} be conjugate symmetric. Then mat_{M,M}(F^T z) is Hermitian.
Proof: Let H = mat_{M,M}(F^T z). Then

H_pq = Σ_{m=1}^{2M_v−1} z_m F_{m, p+(q−1)M}.  (25)
We know that z is conjugate symmetric, so z_m = z*_{2M_v−m}. Therefore, by Lemma 1,

H_pq = Σ_{m=1}^{2M_v−1} z*_{2M_v−m} F_{2M_v−m, q+(p−1)M}
     = Σ_{m'=1}^{2M_v−1} z*_{m'} F_{m', q+(p−1)M}
     = H*_qp.  (26)
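Both lemmas can be checked numerically on the toy array above; the construction below is our own sketch:

```python
import numpy as np

# Rebuild F for the toy array {0, d0, 4*d0}, then check Lemma 2
# (z = F vec(R) is conjugate symmetric for Hermitian R) and Lemma 3
# (mat(F^T z) is Hermitian for conjugate-symmetric z).
d = np.array([0, 1, 4]); M = 3; Mv = 2
Delta = d[:, None] - d[None, :]
F = np.zeros((2 * Mv - 1, M * M))
for m in range(2 * Mv - 1):
    lag = m + 1 - Mv
    mask = (Delta == lag).flatten(order="F")   # matches column index p + (q-1)M
    F[m, mask] = 1.0 / np.count_nonzero(Delta == lag)

rng = np.random.default_rng(0)
X = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = X @ X.conj().T                       # a Hermitian "covariance" matrix

z = F @ R.flatten(order="F")
H = (F.T @ z).reshape(M, M, order="F")
print(np.allclose(z, np.conj(z[::-1])))  # Lemma 2: True
print(np.allclose(H, H.conj().T))        # Lemma 3: True
```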
APPENDIX B
PROOF OF THEOREM 1
We first derive the first-order error expression of DA-MUSIC. Denote the eigendecomposition of R_v1 by

R_v1 = E_s Λ_s1 E_s^H + E_n Λ_n1 E_n^H,
where E_s and E_n consist of the eigenvectors of the signal subspace and the noise subspace, respectively, and Λ_s1, Λ_n1 are the corresponding eigenvalues. Specifically, we have Λ_n1 = σ_n² I.
Let R̃_v1 = R_v1 + ΔR_v1, Ẽ_n1 = E_n + ΔE_n1, and Λ̃_n1 = Λ_n1 + ΔΛ_n1 be the perturbed versions of R_v1, E_n, and Λ_n1.
The following equality holds:

(R_v1 + ΔR_v1)(E_n + ΔE_n1) = (E_n + ΔE_n1)(Λ_n1 + ΔΛ_n1).
If the perturbation is small, we can omit high-order terms and obtain [16], [23], [25]

A_v^H ΔE_n1 ≐ −P^{−1} A_v^† ΔR_v1 E_n.  (27)
Because P is diagonal, for a specific θ_k we have

a_v^H(θ_k) ΔE_n1 ≐ −p_k^{−1} e_k^T A_v^† ΔR_v1 E_n,  (28)
where e_k is the k-th column of the identity matrix I_{K×K}. Based on the conclusion in Appendix B of [2], under sufficiently small perturbations, the error expression of DA-MUSIC for the k-th DOA is given by
θ̂_k^{(1)} − θ_k ≐ − R[a_v^H(θ_k) ΔE_n1 E_n^H ȧ_v(θ_k)] / [ȧ_v^H(θ_k) E_n E_n^H ȧ_v(θ_k)],  (29)

where ȧ_v(θ_k) = ∂a_v(θ_k)/∂θ_k.
Substituting (28) into (29) gives

θ̂_k^{(1)} − θ_k ≐ R[e_k^T A_v^† ΔR_v1 E_n E_n^H ȧ_v(θ_k)] / [p_k ȧ_v^H(θ_k) E_n E_n^H ȧ_v(θ_k)].  (30)
Because vec(AXB) = (B^T ⊗ A) vec(X) and E_n E_n^H = Π⊥_{A_v}, we can use the notations introduced in (9b)–(9d) to express (30) as

θ̂_k^{(1)} − θ_k ≐ (γ_k p_k)^{−1} R[(β_k ⊗ α_k)^T Δr_v1],  (31)

where Δr_v1 = vec(ΔR_v1).
Note that R̃_v1 is constructed from R̃. It follows that ΔR_v1 actually depends on ΔR, which is the perturbation part of the sample covariance matrix. By the definition of R_v1,

Δr_v1 = vec([Γ_{M_v} Δz  · · ·  Γ_2 Δz  Γ_1 Δz]) = Γ F Δr,

where Γ = [Γ_{M_v}^T  Γ_{M_v−1}^T  · · ·  Γ_1^T]^T and Δr = vec(ΔR).
Let ξ_k = F^T Γ^T (β_k ⊗ α_k). We can now express (31) in terms of Δr as

θ̂_k^{(1)} − θ_k ≐ (γ_k p_k)^{−1} R(ξ_k^T Δr),  (32)

which completes the first part of the proof.
We next consider the first-order error expression of SS-MUSIC. From (7) we know that R_v2 shares the same eigenvectors as R_v1. Hence the eigendecomposition of R_v2 can be expressed as

R_v2 = E_s Λ_s2 E_s^H + E_n Λ_n2 E_n^H,
where Λ_s2 and Λ_n2 are the eigenvalues of the signal subspace and the noise subspace. Specifically, we have Λ_n2 = (σ_n^4 / M_v) I. Note that R_v2 = (A_v P A_v^H + σ_n² I)² / M_v. Following an approach similar to the one we used to obtain (27), we get
A_v^H ΔE_n2 ≐ −M_v P^{−1} (P A_v^H A_v + 2σ_n² I)^{−1} A_v^† ΔR_v2 E_n,

where ΔE_n2 is the perturbation of the noise eigenvectors produced by ΔR_v2. After omitting high-order terms, ΔR_v2 is given by
ΔR_v2 ≐ (1/M_v) Σ_{k=1}^{M_v} (z_k Δz_k^H + Δz_k z_k^H).
According to [9], each subarray observation vector z_k can be expressed as

z_k = A_v Ψ^{M_v−k} p + σ_n² i_{M_v−k+1},  (33)

for k = 1, 2, . . . , M_v, where i_l is a vector of length M_v whose elements are all zero except for the l-th element, which is one, and Ψ = diag(e^{−jφ_1}, e^{−jφ_2}, . . . , e^{−jφ_K}).
Observe that

Σ_{k=1}^{M_v} σ_n² i_{M_v−k+1} Δz_k^H = σ_n² ΔR_v1^H,
and

Σ_{k=1}^{M_v} A_v Ψ^{M_v−k} p Δz_k^H
  = A_v P [ e^{−j(M_v−1)φ_1}  e^{−j(M_v−2)φ_1}  · · ·  1    [ Δz_1^H
            e^{−j(M_v−1)φ_2}  e^{−j(M_v−2)φ_2}  · · ·  1      Δz_2^H
                   ⋮                  ⋮          ⋱     ⋮        ⋮
            e^{−j(M_v−1)φ_K}  e^{−j(M_v−2)φ_K}  · · ·  1 ]    Δz_{M_v}^H ]
  = A_v P (T_{M_v} A_v)^H T_{M_v} ΔR_v1^H
  = A_v P A_v^H ΔR_v1^H,
where T_{M_v} is the M_v × M_v permutation matrix whose anti-diagonal elements are one and whose remaining elements are zero. Because ΔR = ΔR^H, by Lemma 2 we know that Δz is conjugate symmetric. According to the definition of R_v1, it is straightforward to show that ΔR_v1 = ΔR_v1^H also holds. Hence
ΔR_v2 ≐ (1/M_v) [(A_v P A_v^H + 2σ_n² I) ΔR_v1 + ΔR_v1 A_v P A_v^H].
Substituting ΔR_v2 into the expression of A_v^H ΔE_n2, and utilizing the property that A_v^H E_n = 0, we can express A_v^H ΔE_n2 as
−P^{−1} (P A_v^H A_v + 2σ_n² I)^{−1} A_v^† (A_v P A_v^H + 2σ_n² I) ΔR_v1 E_n.
Observe that

A_v^† (A_v P A_v^H + 2σ_n² I) = (A_v^H A_v)^{−1} A_v^H (A_v P A_v^H + 2σ_n² I)
  = P A_v^H + 2σ_n² (A_v^H A_v)^{−1} A_v^H
  = (P A_v^H A_v + 2σ_n² I) A_v^†.
Hence the term (P A_v^H A_v + 2σ_n² I) is canceled, and we obtain

A_v^H ΔE_n2 ≐ −P^{−1} A_v^† ΔR_v1 E_n,  (34)

which coincides with the first-order error expression of A_v^H ΔE_n1.
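The cancellation step relies only on A_v having full column rank. The following sketch verifies the identity A_v^†(A_v P A_v^H + 2σ_n² I) = (P A_v^H A_v + 2σ_n² I) A_v^† numerically, using an arbitrary Vandermonde matrix as A_v (an assumption for illustration, not the paper's specific array):

```python
import numpy as np

rng = np.random.default_rng(1)
Mv, K, s2 = 6, 3, 0.5                                # s2 plays the role of sigma_n^2
phi = rng.uniform(-np.pi, np.pi, K)
Av = np.exp(1j * np.outer(np.arange(Mv), phi))       # tall Vandermonde steering matrix
P = np.diag(rng.uniform(0.5, 2.0, K))                # diagonal source powers
Avp = np.linalg.pinv(Av)                             # A_v^dagger

lhs = Avp @ (Av @ P @ Av.conj().T + 2 * s2 * np.eye(Mv))
rhs = (P @ Av.conj().T @ Av + 2 * s2 * np.eye(K)) @ Avp
print(np.allclose(lhs, rhs))   # True
```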
APPENDIX C
PROOF OF THEOREM 2
Before proceeding to the main proof, we introduce the following definition.
Definition 2: Let A = [a_1 a_2 . . . a_N] ∈ R^{N×N} and B = [b_1 b_2 . . . b_N] ∈ R^{N×N}. The structured matrix C_{AB} ∈ R^{N²×N²} is defined as

C_{AB} = [ a_1 b_1^T  a_2 b_1^T  · · ·  a_N b_1^T
           a_1 b_2^T  a_2 b_2^T  · · ·  a_N b_2^T
               ⋮          ⋮       ⋱        ⋮
           a_1 b_N^T  a_2 b_N^T  · · ·  a_N b_N^T ].
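Definition 2 translates directly into code; in this sketch (helper name ours), the (p, q)-th N × N block of C_{AB} is a_q b_p^T:

```python
import numpy as np

def c_matrix(A, B):
    """Structured matrix C_AB from Definition 2: block (p, q) is a_q b_p^T."""
    N = A.shape[0]
    C = np.zeros((N * N, N * N))
    for p in range(N):          # block row
        for q in range(N):      # block column
            C[p * N:(p + 1) * N, q * N:(q + 1) * N] = np.outer(A[:, q], B[:, p])
    return C

rng = np.random.default_rng(2)
N = 3
A = rng.standard_normal((N, N))
B = rng.standard_normal((N, N))
C = c_matrix(A, B)
# Spot-check one block (0-based indices): block (2, 1) should be a_1 b_2^T.
print(np.allclose(C[2 * N:3 * N, N:2 * N], np.outer(A[:, 1], B[:, 2])))   # True
```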
We now start deriving the explicit MSE expression. According to (32),

E[(θ̂_{k1} − θ_{k1})(θ̂_{k2} − θ_{k2})]
  ≐ (γ_{k1} p_{k1})^{−1} (γ_{k2} p_{k2})^{−1} E[R(ξ_{k1}^T Δr) R(ξ_{k2}^T Δr)]
  = (γ_{k1} p_{k1})^{−1} (γ_{k2} p_{k2})^{−1} {
        R(ξ_{k1})^T E[R(Δr) R(Δr)^T] R(ξ_{k2})
      + I(ξ_{k1})^T E[I(Δr) I(Δr)^T] I(ξ_{k2})
      − R(ξ_{k1})^T E[R(Δr) I(Δr)^T] I(ξ_{k2})
      − R(ξ_{k2})^T E[R(Δr) I(Δr)^T] I(ξ_{k1}) },  (35)

where we used the property that R(AB) = R(A) R(B) − I(A) I(B) for two complex matrices A and B with proper dimensions.
To obtain the closed-form expression for (35), we need to compute the four expectations. It should be noted that in the case of finite snapshots, Δr does not follow a circularly-symmetric complex Gaussian distribution. Therefore we cannot directly use the properties of the circularly-symmetric complex Gaussian distribution to evaluate the expectations. For brevity, we demonstrate the computation of only the first expectation in (35). The computation of the remaining three expectations follows the same idea.
Let r_i denote the i-th column of R in (3). Its estimate, r̂_i, is given by (1/N) Σ_{t=1}^{N} y(t) y_i^*(t), where y_i(t) is the i-th element of y(t). Because E[r̂_i] = r_i,
E[R(Δr_i) R(Δr_l)^T] = E[R(r̂_i) R(r̂_l)^T] − R(r_i) R(r_l)^T.  (36)
The second term in (36) is deterministic, and the first term in (36) can be expanded into

(1/N²) E[ R(Σ_{s=1}^{N} y(s) y_i^*(s)) R(Σ_{t=1}^{N} y(t) y_l^*(t))^T ]
= (1/N²) E[ Σ_{s=1}^{N} Σ_{t=1}^{N} R(y(s) y_i^*(s)) R(y(t) y_l^*(t))^T ]
= (1/N²) Σ_{s=1}^{N} Σ_{t=1}^{N} E[ (R(y(s)) R(y_i(s)) + I(y(s)) I(y_i(s)))
    × (R(y(t))^T R(y_l(t)) + I(y(t))^T I(y_l(t))) ]
= (1/N²) Σ_{s=1}^{N} Σ_{t=1}^{N} { E[R(y(s)) R(y_i(s)) R(y(t))^T R(y_l(t))]
  + E[R(y(s)) R(y_i(s)) I(y(t))^T I(y_l(t))]
  + E[I(y(s)) I(y_i(s)) R(y(t))^T R(y_l(t))]
  + E[I(y(s)) I(y_i(s)) I(y(t))^T I(y_l(t))] }.  (37)
We first consider the partial sum over the cases when s ≠ t. By A4, y(s) and y(t) are uncorrelated Gaussians. Recall that for x ∼ CN(0, Σ),

E[R(x) R(x)^T] = (1/2) R(Σ),    E[R(x) I(x)^T] = −(1/2) I(Σ),
E[I(x) R(x)^T] = (1/2) I(Σ),    E[I(x) I(x)^T] = (1/2) R(Σ).
We have

E[R(y(s)) R(y_i(s)) R(y(t))^T R(y_l(t))]
  = E[R(y(s)) R(y_i(s))] E[R(y(t))^T R(y_l(t))]
  = (1/4) R(r_i) R(r_l)^T.
Similarly, we can obtain that when s ≠ t,

E[R(y(s)) R(y_i(s)) I(y(t))^T I(y_l(t))] = (1/4) R(r_i) R(r_l)^T,
E[I(y(s)) I(y_i(s)) R(y(t))^T R(y_l(t))] = (1/4) R(r_i) R(r_l)^T,
E[I(y(s)) I(y_i(s)) I(y(t))^T I(y_l(t))] = (1/4) R(r_i) R(r_l)^T.  (38)

Therefore the partial sum over the cases when s ≠ t is given by (1 − 1/N) R(r_i) R(r_l)^T.
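The circular-Gaussian second-moment identities used above can be sanity-checked by Monte Carlo simulation; the sketch below uses deliberately loose tolerances:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 3, 200000
X = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
Sigma = X @ X.conj().T / M                  # an arbitrary Hermitian PD covariance
L = np.linalg.cholesky(Sigma)
w = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
x = L @ w                                   # samples of CN(0, Sigma)

u, v = x.real, x.imag
print(np.allclose(u @ u.T / N, Sigma.real / 2, atol=0.05))   # E[R(x)R(x)^T] =  R(Sigma)/2
print(np.allclose(v @ v.T / N, Sigma.real / 2, atol=0.05))   # E[I(x)I(x)^T] =  R(Sigma)/2
print(np.allclose(u @ v.T / N, -Sigma.imag / 2, atol=0.05))  # E[R(x)I(x)^T] = -I(Sigma)/2
```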
We now consider the partial sum over the cases when s = t. We first consider the first expectation inside the double summation in (37). Recall that for x ∼ N(0, Σ), E[x_i x_l x_p x_q] = σ_il σ_pq + σ_ip σ_lq + σ_iq σ_lp. We can express the (m, n)-th element of the matrix E[R(y(t)) R(y_i(t)) R(y(t))^T R(y_l(t))] as
E[R(y_m(t)) R(y_i(t)) R(y_n(t)) R(y_l(t))]
  = E[R(y_m(t)) R(y_i(t)) R(y_l(t)) R(y_n(t))]
  = E[R(y_m(t)) R(y_i(t))] E[R(y_l(t)) R(y_n(t))]
    + E[R(y_m(t)) R(y_l(t))] E[R(y_i(t)) R(y_n(t))]
    + E[R(y_m(t)) R(y_n(t))] E[R(y_i(t)) R(y_l(t))]
  = (1/4) [R(R_mi) R(R_ln) + R(R_ml) R(R_in) + R(R_mn) R(R_il)].
Hence

E[R(y(t)) R(y_i(t)) R(y(t))^T R(y_l(t))]
  = (1/4) [R(r_i) R(r_l)^T + R(r_l) R(r_i)^T + R(R) R(R_il)].
Similarly, we obtain that

E[I(y(t)) I(y_i(t)) I(y(t))^T I(y_l(t))]
  = (1/4) [R(r_i) R(r_l)^T + R(r_l) R(r_i)^T + R(R) R(R_il)],

E[R(y(t)) R(y_i(t)) I(y(t))^T I(y_l(t))]
  = E[I(y(t)) I(y_i(t)) R(y(t))^T R(y_l(t))]
  = (1/4) [R(r_i) R(r_l)^T − I(r_l) I(r_i)^T + I(R) I(R_il)].
Therefore the partial sum over the cases when s = t is given by (1/N) R(r_i) R(r_l)^T + (1/2N) [R(R) R(R_il) + I(R) I(R_il) + R(r_l) R(r_i)^T − I(r_l) I(r_i)^T]. Combined with the previous partial sum over the cases when s ≠ t, we obtain

E[R(Δr_i) R(Δr_l)^T]
  = (1/2N) [R(R) R(R_il) + I(R) I(R_il) + R(r_l) R(r_i)^T − I(r_l) I(r_i)^T].  (39)
Therefore

E[R(Δr) R(Δr)^T]
  = (1/2N) [R(R) ⊗ R(R) + I(R) ⊗ I(R) + C_{R(R)R(R)} − C_{I(R)I(R)}],  (40)
which completes the computation of the first expectation in (35). Utilizing the same technique, we obtain

E[I(Δr) I(Δr)^T]
  = (1/2N) [R(R) ⊗ R(R) + I(R) ⊗ I(R) + C_{I(R)I(R)} − C_{R(R)R(R)}],  (41)
and

E[R(Δr) I(Δr)^T]
  = (1/2N) [I(R) ⊗ R(R) − R(R) ⊗ I(R) + C_{R(R)I(R)} + C_{I(R)R(R)}].  (42)
Substituting (40)–(42) into (35) gives a closed-form MSE expression. However, this expression is too complicated for analytical study. In the following steps, we make use of the properties of ξ_k to simplify the MSE expression.
Lemma 4: Let X, Y, A, B ∈ R^{N×N}, and let X^T = (−1)^{n_x} X, A^T = (−1)^{n_a} A, and B^T = (−1)^{n_b} B, where n_x, n_a, n_b ∈ {0, 1}. Then

vec(X)^T (A ⊗ B) vec(Y) = (−1)^{n_x + n_b} vec(X)^T C_{AB} vec(Y),
vec(X)^T (B ⊗ A) vec(Y) = (−1)^{n_x + n_a} vec(X)^T C_{BA} vec(Y).
Proof: By Definition 2,

vec(X)^T C_{AB} vec(Y)
  = Σ_{m=1}^{N} Σ_{n=1}^{N} x_m^T a_n b_m^T y_n
  = Σ_{m=1}^{N} Σ_{n=1}^{N} (Σ_{p=1}^{N} A_pn X_pm)(Σ_{q=1}^{N} B_qm Y_qn)
  = Σ_{m=1}^{N} Σ_{n=1}^{N} Σ_{p=1}^{N} Σ_{q=1}^{N} A_pn X_pm B_qm Y_qn
  = (−1)^{n_x + n_b} Σ_{p=1}^{N} Σ_{n=1}^{N} Σ_{m=1}^{N} Σ_{q=1}^{N} (X_mp B_mq Y_qn) A_pn
  = (−1)^{n_x + n_b} Σ_{p=1}^{N} Σ_{n=1}^{N} x_p^T A_pn B y_n
  = (−1)^{n_x + n_b} vec(X)^T (A ⊗ B) vec(Y).

The proof of the second equality follows the same idea.
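Lemma 4 can be verified numerically. The sketch below (with our own helper for C_{AB}) takes X and A symmetric and B antisymmetric, so the predicted sign for the first identity is (−1)^{n_x + n_b} = −1:

```python
import numpy as np

def c_matrix(A, B):
    """Structured matrix C_AB from Definition 2: block (p, q) is a_q b_p^T."""
    N = A.shape[0]
    C = np.zeros((N * N, N * N))
    for p in range(N):
        for q in range(N):
            C[p * N:(p + 1) * N, q * N:(q + 1) * N] = np.outer(A[:, q], B[:, p])
    return C

rng = np.random.default_rng(4)
N = 3
X0 = rng.standard_normal((N, N)); X = X0 + X0.T        # X^T = +X  (n_x = 0)
A0 = rng.standard_normal((N, N)); A = A0 + A0.T        # A^T = +A  (n_a = 0)
B0 = rng.standard_normal((N, N)); B = B0 - B0.T        # B^T = -B  (n_b = 1)
Y = rng.standard_normal((N, N))                        # Y unconstrained
x, y = X.flatten(order="F"), Y.flatten(order="F")

lhs = x @ np.kron(A, B) @ y
rhs = -(x @ c_matrix(A, B) @ y)                        # (-1)^(0 + 1) = -1
print(np.isclose(lhs, rhs))   # True
```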
Lemma 5: T_{M_v} Π⊥_{A_v} T_{M_v} = (Π⊥_{A_v})^*.
Proof: Since Π⊥_{A_v} = I − A_v (A_v^H A_v)^{−1} A_v^H, it suffices to show that T_{M_v} A_v (A_v^H A_v)^{−1} A_v^H T_{M_v} = (A_v (A_v^H A_v)^{−1} A_v^H)^*. Because A_v is the steering matrix of a ULA with M_v sensors, it is straightforward to show that T_{M_v} A_v = (A_v Φ)^*, where Φ = diag(e^{−j(M_v−1)φ_1}, e^{−j(M_v−1)φ_2}, . . . , e^{−j(M_v−1)φ_K}).
Because T_{M_v} T_{M_v} = I and T_{M_v}^H = T_{M_v},

T_{M_v} A_v (A_v^H A_v)^{−1} A_v^H T_{M_v}
  = T_{M_v} A_v (A_v^H T_{M_v}^H T_{M_v} A_v)^{−1} A_v^H T_{M_v}^H
  = (A_v Φ)^* ((A_v Φ)^T (A_v Φ)^*)^{−1} (A_v Φ)^T
  = (A_v (A_v^H A_v)^{−1} A_v^H)^*.
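Lemma 5 can be checked numerically. The sketch below assumes the steering convention (A_v)_{mk} = e^{j(m−1)φ_k} (an assumption on our part); the conclusion T_{M_v} Π⊥_{A_v} T_{M_v} = (Π⊥_{A_v})^* does not depend on the particular φ_k:

```python
import numpy as np

rng = np.random.default_rng(5)
Mv, K = 6, 3
phi = rng.uniform(-np.pi, np.pi, K)
Av = np.exp(1j * np.outer(np.arange(Mv), phi))   # (A_v)_{mk} = e^{j(m-1)phi_k}
T = np.eye(Mv)[::-1]                             # anti-diagonal permutation matrix
Pi = np.eye(Mv) - Av @ np.linalg.pinv(Av)        # orthogonal-complement projector
print(np.allclose(T @ Pi @ T, Pi.conj()))        # True
```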
Lemma 6: Let Ξ_k = mat_{M,M}(ξ_k). Then Ξ_k^H = Ξ_k for k = 1, 2, . . . , K.
Proof: Note that ξ_k = F^T Γ^T (β_k ⊗ α_k). We first prove that β_k ⊗ α_k is conjugate symmetric, i.e., that (T_{M_v} ⊗ T_{M_v})(β_k ⊗ α_k) = (β_k ⊗ α_k)^*. Similar to the proof of Lemma 5, we utilize the properties that T_{M_v} A_v = (A_v Φ)^* and that T_{M_v} a_v(θ_k) = (a_v(θ_k) e^{−j(M_v−1)φ_k})^* to show that

T_{M_v} (A_v^†)^H e_k a_v^H(θ_k) T_{M_v} = [(A_v^†)^H e_k a_v^H(θ_k)]^*.  (43)
Observe that ȧ_v(θ_k) = j φ̇_k D a_v(θ_k), where φ̇_k = ∂φ_k/∂θ_k and D = diag(0, 1, . . . , M_v − 1). We have
(T_{M_v} ⊗ T_{M_v})(β_k ⊗ α_k) = (β_k ⊗ α_k)^*
  ⇐⇒ T_{M_v} α_k β_k^T T_{M_v} = (α_k β_k^T)^*
  ⇐⇒ T_{M_v} [(A_v^†)^H e_k a_v^H(θ_k) D Π⊥_{A_v}]^* T_{M_v} = −(A_v^†)^H e_k a_v^H(θ_k) D Π⊥_{A_v}.
Since D = T_{M_v} T_{M_v} D T_{M_v} T_{M_v}, combining with Lemma 5 and (43), it suffices to show that

(A_v^†)^H e_k a_v^H(θ_k) T_{M_v} D T_{M_v} Π⊥_{A_v} = −(A_v^†)^H e_k a_v^H(θ_k) D Π⊥_{A_v}.  (44)
Observe that T_{M_v} D T_{M_v} + D = (M_v − 1) I. We have

Π⊥_{A_v} (T_{M_v} D T_{M_v} + D) a_v(θ_k) = 0,

or equivalently

a_v^H(θ_k) T_{M_v} D T_{M_v} Π⊥_{A_v} = −a_v^H(θ_k) D Π⊥_{A_v}.  (45)
Pre-multiplying both sides of (45) by (A_v^†)^H e_k leads to (44), which completes the proof that β_k ⊗ α_k is conjugate symmetric. According to the definition of Γ in (9e), it is straightforward to show that Γ^T (β_k ⊗ α_k) is also conjugate symmetric. Combined with Lemma 3 in Appendix A, we conclude that mat_{M,M}(F^T Γ^T (β_k ⊗ α_k)) is Hermitian, i.e., Ξ_k = Ξ_k^H.
Given Lemmas 4–6, we are able to continue the simplification. We first consider the term R(ξ_{k1})^T E[R(Δr) R(Δr)^T] R(ξ_{k2}) in (35). Let Ξ_{k1} = mat_{M,M}(ξ_{k1}) and Ξ_{k2} = mat_{M,M}(ξ_{k2}). By Lemma 6, we have Ξ_{k1} = Ξ_{k1}^H and Ξ_{k2} = Ξ_{k2}^H. Observe that R(R)^T = R(R) and I(R)^T = −I(R). By Lemma 4 we immediately obtain the following equalities:
R(ξ_{k1})^T (R(R) ⊗ R(R)) R(ξ_{k2}) = R(ξ_{k1})^T C_{R(R)R(R)} R(ξ_{k2}),
R(ξ_{k1})^T (I(R) ⊗ I(R)) R(ξ_{k2}) = −R(ξ_{k1})^T C_{I(R)I(R)} R(ξ_{k2}).
Therefore R(ξ_{k1})^T E[R(Δr) R(Δr)^T] R(ξ_{k2}) can be compactly expressed as

R(ξ_{k1})^T E[R(Δr) R(Δr)^T] R(ξ_{k2})
  = (1/N) R(ξ_{k1})^T [R(R) ⊗ R(R) + I(R) ⊗ I(R)] R(ξ_{k2})
  = (1/N) R(ξ_{k1})^T R(R^T ⊗ R) R(ξ_{k2}),  (46)
where we make use of the properties that R^T = R^* and R(R^* ⊗ R) = R(R) ⊗ R(R) + I(R) ⊗ I(R). Similarly, we can obtain
I(ξ_{k1})^T E[I(Δr) I(Δr)^T] I(ξ_{k2}) = (1/N) I(ξ_{k1})^T R(R^T ⊗ R) I(ξ_{k2}),  (47)

R(ξ_{k1})^T E[R(Δr) I(Δr)^T] I(ξ_{k2}) = −(1/N) R(ξ_{k1})^T I(R^T ⊗ R) I(ξ_{k2}),  (48)

R(ξ_{k2})^T E[R(Δr) I(Δr)^T] I(ξ_{k1}) = −(1/N) R(ξ_{k2})^T I(R^T ⊗ R) I(ξ_{k1}).  (49)
Substituting (46)–(49) into (35) completes the proof.
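The Kronecker-product identity behind (46)–(49), R(R^T ⊗ R) = R(R) ⊗ R(R) + I(R) ⊗ I(R) for Hermitian R, can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(6)
M = 3
X = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = X @ X.conj().T                 # Hermitian, so R^T = conj(R)
lhs = np.kron(R.T, R).real
rhs = np.kron(R.real, R.real) + np.kron(R.imag, R.imag)
print(np.allclose(lhs, rhs))       # True
```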
APPENDIX D
PROOF OF PROPOSITION 2
Without loss of generality, let p = 1 and consider σ_n² → 0. For brevity, we denote R^T ⊗ R by W. We first consider the case when K < M. Denote the eigendecomposition of R^{−1} by E_s Λ_s^{−1} E_s^H + σ_n^{−2} E_n E_n^H. We have

W^{−1} = σ_n^{−4} K_1 + σ_n^{−2} K_2 + K_3,
where

K_1 = E_n^* E_n^T ⊗ E_n E_n^H,
K_2 = E_s^* Λ_s^{−1} E_s^T ⊗ E_n E_n^H + E_n^* E_n^T ⊗ E_s Λ_s^{−1} E_s^H,
K_3 = E_s^* Λ_s^{−1} E_s^T ⊗ E_s Λ_s^{−1} E_s^H.
Recall that A^H E_n = 0. We have

K_1 Ȧ_d = (E_n^* E_n^T ⊗ E_n E_n^H)(Ȧ^* ⊙ A + A^* ⊙ Ȧ)
  = E_n^* E_n^T Ȧ^* ⊙ E_n E_n^H A + E_n^* E_n^T A^* ⊙ E_n E_n^H Ȧ
  = 0,  (50)

where ⊙ denotes the column-wise Kronecker (Khatri–Rao) product.
Therefore

M_θ^H M_θ = Ȧ_d^H W^{−1} Ȧ_d = σ_n^{−2} Ȧ_d^H (K_2 + σ_n² K_3) Ȧ_d.  (51)
Similar to W^{−1}, we denote W^{−1/2} = σ_n^{−2} K_1 + σ_n^{−1} K_4 + K_5, where

K_4 = E_s^* Λ_s^{−1/2} E_s^T ⊗ E_n E_n^H + E_n^* E_n^T ⊗ E_s Λ_s^{−1/2} E_s^H,
K_5 = E_s^* Λ_s^{−1/2} E_s^T ⊗ E_s Λ_s^{−1/2} E_s^H.
Therefore

M_θ^H Π_{M_s} M_θ = Ȧ_d^H W^{−1/2} Π_{M_s} W^{−1/2} Ȧ_d
  = σ_n^{−2} Ȧ_d^H (σ_n K_5 + K_4) Π_{M_s} (σ_n K_5 + K_4) Ȧ_d,

where Π_{M_s} = M_s M_s^†. We can then express the CRB as
CRB_θ = σ_n² (Q_1 + σ_n Q_2 + σ_n² Q_3)^{−1},  (52)

where

Q_1 = Ȧ_d^H (K_2 − K_4 Π_{M_s} K_4) Ȧ_d,
Q_2 = −Ȧ_d^H (K_4 Π_{M_s} K_5 + K_5 Π_{M_s} K_4) Ȧ_d,
Q_3 = Ȧ_d^H (K_3 − K_5 Π_{M_s} K_5) Ȧ_d.
When σ_n² = 0, R reduces to A A^H. Observe that the eigendecomposition of R always exists for σ_n² ≥ 0. We use K̄_1–K̄_5 to denote the limits of the corresponding K_1–K_5 as σ_n² → 0.
Lemma 7: Let K < M. Assume ∂r/∂η is full column rank. Then lim_{σ_n²→0⁺} Π_{M_s} exists.
Proof: Because A^H E_n = 0,
K_2 A_d = (E_s^* Λ_s^{−1} E_s^T ⊗ E_n E_n^H)(A^* ⊙ A) + (E_n^* E_n^T ⊗ E_s Λ_s^{−1} E_s^H)(A^* ⊙ A)
  = E_s^* Λ_s^{−1} E_s^T A^* ⊙ E_n E_n^H A + E_n^* E_n^T A^* ⊙ E_s Λ_s^{−1} E_s^H A
  = 0.
Similarly, we can show that K_4 A_d = 0, i^H K_2 i = i^H K_4 i = 0, and i^H K_1 i = rank(E_n) = M − K. Hence
M_s^H M_s = [ A_d^H K_3 A_d    A_d^H K_3 i
              i^H K_3 A_d      i^H W^{−1} i ].
Because ∂r/∂η is full column rank, M_s^H M_s is full rank and positive definite. Therefore the Schur complements exist, and we can invert M_s^H M_s block-wise. Let V = A_d^H K_3 A_d and v = i^H W^{−1} i. After tedious but straightforward computation, we obtain
Π_{M_s} = K_5 A_d S^{−1} A_d^H K_5
  − s^{−1} K_5 A_d V^{−1} A_d^H K_3 i i^H (K_5 + σ_n^{−2} K_1)
  − v^{−1} (K_5 + σ_n^{−2} K_1) i i^H K_3 A_d S^{−1} A_d^H K_5
  + s^{−1} (K_5 + σ_n^{−2} K_1) i i^H (K_5 + σ_n^{−2} K_1),
where S and s are Schur complements given by

S = V − v^{−1} A_d^H K_3 i i^H K_3 A_d,
s = v − i^H K_3 A_d V^{−1} A_d^H K_3 i.
Observe that

v = i^H W^{−1} i = σ_n^{−4} (M − K) + i^H K_3 i.

We know that both v^{−1} and s^{−1} decrease at the rate of σ_n^4. As σ_n² → 0, we have
S → A_d^H K̄_3 A_d,
s^{−1} (K_5 + σ_n^{−2} K_1) → 0,
v^{−1} (K_5 + σ_n^{−2} K_1) → 0,
s^{−1} (K_5 + σ_n^{−2} K_1) i i^H (K_5 + σ_n^{−2} K_1) → K̄_1 i i^H K̄_1 / (M − K).
We now show that A_d^H K̄_3 A_d is nonsingular. Denote the eigendecomposition of A A^H by Ē_s Λ̄_s Ē_s^H. Recall that for matrices with proper dimensions, (A ⊙ B)^H (C ⊙ D) = (A^H C) ∘ (B^H D), where ∘ denotes the Hadamard product. We can expand A_d^H K̄_3 A_d into

(A^H Ē_s (Λ̄_s)^{−1} Ē_s^H A)^* ∘ (A^H Ē_s (Λ̄_s)^{−1} Ē_s^H A).
Note that A A^H Ē_s (Λ̄_s)^{−1} Ē_s^H A = Ē_s Ē_s^H A = A, and that A is full column rank when K < M. We thus have A^H Ē_s (Λ̄_s)^{−1} Ē_s^H A = I. Therefore A_d^H K̄_3 A_d = I ∘ I = I, which is nonsingular.
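The identity A_d^H K̄_3 A_d = I holds for any full-column-rank A and can be confirmed numerically; the sketch below builds K̄_3 from the eigendecomposition of A A^H and forms A_d = A^* ⊙ A column-wise:

```python
import numpy as np

rng = np.random.default_rng(7)
M, K = 5, 3
A = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))   # full column rank
w, E = np.linalg.eigh(A @ A.conj().T)            # ascending eigenvalues
Es, Ls = E[:, -K:], w[-K:]                       # signal subspace (K nonzero eigenvalues)
G = Es @ np.diag(1.0 / Ls) @ Es.conj().T         # E_s Lambda_s^{-1} E_s^H
K3 = np.kron(G.conj(), G)                        # K3 = G^* kron G
# Khatri-Rao product A^* (.) A: column k is kron(conj(a_k), a_k).
Ad = np.einsum('mk,nk->mnk', A.conj(), A).reshape(M * M, K)
print(np.allclose(Ad.conj().T @ K3 @ Ad, np.eye(K), atol=1e-8))   # True
```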
Combining the above results, we obtain that as σ_n² → 0,

Π_{M_s} → K̄_5 A_d A_d^H K̄_5 + K̄_1 i i^H K̄_1 / (M − K).
For sufficiently small σ_n² > 0, it is easy to show that K_1–K_5 are bounded in the sense of the Frobenius norm (i.e., ‖K_i‖_F ≤ C for some C > 0, for i ∈ {1, 2, 3, 4, 5}). Because ∂r/∂η is full rank, M_s is also full rank for any σ_n² > 0, which implies that Π_{M_s} is well-defined for any σ_n² > 0. Observe that Π_{M_s} is positive semidefinite, and that tr(Π_{M_s}) = rank(M_s). We know that Π_{M_s} is bounded for any σ_n² > 0. Therefore Q_2 and Q_3 are also bounded for sufficiently small σ_n², which implies that σ_n Q_2 + σ_n² Q_3 → 0 as σ_n² → 0.
By Lemma 7, we know that Q_1 → Q̄_1 as σ_n² → 0, where

Q̄_1 = Ȧ_d^H (K̄_2 − K̄_4 Π̄_{M_s} K̄_4) Ȧ_d,
and Π̄_{M_s} = lim_{σ_n²→0⁺} Π_{M_s} as derived in Lemma 7. Assume Q̄_1 is nonsingular². By (52) we immediately obtain that CRB_θ → 0 as σ_n² → 0.
When K ≥ M, R is full rank regardless of the choice of σ_n². Hence (R^T ⊗ R)^{−1} is always full rank. Because ∂r/∂η is full column rank, the FIM is positive definite, which implies that its Schur complements are also positive definite. Therefore CRB_θ is positive definite.
REFERENCES
[1] R. Schmidt, “Multiple emitter location and signal parameter estimation,” IEEE Trans. Antennas Propag., vol. 34, no. 3, pp. 276–280, Mar. 1986.
[2] P. Stoica and A. Nehorai, “MUSIC, maximum likelihood, and Cramér–Rao bound,” IEEE Trans. Acoust., Speech, Signal Process., vol. 37, no. 5, pp. 720–741, May 1989.
[3] B. Ottersten, M. Viberg, P. Stoica, and A. Nehorai, “Exact and large sample maximum likelihood techniques for parameter estimation and detection in array processing,” in Radar Array Processing, T. S. Huang, T. Kohonen, M. R. Schroeder, H. K. V. Lotsch, S. Haykin, J. Litva, and T. J. Shepherd, Eds. Berlin, Germany: Springer, 1993, vol. 25, pp. 99–151.
[4] A. Moffet, “Minimum-redundancy linear arrays,” IEEE Trans. Antennas Propag., vol. 16, no. 2, pp. 172–175, Mar. 1968.
[5] P. Pal and P. P. Vaidyanathan, “Coprime sampling and the MUSIC algorithm,” in Proc. 2011 IEEE Digital Signal Process. Workshop/IEEE Signal Process. Education Workshop, Jan. 2011, pp. 289–294.
[6] Z. Tan and A. Nehorai, “Sparse direction of arrival estimation using co-prime arrays with off-grid targets,” IEEE Signal Process. Lett., vol. 21, no. 1, pp. 26–29, Jan. 2014.
[7] Z. Tan, Y. C. Eldar, and A. Nehorai, “Direction of arrival estimation using co-prime arrays: A super resolution viewpoint,” IEEE Trans. Signal Process., vol. 62, no. 21, pp. 5565–5576, Nov. 2014.
[8] S. Qin, Y. Zhang, and M. Amin, “Generalized coprime array configurations for direction-of-arrival estimation,” IEEE Trans. Signal Process., vol. 63, no. 6, pp. 1377–1390, Mar. 2015.
[9] P. Pal and P. Vaidyanathan, “Nested arrays: A novel approach to array processing with enhanced degrees of freedom,” IEEE Trans. Signal Process., vol. 58, no. 8, pp. 4167–4181, Aug. 2010.
[10] K. Han and A. Nehorai, “Wideband Gaussian source processing using a linear nested array,” IEEE Signal Process. Lett., vol. 20, no. 11, pp. 1110–1113, Nov. 2013.
[11] K. Han and A. Nehorai, “Nested vector-sensor array processing via tensor modeling,” IEEE Trans. Signal Process., vol. 62, no. 10, pp. 2542–2553, May 2014.
[12] B. Friedlander, “The root-MUSIC algorithm for direction finding with interpolated arrays,” Signal Process., vol. 30, no. 1, pp. 15–29, Jan. 1993.
[13] M. Pesavento, A. Gershman, and M. Haardt, “Unitary root-MUSIC with a real-valued eigendecomposition: A theoretical and experimental performance study,” IEEE Trans. Signal Process., vol. 48, no. 5, pp. 1306–1314, May 2000.
[14] P. Stoica and A. Nehorai, “MUSIC, maximum likelihood, and Cramér–Rao bound: Further results and comparisons,” IEEE Trans. Acoust., Speech, Signal Process., vol. 38, no. 12, pp. 2140–2150, Dec. 1990.
[15] P. Stoica and A. Nehorai, “Performance study of conditional and unconditional direction-of-arrival estimation,” IEEE Trans. Acoust., Speech, Signal Process., vol. 38, no. 10, pp. 1783–1795, Oct. 1990.
[16] F. Li, H. Liu, and R. Vaccaro, “Performance analysis for DOA estimation algorithms: Unification, simplification, and observations,” IEEE Trans. Aerosp. Electron. Syst., vol. 29, no. 4, pp. 1170–1184, Oct. 1993.
[17] R. Roy and T. Kailath, “ESPRIT-estimation of signal parameters via rotational invariance techniques,” IEEE Trans. Acoust., Speech, Signal Process., vol. 37, no. 7, pp. 984–995, Jul. 1989.
[18] A. Gorokhov, Y. Abramovich, and J. F. Bohme, “Unified analysis of DOA estimation algorithms for covariance matrix transforms,” Signal Process., vol. 55, no. 1, pp. 107–115, Nov. 1996.
[19] Y. Abramovich, N. Spencer, and A. Gorokhov, “Detection-estimation of more uncorrelated Gaussian sources than sensors in nonuniform linear antenna arrays, Part I: Fully augmentable arrays,” IEEE Trans. Signal Process., vol. 49, no. 5, pp. 959–971, May 2001.
²The condition under which Q̄_1 is singular is difficult to obtain analytically. In numerical simulations, we have verified that it remains nonsingular for various parameter settings.
[20] C.-L. Liu and P. Vaidyanathan, “Remarks on the spatial smoothing step in coarray MUSIC,” IEEE Signal Process. Lett., vol. 22, no. 9, pp. 1438–1442, Sep. 2015.
[21] A. Koochakzadeh and P. Pal, “Cramér–Rao bounds for underdetermined source localization,” IEEE Signal Process. Lett., vol. 23, no. 7, pp. 919–923, Jul. 2016.
[22] C.-L. Liu and P. Vaidyanathan, “Cramér–Rao bounds for coprime and other sparse arrays, which find more sources than sensors,” Digital Signal Process., 2016. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1051200416300264
[23] G. W. Stewart, “Error and perturbation bounds for subspaces associated with certain eigenvalue problems,” SIAM Rev., vol. 15, no. 4, pp. 727–764, Oct. 1973.
[24] M. Hawkes, A. Nehorai, and P. Stoica, “Performance breakdown of subspace-based methods: Prediction and cure,” in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Process., 2001, vol. 6, pp. 4005–4008.
[25] A. Swindlehurst and T. Kailath, “A performance analysis of subspace-based methods in the presence of model errors, Part I: The MUSIC algorithm,” IEEE Trans. Signal Process., vol. 40, no. 7, pp. 1758–1774, Jul. 1992.
[26] S. M. Kay, Fundamentals of Statistical Signal Processing. Englewood Cliffs, NJ, USA: Prentice-Hall, 1993.
[27] M. Ishiguro, “Minimum redundancy linear arrays for a large number of antennas,” Radio Sci., vol. 15, no. 6, pp. 1163–1170, Nov. 1980.
[28] Z. Liu and A. Nehorai, “Statistical angular resolution limit for point sources,” IEEE Trans. Signal Process., vol. 55, no. 11, pp. 5521–5527, Nov. 2007.
[29] P. P. Vaidyanathan and P. Pal, “Direct-MUSIC on sparse arrays,” in Proc. 2012 Int. Conf. Signal Process. Commun., Jul. 2012, pp. 1–5.
Mianzhi Wang (S’15) received the B.Sc. degree in electronic engineering from Fudan University, Shanghai, China, in 2013. He is currently working toward the Ph.D. degree in the Preston M. Green Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO, USA, under the guidance of Dr. A. Nehorai. His research interests include the areas of statistical signal processing for sensor arrays, optimization, and machine learning.
Arye Nehorai (S’80–M’83–SM’90–F’94–LF’17) received the B.Sc. and M.Sc. degrees from the Technion, Haifa, Israel, and the Ph.D. degree from Stanford University, Stanford, CA, USA. He is currently the Eugene and Martha Lohman Professor of Electrical Engineering in the Preston M. Green Department of Electrical and Systems Engineering, Washington University in St. Louis (WUSTL), St. Louis, MO, USA. He served as the Chair of this department from 2006 to 2016. Under his leadership as department Chair, the undergraduate enrollment more than tripled in four years and the master’s enrollment grew seven-fold in the same time period. He is also a Professor in the Department of Biomedical Engineering (by courtesy), a Professor in the Division of Biology and Biomedical Studies, and the Director of the Center for Sensor Signal and Information Processing at WUSTL. Prior to serving at WUSTL, he was a Faculty Member at Yale University and the University of Illinois at Chicago.
Dr. Nehorai served as Editor-in-Chief of the IEEE TRANSACTIONS ON SIGNAL PROCESSING from 2000 to 2002. From 2003 to 2005, he was the Vice President (Publications) of the IEEE Signal Processing Society, the Chair of the Publications Board, and a Member of the Executive Committee of this Society. He was the Founding Editor of the special columns on Leadership Reflections in the IEEE SIGNAL PROCESSING MAGAZINE from 2003 to 2006. He received the 2006 IEEE SPS Technical Achievement Award and the 2010 IEEE SPS Meritorious Service Award. He was elected Distinguished Lecturer of the IEEE SPS for a term lasting from 2004 to 2005. He received several Best Paper Awards in IEEE journals and conferences. In 2001, he was named University Scholar of the University of Illinois. He was the Principal Investigator of the Multidisciplinary University Research Initiative project titled Adaptive Waveform Diversity for Full Spectral Dominance from 2005 to 2010. He has been a Fellow of the Royal Statistical Society since 1996 and a Fellow of the AAAS since 2012.