Note: Problems refer to Haykin 5th ed. or 4th ed. (problem numbers are the same). Also, here use MATLAB as a "calculator" and do not use built-in functions to compute FPEF coefficients, AR models and the like. For example, code up the Levinson-Durbin recursion on your own, don't use a canned routine. Also, some of these problems require referencing the notes given out that give exact formulas for the correlation, power spectrum and innovations filters for an AR(2) process.
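For orientation, a hand-coded Levinson-Durbin routine can be as short as the sketch below. The function name levdur and the storage convention r(1) = r[0], ..., r(M+1) = r[M] are just choices made here; it is written for the real-valued case with the order-m FPEF having leading coefficient 1, so check the sign conventions against the notes before relying on it.

function [a, kappa, P] = levdur(r, M)
    % Hand-coded Levinson-Durbin recursion (real-valued case).
    % r(1) = r[0], ..., r(M+1) = r[M]; a is the order-M FPEF with a(1) = 1;
    % kappa(m) is the m-th reflection coefficient; P(m+1) is the
    % prediction-error power P_m, so P(1) = P_0 = r[0].
    r = r(:);
    a = 1;
    kappa = zeros(M, 1);
    P = zeros(M+1, 1);
    P(1) = r(1);
    for m = 1:M
        Delta = a.' * r(m+1:-1:2);                % sum_k a_{m-1}(k) r[m-k]
        kappa(m) = -Delta / P(m);                 % reflection coefficient
        a = [a; 0] + kappa(m) * flipud([a; 0]);   % order update of the FPEF
        P(m+1) = P(m) * (1 - kappa(m)^2);         % prediction-error power update
    end
end

For complex-valued data, the order update uses the conjugate of the reversed coefficient vector and the power update becomes P(m+1) = P(m)*(1 - abs(kappa(m))^2).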
1. Problem 2.18. Some comments:
(a) For the Wiener filter, assume the signal u[n] has the form as given under hypothesis H1, so u = s + v (as vectors), where s is deterministic. The desired signal is s[k] for some fixed k. By finding "R" and "p" you can find the Wiener filter, and it will have the form as given in the text. You will note that regardless of the choice of s[k], all these vectors are the same up to a scaling factor. And that is the point of the problem: the solutions to (b) and (c) yield the same vector, up to a scaling factor.
(b) Haykin gives you a hint. Use it to transform the SNR into a Rayleigh quotient of a matrix. Please don't compute gradients: you should know how to maximize a Rayleigh quotient!
(c) You have (in my notes or in the book) the formula for the complex Gaussian pdf. Remark: The maximum-likelihood (ML) decision rule for choosing the hypothesis would be to compare the likelihood ratio to 1 (or its log to 0). If you assume this, you will get a specific value for the threshold involving quantities that are presumed known (e.g., Rv). However, if you use say a Neyman-Pearson test, then the threshold will be an adjustable parameter that determines the probability of false alarm (probability of choosing H1 when H0 is true) and probability of detection (probability of choosing H1 when H1 is true). Either way, the point is the form of the test will be to compare w^H u to a threshold, where w again has the same form as in the other parts of the problem.
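As a generic reminder (not tied to the specific R and p of this problem): the Rayleigh quotient w^H A w / w^H B w, with A Hermitian and B Hermitian positive definite, is maximized by the principal generalized eigenvector of the pencil (A, B). A quick numerical sanity check of that fact, with randomly generated matrices, might look like:

A = randn(4) + 1j*randn(4);  A = (A + A')/2;        % arbitrary Hermitian A
B = randn(4) + 1j*randn(4);  B = B*B' + 4*eye(4);   % Hermitian positive definite B
[V, D] = eig(A, B);                                 % generalized eigenvectors/eigenvalues
[lam_max, idx] = max(real(diag(D)));
w_opt = V(:, idx);                                  % maximizer, unique up to scaling
rq = @(w) real(w'*A*w) / real(w'*B*w);              % Rayleigh quotient
fprintf('max eigenvalue %.4f, quotient at w_opt %.4f\n', lam_max, rq(w_opt))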
2. Consider an AR(2) process where the poles are 0.8 and 0.6. The exact model is:

x[n] = v[n] - a1 x[n-1] - a2 x[n-2]

where v[n] is unit-variance white noise.
(a) Find a1 and a2, and compute the PSD S(ω) (you will graph it later, superimposed with certain estimates). Also compute r[m] (exact values) up to order 10.
(b) Generate 10^3 samples of x, estimate r^[m] up to order 10 (a computational sketch appears after part (i)).
(c) Find the maximum absolute error between the exact and estimated correlation. Also compute the spectral norm of the difference between the actual correlation matrix and the estimated one (for order 10).
(d) Use the Levinson-Durbin algorithm to compute the reflection coefficients up to order 10 using both the exact and estimated values of r. Find the maximum absolute error between these values.
(e) Examine the second order FPEF. It can be interpreted as providing an estimate of a1, a2, and thus the pole values. Find the maximum absolute error for the AR coefficients, and for the poles.
(f) In theory, the reflection coefficients for orders beyond 2 should be zero. Check that for both the exact and approximate cases.
(g) Plot the powers of the prediction errors P_m for m = 0 to 10.
(h) Compute exp( (1/(2π)) ∫ log S(ω) dω ), with the integral taken over [-π, π] (numerically, using the theoretical PSD), and check that it is what it should be.
(i) Compute (1/M) log(det R_M), where R_M is the M × M correlation matrix, for M = 1 up to 11. Plot these values. Are they strictly decreasing? Do they converge?
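A minimal MATLAB sketch of how parts (a)-(c) and (h) might be set up follows. It takes the poles literally as 0.8 and 0.6, obtains the exact r[m] from the Yule-Walker equations, uses the biased sample estimate for the measured correlation, and does the part (h) integral with trapz; these are choices made here, not requirements, so adapt them to the formulas in the notes.

% Exact AR(2) quantities from the poles (part (a)).
apoly = poly([0.8 0.6]);                      % AR polynomial [1 a1 a2]
a1 = apoly(2);  a2 = apoly(3);

% Exact correlation r[0..10] from the Yule-Walker equations (sigma_v^2 = 1).
A = [1, a1, a2; a1, 1+a2, 0; a2, a1, 1];
r = zeros(11, 1);
r(1:3) = A \ [1; 0; 0];                       % solves for r[0], r[1], r[2]
for m = 4:11
    r(m) = -a1*r(m-1) - a2*r(m-2);            % r[m] = -a1 r[m-1] - a2 r[m-2]
end

% Generate 10^3 samples (part (b)), discarding a transient.
v = randn(1100, 1);                           % unit-variance white noise (Gaussian here)
x = filter(1, apoly, v);
x = x(101:end);
N = length(x);

% Biased sample correlation estimate r^[0..10].
rhat = zeros(11, 1);
for m = 0:10
    rhat(m+1) = sum(x(1:N-m) .* x(m+1:N)) / N;
end

% Error metrics (part (c)).
err_r = max(abs(r - rhat));
err_R = norm(toeplitz(r) - toeplitz(rhat));   % spectral norm of the 11x11 difference

% Theoretical PSD and the integral of part (h).
w = linspace(-pi, pi, 20001);
S = 1 ./ abs(1 + a1*exp(-1j*w) + a2*exp(-2j*w)).^2;
ks = exp(trapz(w, log(S)) / (2*pi));          % compare with what it should be

The reflection coefficients and prediction-error powers for parts (d)-(g) can then come from a hand-coded Levinson-Durbin routine such as the sketch near the top of this handout, applied once to r and once to rhat.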
3. Consider the following ARMA(1,1) process:
u[n] = 0.5 u[n-1] + v[n] - 0.2 v[n-1]

where v[n] is unit-variance white noise. Consider the problem of forming an AR(3) model for this process.
(a) Generate 1000 samples to estimate the correlation r[m] for |m| ≤ 3. Formulate the corresponding correlation matrix, compute its eigenvalues, and determine the bound on μ for a steepest-descent approach.
(b) Use the Levinson-Durbin algorithm to find the 3rd order FPEF, exactly. Specify the coefficients of the filter, and also the reflection coefficients.
(c) Now instead apply the steepest descent method for the values μ = 0.1 μ_max, 0.5 μ_max and 0.9 μ_max, for enough iterations until the tap-weight vector w[n] approximates the ideal w_o so that each coefficient is within 10^-3 of the correct answer. Compare the number of iterations required for each case. (A sketch of the iteration appears after part (d).)
(d) This problem is a modification of 4.14. There, Haykin expects you to determine the exact AR(3) coefficients. A look at the solution key gives the trick:
1 + x + x^2 + ... + x^(N-1) ≈ 1/(1 - x)

for small x. If the ideal innovation filter is:

H(z) = (1 + b z^-1)/(1 + a z^-1)
then by using the approximation:
(1 + b z^-1)/(1 + a z^-1) ≈ 1/(1 + α1 z^-1 + α2 z^-2 + ...)
(you should determine the αi's) and from that you can approximate:

H(z) ≈ 1/A_M(z)
where A_M(z) is a polynomial of degree M. Do this to obtain an AR(3) model, and use the coefficients of the 3rd order FPEF, running Levinson-Durbin BACKWARDS, to find the correlation r[m]. Are you getting the same results? (Based on the discussion in the course, there is no reason to presume in advance that this method of approximation yields the AR(M) model, i.e., the Mth order FPEF.)
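For part (c), one possible steepest-descent sketch is shown below. It poses the order-3 predictor as a real-valued Wiener problem with R built from the estimated r[0..2] and p = [r[1]; r[2]; r[3]], and it takes μ_max to be the bound from part (a), assumed here to be 2/λ_max; the corresponding FPEF is then [1; -w]. Treat it as a sketch of the iteration, not as the required setup.

% Generate 1000 samples of the ARMA(1,1) process and estimate r[0..3] (part (a)).
u = filter([1 -0.2], [1 -0.5], randn(1100, 1));   % u[n] = 0.5 u[n-1] + v[n] - 0.2 v[n-1]
u = u(101:end);  N = length(u);
r = zeros(4, 1);
for m = 0:3
    r(m+1) = sum(u(1:N-m) .* u(m+1:N)) / N;       % biased estimate, r(m+1) = r[m]
end

% Steepest descent toward the order-3 predictor (part (c)).
R = toeplitz(r(1:3));
p = r(2:4);
wo = R \ p;                          % ideal predictor, used only for the stopping test
mu_max = 2 / max(eig(R));            % assumed bound from part (a): mu_max = 2/lambda_max
for frac = [0.1 0.5 0.9]
    mu = frac * mu_max;
    w = zeros(3, 1);
    iters = 0;
    while max(abs(w - wo)) > 1e-3
        w = w + mu * (p - R*w);      % update w[n+1] = w[n] + mu (p - R w[n])
        iters = iters + 1;
    end
    fprintf('mu = %.1f mu_max: %d iterations\n', frac, iters);
end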
4. Here we examine ideal beamformers for sensor arrays. Let s(θ) denote a steering vector normalized to unit length. If w is the beamformer vector, then the array pattern is defined as:

A(θ) = |w^H s(θ)|^2
We would normally graph this in dB. In this problem we consider a uniform linear array (for reference, aligned along the z-axis). Take the locations of the array elements, for 1 ≤ m ≤ M, as:

r_m = (m - 1) d â_z

If k(θ) is the incident wavenumber vector, then:

k(θ) · r_m = (2π d/λ)(m - 1) cos θ
We assume M = 20 and d/λ = 0.5 in this problem. The array pattern will only depend on a single angle parameter, θ (the angle between the incident AOA and the axis of the array), not the azimuthal angle φ. Thus, A(θ) is a function of a single variable, with 0 ≤ θ ≤ 180°. θ = 90° is called the broadside AOA, and all the sensors pick up the wave in phase. If θ = 0° or 180°, this is called the endfire AOA and the phase difference between the sensors is maximal.
Consider three uncorrelated sources at angles θ = 10°, 30°, 50°. If we take the 10° wave as having unit power, then assume the 30° source is 5 dB lower, the 50° source is 15 dB lower, and the background noise is 25 dB lower.
(a) Compute the (theoretical) R matrix for this environment.
(b) For each source, compute the array patterns for the MVDR and GSC beamformers (i.e., take a source as the "primary", forcing a distortionless response in that direction; for the case of the GSC, impose exact nulls in the directions of the other sources). A computational sketch appears after part (c). For each source:
1. Plot the MVDR and GSC, either superimposed or as subplots in the same Figure, whichever looks better. Make sure to scale the vertical axis reasonably.
2. The GSC beamformer puts exact nulls (-∞ dB) at the interfering sources. The MVDR beamformer will try to suppress the signal in those directions, but won't put exact nulls. Compute the amount of attenuation the MVDR places in those directions.
3. Since the GSC places exact nulls, you would expect the peak gain in the sidelobes, or in the mainlobe (remember the peak gain may not be 1, although you are enforcing 1 in the primary source direction), to be larger than that of the MVDR. Compare these peak values (you can read them off your graphs).
(c) We now want to consider the "invisible region". Recall the electrical angle is defined as:

ω̂ = 2π (d/λ) cos θ
As such, the array pattern can be interpreted as A(ω̂) = |W(ω̂)|^2, where W(ω̂) is the frequency response of an FIR filter with coefficients {w_m}. Since cos θ ranges from -1 to 1, if d/λ > 0.5 we get spatial aliasing: different AOAs may give rise to the same ω̂, and hence the beamformer can't distinguish among them. If d/λ < 0.5, then only a limited band of ω̂ corresponds to a physical AOA θ; the range {-2π d/λ < ω̂ < 2π d/λ} is called the visible region; the portion of the [-π, π] interval outside this range is called the INVISIBLE region. Now, compute the MVDR and GSC beamformers for each of the 3 sources assuming d/λ = 0.4. Graph the array patterns as a function of ω̂ (not θ), over the full range {-π < ω̂ < π}, and on your graph clearly label the boundary of the visible region (e.g., draw vertical lines on your graphs). For each case (you basically have 6 beamformers altogether), compute the peak value of the array pattern in the invisible region.
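A minimal computational sketch for parts (a) and (b) follows. It assumes unit-length steering vectors with the phase convention exp(+j 2π (d/λ)(m-1) cos θ), treats the "25 dB lower" background noise as a per-sensor noise power, and computes the GSC solution directly in its equivalent LCMV form (distortionless response plus exact nulls); all of these conventions should be checked against your notes.

M = 20;  d_lam = 0.5;                           % set d_lam = 0.4 for part (c)
steer = @(th) exp(1j*2*pi*d_lam*(0:M-1)'*cosd(th)) / sqrt(M);   % unit-length steering vector

th_src = [10 30 50];
P_src  = 10.^([0 -5 -15]/10);                   % source powers relative to the 10-degree source
P_n    = 10^(-25/10);                           % assumed per-sensor noise power

R = P_n * eye(M);                               % part (a): theoretical correlation matrix
for k = 1:3
    sk = steer(th_src(k));
    R = R + P_src(k) * (sk * sk');
end

th_grid = 0:0.1:180;
Sgrid = zeros(M, numel(th_grid));
for n = 1:numel(th_grid)
    Sgrid(:, n) = steer(th_grid(n));
end

for i = 1:3                                     % source i is the "primary"
    s0 = steer(th_src(i));
    w_mvdr = (R \ s0) / (s0' * (R \ s0));       % MVDR: distortionless toward s0
    others = setdiff(1:3, i);
    C = [s0, steer(th_src(others(1))), steer(th_src(others(2)))];
    g = [1; 0; 0];                              % unity gain at s0, exact nulls at the others
    w_gsc = (R \ C) * ((C' * (R \ C)) \ g);     % LCMV weights; the GSC structure gives the same optimum
    A_mvdr = abs(w_mvdr' * Sgrid).^2;           % array pattern A(theta) = |w^H s(theta)|^2
    A_gsc  = abs(w_gsc'  * Sgrid).^2;
    figure;
    plot(th_grid, 10*log10(A_mvdr), th_grid, 10*log10(A_gsc));
    xlabel('\theta (degrees)');  ylabel('A(\theta)  (dB)');
    legend('MVDR', 'GSC (exact nulls)');
    title(sprintf('Primary source at %d degrees', th_src(i)));
end

For part (c), set d_lam = 0.4, evaluate |W(ω̂)|^2 over a grid of ω̂ covering [-π, π] (treating w as FIR coefficients), and mark ±2π d_lam as the visible-region boundary before reading off the peak in the invisible region.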