Project: Kalman Filters

In this project, you will implement and study the performance of several forms of the discrete-time Kalman filter, specifically:

the "standard" (i.e., covariance) Kalman filter
the information Kalman filter
the square-root covariance Kalman filter
the extended Kalman filter (EKF)

You can use numerical routines for matrix analysis in MATLAB, such as eig, qr, svd, etc. However, do NOT use "advanced" MATLAB functions related to Kalman filtering, such as kalman, except as noted below (e.g., find an appropriate function to solve the discrete-time ARE, as described below).

NOTE: All data is real in this project!!! Also, in this write-up: for a vector u, |u| is the Euclidean length; and for a matrix A, ||A|| is the spectral norm.
In the first three cases, the system model will be TIME-VARYING and is given by:

x(n+1) = A(n) x(n) + v(n)

y(n) = C(n) x(n) + w(n)

In order to examine the effect of time-varying behavior on the system, for iterations 1 ≤ n ≤ N1, take A(n) = A1, and for N1+1 ≤ n ≤ N1+N2, take A(n) = A2, where:

A1 = [ 0.9    1      0      0
        0    0.9     0      0
        0     0     0.5    0.5
        1     0    -0.5    0.5 ]

A2 = [ 0.9    1      0      0
        0    0.9     0      0
        0     0     0.5   -0.5
        1     0     0.5    0.5 ]

For all times take:

C = [ 0   1   0   1
      1   1   1   1 ]

Assume the covariance matrices Qv and Qw of the process noise v(n) and observation noise w(n), respectively, are constant and given by:

Qv = [ 1/4   1/4    0     0
       1/4   1/2   1/4    0
        0    1/4   1/2   1/4
        0     0    1/4   1/2 ],    Qw = 0.1 I
Studying the Prescribed System

Start by computing the eigenvalues of A1 and A2. Confirm: each is stable; each has an eigenvalue of algebraic multiplicity 2 but geometric multiplicity 1; and there is also a pair of complex eigenvalues. You should see the significance of the "change" from A1 to A2.
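As a sanity check on this eigenstructure, here is a NumPy sketch (the matrix A below is an assumed stand-in with the described structure, a defective eigenvalue at 0.9 plus a complex pair, not necessarily the exact A1):

```python
import numpy as np

def multiplicities(A, lam):
    """Algebraic and geometric multiplicity of eigenvalue lam of A."""
    eigvals = np.linalg.eigvals(A)
    alg = int(np.sum(np.abs(eigvals - lam) < 1e-6))
    # geometric multiplicity = dim null(A - lam I) = n - rank(A - lam I)
    geo = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(A.shape[0]), tol=1e-6)
    return alg, geo

# Illustrative matrix: Jordan block at 0.9 and a rotation-type block
# with eigenvalues 0.5 +/- 0.5j (magnitude ~0.707).
A = np.array([[0.9, 1.0, 0.0, 0.0],
              [0.0, 0.9, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, -0.5, 0.5]])
alg, geo = multiplicities(A, 0.9)
print(alg, geo)                                   # 2 1
print(np.max(np.abs(np.linalg.eigvals(A))) < 1)   # True: stable
```

The geometric multiplicity comes from the rank deficiency of A - 0.9 I; for a Jordan block it is 1 even though 0.9 is a double eigenvalue.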

Given the following model:

x (n + 1) = Ax (n) + v (n)

where v(n) is white with covariance matrix Qv, and assuming A is stable, the steady-state covariance matrix of x(n) satisfies the discrete-time Lyapunov equation:

K = A K A^T + Qv

Do not confuse this K with the covariance matrices of the Kalman filtered and predicted state ERRORS. This K = lim_{n→∞} E[x(n) x^T(n)] does not depend on the initial condition x(0) (assuming stability). There is a MATLAB function that computes the solution to the discrete-time Lyapunov equation. Find it for the matrices A1, A2, with the prescribed Qv. This will give you a sense of the "size" of the state x at steady-state. The eigenvalues of A can give you a sense as to how long until steady-state conditions are met; indeed, let p be the magnitude of the dominant eigenvalue of A, and let us take N to be the smallest value such that p^N < 0.01. We can use N to model the time until the effect of the initial condition x(0) dissipates. When you run the nonstationary model, take N1 and N2 to be this computed parameter N for the respective A1, A2 matrices.
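The two computations above can be sketched as follows (NumPy for illustration; the 2x2 matrix is an assumed stand-in, and the fixed-point iteration is used here instead of a library Lyapunov solver, which the assignment asks you to locate yourself):

```python
import numpy as np

def dlyap_fixed_point(A, Q, iters=2000):
    """Solve K = A K A^T + Q by fixed-point iteration (A assumed stable)."""
    K = np.zeros_like(Q)
    for _ in range(iters):
        K = A @ K @ A.T + Q
    return K

def settling_steps(A, eps=0.01):
    """Smallest N with p^N < eps, p = magnitude of the dominant eigenvalue."""
    p = np.max(np.abs(np.linalg.eigvals(A)))
    N = 1
    while p ** N >= eps:
        N += 1
    return N

A = np.array([[0.9, 1.0], [0.0, 0.9]])   # assumed small example, not the full A1
Q = np.eye(2)
K = dlyap_fixed_point(A, Q)
print(np.allclose(K, A @ K @ A.T + Q, atol=1e-6))  # True: Lyapunov equation holds
print(settling_steps(A))                            # 44, since 0.9^44 < 0.01 <= 0.9^43
```

The iteration converges geometrically at rate ρ(A)^2, so for stable A a few thousand iterations are far more than enough.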

Check that in each case (A, C) is observable (form the observability matrix and compute its rank). Combined with the fact that Qv is full-rank, this means each matrix is associated with a steady-state prediction error covariance matrix that solves an algebraic Riccati equation (ARE). Find the MATLAB function that solves the discrete-time ARE, and use it to compute these steady-state covariance matrices, say K1,ss and K2,ss. Note that you will not only have to find the function, but find how to "call it" correctly. Compute the eigenvalues of these covariance matrices, confirming they are positive definite. Also compute ||K1,ss - K2,ss||, where || · || denotes the spectral norm. This is one way of seeing the significance of switching the matrices.
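For reference, the prediction ARE being solved is Kss = A Kss A^T - A Kss C^T (C Kss C^T + Qw)^(-1) C Kss A^T + Qv. A SciPy sketch of the computation (the MATLAB counterpart is left for you to find, per the assignment; the small A, C here are assumed examples):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[0.9, 1.0], [0.0, 0.9]])   # assumed small example, not the project A1
C = np.array([[1.0, 0.0]])
Qv = np.eye(2)
Qw = np.array([[0.1]])

# solve_discrete_are(a, b, q, r) solves a^T X a - X - a^T X b (r + b^T X b)^-1 b^T X a + q = 0;
# substituting a = A^T, b = C^T turns this into the Kalman prediction ARE.
Kss = solve_discrete_are(A.T, C.T, Qv, Qw)

gain = A @ Kss @ C.T @ np.linalg.inv(C @ Kss @ C.T + Qw)
resid = A @ Kss @ A.T - gain @ C @ Kss @ A.T + Qv - Kss
print(np.allclose(resid, 0, atol=1e-8))      # True: ARE satisfied
print(np.all(np.linalg.eigvalsh(Kss) > 0))   # True: positive definite
```

Verifying the residual and the eigenvalues, as done here, is a good habit when you locate and call the MATLAB solver.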

Now just do an experiment. Start with initial condition x(0) the all-1's vector. Run the process equation for N time steps using A1, then switch to A2 and run for an additional N time steps. Now plot the state vector versus time (i.e., plot(1:endtime, x)). Observe the different behavior under A1 and A2, and the transition when the matrices switch.
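A NumPy sketch of this simulation (the A1, A2 shown are assumed stand-ins reconstructed from the handout and may differ from the intended matrices; the process noise is sampled through a Cholesky factor of Qv):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(A_list, N_list, Qv, x0):
    """Run x(n+1) = A(n) x(n) + v(n), switching A after each N_i steps."""
    L = np.linalg.cholesky(Qv)           # so that v = L z with z ~ N(0, I) has cov Qv
    xs = [np.asarray(x0, float)]
    for A, N in zip(A_list, N_list):
        for _ in range(N):
            v = L @ rng.standard_normal(len(x0))
            xs.append(A @ xs[-1] + v)
    return np.array(xs)

# Assumed stand-ins for A1, A2 and a tridiagonal Qv of the prescribed type.
A1 = np.array([[0.9, 1, 0, 0], [0, 0.9, 0, 0],
               [0, 0, 0.5, 0.5], [0, 0, -0.5, 0.5]], dtype=float)
A2 = A1.copy(); A2[2:, 2:] = [[0.5, -0.5], [0.5, 0.5]]
Qv = 0.5 * np.eye(4) + 0.25 * (np.eye(4, k=1) + np.eye(4, k=-1))
x = simulate([A1, A2], [44, 44], Qv, np.ones(4))
print(x.shape)   # (89, 4): the initial state plus 44 + 44 steps
```

Each row of the returned array is one time step, so plotting it column-wise against time reproduces the requested plot.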


Covariance Kalman Filter

Write a "core" function that runs one iteration of the Kalman filter, taking as input x^(n|n-1) and K(n, n-1). It should return x^(n|n) and x^(n+1|n) as well as K(n, n), K(n+1, n), and G(n).
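As a reference for structuring such a core routine, here is a minimal NumPy sketch of one covariance-filter iteration (function and variable names are my own; the project itself asks for a MATLAB implementation):

```python
import numpy as np

def kalman_step(x_pred, K_pred, y, A, C, Qv, Qw):
    """One covariance Kalman iteration.
    In:  x_pred = x^(n|n-1), K_pred = K(n, n-1), measurement y(n).
    Out: x^(n|n), x^(n+1|n), K(n, n), K(n+1, n), gain G(n)."""
    S = C @ K_pred @ C.T + Qw               # innovation covariance
    G = K_pred @ C.T @ np.linalg.inv(S)     # filter gain G(n)
    x_filt = x_pred + G @ (y - C @ x_pred)  # x^(n|n)
    K_filt = K_pred - G @ C @ K_pred        # K(n, n)
    x_next = A @ x_filt                     # x^(n+1|n)
    K_next = A @ K_filt @ A.T + Qv          # K(n+1, n)
    return x_filt, x_next, K_filt, K_next, G

# Scalar sanity check: equal prior and noise variances give gain 1/2.
x_filt, x_next, K_filt, K_next, G = kalman_step(
    np.zeros(1), np.eye(1), np.array([2.0]),
    np.eye(1), np.eye(1), np.zeros((1, 1)), np.eye(1))
print(G[0, 0], x_filt[0])   # 0.5 1.0
```

The same function handles the time-varying case: pass A(n) and C(n) at each call.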

As you perform all the following, I want you to monitor (after each iteration) that K(n, n) > 0; I suspect it will be. However, if you ever detect a failure of this condition, continue the experiment to the full number of iterations and see what happens; then switch over to the Joseph form to get correct results.
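For reference, the Joseph form rewrites the measurement update so that symmetry and positive semidefiniteness survive roundoff; a NumPy sketch (example matrices are my own):

```python
import numpy as np

def joseph_update(K_pred, G, C, Qw):
    """Joseph-form measurement update: (I - G C) K (I - G C)^T + G Qw G^T.
    Numerically robust, and valid for any gain, not just the optimal one."""
    T = np.eye(K_pred.shape[0]) - G @ C
    return T @ K_pred @ T.T + G @ Qw @ G.T

# With the optimal gain it agrees with the standard form K_pred - G C K_pred.
K_pred = np.array([[2.0, 0.5], [0.5, 1.0]])
C = np.array([[1.0, 0.0]])
Qw = np.array([[0.1]])
G = K_pred @ C.T @ np.linalg.inv(C @ K_pred @ C.T + Qw)
K_joseph = joseph_update(K_pred, G, C, Qw)
K_std = K_pred - G @ C @ K_pred
print(np.allclose(K_joseph, K_std))   # True
```

The extra matrix products cost a little more per step, which is why the short form is the usual default until it misbehaves.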


First do a preliminary study. In each case, create initial x(0) as a vector with iid N(0, 1) components. Take K(1, 0) as the identity. First take A1 (constant), and repeat 100 times to find the average number of iterations it takes for ||K(n, n-1) - K1,ss|| / ||K1,ss|| < 0.01. Do the same for A2.

So let N1 and N2 be the average times you found. If either is < 10, replace it with 10 (i.e., enforce N1 ≥ 10 and N2 ≥ 10). Now run the routine with, first, N1 iterations with matrix A(n) = A1, followed by an IMMEDIATE switch to A2 and another N2 iterations with A(n) = A2. First, graph |x(n)|, and then do a 3-D plot of (x1, x2, x3) (lines connecting the discrete points) to get a sense of the actual state trajectory. Repeat this a couple of times just to get a sense that what you are showing appears to be "typical". You may want to superimpose the results of 5 trial runs, for example. Using two colors, clearly indicate the point where the switch from A1 to A2 occurs. Superimpose graphs of |x(n) - x^(n|n-1)| and |x(n) - x^(n|n)|. Also graph ||K(n, n-1) - Ki,ss||, where Ki,ss is K1,ss for n ≤ N1, and K2,ss for N1+1 ≤ n ≤ N1+N2. You should repeat this a couple of times just to check that the results you present are "typical". I am not saying average them; just make sure there isn't a statistical fluke.
Finally, show the following graphs: ||K(n, n-1)||, ||K(n, n)||, and ||K(n, n) - K(n-1, n-1)||.


Information Kalman Filter

Basically repeat the above. With the inverse covariance matrix denoted P, note that, for example, we should have P1,ss = K1,ss^(-1), and so forth. Now instead of graphing ||K|| or ||K - Ki,ss||, graph ||P|| or ||P - Pi,ss||.
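The identity the information filter exploits, that the measurement update becomes the simple addition P(n|n) = P(n|n-1) + C^T Qw^(-1) C, can be checked numerically against the covariance form; a NumPy sketch (example matrices are my own):

```python
import numpy as np

K_pred = np.array([[2.0, 0.5], [0.5, 1.0]])
C = np.array([[1.0, 1.0]])
Qw = np.array([[0.1]])

# Covariance-form measurement update.
G = K_pred @ C.T @ np.linalg.inv(C @ K_pred @ C.T + Qw)
K_filt = K_pred - G @ C @ K_pred

# Information-form measurement update on P = K^(-1).
P_pred = np.linalg.inv(K_pred)
P_filt = P_pred + C.T @ np.linalg.inv(Qw) @ C

print(np.allclose(np.linalg.inv(P_filt), K_filt))   # True: the two forms agree
```

The agreement is just the matrix inversion lemma; the information form shines when the prior is very uncertain (P near zero), which is exactly the K(1, 0) experiment below.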

Now go back. Do ONE test (not 100 and average): take K(1, 0) = 10^4 I; run the covariance Kalman filter and the information Kalman filter, and see if you notice a difference in performance. Keep in mind you may need to run more iterations just to get to the steady-state covariance matrix.


Square-Root Kalman Filter

You are going to apply the square-root covariance Kalman filter to the same time-varying system described above. Pretty much repeat the experiment for the covariance Kalman filter. See if you notice any appreciable difference in performance.
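The core trick of the square-root filter is to propagate a factor S with K = S S^T via orthogonal transformations instead of forming K. As one illustration, the time update K(n+1, n) = A K(n, n) A^T + Qv can be done with a single QR factorization of a stacked pre-array; a NumPy sketch (example matrices are my own, and the measurement update has an analogous array form not shown here):

```python
import numpy as np

def sqrt_time_update(S, A, Qv_sqrt):
    """Given S with K = S S^T, return S_next with
    S_next S_next^T = A K A^T + Qv, via QR of the stacked square roots."""
    n = S.shape[0]
    M = np.vstack([(A @ S).T, Qv_sqrt.T])   # pre-array: M^T M = A K A^T + Qv
    R = np.linalg.qr(M, mode='r')           # post-array: R^T R = M^T M
    return R[:n, :].T

A = np.array([[0.9, 1.0], [0.0, 0.9]])      # assumed small example
K = np.array([[2.0, 0.5], [0.5, 1.0]])
Qv = 0.25 * np.eye(2)
S_next = sqrt_time_update(np.linalg.cholesky(K), A, np.linalg.cholesky(Qv))
print(np.allclose(S_next @ S_next.T, A @ K @ A.T + Qv))   # True
```

Because the covariance is reconstituted as S S^T, it can never lose symmetry or positive semidefiniteness to roundoff, which is the difference you are asked to look for.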

Extended Kalman Filter

Here we are going to examine the EKF applied to different types of nonlinearities. We will have 4 state variables and 4 measurements (i.e., the x and y vectors are both in R^4).
The process and observation equations are now:

x(n+1) = f(x(n), n) + v(n)

y(n) = h(x(n), n) + w(n)

where f(·) and h(·) each apply a nonlinear operation. Assume v, w are white noise, now with Qv = 0.2I and Qw = 0.1I (note these are not the same Qv, Qw as above). We are going to take two

3
cases: first, where there is a high degree of nonlinearity, and in particular h is non-monotonic; second, where f is not differentiable. So first take:


f(x) = [ 10 sin(x1 - x3)
         10 sin(x2 - x4)
         10 sin(x4)
         10 sin(x3) ]

and h(·) is the following function, applied componentwise:

h(x) = sin(x)

Note that each state variable is bounded by 10 in magnitude, but the measurement function h(·) "folds" the value of x (i.e., multiple values of x map to the same y).

Take as initial condition x(0) the all-1's vector, run this for 100 time steps, and plot the state and, separately, the measurements y versus time.

Compute the Jacobians F(x) and H(x) and run the EKF for 100 time steps. Take the filtered state estimate x^(n|n), and plot ||x(n) - x^(n|n)||. Use reasonable initial conditions. Repeat this experiment for the case h(x) = tan^(-1)(x) (applied componentwise), which is monotonic. Does the EKF perform any better?
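Structurally, an EKF iteration is the covariance filter with A, C replaced by the Jacobians evaluated at the current estimates; a generic NumPy sketch (the sanity check uses linear f, h, for which the EKF must reduce to the ordinary Kalman update):

```python
import numpy as np

def ekf_step(x_pred, K_pred, y, f, h, F_jac, H_jac, Qv, Qw):
    """One EKF iteration: linearize h at x^(n|n-1) for the update,
    and f at x^(n|n) for the prediction."""
    H = H_jac(x_pred)
    S = H @ K_pred @ H.T + Qw
    G = K_pred @ H.T @ np.linalg.inv(S)
    x_filt = x_pred + G @ (y - h(x_pred))
    K_filt = K_pred - G @ H @ K_pred
    F = F_jac(x_filt)
    return x_filt, f(x_filt), K_filt, F @ K_filt @ F.T + Qv, G

# Sanity check: with linear f and h the EKF is the ordinary KF.
A = np.array([[0.9, 0.1], [0.0, 0.9]])
C = np.eye(2)
Qv, Qw = 0.2 * np.eye(2), 0.1 * np.eye(2)
x_filt, x_next, K_filt, K_next, G = ekf_step(
    np.zeros(2), np.eye(2), np.array([1.0, -1.0]),
    lambda x: A @ x, lambda x: C @ x, lambda x: A, lambda x: C, Qv, Qw)
print(np.allclose(G, np.linalg.inv(C @ C.T + Qw)))   # True for K_pred = I, C = I
```

For the actual project, pass in the nonlinear f, h and their hand-computed Jacobians in place of the lambdas above.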

Now we are going to take f(x) to be nondifferentiable. Let g(α) = |α| for scalar α, and for a vector apply it componentwise. Let the process equation have the form:

x(n+1) = g(A1 x(n)) + v(n)

In order to compute the Jacobian F(x) we would need g'(α), which of course is not defined at 0. So replace g' in the formula with the function:


gd(α) = {  1,   α > δ
           0,   |α| ≤ δ
          -1,   α < -δ
Since we want an extreme effect, take δ = 0.1. From this, you can obtain a function F(x) to use for the EKF. Take h(x) = tan^(-1)(x), and before you start with the Kalman filter, synthesize the state vector and output over 100 time steps, with initial condition x(0) the all-10's vector, and plot them as before. Then run the EKF for 100 time steps, with reasonable initial conditions. Plot ||x(n) - x^(n|n)||.
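By the chain rule, the Jacobian of g(A1 x) is diag(gd(A1 x)) A1 once gd stands in for g'; a NumPy sketch (function names are my own):

```python
import numpy as np

DELTA = 0.1   # the prescribed delta

def gd(a):
    """Surrogate derivative of |.|: +1 above DELTA, -1 below -DELTA, 0 in the dead zone."""
    return np.where(a > DELTA, 1.0, np.where(a < -DELTA, -1.0, 0.0))

def F_jac(x, A1):
    """Jacobian of g(A1 x) with g = |.| componentwise:
    chain rule gives diag(gd(A1 x)) @ A1, using gd in place of g'."""
    return np.diag(gd(A1 @ x)) @ A1

print(gd(np.array([0.5, -0.5, 0.05])))   # [ 1. -1.  0.]
```

Note that whenever a component of A1 x falls in the dead zone [-δ, δ], the corresponding row of F is zeroed out, which is the "extreme effect" the large δ is designed to provoke.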















