
Assignment: Kalman Filtering (part 2) Solution




Problem set due Thursday, March 15, 8:00PM. Submit a pdf file (a neat scan of your handwritten solution) via bCourses. The copier on the 6th floor of Etcheverry is available 24/7, and can be used without a login to email scans to any berkeley.edu address.













Generalizing (and programming) the batch estimation: In class (and the notes) we derived the batch formulation of Kalman filtering, generating the correct matrices to compute $L(x_k \mid y_0, \dots, y_{k-1})$. It was mentioned that we could also improve our estimates of previous states as new information (the $y$ values) is presented, for example

$$L(x_0 \mid y_0, \dots, y_{k-1}), \qquad \text{or} \qquad L(x_k \mid y_0, \dots, y_{k-1}, y_k)$$




In this problem, we introduce the appropriate matrices (and some recursions to make building the batch matrices easier) to compute



$$L\left(\begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_{k-1} \\ x_k \end{bmatrix}\;\middle|\;\begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_{k-1} \end{bmatrix}\right)$$






Remark: You can easily modify the ideas below to get batch solutions to



$$L\left(\begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_j \end{bmatrix}\;\middle|\;\begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_k \end{bmatrix}\right)$$






for any $j$ and $k$. Although we did not do it in class, there are recursive formulations (fixed amount of computation each step) to compute $\hat x_{j\mid k}$ for any $j$ and $k$. Recall that in lecture, we derived recursive formulas only for $\hat x_{k+1\mid k}$ and $\hat x_{k\mid k}$. Now onto the batch generalization...




Dimensions: $x_k \in \mathbb{R}^{n_x}$, $y_k \in \mathbb{R}^{n_y}$, $w_k \in \mathbb{R}^{n_w}$.




Relevant Matrices: Note that I am using a calligraphic font to distinguish $\mathcal{A}_{k,j}$, defined below, from the state-evolution matrix $A_k$ at time $k$.






$$\mathcal{A}_{k,j} := A_{k-1}A_{k-2}\cdots A_j \in \mathbb{R}^{n_x\times n_x}, \qquad k \ge j \ge 0$$

$$N_k := \begin{bmatrix} \mathcal{A}_{k,1}E_0 & \mathcal{A}_{k,2}E_1 & \mathcal{A}_{k,3}E_2 & \cdots & \mathcal{A}_{k,k-1}E_{k-2} & E_{k-1} \end{bmatrix} \in \mathbb{R}^{n_x\times k\,n_w}$$


















$$\Gamma_k := \begin{bmatrix}
0 & 0 & 0 & \cdots & 0 & 0 \\
E_0 & 0 & 0 & \cdots & 0 & 0 \\
\mathcal{A}_{2,1}E_0 & E_1 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
\mathcal{A}_{k-1,1}E_0 & \mathcal{A}_{k-1,2}E_1 & \mathcal{A}_{k-1,3}E_2 & \cdots & E_{k-2} & 0 \\
\mathcal{A}_{k,1}E_0 & \mathcal{A}_{k,2}E_1 & \mathcal{A}_{k,3}E_2 & \cdots & \mathcal{A}_{k,k-1}E_{k-2} & E_{k-1}
\end{bmatrix} \in \mathbb{R}^{(k+1)n_x\times k\,n_w}$$

$$\Psi_k := \begin{bmatrix} I \\ \mathcal{A}_{1,0} \\ \mathcal{A}_{2,0} \\ \vdots \\ \mathcal{A}_{k-1,0} \\ \mathcal{A}_{k,0} \end{bmatrix} \in \mathbb{R}^{(k+1)n_x\times n_x}, \qquad
R_k := \begin{bmatrix} C_0 \\ C_1\mathcal{A}_{1,0} \\ C_2\mathcal{A}_{2,0} \\ \vdots \\ C_{k-2}\mathcal{A}_{k-2,0} \\ C_{k-1}\mathcal{A}_{k-1,0} \end{bmatrix} \in \mathbb{R}^{k\,n_y\times n_x}$$

$$X_k := \begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ \vdots \\ x_{k-1} \\ x_k \end{bmatrix} \in \mathbb{R}^{(k+1)n_x}, \qquad
W_{k-1} := \begin{bmatrix} w_0 \\ w_1 \\ w_2 \\ \vdots \\ w_{k-1} \end{bmatrix} \in \mathbb{R}^{k\,n_w}, \qquad
Y_{k-1} := \begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_{k-1} \end{bmatrix} \in \mathbb{R}^{k\,n_y}$$

$$S_k := \begin{bmatrix}
F_0 & 0 & 0 & \cdots & 0 & 0 \\
C_1E_0 & F_1 & 0 & \cdots & 0 & 0 \\
C_2\mathcal{A}_{2,1}E_0 & C_2E_1 & F_2 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
C_{k-2}\mathcal{A}_{k-2,1}E_0 & C_{k-2}\mathcal{A}_{k-2,2}E_1 & C_{k-2}\mathcal{A}_{k-2,3}E_2 & \cdots & F_{k-2} & 0 \\
C_{k-1}\mathcal{A}_{k-1,1}E_0 & C_{k-1}\mathcal{A}_{k-1,2}E_1 & C_{k-1}\mathcal{A}_{k-1,3}E_2 & \cdots & C_{k-1}E_{k-2} & F_{k-1}
\end{bmatrix} \in \mathbb{R}^{k\,n_y\times k\,n_w}$$




Initializations: Recall that Matlab can handle empty arrays with some nonzero dimensions (though at least one dimension needs to be 0 so that the array is truly empty). In the definitions below, I've included, correctly, the nonzero dimension.




$$\Gamma_0 = \text{empty},\ 0_{n_x\times 0}; \qquad \Psi_0 = I_{n_x\times n_x}; \qquad N_0 = \text{empty},\ 0_{n_x\times 0}; \qquad S_0 = \text{empty},\ 0_{0\times 0}; \qquad R_0 = \text{empty},\ 0_{0\times n_x}; \qquad \mathcal{A}_{0,0} = I_{n_x\times n_x}$$
Recursions: for $i \ge 0$,
































$$\mathcal{A}_{i+1,0} = A_i\mathcal{A}_{i,0}; \qquad \Psi_{i+1} = \begin{bmatrix} \Psi_i \\ \mathcal{A}_{i+1,0} \end{bmatrix}; \qquad N_{i+1} = \begin{bmatrix} A_iN_i & E_i \end{bmatrix};$$
$$\Gamma_{i+1} = \begin{bmatrix} \begin{bmatrix}\,\Gamma_i & 0_{(i+1)n_x\times n_w}\,\end{bmatrix} \\ N_{i+1} \end{bmatrix}; \qquad R_{i+1} = \begin{bmatrix} R_i \\ C_i\mathcal{A}_{i,0} \end{bmatrix}; \qquad S_{i+1} = \begin{bmatrix} S_i & 0_{i\,n_y\times n_w} \\ C_iN_i & F_i \end{bmatrix}$$



For $i \ge 0$, the quantities

$$\{\mathcal{A}_{i+1,0},\ \Psi_{i+1},\ N_{i+1},\ \Gamma_{i+1},\ R_{i+1},\ S_{i+1}\}$$

can be calculated (recursively) from

$$\{\mathcal{A}_{i,0},\ \Psi_i,\ N_i,\ \Gamma_i,\ R_i,\ S_i\} \qquad \text{and} \qquad \{A_i,\ E_i,\ C_i,\ F_i\}$$
















This is implemented in buildBatch.m, shown below.




begin code







function [calA,Psi,N,Gam,R,S] = buildBatch(calA,Psi,N,Gam,R,S,A,E,C,F)
% Initialization call uses one input argument (state dimension)
%    [calA,Psi,N,Gam,R,S] = buildBatch(nX)
% General recursive call uses the function definition line
%    [calA,Psi,N,Gam,R,S] = buildBatch(calA,Psi,N,Gam,R,S,A,E,C,F)
if nargin==1
   nx = calA;        % first (and only) argument is nX
   calA = eye(nx);
   Psi = eye(nx);
   Gam = zeros(nx,0);
   N = zeros(nx,0);
   S = zeros(0,0);
   R = zeros(0,nx);
else
   % Determine k, ny, nw, nx from dimensions of A, E, C, F and (eg) S
   [ny,nx] = size(C);
   nw = size(E,2);
   k = size(S,1)/ny;
   % Use recursions to build new, larger matrices.  Use temporary
   % variables for the updated calA and N, since the original values are
   % needed to define some of the other updated matrices.
   newCalA = A*calA;
   newN = [A*N E];
   Psi = [Psi;newCalA];
   Gam = [[Gam zeros((k+1)*nx,nw)];newN];
   R = [R;C*calA];
   S = [S zeros(k*ny,nw);C*N F];
   calA = newCalA;
   N = newN;
end




end code






Evolution equations:

$$X_k = \Psi_kx_0 + \Gamma_kW_{k-1}, \qquad Y_{k-1} = R_kx_0 + S_kW_{k-1}$$




Define $Z := X_k$ and $P := Y_{k-1}$. We are interested in $L(Z\mid P)$ and the variance of $Z - L(Z\mid P)$. In general, these are




$$L(Z\mid P) = \Sigma_{Z,P}\Sigma_P^{-1}(P - \mu_P) + \mu_Z, \qquad \Sigma_{Z - L(Z\mid P)} = \Sigma_Z - \Sigma_{Z,P}\Sigma_P^{-1}\Sigma_{Z,P}^T$$








Specifically, for the Kalman filter application, the various quantities are


$$\mu_Z = \Psi_km_0, \qquad \mu_P = R_km_0$$
and










$$\Sigma_Z = \Psi_k\Sigma_0\Psi_k^T + \Gamma_k\bar\Sigma_W\Gamma_k^T$$

$$\Sigma_P = R_k\Sigma_0R_k^T + S_k\bar\Sigma_WS_k^T$$

$$\Sigma_{Z,P} = \Psi_k\Sigma_0R_k^T + \Gamma_k\bar\Sigma_WS_k^T$$

(here $\Sigma_0$ is the variance of $x_0$, and $\bar\Sigma_W$ is the block-diagonal variance of $W_{k-1}$).
Plugging in everything gives our optimal estimate of all states from $x_0$ to $x_k$, based on all of the measurements $y_0$ to $y_{k-1}$,

$$L(X_k\mid Y_{k-1}) = \left(\Psi_k\Sigma_0R_k^T + \Gamma_k\bar\Sigma_WS_k^T\right)\left(R_k\Sigma_0R_k^T + S_k\bar\Sigma_WS_k^T\right)^{-1}\left(Y_{k-1} - R_km_0\right) + \Psi_km_0$$
Simple regrouping gives

$$L(X_k\mid Y_{k-1}) = \underbrace{\left(\Psi_k\Sigma_0R_k^T + \Gamma_k\bar\Sigma_WS_k^T\right)\left(R_k\Sigma_0R_k^T + S_k\bar\Sigma_WS_k^T\right)^{-1}}_{:=L_k^{\mathrm{batch}}}\,Y_{k-1} \;+\; \underbrace{\left(\Psi_k - \left(\Psi_k\Sigma_0R_k^T + \Gamma_k\bar\Sigma_WS_k^T\right)\left(R_k\Sigma_0R_k^T + S_k\bar\Sigma_WS_k^T\right)^{-1}R_k\right)}_{:=V_k^{\mathrm{batch}}}\,m_0$$
and error variance

$$\Sigma_{X_k - L(X_k\mid Y_{k-1})} = \Psi_k\Sigma_0\Psi_k^T + \Gamma_k\bar\Sigma_W\Gamma_k^T - \left(\Psi_k\Sigma_0R_k^T + \Gamma_k\bar\Sigma_WS_k^T\right)\left(R_k\Sigma_0R_k^T + S_k\bar\Sigma_WS_k^T\right)^{-1}\left(\Psi_k\Sigma_0R_k^T + \Gamma_k\bar\Sigma_WS_k^T\right)^T$$


The diagonal block-entries of this error variance (each of dimension $n_x\times n_x$) are most important, as they signify the variance ("spread") in the error of the estimate of each time-instance of the state.
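For example, a minimal sketch of pulling out the $i$-th diagonal block (the helper name is made up here, and ekVar is assumed to be the batch error variance returned by the batchKF function shown later):

begin code
% Hypothetical helper (not part of the assignment files): extract the
% nx-by-nx diagonal block of the batch error variance associated with x_i,
% where ekVar is (k+1)*nx-by-(k+1)*nx and i runs from 0 to k.
errVarBlock = @(ekVar,i,nx) ekVar(i*nx+(1:nx), i*nx+(1:nx));
end code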




Summarizing: for each $k \ge 1$, there exist matrices $L_k^{\mathrm{batch}}$ and $V_k^{\mathrm{batch}}$ such that




$$L(X_k\mid Y_{k-1}) = L_k^{\mathrm{batch}}Y_{k-1} + V_k^{\mathrm{batch}}m_0$$



















The function batchKF calculates these, using all of the ideas presented.




begin code







function [LkBatch,VkBatch,ekVar] = ...
   batchKF(Amat,Emat,Cmat,Fmat,Sigma0,SigmaW,k)
nx = size(Amat,1);
% Initialize the 6 batch matrices
[calA,Psi,N,Gam,R,S] = buildBatch(nx);
% Fill in batch matrices in order to estimate x0 to xk, as linear
% combinations of y from 0,1,...,k-1.  This needs the state-space matrices
% defined on i = [0,1,...,k-1]
for i=0:k-1
   Ai = Amat(:,:,i+1);   % i+1 is needed because Matlab indices start at 1
   Ei = Emat(:,:,i+1);
   Ci = Cmat(:,:,i+1);
   Fi = Fmat(:,:,i+1);
   [calA,Psi,N,Gam,R,S] = buildBatch(calA,Psi,N,Gam,R,S,Ai,Ei,Ci,Fi);
end
% The next line assumes that the variance of w_k is constant across k,
% given by the single matrix SigmaW.  The KRON command simply repeats this
% matrix in a block diagonal form, as in the lecture slides.
SigmaBarW = kron(eye(k),SigmaW);
SigmaZ = Psi*Sigma0*Psi' + Gam*SigmaBarW*Gam';
SigmaP = R*Sigma0*R' + S*SigmaBarW*S';
SigmaZP = Psi*Sigma0*R' + Gam*SigmaBarW*S';
LkBatch = SigmaZP/SigmaP;
VkBatch = Psi - LkBatch*R;
ekVar = SigmaZ - LkBatch*SigmaZP';



end code







Here $k \ge 1$ is an integer, and Amat is a 3-dimensional array, of dimension [nX, nX, N], where $N \ge k$, and for each nonnegative $i$, Amat(:,:,i+1) equals $A_i$ (note the index issue: we begin our sequences with an index of 0, but Matlab begins its arrays with an index of 1).
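As a concrete illustration (hypothetical systems, not from the assignment files), a constant scalar system or a time-varying one could be packed into such an array like this:

begin code
% Hypothetical examples of packing A_i into a 3-dimensional array.
N = 20;
Amat = repmat(0.9,[1 1 N]);          % constant scalar system: A_i = 0.9 for all i
AmatTV = zeros(1,1,N);               % a time-varying alternative: A_i = 1/(i+1)
for i = 0:N-1
    AmatTV(:,:,i+1) = 1/(i+1);       % slice i+1 holds A_i (Matlab indexing starts at 1)
end
end code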




Finally, if $\tilde L_k$ is defined as the last $2n_x$ rows of $L_k^{\mathrm{batch}}$, and $\tilde V_k$ as the last $2n_x$ rows of $V_k^{\mathrm{batch}}$, for example with the code




begin code







Ltildek = LkBatch(end-2*nX+1:end,:)



Vtildek = VkBatch(end-2*nX+1:end,:)













end code




then












$$\begin{bmatrix} \hat x_{k-1\mid k-1} \\ \hat x_{k\mid k-1} \end{bmatrix} = L\left(\begin{bmatrix} x_{k-1} \\ x_k \end{bmatrix}\,\middle|\, Y_{k-1}\right) = \tilde L_kY_{k-1} + \tilde V_km_0$$


































which is more related to the recursive derivation in lecture, which did not consider estimates of older states using newer measurements (e.g., in lecture, we did not derive the recursion for $\hat x_{4\mid 9}$, the estimate of $x_4$ based on $y_0, y_1, \dots, y_9$).




Verify the expressions in the Recursions section, given the definitions in the Relevant Matrices section.



Verify the expressions in the Evolution Equations section.
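One way to sanity-check the Evolution Equations numerically (a rough sketch only, assuming buildBatch.m is on the path; the random system below is made up for illustration, and this is not a substitute for the requested hand verification) is to simulate the state equations directly and compare against $\Psi_kx_0 + \Gamma_kW_{k-1}$ and $R_kx_0 + S_kW_{k-1}$:

begin code
% Sketch of a numerical check of X_k = Psi*x0 + Gam*W and Y_{k-1} = R*x0 + S*W.
nx = 2; ny = 1; nw = 1; k = 5;
[calA,Psi,N,Gam,R,S] = buildBatch(nx);        % initialize the batch matrices
x = randn(nx,1);  X = x;  Y = [];  W = [];
for i = 0:k-1
    Ai = randn(nx); Ei = randn(nx,nw); Ci = randn(ny,nx); Fi = randn(ny,nw);
    wi = randn(nw,1);
    Y = [Y; Ci*x + Fi*wi];                    % y_i = C_i x_i + F_i w_i
    x = Ai*x + Ei*wi;                         % x_{i+1} = A_i x_i + E_i w_i
    X = [X; x];  W = [W; wi];
    [calA,Psi,N,Gam,R,S] = buildBatch(calA,Psi,N,Gam,R,S,Ai,Ei,Ci,Fi);
end
% Both errors should be at round-off level if the batch matrices are correct.
errX = norm(X - (Psi*X(1:nx) + Gam*W))
errY = norm(Y - (R*X(1:nx) + S*W))
end code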



Examine the estimation problem solved below. The command batchKF.m is included in the assignment .zip file, and has the buildBatch code as a local function, so you do not have to create these files. Note that for $k = 1, 2, \dots, 6$, we use the batch code to get the best estimate of $X_k$ from $Y_{k-1}$. The code displays certain rows of $L_k^{\mathrm{batch}}$ and $V_k^{\mathrm{batch}}$ which are specifically relevant for the estimate $\hat x_{k-1\mid k-1}$.




begin code







Amat = repmat(1,[1 1 20]);
nX = size(Amat,1);
Emat = repmat(0,[1 1 20]);
Cmat = repmat(1,[1 1 20]);
Fmat = repmat(1,[1 1 20]);
sX = 1000; sW = 1; m0=2;
for k=1:6
   [LkBatch,qkBatch,VkBatch,eVar] = ...
      batchKF(Amat,Emat,Cmat,Fmat,m0,sX,sW,k);
   LkBatch(end-2*nX+1:end-nX,:)
   VkBatch(end-2*nX+1:end-nX)
end




end code






i. Interpret the estimation problem, specifically:
- In the absence of process noise, how would the state evolve?
- In the absence of measurement noise, how are the state and measurement related?
- How are the process noise, measurement noise and initial condition variances related?

ii. The solution is computed for several different horizons. Study the "gain" matrix LkBatch and offset qkBatch, and interpret how the optimal estimate is processing $Y_{k-1}$, and using $m_0$, to obtain the estimate $\hat x_{k-1\mid k-1}$.




iii. If you had to intuitively design an estimator for exactly this problem, without any Kalman filtering knowledge, given the specific statistical properties of the initial condition, process noise, and measurement noise, would you be likely to pick a similar estimation strategy?




iv. Repeat the exercise for the following example

begin code







Amat = repmat(1,[1 1 20]);
nX = size(Amat,1);
Emat = repmat(0,[1 1 20]);
Cmat = repmat(1,[1 1 20]);
Fmat = repmat(1,[1 1 20]);
sX = 0.1; sW = 5; m0=4;
for k=1:6
   [LkBatch,qkBatch,VkBatch,eVar] = ...
      batchKF(Amat,Emat,Cmat,Fmat,m0,sX,sW,k);
   LkBatch(end-2*nX+1:end-nX,:)
   VkBatch(end-2*nX+1:end-nX)
end



end code










Enhance KF code: The code for the Kalman filter, as derived in class, is included in the assignment .zip file.



Included in the assignment .zip file is a function KF231BtoBuildOn.m with declaration line



begin code







function [xk1k,Sxk1k,xkk,Sxkk,Sykk1] = ...
   KF231BtoBuildOn(xkk1,Sxkk1,A,B,C,E,F,Swk,uk,yk)




end code














Modify the code, renaming it KF231B, to have four additional outputs, namely

begin code







function [xk1k,Sxk1k,xkk,Sxkk,Sykk1,Lk,Hk,Gk,wkk] = ...




KF231B(xkk1,Sxkk1,A,B,C,E,F,Swk,uk,yk)




end code






for the matrices $L_k$, $H_k$, $G_k$ and the estimate $\hat w_{k\mid k}$ as defined in the slides. This should only require adding one line of code to properly define $L_k$, and one or two lines for $\hat w_{k\mid k}$. If you look carefully at the existing code, you will see that $H_k$ and $G_k$ are already defined. If you look at the variables which already exist, you should be able to do this task without any additional "inverses" (or the use of the backslash operator). Make sure to modify the help for KF231B.m as follows.

begin code







% Implements one step of the Kalman filter, using the notation in
% slides at the end of Estimation, Part 3.  The input arguments are:
%   xkk1 = xhat_{k|k-1}
%   Sxkk1 = Sigma^x_{k|k-1}
%   A, B, C, E, F: state matrices at time k
%   Swk = variance in zero-mean disturbance w_k
%   uk = deterministic control input u_k
%   yk = measurement y_k
% The output arguments are:
%   xk1k = xhat_{k+1|k}
%   Sxk1k = Sigma^x_{k+1|k}
%   xkk = xhat_{k|k}
%   Sxkk = Sigma^x_{k|k}
%   Sykk1 = Sigma^y_{k|k-1}
%   Lk (from last slide, Part 3) for:
%      xhat_{k+1|k} = Ak x_{k|k-1} + Lk*(yk - Ck*x_{k|k-1})
%   Hk (from fact #16), for:
%      xhat_{k|k} = x_{k|k-1} + Hk*(yk - Ck*x_{k|k-1})
%   Gk (from fact #17), for: what_{k|k} = Gk*ek
%   wkk = what_{k|k}
% The input signals, xkk1, uk, yk may all be empty matrices, implying
% that the function will only update the error variances, and will not
% provide any state estimates (so xk1k, xkk and wkk will be returned
% empty as well).



end code







Steady-State Behavior: As mentioned in class, if the process state-space matrices $(A_k, E_k, C_k, F_k)$ and the variance of $w_k$ (which we now notate as $\Sigma^w_k := E(w_kw_k^T)$) are constant (i.e., not time-varying), then (under very mild technical conditions, which we will not worry about) the gain and variance matrices in the Kalman filter converge to steady-state values, namely



$$\lim_{k\to\infty}L_k = L, \qquad \lim_{k\to\infty}H_k = H, \qquad \lim_{k\to\infty}\Sigma^x_{k\mid k} = \bar\Sigma^x_\infty, \qquad \lim_{k\to\infty}\Sigma^x_{k\mid k-1} = \Sigma^x_\infty$$





Create some random data $(A, E, C, F, \Sigma^w)$, and confirm the convergence by calling KF231B in a loop, monitoring the differences between $L_{i+1}$ and $L_i$ (and so on for the other matrices), and exiting the loop when suitable convergence is attained.













Hint: Recall that KF231B can be called in such a way that signals are not needed, and only the variances and gains are updated. Carefully read the help again if needed.
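A minimal sketch of such a loop (assuming KF231B has been completed as described above, and that A, E, C, F and SigW have already been created; the tolerance and iteration cap are arbitrary choices):

begin code
% Sketch: iterate only the variance/gain recursion (empty signal arguments),
% stopping when the gain L stops changing.
Sxkk1 = eye(size(A,1));            % some initial prior variance
Lprev = [];
for i = 1:1000
    [~,Sxk1k,~,~,~,Lk,~,~,~] = KF231B([],Sxkk1,A,[],C,E,F,SigW,[],[]);
    if ~isempty(Lprev) && norm(Lk - Lprev,'fro') < 1e-9
        fprintf('Gain converged after %d iterations\n', i);
        break
    end
    Lprev = Lk;
    Sxkk1 = Sxk1k;                 % propagate the prediction variance
end
end code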




Separating signal from noise: Suppose P1 and P2 are linear systems (possibly time-varying), with known state-space models. The input to system P1 is $v$ (producing output $y_1$), and the input to system P2 is $d$ (producing output $y_2$). Both $v$ and $d$ are random sequences, with the following properties (for all $k$ and $j \ne k$)

$$E(v_k) = 0, \qquad E(d_k) = 0,$$

and

$$E(v_kv_k^T) = V_k, \quad E(d_kd_k^T) = D_k, \quad E(v_kd_k^T) = 0, \quad E(v_kv_j^T) = 0, \quad E(d_kd_j^T) = 0, \quad E(v_kd_j^T) = 0.$$

Let $y := y_1 + y_2$. Draw a simple block diagram, and label $y_1$ as "noise" and $y_2$ as "signal."




Explain how we can use the Kalman filtering theory to estimate $y_2$ from $y$. Define the appropriate matrices, so that appropriate calls to KF231B along with some simple arithmetic would produce the desired estimate.



Note: In this problem we can interpret $y_2$ as a meaningful signal, which is additively corrupted by noise, $y_1$, to produce a measurement $y$. The goal of filtering is to recover the meaningful signal ($y_2$) from the measured, "noisy" signal $y$. Remark: Be careful with subscripts. In the notation above, the subscript on $d$ and $v$ was interpreted as a time-index, but the subscript on $y_1$ and $y_2$ is just referring to two different signals (each of which depends on time).




In the commented example below, P1 is a first-order high-pass filter, and P2 is a first-order low-pass filter, and the separation is done using the steady-state values of the Kalman filter.



Study and run the code. Carefully examine the plots.




begin code







%% Separating Signal from Noise
% ME C231B, UC Berkeley

%% Sample Time
% For this discrete-time example, set TS=-1.  In Matlab, this
% just means an unspecified sampling time, totally within the
% context of pure discrete-time systems.
TS = -1;

%% Create high-pass filter
P1 = 0.4*tf([.5 -.5],[1 0],TS);
[A1,E1,C1,F1] = ssdata(P1);
nX1 = size(A1,1); % will be 1
nW1 = size(E1,2); % will be 1

%% Create low-pass filter
P2 = tf(.04,[1 -.96],TS);
[A2,E2,C2,F2] = ssdata(P2);
nX2 = size(A2,1); % will be 1
nW2 = size(E2,2); % will be 1


























%% Bode plot of both
bOpt = bodeoptions;
bOpt.PhaseVisible = 'off';
bOpt.MagUnits = 'abs';
bOpt.MagScale = 'log';
bOpt.FreqScale = 'linear';
bOpt.Xlim = [0 pi];
bOpt.Ylim = [1e-4 2];
bodeplot(P2,'r',P1,'k',bOpt)

%% Form overall system which adds the outputs
A = blkdiag(A1,A2);
E = blkdiag(E1,E2);
C = [C1 C2];
F = [F1 F2];
nX = size(A,1); nY = size(C,1); nW = size(E,2);

%% Noise variance and initial condition variance
% Keep it simple, and make everything Identity (appropriate dimension)
SigW = eye(nW);
Sxkk1 = eye(nX);

%% Run several iterations to get the steady-state Kalman Gains
nIter = 40;
for i=1:nIter
   Swk = SigW;
   [~,Sxk1k,~,Sxkk,Sykk1,Lk,Hk,Gk,~] = ...
      KF231B([],Sxkk1,A,[],C,E,F,Swk,[],[]);
   Sxkk1 = Sxk1k;
end

%% Form Kalman filter with 3 outputs
AKF = A-Lk*C;
BKF = Lk;
CKF = [eye(nX);eye(nX)-Hk*C;-Gk*C];
DKF = [zeros(nX,nY);Hk;Gk];
SSKF = ss(AKF,BKF,CKF,DKF,TS);

%% Form matrix to extract estimate of y2_{k|k}
% We need [0 C2]*xhat_{k|k} + [0 F2]*what_{k|k}.  Everything
% is scalar dimension, but we can form this matrix properly
% so that the example would work on other systems too.
M = [zeros(nY,nX) zeros(nY,nX1) C2 zeros(nY,nW1) F2];




















%% Bode plot of filter
% It makes sense that the filter will try to "pass" some
% low frequencies, to preserve y2, but will cutoff
% high-frequencies to reject y1.  The "pass" region should
% extend over the region where P2 has modest gain.  The Bode
% magnitude plot confirms this
bodeplot(P2,'r',P1,'k',M*SSKF,bOpt)
legend('P2','P1','Filter');

%% Single Simulation
% Create a w sequence consistent with variance assumption
wSeq = randn(100,2);

%% Get y1 and y2 (separate simulations) for later comparison
y1 = lsim(P1,wSeq(:,1));
y2 = lsim(P2,wSeq(:,2));
y = y1 + y2;

%% Form the cascade (system output goes directly to Kalman
%  Filter), and simulate, obtaining the outputs of Kalman Filter
Est = lsim(SSKF*ss(A,E,C,F,TS),wSeq);

%% Form Estimate of y2
% Est matrix is 100-by-6, so use transpose correctly to do
% reconstruction as a matrix multiply
y2Est = (M*Est')';

%% Plot
subplot(1,2,1);
plot(0:99,y2,'b+',0:99,y2Est,'ko',0:99,y,'r*');
legend('y2 (actual)','y2 (Est)','y (Measured)');
subplot(1,2,2);
plot(0:99,y2,'b+',0:99,y2Est,'ko');
legend('y2 (actual)','y2 (Est)');



end code






In the code cell above, we iterated 40 steps to get the "steady-state" gains, but did not verify convergence. Using simple methods, estimate approximately how many steps are actually needed for "convergence."



Follow the main ideas of the template KFexamplesTemplateWithSignalsButNoControl.m to rerun this example using the time-varying (not steady-state) filter. Compare the estimates achieved by the time-varying filter and the steady-state filter over the first 20 time steps.



Converting between Kalman filter formulations: In our class notes, we formulated the disturbance/noise as one vector-valued noise variable, $w_k \in \mathbb{R}^{n_w}$, with two matrices, $E \in \mathbb{R}^{n_x\times n_w}$ and $F \in \mathbb{R}^{n_y\times n_w}$, where $Ew$ additively affects the state evolution, and $Fw$ additively affects the measurement. The variance of $w_k$ was $W \in \mathbb{R}^{n_w\times n_w}$. By contrast, in the Matlab kalman implementation, there are two separate disturbance/noises, labeled $d \in \mathbb{R}^{n_d}$ and $v \in \mathbb{R}^{n_y}$. The state evolution is additively affected by $Gd$, and the measurement by $Hd + v$, for matrices $G \in \mathbb{R}^{n_x\times n_d}$ and $H \in \mathbb{R}^{n_y\times n_d}$. The variance of the combined vector is

$$E\begin{bmatrix} d_kd_k^T & d_kv_k^T \\ v_kd_k^T & v_kv_k^T \end{bmatrix} = \begin{bmatrix} Q & N \\ N^T & R \end{bmatrix}$$
Suppose the problem is specified in the Matlab format, namely




$$\{n_d,\ G,\ H,\ Q,\ N,\ R\}$$




How do you convert this description into our class format, namely




$$\{n_w,\ E,\ F,\ W\}$$




so that the problems are equivalent? Hint: Look at the means and variances of

$$\begin{bmatrix} Gd \\ Hd + v \end{bmatrix} \qquad\text{and}\qquad \begin{bmatrix} Ew \\ Fw \end{bmatrix}$$

For the problems to be equivalent, these statistical properties need to be equal.
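For reference, one construction consistent with this hint (an illustrative sketch, not necessarily the notation expected in your write-up) is to stack the two Matlab noises into a single class-format noise vector:

$$w_k := \begin{bmatrix} d_k \\ v_k \end{bmatrix}, \qquad n_w = n_d + n_y, \qquad E = \begin{bmatrix} G & 0 \end{bmatrix}, \qquad F = \begin{bmatrix} H & I \end{bmatrix}, \qquad W = E\!\left(w_kw_k^T\right) = \begin{bmatrix} Q & N \\ N^T & R \end{bmatrix},$$

so that $Ew_k = Gd_k$ and $Fw_k = Hd_k + v_k$, matching the means and variances above.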




This problem generalizes fact #10 in the Kalman derivation. Suppose $D_k$ and $R_k$ are known matrices, and a random variable $z_k$ is defined $z_k := D_kx_k + R_kw_k$.



(a) Define $\hat z_{k\mid k-1} := L(z_k \mid y_0, y_1, \dots, y_{k-1})$. Show that

$$\hat z_{k\mid k-1} = D_k\hat x_{k\mid k-1}$$

(b) Define $\hat z_{k\mid k} := L(z_k \mid y_0, y_1, \dots, y_{k-1}, y_k)$. Show that

$$\hat z_{k\mid k} = D_k\hat x_{k\mid k} + R_k\Sigma^w_kF_k^T\left(\Sigma^y_{k\mid k-1}\right)^{-1}e_k$$
In the derivation in class, there was a missing variance calculation, associated with $\hat x_{k\mid k}$ in Fact 16. It is easy to derive this. Recall the setup for Fact 16:




$$Z = x_k, \qquad P = Y_{k-1}, \qquad Q = y_k$$




We want to determine $\Sigma^x_{k\mid k}$, which is defined to be the variance of $x_k - \hat x_{k\mid k}$. In terms of $Z$, $P$ and $Q$, this is

$$x_k - \hat x_{k\mid k} = Z - L(Z\mid P, Q)$$




The general formula for $\Sigma_{Z - L(Z\mid P,Q)}$ is

$$\Sigma_{Z-L(Z\mid P)} \;-\; \Sigma_{Z-L(Z\mid P),\,Q-L(Q\mid P)}\,\Sigma_{Q-L(Q\mid P)}^{-1}\,\Sigma_{Z-L(Z\mid P),\,Q-L(Q\mid P)}^{T}$$

as derived in Fact #6. Fact #16 has all of these terms (for the specific choices of $Z$, $P$, $Q$). Task:




Substitute these expressions to derive




$$\Sigma^x_{k\mid k} = \Sigma^x_{k\mid k-1} - \Sigma^x_{k\mid k-1}C_k^T\left(\Sigma^y_{k\mid k-1}\right)^{-1}C_k\Sigma^x_{k\mid k-1}$$

as claimed in the final Kalman Filter equations.
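Spelling out the substitution (a sketch of the intended identification; these are the terms Fact #16 already provides for this choice of $Z = x_k$, $P = Y_{k-1}$, $Q = y_k$):

$$\Sigma_{Z-L(Z\mid P)} = \Sigma^x_{k\mid k-1}, \qquad \Sigma_{Z-L(Z\mid P),\,Q-L(Q\mid P)} = \Sigma^x_{k\mid k-1}C_k^T, \qquad \Sigma_{Q-L(Q\mid P)} = \Sigma^y_{k\mid k-1},$$

and substituting these three quantities into the Fact #6 formula above gives the boxed expression directly.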














(nothing to turn in - please read carefully) In the derivation in class, there was no control input in the dynamics - we derived the Kalman Filter under the dynamics



$$x_{k+1} = A_kx_k + E_kw_k$$




If the dynamics include control, then




$$x_{k+1} = A_kx_k + B_ku_k + E_kw_k$$




There are a few situations to consider:




Case 1: the sequence $\{u_k\}_{k=0}^{\infty}$ is deterministic and known. This occurs if the signal $u_k$ is just a prespecified forcing function.




Case 2: the value of $u_k$ is a linear combination of $Y_k$, say $u_k = J_kY_k$ for some deterministic matrix $J_k$. Note that the dimension of $J_k$ changes with $k$. In this case, since $Y_k$ is a random variable, it follows that $u_k$ is a random variable. This situation occurs if there is a feedback controller mapping the measurement $y_k$ to $u_k$ through a linear, time-varying dynamical system.




Case 3: the value of $u_k$ is a linear combination of $Y_k$ and a deterministic, known signal. This situation is a combination of the two simpler cases, and is very common (a feedback controller with an external reference input). Once they are understood, handling this case is quite easy.




In order to modify the derivation, it is important to consider all of the slides, starting with some of the batch matrix manipulations, and especially Fact #8. Regarding Fact #8, the two cases differ as follows:




For Case 1, the expression for $x_k$ is still linear in $x_0$ and $W_{k-1}$, but also has a linear term with $U_{k-1}$. Since $u$ is deterministic, this only changes the mean of $x_k$, but not any correlations or variances.



For Case 2, since $y_k$ is a linear combination of $x_k$ and $w_k$, the expression for $x_k$ is of the identical form (linear in $x_0$ and $W_{k-1}$), but the matrices are different, and involve $B_0, B_1, \dots, B_{k-1}$ and $J_0, J_1, \dots, J_{k-1}$.



Because of these simple differences, the boxed conclusions from Fact 8 remain true:




$$\Sigma_{x_k,w_k} = 0, \qquad \Sigma_{Y_{k-1},w_k} = 0.$$

Since Facts 9-18 do not involve the evolution equation for $x_{k+1}$, they are unchanged.




Fact 19 does change, since it involves the evolution equation. For Case 1, the term $B_ku_k$ is simply added to the update, since it is not a random variable. For Case 2, the evolution is $x_{k+1} = A_kx_k + B_kJ_kY_k + E_kw_k$. The linearity of the affine estimator means that




$$\begin{aligned}
L(x_{k+1}\mid Y_k) &= L(A_kx_k + B_kJ_kY_k + E_kw_k \mid Y_k) \\
&= A_kL(x_k\mid Y_k) + B_kJ_kL(Y_k\mid Y_k) + E_kL(w_k\mid Y_k) \\
&= A_k\hat x_{k\mid k} + B_kJ_kY_k + E_k\hat w_{k\mid k} \\
&= A_k\hat x_{k\mid k} + B_ku_k + E_k\hat w_{k\mid k}
\end{aligned}$$

where we used the fact that $L(Y_k\mid Y_k) = Y_k$ and $u_k = J_kY_k$.

















So, the change to the boxed expression in Fact 19 is simply the addition of the term $B_ku_k$, in both cases.







Fact 20 is unchanged. Fact 21 involves the expression $x_{k+1} - \hat x_{k+1\mid k}$, so it must be studied. However, the only difference (for both Case 1 and 2) is that now both terms, $x_{k+1}$ and $\hat x_{k+1\mid k}$, include the additional additive term $B_ku_k$. By subtraction, these cancel, and the calculation of the variances is unaffected.




Hence, for both Case 1 and Case 2 (and Case 3, by a simple argument involving both ideas), the only change to the Kalman filter equations is the inclusion of the $B_ku_k$ term in the State Prediction equation, namely




$$\hat x_{k+1\mid k} = A_k\hat x_{k\mid k} + B_ku_k + E_k\Sigma^w_kF_k^T\left(\Sigma^y_{k\mid k-1}\right)^{-1}e_k$$
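In code, a minimal sketch of what this means for a modified KF231B (variable names taken from its help text; the exact placement within the file is an assumption, since the original contents are not reproduced here) is that the state-prediction line simply gains a B*uk term:

begin code
% Sketch only (not the actual contents of KF231B): state prediction with control,
% matching the boxed equation above.  Here E*wkk stands in for
% E*Sigma^w*F'*inv(Sigma^y_{k|k-1})*ek, since wkk = what_{k|k} is already computed.
xk1k = A*xkk + B*uk + E*wkk;
end code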















































































































































