
김영길 / Linear Algebra / Directional preference of covariance matrix for random process

Simulation Solutions

$$HH^{\dagger} = K = E \Lambda E^{\dagger}$$
$$\Rightarrow I = H^{-1} E \Lambda E^{\dagger} (H^{-1})^{\dagger} = (H^{-1} E \Lambda^{1/2})(H^{-1} E \Lambda^{1/2})^{\dagger}$$
so $U^{\dagger} \triangleq H^{-1} E \Lambda^{1/2}$ is unitary
$$\Rightarrow H = E \Lambda^{1/2} U$$
$$z_{0}(u) = E \Lambda^{1/2} U w(u)$$
An orthogonal matrix $U$ can always be chosen so that $E \Lambda^{1/2} U$ has all zeros above the diagonal.
$E \Lambda^{1/2} U$ is then a causal operator.
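Below is a minimal numpy sketch of this construction (not from the original notes; the matrix `K` is an arbitrary positive-definite example). One way to pick a unitary $U$ that makes $E \Lambda^{1/2} U$ lower triangular is a QR decomposition of $(E \Lambda^{1/2})^{\dagger}$:

```python
import numpy as np

# Sketch: build a causal (lower-triangular) factor H = E Λ^{1/2} U of a
# covariance matrix K, then simulate z(u) = H w(u).
K = np.array([[4.0, 2.0],
              [2.0, 7.0]])          # illustrative positive-definite covariance

lam, E = np.linalg.eigh(K)          # K = E Λ E†
A = E @ np.diag(np.sqrt(lam))       # A = E Λ^{1/2}, so A A† = K

# QR of A† gives A† = Q R with R upper triangular, hence
# A Q = (Q R)† Q = R† is lower triangular: take U = Q.
Q, _ = np.linalg.qr(A.conj().T)
H = A @ Q

assert np.allclose(H @ H.conj().T, K)      # H H† = K
assert np.allclose(np.triu(H, k=1), 0)     # zeros above the diagonal

rng = np.random.default_rng(0)
w = rng.standard_normal((2, 100_000))      # white, unit-variance inputs
z = H @ w
print(np.cov(z))                           # ≈ K
```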

Example 4.1

Causal factorization of a covariance matrix using its eigenvectors.
For a random vector $y(u)$ with covariance matrix
$$K_{y} = \left[ \begin{array}{ccc} 1 & -\frac{1}{2} & -\frac{1}{2} \\ -\frac{1}{2} & 1 & -\frac{1}{2} \\ -\frac{1}{2} & -\frac{1}{2} & 1 \end{array} \right]$$
if we find a matrix $H$ such that $HH^{\dagger} = K_{y}$, then $y(u)$ can be simulated as $y(u) = H w(u)$.
To find $H$:
$$(K_{y} - \lambda I)e = 0$$
$$\det(K_{y} - \lambda I) = 0$$
$$\det \left[ \begin{array}{ccc} 1-\lambda & -\frac{1}{2} & -\frac{1}{2} \\ -\frac{1}{2} & 1-\lambda & -\frac{1}{2} \\ -\frac{1}{2} & -\frac{1}{2} & 1-\lambda \end{array} \right] = -\frac{\lambda}{4}(2\lambda - 3)^{2} = 0$$
$$K_{y} e_{1} = 0 \Rightarrow e_{1}^{t} = \tfrac{1}{\sqrt{3}}(1, 1, 1)$$
$$(K_{y} - \tfrac{3}{2} I) e_{2} = 0 \Rightarrow e_{2}^{t} = \tfrac{1}{\sqrt{2}}(1, -1, 0)$$
$$(K_{y} - \tfrac{3}{2} I) e_{3} = 0 \Rightarrow e_{3}^{t} = \sqrt{\tfrac{2}{3}}(\tfrac{1}{2}, \tfrac{1}{2}, -1)$$
$$E \Lambda^{1/2} = \left[ \begin{array}{ccc} 0 & \frac{\sqrt{3}}{2} & \frac{1}{2} \\ 0 & -\frac{\sqrt{3}}{2} & \frac{1}{2} \\ 0 & 0 & -1 \end{array} \right]$$
Dropping the zero column,
$$y(u) = \left[ \begin{array}{cc} \frac{\sqrt{3}}{2} & \frac{1}{2} \\ -\frac{\sqrt{3}}{2} & \frac{1}{2} \\ 0 & -1 \end{array} \right] w(u)$$
By choosing
$$U = \left[ \begin{array}{ccc} 0 & 0 & 1 \\ \frac{\sqrt{3}}{2} & -\frac{1}{2} & 0 \\ \frac{1}{2} & \frac{\sqrt{3}}{2} & 0 \end{array} \right]$$
we obtain the causal (lower-triangular) factor
$$E \Lambda^{1/2} U = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ -\frac{1}{2} & \frac{\sqrt{3}}{2} & 0 \\ -\frac{1}{2} & -\frac{\sqrt{3}}{2} & 0 \end{array} \right]$$
so that
$$z(u) = \left[ \begin{array}{cc} 1 & 0 \\ -\frac{1}{2} & \frac{\sqrt{3}}{2} \\ -\frac{1}{2} & -\frac{\sqrt{3}}{2} \end{array} \right] w(u)$$
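A quick numerical check of this example (a sketch added here, not part of the original notes):

```python
import numpy as np

# Verify that the causal factor from Example 4.1 reproduces K_y.
Ky = np.array([[ 1.0, -0.5, -0.5],
               [-0.5,  1.0, -0.5],
               [-0.5, -0.5,  1.0]])
s3 = np.sqrt(3) / 2
H = np.array([[ 1.0,  0.0],
              [-0.5,  s3 ],
              [-0.5, -s3 ]])        # the 3x2 factor in z(u) = H w(u)
print(np.allclose(H @ H.T, Ky))     # True: H H† = K_y
```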

Brute Force Factorization

$$K = HH^{\dagger}$$
$$\left[ \begin{array}{ccc} 1 & -\frac{1}{2} & -\frac{1}{2} \\ -\frac{1}{2} & 1 & -\frac{1}{2} \\ -\frac{1}{2} & -\frac{1}{2} & 1 \end{array} \right] = \left[ \begin{array}{ccc} h_{11} & 0 & 0 \\ h_{21} & h_{22} & 0 \\ h_{31} & h_{32} & h_{33} \end{array} \right] \left[ \begin{array}{ccc} h_{11}^{*} & h_{21}^{*} & h_{31}^{*} \\ 0 & h_{22}^{*} & h_{32}^{*} \\ 0 & 0 & h_{33}^{*} \end{array} \right]$$
Equating entries column by column (the phase of each solution is free):
$$1 = |h_{11}|^{2} \Leftarrow h_{11} = i$$
$$-\tfrac{1}{2} = h_{21} h_{11}^{*} = -h_{21} i \Leftrightarrow h_{21} = -\tfrac{i}{2}$$
$$-\tfrac{1}{2} = h_{31} h_{11}^{*} = -h_{31} i \Leftrightarrow h_{31} = -\tfrac{i}{2}$$
$$1 = |h_{21}|^{2} + |h_{22}|^{2} = \tfrac{1}{4} + |h_{22}|^{2} \Leftarrow h_{22} = -\tfrac{\sqrt{3}}{2}$$
$$-\tfrac{1}{2} = h_{31} h_{21}^{*} + h_{32} h_{22}^{*} = \tfrac{1}{4} - \tfrac{\sqrt{3}}{2} h_{32} \Rightarrow h_{32} = \tfrac{\sqrt{3}}{2}$$
$$1 = |h_{31}|^{2} + |h_{32}|^{2} + |h_{33}|^{2} = \tfrac{1}{4} + \tfrac{3}{4} + |h_{33}|^{2} \Rightarrow h_{33} = 0$$
Dropping the zero third column,
$$H = \left[ \begin{array}{cc} i & 0 \\ -\frac{i}{2} & -\frac{\sqrt{3}}{2} \\ -\frac{i}{2} & \frac{\sqrt{3}}{2} \end{array} \right]$$
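The same elimination can be run mechanically. Below is a minimal sketch of the brute-force (Cholesky-style) recursion; the function name and tolerance are my own, and it picks positive real pivots instead of the complex phase $h_{11} = i$ used above, which reproduces the real factor from Example 4.1:

```python
import numpy as np

def brute_force_factor(K, tol=1e-12):
    """Lower-triangular H with H H^T = K, filled column by column.

    K_y is singular (one zero eigenvalue), so the last pivot is 0 and
    np.linalg.cholesky would reject it; this loop just keeps the zero.
    """
    n = K.shape[0]
    H = np.zeros_like(K, dtype=float)
    for j in range(n):
        d = K[j, j] - H[j, :j] @ H[j, :j]      # pivot: K_jj - Σ_k |h_jk|^2
        H[j, j] = np.sqrt(max(d, 0.0))
        for i in range(j + 1, n):
            s = K[i, j] - H[i, :j] @ H[j, :j]
            H[i, j] = s / H[j, j] if H[j, j] > tol else 0.0
    return H

Ky = np.array([[ 1.0, -0.5, -0.5],
               [-0.5,  1.0, -0.5],
               [-0.5, -0.5,  1.0]])
H = brute_force_factor(Ky)
print(np.allclose(H @ H.T, Ky))   # True; third column of H is all zeros
```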
$$K = E \Lambda E^{\dagger} = \left[ \begin{array}{cccc} \lambda_{1} e_{1} & \lambda_{2} e_{2} & \cdots & \lambda_{n} e_{n} \end{array} \right] \left[ \begin{array}{c} e_{1}^{\dagger} \\ e_{2}^{\dagger} \\ \vdots \\ e_{n}^{\dagger} \end{array} \right] = \sum_{i=1}^{n} \lambda_{i} e_{i} e_{i}^{\dagger}$$
The eigenvalues of $K$ are called the spectrum of $K$.
For $x = \sum_{j=1}^{n} a_{j} e_{j}$,
$$y = Kx = \left( \sum_{i=1}^{n} \lambda_{i} e_{i} e_{i}^{\dagger} \right) \left( \sum_{j=1}^{n} a_{j} e_{j} \right)$$
Using the orthonormality of the $e_{i}$,
$$y = \sum_{i=1}^{n} \lambda_{i} a_{i} e_{i}$$
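A small sketch of this spectral computation (the example matrix and vector are my own illustrations):

```python
import numpy as np

# Apply K to x through the spectral representation K = Σ λ_i e_i e_i†.
K = np.array([[4.0, 2.0],
              [2.0, 7.0]])
lam, E = np.linalg.eigh(K)        # columns of E are the eigenvectors e_i
x = np.array([1.0, -2.0])

a = E.T @ x                       # a_i = e_i† x (real case)
y = E @ (lam * a)                 # y = Σ λ_i a_i e_i
print(np.allclose(y, K @ x))      # True
```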

Eigen-Representation of Random Vectors

Returning to the simulation problem,
$$z(u) = E \Lambda^{1/2} w(u) + m_{z}$$
$$z(u) = m_{z} + \sum_{j=1}^{n} x_{j}(u) e_{j} = m_{z} + \sum_{j=1}^{n} \sqrt{\lambda_{j}}\, w_{j}(u)\, e_{j}$$
This representation of $z(u)$ as a random linear combination of known vectors with uncorrelated coefficients is the finite-dimensional analog of the Karhunen-Loève expansion.
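A sketch of this finite-dimensional Karhunen-Loève synthesis (the covariance, mean vector, and sample count are illustrative assumptions):

```python
import numpy as np

# Synthesize z(u) = m_z + Σ_j sqrt(λ_j) w_j(u) e_j with uncorrelated,
# unit-variance coefficients w_j(u).
K = np.array([[4.0, 2.0],
              [2.0, 7.0]])
m_z = np.array([1.0, -1.0])
lam, E = np.linalg.eigh(K)

rng = np.random.default_rng(1)
w = rng.standard_normal((2, 200_000))          # w_j(u): white, variance 1
z = m_z[:, None] + E @ (np.sqrt(lam)[:, None] * w)

print(z.mean(axis=1))    # ≈ m_z
print(np.cov(z))         # ≈ K
```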

Average Length Measures for Random Vectors

$$\mathbb{E}\{|z(u)|^{2}\} = \mathbb{E}\{z^{\dagger}(u) z(u)\} = \sum_{t=1}^{n} R_{z}(t, t) = \mathrm{tr}(R_{z})$$
For mean-zero $z(u)$ (so that $R_{z} = K_{z}$),
$$\mathbb{E}\{|z(u)|^{2}\} = \sum_{i=1}^{n} \lambda_{i}$$
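Both identities are easy to confirm numerically (a sketch, using the $K_{z}$ of the example below):

```python
import numpy as np

# E{|z|^2} = tr(K_z) = Σ λ_i for a mean-zero z(u).
Kz = np.array([[4.0, 2.0],
               [2.0, 7.0]])
lam = np.linalg.eigvalsh(Kz)
print(np.trace(Kz), lam.sum())            # 11.0 and 11.0

rng = np.random.default_rng(2)
z = np.linalg.cholesky(Kz) @ rng.standard_normal((2, 200_000))
print(np.mean(np.sum(z ** 2, axis=0)))    # ≈ 11
```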

Directional Preference

For a vector $b$ of unit length $|b| = 1$, the projection $(z_{0}(u), b)\, b \triangleq b^{\dagger} z_{0}(u)\, b$ has mean-squared length
$$\mathbb{E}\{|(z_{0}(u), b)\, b|^{2}\} = \mathbb{E}\{|b^{\dagger} z_{0}(u)|^{2}\}\, |b|^{2} = \mathbb{E}\{b^{\dagger} z_{0}(u) z_{0}^{\dagger}(u) b\} = b^{\dagger} K_{z} b$$
Assume $z(u)$ is a mean-zero real random vector with
$$K_{z} = \left[ \begin{array}{cc} 4 & 2 \\ 2 & 7 \end{array} \right]$$
$$\lambda_{1} = 3, \quad e_{1} = \tfrac{1}{\sqrt{5}} \left[ \begin{array}{c} 2 \\ -1 \end{array} \right]$$
$$\lambda_{2} = 8, \quad e_{2} = \tfrac{1}{\sqrt{5}} \left[ \begin{array}{c} 1 \\ 2 \end{array} \right]$$
The rms length of $z(u)$ is
$$\sqrt{\mathbb{E}\{|z(u)|^{2}\}} = \sqrt{11} \approx 3.32$$
The directional preference of $z(u)$ is computed by choosing $b$ to be the eigenvectors of $K_{z}$.
For $b = e_{1}$ ($\beta_{1} = 1, \beta_{2} = 0$):
$$\sqrt{\mathbb{E}\{[e_{1}^{t} z(u)]^{2}\}} = \sqrt{e_{1}^{t} K_{z} e_{1}} = \sqrt{\lambda_{1}} \approx 1.73$$
For $b = e_{2}$ ($\beta_{1} = 0, \beta_{2} = 1$):
$$\sqrt{\mathbb{E}\{[e_{2}^{t} z(u)]^{2}\}} = \sqrt{e_{2}^{t} K_{z} e_{2}} = \sqrt{\lambda_{2}} \approx 2.83$$
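These two rms values can be reproduced numerically (a sketch using the same $K_{z}$):

```python
import numpy as np

# The rms projection of z(u) onto each eigenvector of K_z is sqrt(λ_i).
Kz = np.array([[4.0, 2.0],
               [2.0, 7.0]])
lam, E = np.linalg.eigh(Kz)                  # lam = [3., 8.]
for l, b in zip(lam, E.T):                   # b runs over e_1, e_2
    print(np.sqrt(b @ Kz @ b), np.sqrt(l))   # 1.73..., then 2.83...
```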
The maximum and minimum rms projections of $z(u)$ point along the eigenvectors. To see this, write $b$ in terms of the eigenvectors of $K_{z}$:
$$b = \sum_{i=1}^{n} \beta_{i} e_{i}, \qquad \beta_{i} = e_{i}^{\dagger} b$$
Since $b$ is a unit vector and the eigenvector columns of $E$ form an orthonormal set,
$$1 = |b|^{2} = \left| \sum_{i=1}^{n} \beta_{i} e_{i} \right|^{2} = \sum_{i=1}^{n} \sum_{j=1}^{n} \beta_{i}^{*} \beta_{j}\, e_{i}^{\dagger} e_{j} = \sum_{i=1}^{n} |\beta_{i}|^{2}$$
$$\mathbb{E}\{|(z_{0}(u), b)\, b|^{2}\} = b^{\dagger} K_{z} b = \sum_{i=1}^{n} \sum_{j=1}^{n} \beta_{i}^{*} \beta_{j}\, e_{i}^{\dagger} K_{z} e_{j} = \sum_{i=1}^{n} |\beta_{i}|^{2} \lambda_{i}$$
Since the weights $|\beta_{i}|^{2}$ are nonnegative and sum to one, this is a convex combination of the eigenvalues:
$$\min_{i} \lambda_{i} \leq \sum_{i=1}^{n} |\beta_{i}|^{2} \lambda_{i} \leq \max_{i} \lambda_{i}$$
Hence the projection variance always lies between the smallest and largest eigenvalues of $K_{z}$.
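As a final check (a sketch with randomly drawn directions), the quadratic form $b^{\dagger} K_{z} b$ for any unit vector $b$ indeed stays inside $[\lambda_{\min}, \lambda_{\max}]$:

```python
import numpy as np

# b† K_z b lies in [λ_min, λ_max] for every unit vector b.
Kz = np.array([[4.0, 2.0],
               [2.0, 7.0]])
lam = np.linalg.eigvalsh(Kz)                   # [3., 8.]

rng = np.random.default_rng(3)
for _ in range(5):
    b = rng.standard_normal(2)
    b /= np.linalg.norm(b)                     # normalize to |b| = 1
    q = b @ Kz @ b
    print(lam[0] <= q <= lam[1], round(q, 3))  # always True
```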