Linear Algebra
Contents
- TASHOUN(她笑)
- Spectral Decomposition
- Equivalent Conditions of an Invertible Matrix
- Consistency of Ax=b
- A Motivation of Adjoint Operators
- A Motivation of Normal Operators
- Transpose, Trace, Determinant, Rank, Inverse
- A Geometric Interpretation of Determinant
- Properties of the Determinant
- Recurrence Relations (A Linear Algebra Viewpoint)
- Diamond
- The Importance of Strang's Big Picture
- Jordan Canonical Form
TASHOUN(她笑)
| real | complex |
|---|---|
| transpose A^t | adjoint A^* |
| orthogonally diagonalizable \Leftrightarrow (spectral theorem) symmetric A^t=A | Hermitian (self-adjoint) A^*=A \Rightarrow unitarily diagonalizable \Leftrightarrow (spectral theorem) normal |
| orthogonal A^tA=I | unitary A^*A=I |
Note that one imperfection of the table above is that being Hermitian is not an equivalent condition for being unitarily diagonalizable, so the table is not perfectly symmetric.
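A quick numerical illustration of the two spectral theorems in the table (added here as a sketch, using NumPy; the matrices are arbitrary examples): a real symmetric matrix is orthogonally diagonalized with np.linalg.eigh, and a normal but non-Hermitian matrix is unitarily diagonalized.

```python
import numpy as np

# Real case: a symmetric matrix is orthogonally diagonalizable.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                      # A^t = A
lam, P = np.linalg.eigh(A)                      # eigh returns orthonormal eigenvectors
print(np.allclose(P @ np.diag(lam) @ P.T, A))   # A = P D P^t
print(np.allclose(P.T @ P, np.eye(2)))          # P is orthogonal

# Complex case: a normal (here non-Hermitian) matrix is unitarily diagonalizable.
B = np.array([[0.0, -1.0],
              [1.0,  0.0]])                     # B^* B = B B^*, but B^* != B
mu, U = np.linalg.eig(B)                        # eigenvalues +-i are distinct, so the
                                                # normalized eigenvectors are orthonormal
print(np.allclose(U.conj().T @ U, np.eye(2)))   # U is unitary
print(np.allclose(U @ np.diag(mu) @ U.conj().T, B))
```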
Spectral Decomposition
Let

$$A=PDP^t=\begin{pmatrix} u_1 \mid u_2 \mid \cdots \mid u_n \end{pmatrix}\begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}\begin{pmatrix} u_1^t \\ u_2^t \\ \vdots \\ u_n^t \end{pmatrix}=\lambda_1 u_1 u_1^t+\lambda_2 u_2 u_2^t+\cdots+\lambda_n u_n u_n^t$$

be the orthogonal diagonalization of A. For any v,

$$u_i u_i^t v=(u_i^t v)\,u_i=\langle u_i,v\rangle u_i=\langle v,u_i\rangle u_i=\frac{\langle v,u_i\rangle}{\|u_i\|^2}\,u_i=\mathrm{proj}_{\mathrm{span}(u_i)}v,$$

the orthogonal projection of v onto the subspace \mathrm{span}(u_i)\leq E_{\lambda_i}.
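A small NumPy sketch of the identity above (added for illustration; the matrix and vector are arbitrary examples): A is rebuilt as the sum of the rank-one pieces \lambda_i u_i u_i^t, and u_i u_i^t v is checked against the projection formula.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])                      # symmetric, hence orthogonally diagonalizable
lam, P = np.linalg.eigh(A)                      # columns of P are the orthonormal u_i

# Spectral decomposition: A = sum_i lambda_i * u_i u_i^t
A_rebuilt = sum(lam[i] * np.outer(P[:, i], P[:, i]) for i in range(len(lam)))
print(np.allclose(A_rebuilt, A))

# u_i u_i^t v is the orthogonal projection of v onto span(u_i)
v = np.array([1.0, 2.0])
u0 = P[:, 0]
proj = (v @ u0) / (u0 @ u0) * u0                # <v, u_0> / ||u_0||^2 * u_0
print(np.allclose(np.outer(u0, u0) @ v, proj))
```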
Equivalent Conditions of an Invertible Matrix

See Anton's Theorem 6.4.5 in Elementary Linear Algebra.
If A is an n×n matrix, then the following statements are equivalent.
- A is invertible.
- Ax=0 has only the trivial solution.
- The reduced row echelon form of A is In.
- A is expressible as a product of elementary matrices.
- Ax=b is consistent for every n×1 matrix b.
- Ax=b has exactly one solution for every n×1 matrix b.
- \det(A)\neq 0.
- The column vectors of A are linearly independent.
- The row vectors of A are linearly independent.
- The column vectors of A span \mathbb{R}^n.
- The row vectors of A span \mathbb{R}^n.
- The column vectors of A form a basis for \mathbb{R}^n.
- The row vectors of A form a basis for \mathbb{R}^n.
- A has rank n.
- A has nullity 0.
- The orthogonal complement of the null space of A is \mathbb{R}^n.
- The orthogonal complement of the row space of A is \{\mathbf{0}\}.
- The kernel of T_A is \{\mathbf{0}\}.
- The range of T_A is \mathbb{R}^n.
- T_A is one-to-one.
- \lambda=0 is not an eigenvalue of A.
- A^T A is invertible.
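A brief numerical sanity check of a few of the equivalences in the list above (added here as a sketch; the random matrix and tolerances are arbitrary choices): nonzero determinant, rank n, a unique solution of Ax = b, and 0 not being an eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))                  # a random square matrix is almost surely invertible

print(abs(np.linalg.det(A)) > 1e-12)             # det(A) != 0
print(np.linalg.matrix_rank(A) == n)             # rank A = n, so nullity A = n - rank A = 0

b = rng.standard_normal(n)
x = np.linalg.solve(A, b)                        # Ax = b has exactly one solution
print(np.allclose(A @ x, b))

# lambda = 0 is not an eigenvalue of A
print(np.all(np.abs(np.linalg.eigvals(A)) > 1e-12))
```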
The list above is not that convenient to use, because in general we need not have m=n, so I personally prefer the equivalences below. In what follows, A is an m\times n matrix, and each equivalence is labeled with the reason behind it.
(Viewpoints, from left to right: the linear transformation, the vector space, the dimension, and the vectors.)

One-to-one:

A:F^n\to F^m is one-to-one \Leftrightarrow (Theorem 8.2.1) \text{kernel }A=\{\mathbf{0}\} \Leftrightarrow (definition) \text{nullity }A=0 \Leftrightarrow (Dimension Theorem) \text{rank }A=n

- \text{rank }A=n \Leftrightarrow the columns of A are linearly independent, since the rank equals the number of linearly independent columns.
- \text{rank }A=n \Leftrightarrow the rows of A span F^n, since \text{rank }A^t=n \Leftrightarrow the columns of A^t span F^n.

Onto:

A:F^n\to F^m is onto \Leftrightarrow (definition) \text{image }A=F^m \Leftrightarrow (rank = the dimension of the image) \text{rank }A=m

- \text{rank }A=m \Leftrightarrow the rows of A are linearly independent, since the rank equals the number of linearly independent rows.
- \text{rank }A=m \Leftrightarrow the columns of A span F^m, since the column space equals the image of A.
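The rank criteria above are easy to check numerically; the following NumPy sketch (added here, with an arbitrary 3\times 2 example) verifies that rank = n corresponds to a one-to-one map and rank = m to an onto map.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                       # 3x2, columns linearly independent

m, n = A.shape
print(np.linalg.matrix_rank(A) == n)             # rank = n  <=>  A : F^n -> F^m is one-to-one (True)
print(np.linalg.matrix_rank(A) == m)             # rank = m  <=>  A : F^n -> F^m is onto (False here)

B = A.T                                          # 2x3; the rows of A become the columns of B
print(np.linalg.matrix_rank(B) == B.shape[0])    # rank = m, so B : F^3 -> F^2 is onto (True)
```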
Consistency of A\mathbf{x}=\mathbf{b}
- \text{rank}(A)\neq \text{rank}(A|\mathbf{b})\Leftrightarrow A\mathbf{x}=\mathbf{b} has no solutions
- \text{rank}(A)=\text{rank}(A|\mathbf{b})=n\Leftrightarrow A\mathbf{x}=\mathbf{b} has exactly one solution
- \text{rank}(A)=\text{rank}(A|\mathbf{b})< n\Leftrightarrow A\mathbf{x}=\mathbf{b} has infinitely many solutions
These three statements can be obtained from the one-to-one and onto equivalences above. Note also that A\mathbf{x}=\mathbf{b} has only three possible outcomes: no solution, exactly one solution, or infinitely many solutions. Proof: suppose \mathbf{x}_1 and \mathbf{x}_2 are two distinct solutions; then \mathbf{x}_1+c(\mathbf{x}_1-\mathbf{x}_2) is also a solution for every real number c, which already gives infinitely many solutions.

Note that, taking three planes in space as an example, even after determining whether the system has no solution, infinitely many solutions, or a unique solution, one still has to examine further (for instance, using the normal vectors) how the three planes actually intersect. (The figures of intersecting planes come from Anton's book.)
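A small NumPy sketch of the rank test above (added for illustration; the helper name classify and the example systems are mine, not from the original post).

```python
import numpy as np

def classify(A, b):
    """Classify Ax = b by comparing rank(A) with rank([A | b])."""
    n = A.shape[1]
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rA != rAb:
        return "no solution"
    return "exactly one solution" if rA == n else "infinitely many solutions"

A = np.array([[1.0, 1.0], [2.0, 2.0]])
print(classify(A, np.array([1.0, 3.0])))         # ranks differ: no solution
print(classify(A, np.array([1.0, 2.0])))         # rank(A) = rank(A|b) = 1 < 2: infinitely many
print(classify(np.eye(2), np.array([1.0, 2.0]))) # rank = 2 = n: exactly one solution
```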
A Motivation of Adjoint Operators
A Motivation of Normal Operators
Transpose, Trace, Determinant, Rank, Inverse
| | transpose | trace | det | rank | inverse |
|---|---|---|---|---|---|
| cA | (cA)^t=cA^t (by def.) | \text{tr }(cA)=c\cdot\text{tr }A (by def.) | \det{(cA)}=c^n\det{A} (by def.) | \text{rank }(cA)=\text{rank }A for c\neq 0 | (cA)^{-1}=c^{-1}A^{-1} for c\neq 0 (by def.) |
| A+B | (A+B)^t=A^t+B^t (by def.) | \text{tr }(A+B)=\text{tr }A+\text{tr }B (by def.) | | \text{rank }(A+B)\leq\text{rank }A+\text{rank }B | |
| AB | (AB)^t=B^tA^t (by def.) | \text{tr }AB=\text{tr }BA (by def.) | \det{AB}=\det{A}\cdot\det{B} (difficult) | \text{rank }AB\leq\min\{\text{rank }A,\text{rank }B\} | (AB)^{-1}=B^{-1}A^{-1} (by def.) |
| A^t | (A^t)^t=A (by def.) | \text{tr }A^t=\text{tr }A (by def.) | \det{A^t}=\det{A} (by def.) | \text{rank }A^t=\text{rank }A (difficult) | (A^t)^{-1}=(A^{-1})^t |
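The following NumPy snippet (added here as a sketch, with arbitrary random matrices) spot-checks several identities from the table: tr(AB)=tr(BA), det(AB)=det(A)det(B), det(cA)=c^n det(A), (AB)^{-1}=B^{-1}A^{-1}, and rank(AB) \leq min(rank A, rank B).

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
c = 2.0

print(np.allclose(np.trace(A @ B), np.trace(B @ A)))               # tr(AB) = tr(BA)
print(np.allclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B)))            # det(AB) = det(A) det(B)
print(np.allclose(np.linalg.det(c * A), c**3 * np.linalg.det(A)))  # det(cA) = c^n det(A), n = 3
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))            # (AB)^{-1} = B^{-1} A^{-1}
print(np.linalg.matrix_rank(A @ B)
      <= min(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B)))  # rank(AB) <= min(rank A, rank B)
```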
A Geometric Interpretation of Determinant
See here (an animated version is here). Another more common proof is this one, and there is also a less common one, but I think neither of the latter two is as simple as the first.
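For concreteness (an added sketch, not taken from the linked proofs): |det A| is the factor by which the map scales area, so for a 2\times 2 matrix it equals the area of the parallelogram spanned by the columns, computed below as base times height.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
u, v = A[:, 0], A[:, 1]                          # the columns span the image of the unit square

proj = (v @ u) / (u @ u) * u                     # projection of v onto u
area = np.linalg.norm(u) * np.linalg.norm(v - proj)   # parallelogram area = base * height
print(np.isclose(area, abs(np.linalg.det(A))))   # |det A| equals that area
```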
Properties of the Determinant
See Friedberg's Linear Algebra
- If B is a matrix obtained by interchanging any two rows or interchanging any two columns of an n\times n matrix A, then \det{B} = −\det{A}. (Proof: By the definition.)
- If B is a matrix obtained by multiplying each entry of some row or column of an n\times n matrix A by a scalar k, then \det{B} = k\cdot \det{A}. (Proof: By the definition.)
- If B is a matrix obtained from an n\times n matrix A by adding a multiple of row i to row j or a multiple of column i to column j for i\neq j, then \det{B} = \det{A}. (Proof: To be supplied.)
- The determinant of an upper triangular matrix is the product of its diagonal entries. In particular, \det{I} = 1. (Proof: By the definition.)
- If two rows (or columns) of a matrix are identical, then the determinant of the matrix is zero. (Proof: By 3 and 10.)
- For any n\times n matrices A and B, \det{AB}=\det{A}\cdot \det{B}. (Proof: It is difficult to prove.)
- An n\times n matrix A is invertible if and only if \det{A}\neq 0. Furthermore, if A is invertible, then \det{A^{-1}}=\frac{1}{\det{A}}. (Proof: By 4 and 6.)
- For any n\times n matrix A, the determinants of A and A^t are equal. (Proof: By the definition.)
- If A and B are similar matrices, then \det{A}=\det{B}. (Proof: By 6 and 7.)
- If A has a 0 row or a 0 column, then \det{A}=0. (Proof: By the definition.)
- If the ith row equals a multiple of the jth row, then \det{A}=0. (Proof: By 3 and 10.)
- \det{A} is row linear. (Proof: To be supplied.)
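A numerical spot check of several of the properties above (added here as a sketch; the random 4\times 4 matrix is an arbitrary choice): a row swap flips the sign, scaling a row scales the determinant, adding a multiple of one row to another leaves it unchanged, a triangular determinant is the product of the diagonal, and \det{A^t}=\det{A}.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
det = np.linalg.det

B = A[[1, 0, 2, 3], :]                            # interchange rows 0 and 1
print(np.allclose(det(B), -det(A)))               # the sign flips

C = A.copy(); C[2] *= 5.0                         # multiply one row by k = 5
print(np.allclose(det(C), 5.0 * det(A)))          # det scales by k

D = A.copy(); D[3] += 2.0 * A[1]                  # add a multiple of row 1 to row 3
print(np.allclose(det(D), det(A)))                # det is unchanged

U = np.triu(A)                                    # an upper triangular matrix
print(np.allclose(det(U), np.prod(np.diag(U))))   # det = product of diagonal entries

print(np.allclose(det(A.T), det(A)))              # det(A^t) = det(A)
```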
Recurrence Relations (A Linear Algebra Viewpoint)

In university (or even in high school) we learn about recurrence relations, for example a_{n+2}=5a_{n+1}-6a_n for n\geq 1. The method most textbooks teach is: first form the characteristic equation of the recurrence, r^2=5r-6, find its roots r=2 and r=3, and conclude that the solution of the recurrence is a_n=c_1\cdot 2^n+c_2\cdot 3^n.

As another example, take the recurrence a_{n+2}=4a_{n+1}-4a_n. Its characteristic equation r^2=4r-4 has the repeated root r=2, and in this case the solution has to be adjusted to a_n=c_1\cdot 2^n+c_2\cdot n\cdot 2^n. At this point a student may well have three questions.

- The characteristic equation is a concept from linear algebra. Are recurrence relations related to vector spaces and linear transformations, and why does solving a recurrence relation involve a characteristic equation?
- How do we know that when the characteristic equation has a repeated root, the solution of the recurrence must be adjusted in this way?
- Most textbooks simply state the solution of a recurrence and then verify it by substitution. Surely mathematicians did not know the form of the solution when they first studied recurrence relations, so how did they discover it?

The purpose of this article is to answer these three questions.
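As a preview of the connection the first question points at (an added sketch, not the article's full answer): the recurrence a_{n+2}=5a_{n+1}-6a_n can be rewritten as a matrix iteration, and the eigenvalues of that matrix are exactly the roots 2 and 3 of the characteristic equation.

```python
import numpy as np

# a_{n+2} = 5 a_{n+1} - 6 a_n, written as [a_{n+2}, a_{n+1}]^t = M [a_{n+1}, a_n]^t
M = np.array([[5.0, -6.0],
              [1.0,  0.0]])

# The eigenvalues of M are the roots of r^2 = 5r - 6.
print(sorted(np.linalg.eigvals(M).real))          # approximately [2.0, 3.0]

# Iterating M reproduces the sequence, e.g. with a_1 = 1, a_2 = 1:
a1, a2 = 1.0, 1.0
state = np.array([a2, a1])
terms = [a1, a2]
for _ in range(5):
    state = M @ state                             # advance one step of the recurrence
    terms.append(float(state[0]))
print(terms)                                      # matches a_n = c_1*2^n + c_2*3^n for suitable c_1, c_2
```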
Diamond
| | | Singular Value Decomposition | → | Pseudoinverse |
|---|---|---|---|---|
| | ↗ | | ↘ | |
| Gram-Schmidt Process | ← ← | Orthogonal Projection | → → | Least Square Problem |
| ↘ | | ↓ | | ↗ |
| | | Householder Reflection | | |
| ↘ | | ↓ | | ↗ |
| | | QR-decomposition | | |
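To make the diamond concrete (an added sketch; the random overdetermined system is an arbitrary example): the least square problem can be solved either through the QR-decomposition, which Gram-Schmidt or Householder reflections produce, or through the pseudoinverse, which comes from the singular value decomposition.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 3))                   # tall matrix: an overdetermined system
b = rng.standard_normal(6)

# Least squares via the QR-decomposition (np.linalg.qr uses Householder-based LAPACK routines).
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)                # solve R x = Q^t b

# Least squares via the pseudoinverse, which NumPy computes from the SVD.
x_pinv = np.linalg.pinv(A) @ b

print(np.allclose(x_qr, x_pinv))
print(np.allclose(x_qr, np.linalg.lstsq(A, b, rcond=None)[0]))
```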