If the operation is also commutative, i.e. $a\star b=b\star a$, the group is called Abelian or commutative. Vector spaces form an Abelian group for addition, and scalar multiplication satisfies: $1\cdot\vec{a}=\vec{a}$, $\lambda(\mu\vec{a})=(\lambda\mu)\vec{a}$, $(\lambda+\mu)(\vec{a}+\vec{b})=\lambda\vec{a}+\lambda\vec{b}+\mu\vec{a}+\mu\vec{b}$.
$W$ is a linear subspace if for all $\vec{w}_1,\vec{w}_2\in W$ and all scalars $\lambda,\mu$ holds: $\lambda\vec{w}_1+\mu\vec{w}_2\in W$.
$W$ is an invariant subspace of $V$ for the operator $A$ if $\forall\vec{w}\in W$ holds: $A\vec{w}\in W$.
The set of vectors $\{\vec{a}_n\}$ is linearly independent if: \[ \sum\limits_i\lambda_i\vec{a}_i=0~~\Leftrightarrow~~\forall_i\lambda_i=0 \] The set $\{\vec{a}_n\}$ is a basis if it is 1. independent and 2. spans $V$: $V=<\vec{a}_1,\vec{a}_2,...>=\left\{\sum\lambda_i\vec{a}_i\right\}$.
The transpose of $A$ is defined by: $a_{ij}^T=a_{ji}$. For this holds $(AB)^T=B^TA^T$ and $(A^T)^{-1}=(A^{-1})^T$. For the inverse matrix holds: $(A\cdot B)^{-1}=B^{-1}\cdot A^{-1}$. The inverse matrix $A^{-1}$ has the property that $A\cdot A^{-1}=I\hspace{-1mm}I$ and can be found by Gauss-Jordan elimination: $(A_{ij}|I\hspace{-1mm}I)\sim(I\hspace{-1mm}I |A_{ij}^{-1})$.
The inverse of a $2\times2$ matrix is: \[ \left(\begin{array}{cc}a&b\\ c&d\end{array}\right)^{-1}=\frac{1}{ad-bc} \left(\begin{array}{cc}d&-b\\ -c&a\end{array}\right) \]
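As a numerical check of this formula, a minimal sketch in Python with numpy (the matrix entries are arbitrary examples):
\begin{verbatim}
import numpy as np

# Example 2x2 matrix with ad - bc != 0 (arbitrary values)
A = np.array([[3.0, 1.0],
              [4.0, 2.0]])
a, b, c, d = A.ravel()

# Inverse via the adjugate formula above
A_inv = (1.0 / (a * d - b * c)) * np.array([[d, -b],
                                            [-c, a]])

# Compare with numpy's general-purpose inverse
assert np.allclose(A_inv, np.linalg.inv(A))
assert np.allclose(A @ A_inv, np.eye(2))
\end{verbatim}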
The determinant function $D=\det(A)$ is defined by: \[ \det(A)=D(\vec{a}_{*1},\vec{a}_{*2},...,\vec{a}_{*n}) \] where $D$ is the unique function of the columns $\vec{a}_{*i}$ that is multilinear, antisymmetric and normalized by $D(\vec{e}_1,...,\vec{e}_n)=1$. For the determinant $\det(A)$ of a matrix $A$ holds: $\det(AB)=\det(A)\cdot\det(B)$. A $2\times2$ matrix has determinant: \[ \det\left(\begin{array}{cc}a&b\\ c&d \end{array}\right)=ad-bc \] The derivative of a matrix is the matrix of the derivatives of the coefficients: \[ \frac{dA}{dt}=\frac{da_{ij}}{dt}~~~\mbox{and}~~~\frac{d(AB)}{dt}=\frac{dA}{dt}B+A\frac{dB}{dt} \] The derivative of the determinant is given by: \[ \frac{d\det(A)}{dt}=D(\frac{d\vec{a}_1}{dt},...,\vec{a}_n)+ D(\vec{a}_1,\frac{d\vec{a}_2}{dt},...,\vec{a}_n)+...+D(\vec{a}_1,...,\frac{d\vec{a}_n}{dt}) \] When the rows of a matrix are considered as vectors, the row rank of the matrix is the number of independent vectors in this set; similarly for the column rank. For every matrix the row rank equals the column rank.
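The product rule for determinants and the equality of row and column rank are easy to verify numerically; a sketch with numpy (random matrices of arbitrary size):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) = det(A) * det(B)
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))

# Row rank equals column rank: rank(M) == rank(M^T)
M = rng.standard_normal((5, 3))
assert np.linalg.matrix_rank(M) == np.linalg.matrix_rank(M.T)
\end{verbatim}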
Let $\tilde{A}:\tilde{V}\rightarrow\tilde{V}$ be the complex extension of the real linear operator $A:V\rightarrow V$ in a finite dimensional $V$. Then $A$ and $\tilde{A}$ have the same characteristic equation.
When $A_{ij}\in I\hspace{-1mm}R$ and $\vec{v}_1+i\vec{v}_2$ is an eigenvector of $A$ at eigenvalue $\lambda=\lambda_1+i\lambda_2$, then: \[ A\vec{v}_1=\lambda_1\vec{v}_1-\lambda_2\vec{v}_2~~~\mbox{and}~~~A\vec{v}_2=\lambda_2\vec{v}_1+\lambda_1\vec{v}_2 \]
If $\vec{k}_n$ are the columns of $A$, then the transformed space of $A$ is given by:
\[
{\cal R}(A)=<\vec{k}_1,...,\vec{k}_n>
\]
If the columns $\vec{k}_n$ of an $n\times m$ matrix $A$ are independent, then
the nullspace ${\cal N}(A)=\{\vec{0}\}$.
The equation
\[
A\cdot\vec{x}=\vec{0}
\]
has solutions $\neq\vec{0}$ if and only if $\det(A)=0$; if
$\det(A)\neq0$ the only solution is $\vec{0}$.
Cramer's rule for the solution of systems of linear equations is: let the
system be written as
\[
A\cdot\vec{x}=\vec{b}\equiv\vec{a}_1x_1+...+\vec{a}_nx_n=\vec{b}
\]
then $x_j$ is given by:
\[
x_j=\frac{D(\vec{a}_1,...,\vec{a}_{j-1},\vec{b},\vec{a}_{j+1},...,\vec{a}_n)}{\det(A)}
\]
5.3.2 Matrix equations
We start with the equation
\[
A\cdot\vec{x}=\vec{b}
\]
and $\vec{b}\neq\vec{0}$. If $\det(A)\neq0$ there exists exactly one solution. If
$\det(A)=0$ there are either no solutions or infinitely many solutions.
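In numerical practice such a system is solved with an LU-based routine rather than determinants; a minimal sketch with numpy (the example system is arbitrary):
\begin{verbatim}
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([1.0, 1.0])

if not np.isclose(np.linalg.det(A), 0.0):
    x = np.linalg.solve(A, b)   # unique solution when det(A) != 0
    assert np.allclose(A @ x, b)
else:
    # det(A) == 0: A x = b has no solution or infinitely many;
    # lstsq returns a least-squares / minimum-norm candidate.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
\end{verbatim}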
5.4 Linear transformations
A transformation $A$ is linear if:
$A(\lambda\vec{x}+\mu\vec{y})=\lambda A\vec{x}+\mu A\vec{y}$.
Some common linear transformations are:

\begin{tabular}{ll}
Transformation type & Equation\\
\hline
Projection on the line $<\vec{a}>$ & $P(\vec{x})=(\vec{a},\vec{x})\vec{a}/(\vec{a},\vec{a})$\\
Projection on the plane $(\vec{a},\vec{x})=0$ & $Q(\vec{x})=\vec{x}-P(\vec{x})$\\
Mirror image in the line $<\vec{a}>$ & $S(\vec{x})=2P(\vec{x})-\vec{x}$\\
Mirror image in the plane $(\vec{a},\vec{x})=0$ & $T(\vec{x})=2Q(\vec{x})-\vec{x}=\vec{x}-2P(\vec{x})$
\end{tabular}
For a projection holds: $\vec{x}-P_W(\vec{x})\perp P_W(\vec{x})$ and $P_W(\vec{x})\in W$.
If for a transformation $A$ holds: $(A\vec{x},\vec{y})=(\vec{x},A\vec{y})=(A\vec{x},A\vec{y})$, then $A$ is a projection.
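The four transformations in the table above are easy to check numerically; a sketch with numpy, with arbitrary example vectors:
\begin{verbatim}
import numpy as np

a = np.array([1.0, 2.0, 2.0])     # arbitrary direction vector
x = np.array([3.0, 0.0, 1.0])     # arbitrary test vector

P = (a @ x) / (a @ a) * a         # projection on the line <a>
Q = x - P                         # projection on the plane (a, x) = 0
S = 2 * P - x                     # mirror image in the line <a>
T = x - 2 * P                     # mirror image in the plane (a, x) = 0

# x - P is perpendicular to P, and P lies in <a>
assert np.isclose((x - P) @ P, 0.0)
# A projection is idempotent: projecting P again changes nothing
assert np.allclose((a @ P) / (a @ a) * a, P)
\end{verbatim}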
Let $A:V\rightarrow W$ define a linear transformation; for $S\subset V$ and $T\subset W$ we define: \[ A(S):=\{A\vec{x}~|~\vec{x}\in S\}~~~\mbox{and}~~~A^\leftarrow(T):=\{\vec{x}\in V~|~A(\vec{x})\in T\} \] Then $A(S)$ is a linear subspace of $W$ and the {\it inverse transformation} $A^\leftarrow(T)$ is a linear subspace of $V$. In particular $A(V)$ is the {\it image space} of $A$, notation: ${\cal R}(A)$, and $A^\leftarrow(\vec{0})=E_0$ is a linear subspace of $V$, the {\it null space} of $A$, notation: ${\cal N}(A)$. The following holds: \[ {\rm dim}({\cal N}(A))+{\rm dim}({\cal R}(A))={\rm dim}(V) \]
The distance $d$ between 2 points $\vec{p}$ and $\vec{q}$ is given by $d(\vec{p},\vec{q})=\|\vec{p}-\vec{q}\|$.
In $I\hspace{-1mm}R^2$ the distance of a point $\vec{p}$ to the line $(\vec{a},\vec{x})+b=0$ is \[ d(\vec{p},\ell)=\frac{|(\vec{a},\vec{p})+b|}{|\vec{a}|} \] Similarly, in $I\hspace{-1mm}R^3$ the distance of a point $\vec{p}$ to the plane $(\vec{a},\vec{x})+k=0$ is \[ d(\vec{p},V)=\frac{|(\vec{a},\vec{p})+k|}{|\vec{a}|} \] This can be generalized to $I\hspace{-1mm}R^n$ and $\mathbb{C}^n$ (Hesse's normal form).
The matrix $A_\alpha^\beta$ transforms a vector given w.r.t. a basis $\alpha$ into a vector w.r.t. a basis $\beta$. It is given by: \[ A_\alpha^\beta=\left(\beta(A\vec{a}_1),...,\beta(A\vec{a}_n)\right) \] where $\beta(\vec{x})$ is the representation of the vector $\vec{x}$ w.r.t. basis $\beta$.
The transformation matrix $S_\alpha^\beta$ transforms vectors from coordinate system $\alpha$ into coordinate system $\beta$: \[ S_\alpha^\beta:=I\hspace{-1mm}I_\alpha^\beta=\left(\beta(\vec{a}_1),...,\beta(\vec{a}_n)\right) \] and $S_\alpha^\beta\cdot S_\beta^\alpha=I\hspace{-1mm}I$
The matrix of a transformation $A$ is then given by: \[ A_\alpha^\beta=\left(A_\alpha^\beta\vec{e}_1,...,A_\alpha^\beta\vec{e}_n\right) \] i.e. the columns are the images of the basis vectors. For the transformation of matrix operators to another coordinate system holds: $A_\alpha^\delta=S_\lambda^\delta A_\beta^\lambda S_\alpha^\beta$, $A_\alpha^\alpha=S_\beta^\alpha A_\beta^\beta S_\alpha^\beta$ and $(AB)_\alpha^\lambda=A_\beta^\lambda B_\alpha^\beta$.
Furthermore, $A_\alpha^\beta=S_\alpha^\beta A_\alpha^\alpha$ and $A_\beta^\alpha=A_\alpha^\alpha S_\beta^\alpha$. A vector is transformed via $X_\beta=S_\alpha^\beta X_\alpha$.
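A numerical illustration of these transformation rules, as a sketch with numpy (the basis matrix is an arbitrary invertible example; S_ba stands for $S_\beta^\alpha$ and S_ab for $S_\alpha^\beta$):
\begin{verbatim}
import numpy as np

# Columns of S_ba are the beta-basis vectors expressed in the
# alpha basis (arbitrary invertible example)
S_ba = np.array([[1.0, 1.0],
                 [0.0, 1.0]])
S_ab = np.linalg.inv(S_ba)    # since S_ab @ S_ba = I

A_bb = np.diag([2.0, 3.0])    # operator matrix w.r.t. basis beta

# Transform the operator: A_aa = S_ba A_bb S_ab
A_aa = S_ba @ A_bb @ S_ab
assert np.allclose(S_ab @ A_aa @ S_ba, A_bb)

# A vector transforms as X_alpha = S_ba X_beta
X_b = np.array([1.0, 2.0])
X_a = S_ba @ X_b
\end{verbatim}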
The eigenvalues $\lambda_i$ are independent of the chosen basis. The matrix of $A$ in a basis of eigenvectors, with $S$ the transformation matrix to this basis, $S=(E_{\lambda_1},...,E_{\lambda_n})$, is given by: \[ \Lambda=S^{-1}AS={\rm diag}(\lambda_1,...,\lambda_n) \] When 0 is an eigenvalue of $A$ then $E_0(A)={\cal N}(A)$.
When $\lambda$ is an eigenvalue of $A$ with eigenvector $\vec{x}$, then: $A^n\vec{x}=\lambda^n\vec{x}$.
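A sketch with numpy illustrating $\Lambda=S^{-1}AS$ and $A^n\vec{x}=\lambda^n\vec{x}$ (the example matrix is arbitrary):
\begin{verbatim}
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # arbitrary diagonalizable example

lam, S = np.linalg.eig(A)       # columns of S are eigenvectors
Lambda = np.linalg.inv(S) @ A @ S
assert np.allclose(Lambda, np.diag(lam))

# A^n x = lambda^n x for an eigenvector x
x = S[:, 0]
n = 3
assert np.allclose(np.linalg.matrix_power(A, n) @ x, lam[0]**n * x)
\end{verbatim}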
When $W$ is an invariant subspace of the isometric transformation $A$ with ${\rm dim}(V)<\infty$, then $W^\perp$ is also an invariant subspace.
Let $A:V\rightarrow V$ be orthogonal with ${\rm dim}(V)<\infty$. Then $A$ is:
Direct orthogonal if $\det(A)=+1$. Then $A$ describes a rotation. A rotation in $I\hspace{-1mm}R^2$ through angle $\varphi$ is given by: \[ R= \left(\begin{array}{cc} \cos(\varphi)&-\sin(\varphi)\\ \sin(\varphi)&\cos(\varphi) \end{array}\right) \] So the rotation angle $\varphi$ is determined by ${\rm Tr}(A)=2\cos(\varphi)$ with $0\leq\varphi\leq\pi$. Let $\lambda_1$ and $\lambda_2$ be the roots of the characteristic equation; then $\Re(\lambda_1)=\Re(\lambda_2)=\cos(\varphi)$, and $\lambda_1=\exp(i\varphi)$, $\lambda_2=\exp(-i\varphi)$.
In $I\hspace{-1mm}R^3$ holds: $\lambda_1=1$, $\lambda_2=\lambda_3^*=\exp(i\varphi)$. A rotation about the axis $E_{\lambda_1}$ is given by the matrix \[ \left(\begin{array}{ccc} 1&0&0\\ 0&\cos(\varphi)&-\sin(\varphi)\\ 0&\sin(\varphi)&\cos(\varphi) \end{array}\right) \] Mirrored orthogonal if $\det(A)=-1$. Vectors from $E_{-1}$ are mirrored by $A$ w.r.t. the invariant subspace $E^\perp_{-1}$. A mirroring in $I\hspace{-1mm}R^2$ in the line $<(\cos(\frac{1}{2}\varphi),\sin(\frac{1}{2}\varphi))>$ is given by: \[ S= \left(\begin{array}{cc} \cos(\varphi)&\sin(\varphi)\\ \sin(\varphi)&-\cos(\varphi) \end{array}\right) \] Mirrored orthogonal transformations in $I\hspace{-1mm}R^3$ are rotational mirrorings: rotations about an axis $<\vec{a}_1>$ through angle $\varphi$ combined with a mirroring in the plane $<\vec{a}_1>^\perp$. The matrix of such a transformation is given by: \[ \left(\begin{array}{ccc} -1&0&0\\ 0&\cos(\varphi)&-\sin(\varphi)\\ 0&\sin(\varphi)&\cos(\varphi) \end{array}\right) \] For all orthogonal transformations $O$ in $I\hspace{-1mm}R^3$ holds: $O(\vec{x})\times O(\vec{y})=\det(O)\,O(\vec{x}\times\vec{y})$; in particular rotations ($\det(O)=+1$) preserve the cross product.
For each orthogonal transformation, $I\hspace{-1mm}R^n$ $(n<\infty)$ can be decomposed into invariant subspaces of dimension 1 or 2.
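These properties of orthogonal transformations can be verified numerically; a sketch with numpy for a rotation in $I\hspace{-1mm}R^3$ (the angle and test vectors are arbitrary):
\begin{verbatim}
import numpy as np

phi = 0.7                        # arbitrary rotation angle

# Direct orthogonal: rotation about the x-axis
R = np.array([[1, 0, 0],
              [0, np.cos(phi), -np.sin(phi)],
              [0, np.sin(phi),  np.cos(phi)]])
assert np.isclose(np.linalg.det(R), 1.0)
assert np.allclose(R.T @ R, np.eye(3))

# In R^3 the trace of a rotation matrix is 1 + 2 cos(phi)
assert np.isclose(np.trace(R), 1 + 2 * np.cos(phi))

# O(x) x O(y) = O(x x y) for a rotation (det = +1)
x, y = np.array([1.0, 2.0, 0.5]), np.array([0.0, 1.0, -1.0])
assert np.allclose(np.cross(R @ x, R @ y), R @ np.cross(x, y))
\end{verbatim}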
Theorem: for an $n\times n$ matrix $A$ the following statements are equivalent:
1. $A$ is invertible,
2. $\det(A)\neq0$,
3. the columns of $A$ are independent,
4. ${\cal N}(A)=\{\vec{0}\}$,
5. $A\vec{x}=\vec{b}$ has exactly one solution for every $\vec{b}$.
For each matrix $B\in I\hspace{-1mm}M^{m\times n}$ holds: $B^TB$ is symmetric.
If the transformations $A$ and $B$ are Hermitian, then their product $AB$ is Hermitian if and only if $[A,B]=AB-BA=0$. $[A,B]$ is called the commutator of $A$ and $B$.
The eigenvalues of a Hermitian transformation belong to $I\hspace{-1mm}R$.
A matrix representation can be coupled with a Hermitian operator $L$. W.r.t. a basis $\vec{e}_i$ it is given by $L_{mn}=(\vec{e}_m,L\vec{e}_n)$.
Definition: the linear transformation $A$ is normal in a complex vector space $V$ if $A^*A=AA^*$. This is the case if and only if its matrix $S$ w.r.t. an orthonormal basis satisfies: $S^\dagger S=SS^\dagger$.
If $A$ is normal, the following holds:
Let the different roots of the characteristic equation of $A$ be $\beta_i$ with multiplicities $n_i$. Then the dimension of each eigenspace $V_i$ equals $n_i$. These eigenspaces are mutually perpendicular and each vector $\vec{x}\in V$ can be written in exactly one way as \[ \vec{x}=\sum_i\vec{x}_i~~~\mbox{with}~~~\vec{x}_i\in V_i \] This can also be written as: $\vec{x}_i=P_i\vec{x}$ where $P_i$ is a projection on $V_i$. This leads to the {\it spectral mapping theorem}: let $A$ be a normal transformation in a complex vector space $V$ with ${\rm dim}(V)=n$. Then: \[ A=\sum_i\beta_iP_i \]
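A sketch with numpy illustrating the spectral decomposition $A=\sum_i\beta_iP_i$ for a Hermitian (hence normal) example matrix:
\begin{verbatim}
import numpy as np

# Hermitian (hence normal) example matrix
A = np.array([[2.0, 1.0j],
              [-1.0j, 2.0]])

lam, U = np.linalg.eigh(A)   # orthonormal eigenvectors in columns of U

# Spectral decomposition: A = sum_i lambda_i P_i, P_i = u_i u_i^dagger
A_rebuilt = sum(l * np.outer(U[:, i], U[:, i].conj())
                for i, l in enumerate(lam))
assert np.allclose(A_rebuilt, A)

# The P_i are mutually perpendicular projections: P_i P_j = delta_ij P_i
P = [np.outer(U[:, i], U[:, i].conj()) for i in range(len(lam))]
assert np.allclose(P[0] @ P[1], 0)
assert np.allclose(P[0] @ P[0], P[0])
\end{verbatim}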
Lemma: if $E_\lambda$ is the eigenspace for eigenvalue $\lambda$ of $A_1$, then $E_\lambda$ is an invariant subspace of all transformations $A_i$. This means that if $\vec{x}\in E_\lambda$, then $A_i\vec{x}\in E_\lambda$.
Theorem: consider $m$ commuting Hermitian matrices $A_i$. Then there exists a unitary matrix $U$ such that all matrices $U^\dagger A_iU$ are diagonal. The columns of $U$ are the common eigenvectors of all matrices $A_j$.
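A numerical illustration, as a sketch with numpy (the commuting pair is constructed as powers of one Hermitian matrix, and $A_1$ is chosen with distinct eigenvalues so its eigenbasis is essentially unique):
\begin{verbatim}
import numpy as np

# Two commuting Hermitian matrices: both are powers of the same
# Hermitian H, so [A1, A2] = 0 (arbitrary construction)
H = np.array([[1.0, 2.0],
              [2.0, 0.0]])
A1, A2 = H, H @ H
assert np.allclose(A1 @ A2 - A2 @ A1, 0)

# A unitary U that diagonalizes A1 also diagonalizes A2
_, U = np.linalg.eigh(A1)
D2 = U.conj().T @ A2 @ U
assert np.allclose(D2, np.diag(np.diagonal(D2)))
\end{verbatim}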
If all eigenvalues of a Hermitian linear transformation in an $n$-dimensional complex vector space differ, then each normalized eigenvector is determined up to a phase factor $\exp(i\alpha)$.
Definition: a commuting set of Hermitian transformations is called complete if for each pair of common eigenvectors $\vec{v}_i,\vec{v}_j$ there exists a transformation $A_k$ such that $\vec{v}_i$ and $\vec{v}_j$ are eigenvectors of $A_k$ with different eigenvalues.
Usually a commuting set is taken as small as possible. In quantum physics one speaks of commuting observables. The required number of commuting observables equals the number of quantum numbers required to characterize a state.
Because $(\vec{a},\vec{b})=(\vec{b},\vec{a})^*$, $(\vec{a},\vec{a})\in I\hspace{-1mm}R$. The inner product space $\mathbb{C}^n$ is the complex vector space on which a complex inner product is defined by: \[ (\vec{a},\vec{b})=\sum_{i=1}^na_i^*b_i \] For function spaces holds: \[ (f,g)=\int\limits_a^bf^*(t)g(t)dt \] For each $\vec{a}$ the length $\|\vec{a}\|$ is defined by: $\|\vec{a}\|=\sqrt{(\vec{a},\vec{a})}$. The following holds: $\left|\,\|\vec{a}\|-\|\vec{b}\|\,\right|\leq\|\vec{a}+\vec{b}\|\leq\|\vec{a}\|+\|\vec{b}\|$, and in a real space, with $\varphi$ the angle between $\vec{a}$ and $\vec{b}$: $(\vec{a},\vec{b})=\|\vec{a}\|\cdot\|\vec{b}\|\cos(\varphi)$.
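A sketch with numpy of the complex inner product and the induced norm (np.vdot conjugates its first argument, matching the convention above; the vectors are arbitrary examples):
\begin{verbatim}
import numpy as np

a = np.array([1 + 2j, 3 - 1j])
b = np.array([0 + 1j, 2 + 2j])

# Complex inner product (a, b) = sum a_i^* b_i
ip = np.vdot(a, b)               # vdot conjugates its first argument
assert np.isclose(ip, np.sum(a.conj() * b))

# (a, a) is real, and ||a|| = sqrt((a, a))
assert np.isclose(np.vdot(a, a).imag, 0.0)
assert np.isclose(np.linalg.norm(a), np.sqrt(np.vdot(a, a).real))
\end{verbatim}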
Let $\{\vec{a}_1,...,\vec{a}_n\}$ be a set of vectors in an inner product space $V$. Then the {\it Gramian $G$} of this set is given by: $G_{ij}=(\vec{a}_i,\vec{a}_j)$. The set of vectors is independent if and only if $\det(G)\neq0$.
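A sketch with numpy of the Gramian test (the three example vectors are arbitrary, with the third deliberately dependent on the first two):
\begin{verbatim}
import numpy as np

vectors = [np.array([1.0, 0.0, 1.0]),
           np.array([0.0, 1.0, 1.0]),
           np.array([1.0, 1.0, 2.0])]  # third = first + second

G = np.array([[u @ v for v in vectors] for u in vectors])  # Gramian
print(np.linalg.det(G))   # ~0: the set is dependent

# Dropping the dependent vector gives det(G) != 0
G2 = G[:2, :2]
print(np.linalg.det(G2))  # nonzero: independent
\end{verbatim}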
A set is {\it orthonormal} if $(\vec{a}_i,\vec{a}_j)=\delta_{ij}$. If $\vec{e}_1,\vec{e}_2,...$ form an orthonormal sequence in an infinite dimensional vector space, Bessel's inequality holds: \[ \|\vec{x}\|^2\geq\sum_{i=1}^\infty|(\vec{e}_i,\vec{x})|^2 \] The equal sign holds if and only if $\lim\limits_{n\rightarrow\infty}\|\vec{x}_n-\vec{x}\|=0$, where $\vec{x}_n=\sum\limits_{i=1}^n(\vec{e}_i,\vec{x})\vec{e}_i$.
The inner product space $\ell^2$ is defined in $\mathbb{C}^\infty$ by: \[ \ell^2=\left\{\vec{a}=(a_1,a_2,...)~|~\sum_{n=1}^\infty|a_n|^2<\infty\right\} \] A {\it Hilbert space} is an inner product space that is complete: every Cauchy sequence in it converges. $\ell^2$ is a Hilbert space.
The Laplace transformation is a generalisation of the Fourier transformation. The Laplace transform of a function $f(t)$ is, with $s\in\mathbb{C}$ and $t\geq0$: \[ F(s)=\int\limits_0^\infty f(t){\rm e}^{-st}dt \] If $f$ is piecewise continuous and $|f(t)|\leq M{\rm e}^{\gamma t}$ for some constants $M,\gamma$, then there exists a Laplace transform for $f$ (for $\Re(s)>\gamma$). The Laplace transform of the derivative of a function is given by: \[ {\cal L}\left(f^{(n)}(t)\right)=-f^{(n-1)}(0)-sf^{(n-2)}(0)-...-s^{n-1}f(0)+s^nF(s) \] The operator $\cal L$ has the following properties:
If $s\in I\hspace{-1mm}R$ then $\Re({\cal L}f)={\cal L}(\Re(f))$ and $\Im({\cal L}f)={\cal L}(\Im(f))$.
For some frequently occurring functions the transforms are:
\begin{tabular}{ll}
$f(t)=$ & $F(s)={\cal L}(f(t))=$\\
\hline
$\displaystyle\frac{t^n}{n!}{\rm e}^{at}$ & $(s-a)^{-n-1}$\\
${\rm e}^{at}\cos(\omega t)$ & $\displaystyle\frac{s-a}{(s-a)^2+\omega^2}$\\
${\rm e}^{at}\sin(\omega t)$ & $\displaystyle\frac{\omega}{(s-a)^2+\omega^2}$\\
$\delta(t-a)$ & $\exp(-as)$
\end{tabular}
If ${\cal L}(f)=F_1\cdot F_2$, then $f(t)=(f_1*f_2)(t)$, the convolution of $f_1$ and $f_2$, where ${\cal L}(f_i)=F_i$.
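A symbolic check of one table entry and the derivative rule, as a sketch with sympy (laplace_transform returns the transform together with convergence information):
\begin{verbatim}
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, w = sp.symbols('a omega', positive=True)

# e^{at} cos(wt)  ->  (s - a) / ((s - a)^2 + w^2)
F, _, _ = sp.laplace_transform(sp.exp(a*t) * sp.cos(w*t), t, s)
assert sp.simplify(F - (s - a) / ((s - a)**2 + w**2)) == 0

# Derivative rule: L(f') = s F(s) - f(0), here with f = sin(wt)
Fd, _, _ = sp.laplace_transform(sp.diff(sp.sin(w*t), t), t, s)
Fs, _, _ = sp.laplace_transform(sp.sin(w*t), t, s)
assert sp.simplify(Fd - (s * Fs - 0)) == 0   # sin(0) = 0
\end{verbatim}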
Assume that $\lambda=\alpha+i\beta$ is an eigenvalue of $A$ with eigenvector $\vec{v}$; then $\lambda^*$ is also an eigenvalue, with eigenvector $\vec{v}^*$. Decompose $\vec{v}=\vec{u}+i\vec{w}$; then the real solutions of $\dot{\vec{x}}=A\vec{x}$ are \[ c_1[\vec{u}\cos(\beta t)-\vec{w}\sin(\beta t)]{\rm e}^{\alpha t}+c_2[\vec{w}\cos(\beta t)+\vec{u}\sin(\beta t)]{\rm e}^{\alpha t} \]
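A numerical check that the first real solution indeed satisfies $\dot{\vec{x}}=A\vec{x}$, as a sketch with numpy (the matrix is an arbitrary example with complex eigenvalues; the derivative is approximated by a central difference):
\begin{verbatim}
import numpy as np

A = np.array([[0.0, -2.0],
              [1.0,  0.0]])     # arbitrary, complex eigenvalues

lam, V = np.linalg.eig(A)
l, v = lam[0], V[:, 0]          # lambda = alpha + i beta, v = u + i w
alpha, beta = l.real, l.imag
u, w = v.real, v.imag

def x(t):
    # real solution e^{alpha t} [u cos(beta t) - w sin(beta t)]
    return np.exp(alpha * t) * (u * np.cos(beta * t)
                                - w * np.sin(beta * t))

# check dx/dt = A x numerically at some t
t, h = 0.3, 1e-6
dxdt = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(dxdt, A @ x(t), atol=1e-5)
\end{verbatim}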
There are two solution strategies for the equation $\ddot{\vec{x}}=A\vec{x}$:
1. substitute $\vec{x}=\vec{v}\exp(i\omega t)$, which reduces the problem to the eigenvalue equation $A\vec{v}=-\omega^2\vec{v}$;
2. rewrite the equation as a first-order system of double dimension by introducing the components of $\dot{\vec{x}}$ as extra unknowns.
Starting with the equation \[ ax^2+2bxy+cy^2+dx+ey+f=0 \] we have $|A|=ac-b^2$. An ellipse has $|A|>0$, a parabola $|A|=0$ and a hyperbola $|A|<0$. In polar coordinates this can be written as: \[ r=\frac{ep}{1-e\cos(\theta)} \] An ellipse has $e<1$, a parabola $e=1$ and a hyperbola $e>1$.
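The discriminant test is a one-liner in code; a sketch in plain Python (the function name classify_conic is illustrative):
\begin{verbatim}
def classify_conic(a, b, c):
    """Classify ax^2 + 2bxy + cy^2 + ... = 0 by |A| = ac - b^2."""
    disc = a * c - b * b
    if disc > 0:
        return "ellipse"
    if disc == 0:
        return "parabola"
    return "hyperbola"

print(classify_conic(1, 0, 1))   # ellipse (circle)
print(classify_conic(0, 1, 0))   # hyperbola (xy = const)
print(classify_conic(1, 0, 0))   # parabola
\end{verbatim}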
Quadric surfaces in $I\hspace{-1mm}R^3$ can be classified by the rank of the matrix of the quadratic part:
Rank 3: \[ p\frac{x^2}{a^2}+q\frac{y^2}{b^2}+r\frac{z^2}{c^2}=d \]
Rank 2: \[ p\frac{x^2}{a^2}+q\frac{y^2}{b^2}+r\frac{z}{c^2}=d \]
Rank 1: \[ py^2+qx=d \]