Gramian matrix




In linear algebra, the Gram matrix (Gramian matrix or Gramian) of a set of vectors $v_1, \dots, v_n$ in an inner product space is the Hermitian matrix of inner products, whose entries are given by $G_{ij} = \langle v_i, v_j \rangle$.[1]


An important application is to test for linear independence: a set of vectors is linearly independent if and only if the Gram determinant (the determinant of the Gram matrix) is non-zero.


It is named after Jørgen Pedersen Gram.




Contents






  • 1 Examples


    • 1.1 Applications




  • 2 Properties


    • 2.1 Positive-semidefiniteness


    • 2.2 Derivation of positive-semidefiniteness


    • 2.3 Change of basis




  • 3 Gram determinant


  • 4 See also


  • 5 References


  • 6 External links





Examples


For finite-dimensional real vectors with the usual Euclidean dot product, the Gram matrix is simply $G = V^{\mathsf{T}} V$ (or $G = \overline{V}^{\mathsf{T}} V$ for complex vectors using the conjugate transpose), where $V$ is a matrix whose columns are the vectors $v_k$.
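
As a concrete illustration, here is a minimal sketch in Python/NumPy (the vectors are made-up example data) that builds the Gram matrix as $V^{\mathsf{T}} V$ and checks it against the entry-wise definition:

```python
import numpy as np

# Columns of V are the example vectors v_1, v_2, v_3.
V = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])

# Gram matrix via G = V^T V.
G = V.T @ V

# Check against the definition G_ij = <v_i, v_j>.
n = V.shape[1]
G_def = np.array([[V[:, i] @ V[:, j] for j in range(n)] for i in range(n)])
assert np.allclose(G, G_def)
```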


Most commonly, the vectors are elements of a Euclidean space, or are functions in an $L^2$ space, such as continuous functions on a compact interval $[a, b]$ (which are a subspace of $L^2([a, b])$).


Given real-valued functions $\{\ell_i(\cdot),\ i = 1, \dots, n\}$ on the interval $[t_0, t_f]$, the Gram matrix $G = [G_{ij}]$ is given by the standard inner product on functions:


$$G_{ij} = \int_{t_0}^{t_f} \ell_i(\tau)\, \overline{\ell_j(\tau)}\, d\tau.$$
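
As a sketch of this computation, the integrals can be approximated numerically; the basis functions $1, t, t^2$ on $[0, 1]$ below are an arbitrary choice for illustration:

```python
import numpy as np

# Example real-valued basis functions l_i on [t0, tf] = [0, 1].
funcs = [lambda t: np.ones_like(t), lambda t: t, lambda t: t**2]
t = np.linspace(0.0, 1.0, 10001)

# G_ij = integral of l_i(tau) l_j(tau) d(tau), via the trapezoidal rule.
G = np.array([[np.trapz(f(t) * g(t), t) for g in funcs] for f in funcs])

# For monomials t^i and t^j the exact entry is 1/(i + j + 1),
# so G approximates the 3x3 Hilbert matrix.
exact = np.array([[1.0 / (i + j + 1) for j in range(3)] for i in range(3)])
assert np.allclose(G, exact, atol=1e-6)
```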

For a general bilinear form $B$ on a finite-dimensional vector space over any field, we can define a Gram matrix $G$ attached to a set of vectors $v_1, \dots, v_n$ by $G_{ij} = B(v_i, v_j)$. The matrix will be symmetric if the bilinear form $B$ is symmetric.



Applications


  • In Riemannian geometry, given an embedded $k$-dimensional Riemannian manifold $M \subset \mathbb{R}^n$ and a coordinate chart $\phi : U \to \mathbb{R}^n$ for $(x_1, \ldots, x_k) \in U \subset \mathbb{R}^k$, the volume form $\omega$ on $M$ induced by the embedding may be computed using the Gramian of the coordinate tangent vectors:

$$\omega = \sqrt{\det G}\; dx_1 \cdots dx_k, \qquad G = \left[ \left\langle \frac{\partial \phi}{\partial x_i}, \frac{\partial \phi}{\partial x_j} \right\rangle \right].$$

This generalizes the classical surface integral of a parametrized surface $\phi : U \to S \subset \mathbb{R}^3$ for $(x, y) \in U \subset \mathbb{R}^2$:


$$\int_S f \, dA = \iint_U f(\phi(x, y)) \left| \frac{\partial \phi}{\partial x} \times \frac{\partial \phi}{\partial y} \right| dx \, dy.$$


  • If the vectors are centered random variables, the Gramian is approximately proportional to the covariance matrix, with the scaling determined by the number of elements in the vector.

  • In quantum chemistry, the Gram matrix of a set of basis vectors is the overlap matrix.

  • In control theory (or more generally systems theory), the controllability Gramian and observability Gramian determine properties of a linear system.

  • Gramian matrices arise in covariance structure model fitting (see e.g., Jamshidian and Bentler, 1993, Applied Psychological Measurement, Volume 18, pp. 79–94).

  • In the finite element method, the Gram matrix arises from approximating a function from a finite-dimensional space; the Gram matrix entries are then the inner products of the basis functions of the finite-dimensional subspace.

  • In machine learning, kernel functions are often represented as Gram matrices.[2]
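
For instance, in the machine-learning setting just mentioned, the kernel Gram matrix is formed by evaluating a kernel function on all pairs of data points. A minimal sketch with a Gaussian (RBF) kernel and made-up data:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian (RBF) kernel k(x, y) = exp(-gamma * ||x - y||^2).
    return np.exp(-gamma * np.sum((x - y) ** 2))

# Example data: four points in R^2 (arbitrary values).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

# Kernel Gram matrix K_ij = k(x_i, x_j).
K = np.array([[rbf_kernel(xi, xj) for xj in X] for xi in X])

# A valid kernel Gram matrix is symmetric positive semidefinite.
assert np.allclose(K, K.T)
assert np.all(np.linalg.eigvalsh(K) >= -1e-10)
```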



Properties



Positive-semidefiniteness


The Gramian matrix is positive-semidefinite, and every positive-semidefinite matrix is the Gramian matrix for some set of vectors. Further, in finite dimensions it determines the vectors up to isomorphism, i.e. any two sets of vectors with the same Gramian matrix must be related by a single unitary matrix. These facts follow from taking the spectral decomposition of any positive-semidefinite matrix $P$, so that
$$P = U D U^{\mathsf{H}} = \left(U \sqrt{D}\right) \left(U \sqrt{D}\right)^{\mathsf{H}},$$
and so $P$ is the Gramian matrix of the rows of $U \sqrt{D}$.
The Gramian matrix of any orthonormal basis is the identity matrix. The infinite-dimensional analog of this statement is Mercer's theorem.
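
A small numerical sketch of this factorization (NumPy, with an arbitrarily chosen positive-semidefinite matrix): the rows of $U \sqrt{D}$ realize $P$ as a Gram matrix.

```python
import numpy as np

# An arbitrary positive-semidefinite matrix, built as A A^T so PSD is guaranteed.
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 1.0]])
P = A @ A.T

# Spectral decomposition P = U D U^H (real symmetric here, so H is just transpose).
D, U = np.linalg.eigh(P)
D = np.clip(D, 0.0, None)   # clamp tiny negative rounding errors
W = U * np.sqrt(D)          # W = U sqrt(D): column k of U scaled by sqrt(D_k)

# P is the Gram matrix of the rows of W: P_ij = <w_i, w_j>.
assert np.allclose(P, W @ W.T)
```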



Derivation of positive-semidefiniteness


The fact that the Gramian matrix is positive-semidefinite can be seen from the following simple derivation:


$$x^{\mathsf{T}} G x = \sum_{i,j} \langle v_i, v_j \rangle x_i x_j = \sum_{i,j} \langle v_i x_i, v_j x_j \rangle = \left\langle \sum_i v_i x_i, \sum_j v_j x_j \right\rangle = \left\langle \sum_i v_i x_i, \sum_i v_i x_i \right\rangle \geq 0.$$

The first equality follows from the definition of matrix multiplication, the second and third from the bilinearity of the inner product, and the last from the positive definiteness of the inner product. Note that this also shows that the Gramian matrix is positive definite if and only if the vectors $v_i$ are linearly independent.
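
A quick numerical check of this identity and the resulting inequality, with made-up vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((5, 3))   # columns are the vectors v_i
G = V.T @ V

for _ in range(100):
    x = rng.standard_normal(3)
    quad = x @ G @ x
    # x^T G x equals ||sum_i v_i x_i||^2, hence is non-negative.
    assert np.isclose(quad, np.linalg.norm(V @ x) ** 2)
    assert quad >= -1e-12
```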



Change of basis


Under a change of basis represented by an invertible matrix $P$, the Gram matrix will change by a matrix congruence to $P^{\mathsf{T}} G P$.
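
A numerical sketch of this congruence (example matrices; a generic random $P$ is invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.standard_normal((4, 3))   # columns are the original vectors
G = V.T @ V                       # Gram matrix in the original basis

P = rng.standard_normal((3, 3))   # change-of-basis matrix (generic, hence invertible)
W = V @ P                         # new vectors: linear combinations of the old ones

# The Gram matrix of the new vectors is the congruence P^T G P.
assert np.allclose(W.T @ W, P.T @ G @ P)
```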



Gram determinant


The Gram determinant or Gramian is the determinant of the Gram matrix:


$$G(x_1, \dots, x_n) = \begin{vmatrix} \langle x_1, x_1 \rangle & \langle x_1, x_2 \rangle & \dots & \langle x_1, x_n \rangle \\ \langle x_2, x_1 \rangle & \langle x_2, x_2 \rangle & \dots & \langle x_2, x_n \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle x_n, x_1 \rangle & \langle x_n, x_2 \rangle & \dots & \langle x_n, x_n \rangle \end{vmatrix}.$$

Geometrically, the Gram determinant is the square of the volume of the parallelotope formed by the vectors. In particular, the vectors are linearly independent if and only if the Gram determinant is nonzero (if and only if the Gram matrix is nonsingular).


The Gram determinant can also be expressed in terms of the exterior product of vectors by


$$G(x_1, \dots, x_n) = \left\| x_1 \wedge \cdots \wedge x_n \right\|^2.$$
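
To make the geometric interpretation concrete, here is a minimal check (example vectors in $\mathbb{R}^3$) that the Gram determinant equals the squared area of the spanned parallelogram, and vanishes for dependent vectors:

```python
import numpy as np

# Two vectors spanning a parallelogram in R^3 (example data).
x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([1.0, 2.0, 0.0])

G = np.array([[x1 @ x1, x1 @ x2],
              [x2 @ x1, x2 @ x2]])

# Gram determinant = squared area = ||x1 cross x2||^2 (here 4.0 = 2.0**2).
assert np.isclose(np.linalg.det(G), np.linalg.norm(np.cross(x1, x2)) ** 2)

# Linearly dependent vectors give a zero Gram determinant.
x3 = 3.0 * x1
G_dep = np.array([[x1 @ x1, x1 @ x3],
                  [x3 @ x1, x3 @ x3]])
assert np.isclose(np.linalg.det(G_dep), 0.0)
```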


See also



  • Controllability Gramian

  • Observability Gramian



References





  1. ^ Horn & Johnson 2013, p. 441, Theorem 7.2.10: Let $v_1, \ldots, v_m$ be vectors in an inner product space $V$ with inner product $\langle \cdot, \cdot \rangle$ and let $G = [\langle v_j, v_i \rangle]_{i,j=1}^{m} \in M_m$. Then
    (a) $G$ is Hermitian and positive-semidefinite;
    (b) $G$ is positive-definite if and only if the vectors $v_1, \ldots, v_m$ are linearly independent;
    (c) $\operatorname{rank} G = \dim \operatorname{span} \{v_1, \ldots, v_m\}$.



  2. ^ Lanckriet, G. R. G.; Cristianini, N.; Bartlett, P.; Ghaoui, L. E.; Jordan, M. I. (2004). "Learning the kernel matrix with semidefinite programming". Journal of Machine Learning Research. 5: 27–72 [p. 29].




  • Horn, Roger A.; Johnson, Charles R. (2013). "7.2 Characterizations and Properties". Matrix Analysis (2nd ed.). Cambridge University Press. ISBN 978-0-521-83940-2.


External links




  • Hazewinkel, Michiel, ed. (2001) [1994], "Gram matrix", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4


  • Volumes of parallelograms by Frank Jones








