COVARIANT, CONTRAVARIANT and METRIC TENSOR:


Change to oblique coordinates:

From this introductory differential geometry series online, introducing the concepts of the contravariant (\(A^i\)) and covariant (\(A_i\)) components of a vector under a change of coordinates:

In the setting of oblique coordinates, the \(y\) axis is tilted so that the new \(v\) axis makes an angle \(\alpha\) with the \(x\) axis, while the \(x\) axis stays unchanged as the \(u\) axis:

The contravariant components are calculated as:

\(\tan(\alpha) =\large \frac{y}{*}\), where \(*\) is the horizontal offset between the point's \(x\) coordinate and its oblique \(u\) coordinate; therefore, \(\large * = \frac{y}{\tan(\alpha)}\)

and

\[\begin{align}c^u &= x - * \\[3ex]&= x - \frac{y}{\tan(\alpha)}\end{align}\]

while

\[c^v = \frac{y}{\sin(\alpha)}\]

Therefore, the original orthogonal coordinates transform into the new oblique coordinates through the matrix

\[H = \begin{bmatrix} 1 & - \frac{1}{\text{tan}(\alpha)} \\ 0 & \frac{1}{\text{sin}(\alpha)} \end{bmatrix}\]

as follows:

\[\begin{bmatrix} 1 & - \frac{1}{\text{tan}(\alpha)} \\ 0 & \frac{1}{\text{sin}(\alpha)} \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}= \begin{bmatrix} c^u \\ c^v \end{bmatrix} = \begin{bmatrix} c^1 \\ c^2 \end{bmatrix} \]
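As a numeric sanity check of \(H\) (a minimal sketch; the angle \(\alpha\) and the point are arbitrary choices of mine, not values from the original figure):

```python
import numpy as np

alpha = np.pi / 3           # assumed tilt of the v axis (60 degrees)
x, y = 2.0, 1.0             # an arbitrary point in orthonormal coordinates

H = np.array([[1.0, -1.0 / np.tan(alpha)],
              [0.0,  1.0 / np.sin(alpha)]])
c_u, c_v = H @ np.array([x, y])

# The contravariant components recombine along the oblique axis directions
# u = (1, 0) and v = (cos(alpha), sin(alpha)) to give back the point
u_dir = np.array([1.0, 0.0])
v_dir = np.array([np.cos(alpha), np.sin(alpha)])
print(np.allclose(c_u * u_dir + c_v * v_dir, [x, y]))  # True
```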


On the other hand, the covariant components are calculated as orthogonal projections of the position vector onto the unit axis directions:

\[c_u = x\] and

\[c_v = x\,\cos(\alpha) + y\, \sin(\alpha)\tag 1\]


This expression (1) is derived as follows:

The angles \(\alpha\) and \(\beta\) are complementary, and \(\sin \alpha = \cos \beta\).

\(\varphi= b \cos \beta = b \sin \alpha\)

\(\psi = a \cos \alpha\)

\(c_v = \psi + \varphi = a \cos \alpha + b \sin \alpha\), which with \(a = x\) and \(b = y\) gives expression (1).


It follows that the covariant transformation is given by the matrix

\[M = \begin{bmatrix} 1 & 0 \\ \cos \alpha & \sin \alpha \end{bmatrix}\]

Therefore,

\[\begin{bmatrix} 1 & 0 \\ \cos\alpha & \sin \alpha \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}= \begin{bmatrix} c_u \\ c_v \end{bmatrix} = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} \]
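A matching check for \(M\) (same assumed \(\alpha\) and point as in the previous sketch): the covariant components should equal the orthogonal projections of the position vector onto the unit axis directions.

```python
import numpy as np

alpha = np.pi / 3
p = np.array([2.0, 1.0])    # the point (x, y)

M = np.array([[1.0, 0.0],
              [np.cos(alpha), np.sin(alpha)]])
c_u, c_v = M @ p

# Covariant components as projections onto the unit axis directions
u_dir = np.array([1.0, 0.0])
v_dir = np.array([np.cos(alpha), np.sin(alpha)])
print(np.allclose([p @ u_dir, p @ v_dir], [c_u, c_v]))  # True
```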


From this video,

\(G\) is the METRIC TENSOR, and turns the covariant components into the contravariant components (strictly speaking, since it raises the index, this \(G\) is the inverse metric \(g^{ij}\)):

\[\begin{align} \bf G &= H\,M^{-1}\\[3ex] &= \frac{1}{\sin\alpha}\begin{bmatrix}1&-\frac{1}{\tan\alpha}\\0&\frac{1}{\sin\alpha}\end{bmatrix} \begin{bmatrix} \sin\alpha & 0\\-\cos\alpha &1\end{bmatrix}\\[3ex] &=\frac{1}{\sin\alpha} \begin{bmatrix}\sin\alpha+\frac{\cos\alpha}{\tan\alpha} & -\frac{1}{\tan \alpha}\\ -\frac{\cos\alpha}{\sin\alpha} & \frac{1}{\sin\alpha} \end{bmatrix}\\[3ex] &=\frac{1}{\sin^2\alpha}\begin{bmatrix}1 & - \cos\alpha\\-\cos\alpha & 1\end{bmatrix} \end{align}\]
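The matrix algebra can be confirmed numerically (a sketch under the same assumed \(\alpha\) as before):

```python
import numpy as np

alpha = np.pi / 3
H = np.array([[1.0, -1.0 / np.tan(alpha)],
              [0.0,  1.0 / np.sin(alpha)]])
M = np.array([[1.0, 0.0],
              [np.cos(alpha), np.sin(alpha)]])

G = H @ np.linalg.inv(M)

# Closed form from the last line of the derivation above
G_closed = np.array([[1.0, -np.cos(alpha)],
                     [-np.cos(alpha), 1.0]]) / np.sin(alpha)**2
print(np.allclose(G, G_closed))  # True

# G sends covariant components (M p) to contravariant components (H p)
p = np.array([2.0, 1.0])
print(np.allclose(G @ (M @ p), H @ p))  # True
```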

See this post for the metric tensor, and this other one.


METRIC TENSOR:

From this presentation:

To preserve the length of a vector \(\vec v\) regardless of whether it is referenced to coordinates with orthonormal basis vectors \(\{\vec e_1,\vec e_2\},\) or to the alternative oblique coordinates \(\{\vec{ \tilde e_1}, \vec {\tilde e_2}\},\) we need to calculate the norm as a dot product.

With respect to the orthonormal basis vectors (in this example \(\vec v = 1\,\vec e_1 + 2\,\vec e_2\)),

\[\begin{align} \Vert \vec v \Vert^2 &= \vec v \cdot \vec v \\[2ex] &=(v^1\vec e_1 + v^2 \vec e_2) \cdot (v^1 \vec e_1 + v^2 \vec e_2)\\[2ex] &=(v^1)^2 (\vec e_1 \cdot \vec e_1) + 2 v^1v^2(\vec e_1 \cdot \vec e_2) + (v^2)^2(\vec e_2\cdot \vec e_2)\tag 1\\[2ex] &=(v^1)^2 (\vec e_1 \cdot \vec e_1) + (v^2)^2(\vec e_2\cdot \vec e_2)\\[2ex] &=(1)^2 + (2)^2 =5\end{align}\]
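A one-line numeric check (a sketch with the example components \(v^1=1,\ v^2=2\)):

```python
import numpy as np

v = np.array([1.0, 2.0])   # components of v in the orthonormal basis
print(v @ v)               # 5.0 = ||v||^2
```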

The simplification from equation (1) onward is a consequence of the orthonormality of the basis vectors (\(\vec e_i \cdot \vec e_j = \delta_{ij}\)). However, when using non-orthogonal coordinates we’ll need to calculate the dot products of the basis vectors. Paralleling equation (1):

\[\begin{align}\Vert \vec v \Vert^2&=(\tilde v^1)^2 \left(\vec{\tilde e_1}\cdot \vec{\tilde e_1}\right)+ 2 \tilde v^1 \tilde v^2 \left(\vec{\tilde e_1}\cdot \vec{\tilde e_2}\right)+(\tilde v^2)^2\left(\vec{\tilde e_2 }\cdot\vec{\tilde e_2}\right)\end{align}\tag 2\]

Calculating the dot products, then,

\[\begin{align} \vec{\tilde e_1}\cdot \vec{\tilde e_1}&= (2\vec e_1+1\vec e_2)\cdot (2\vec e_1 + 1\vec e_2)\\[2ex] &=4\, (\vec e_1 \cdot \vec e_1) + 2\cdot 2\, (\vec e_1 \cdot \vec e_2) + 1\, (\vec e_2\cdot \vec e_2)\\[2ex] &= 4 + 1 \\[2ex] &=5 \end{align}\]

\[\begin{align} \vec{\tilde e_2}\cdot \vec{\tilde e_2}&= \left(-\frac{1}{2}\vec e_1+\frac{1}{4}\vec e_2\right)\cdot \left(-\frac{1}{2}\vec e_1+\frac{1}{4}\vec e_2\right)\\[2ex] &=\frac{1}{4}\, (\vec e_1 \cdot \vec e_1) - 2\cdot \frac{1}{8}\, (\vec e_1 \cdot \vec e_2) + \frac{1}{16}\, (\vec e_2\cdot \vec e_2)\\[2ex] &= \frac{1}{4} + \frac{1}{16} \\[2ex] &=\frac{5}{16} \end{align}\]

\[\begin{align} \vec{\tilde e_1}\cdot \vec{\tilde e_2}&= (2\vec e_1+1\vec e_2)\cdot \left(-\frac{1}{2}\vec e_1+\frac{1}{4}\vec e_2\right)\\[2ex] &=-1\, (\vec e_1 \cdot \vec e_1) + \left(\frac{1}{2}-\frac{1}{2}\right) (\vec e_1 \cdot \vec e_2) + \frac{1}{4}\, (\vec e_2\cdot \vec e_2)\\[2ex] &= -1 + \frac{1}{4} \\[2ex] &=-\frac{3}{4} \end{align}\]
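These three dot products are quick to confirm numerically (a sketch, expressing \(\vec{\tilde e_1}\) and \(\vec{\tilde e_2}\) in the orthonormal basis; the names `e1t`, `e2t` are mine):

```python
import numpy as np

e1t = np.array([2.0, 1.0])     # e~_1 = 2 e_1 + 1 e_2
e2t = np.array([-0.5, 0.25])   # e~_2 = -1/2 e_1 + 1/4 e_2

print(e1t @ e1t)  # 5.0
print(e2t @ e2t)  # 0.3125 = 5/16
print(e1t @ e2t)  # -0.75  = -3/4
```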

Plugging these results into equation (2):

\[\Vert \vec v \Vert^2=(\tilde v^1)^2\left(5\right)+ 2 \tilde v^1 \tilde v^2 \left(-\frac{3}{4}\right)+(\tilde v^2)^2\left(\frac{5}{16}\right)\]

Now, in the graph above we are given that \(\vec v = \frac{5}{4} \vec {\tilde e_1} + 3 \vec{\tilde e_2}.\) Therefore,

\[\Vert \vec v \Vert^2=\left(\frac{5}{4}\right)^2 \left(5\right)+ 2 \left(\frac{5}{4}\cdot 3\right) \left(-\frac{3}{4}\right)+\left(3\right)^2\left(\frac{5}{16}\right)=5\]

This can be expressed as

\[\begin{bmatrix}\tilde v^1 & \tilde v^2\end{bmatrix}\;\color{blue}{\begin{bmatrix}5 & -\frac{3}{4}\\-\frac{3}{4}& \frac{5}{16}\end{bmatrix}}\;\begin{bmatrix}\tilde v^1 \\ \tilde v^2\end{bmatrix}=\begin{bmatrix}\frac{5}{4} & 3\end{bmatrix}\;\color{blue}{\begin{bmatrix}5 & -\frac{3}{4}\\-\frac{3}{4}& \frac{5}{16}\end{bmatrix}}\;\begin{bmatrix}\frac{5}{4} \\ 3\end{bmatrix}\]
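This quadratic form can be checked numerically as well (a sketch reusing the `e1t`, `e2t` names from the previous snippet):

```python
import numpy as np

e1t, e2t = np.array([2.0, 1.0]), np.array([-0.5, 0.25])

# Metric tensor: the matrix of dot products of the oblique basis vectors
g = np.array([[e1t @ e1t, e1t @ e2t],
              [e2t @ e1t, e2t @ e2t]])   # [[5, -3/4], [-3/4, 5/16]]

v_tilde = np.array([5/4, 3.0])           # components of v in the oblique basis
print(v_tilde @ g @ v_tilde)             # 5.0 = ||v||^2
```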

The metric tensor is

\[g_{\vec{\tilde e_i}}=\color{blue}{\begin{bmatrix}5 & -\frac{3}{4}\\-\frac{3}{4}& \frac{5}{16}\end{bmatrix}}_{\vec{\tilde e_i}}\]

In the case of the orthonormal basis, the metric tensor is

\[g_{\vec{ e_i}}=\color{blue}{\begin{bmatrix}1 & 0\\0& 1\end{bmatrix}}_{\vec{ e_i}}\]

So the metric tensor is the matrix of dot products of the basis vectors, as in

\[g_{\vec{\tilde e_i}}=\color{blue}{\begin{bmatrix}\vec{\tilde e_1}\cdot \vec{\tilde e_1}& \vec{\tilde e_1}\cdot \vec{\tilde e_2}\\\vec{\tilde e_2}\cdot \vec{\tilde e_1}& \vec{\tilde e_2}\cdot \vec{\tilde e_2}\end{bmatrix}}_{\vec{\tilde e_i}}\]

In Einstein notation,

\[\Vert \vec v\Vert^2=\tilde v^i\tilde v^j\left(\vec{\tilde e_i}\cdot \vec{\tilde e_j}\right)=\tilde v^i\tilde v^j\,\color{blue}{g_{ij}}.\]

The metric tensor is a \((0,2)\)-tensor: it eats two vectors (e.g. \(\vec v\) and \(\vec v\) in \(\Vert \vec v \Vert^2\)) to produce a scalar.
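In code, this Einstein summation maps directly onto `np.einsum` (a sketch reusing the `g` and `v_tilde` of the previous snippet):

```python
import numpy as np

e1t, e2t = np.array([2.0, 1.0]), np.array([-0.5, 0.25])
g = np.array([[e1t @ e1t, e1t @ e2t],
              [e2t @ e1t, e2t @ e2t]])
v_tilde = np.array([5/4, 3.0])

# ||v||^2 = v~^i v~^j g_ij: the repeated indices i and j are summed over
norm_sq = np.einsum('i,j,ij->', v_tilde, v_tilde, g)
print(norm_sq)  # 5.0
```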