11. linalg extension

11.1. Tensor decomposition

As mentioned in Manipulate UniTensor, specifying the rowrank makes it convenient to apply linear algebra operations to UniTensors.

11.1.1. Singular value decomposition

Here is an example where a singular value decomposition (SVD) is performed on a UniTensor:

  • In Python:

T = cytnx.UniTensor(cytnx.ones([5,5,5,5,5]), rowrank = 3)
S, U, Vt = cytnx.linalg.Svd(T)
U.set_name('U')
S.set_name('S')
Vt.set_name('Vt')

T.print_diagram()
S.print_diagram()
U.print_diagram()
Vt.print_diagram()

Output >>

-----------------------
tensor Name :
tensor Rank : 5
block_form  : False
is_diag     : False
on device   : cytnx device: CPU
          ---------
         /         \
   0 ____| 5     5 |____ 3
         |         |
   1 ____| 5     5 |____ 4
         |         |
   2 ____| 5       |
         \         /
          ---------
-----------------------
tensor Name : S
tensor Rank : 2
block_form  : False
is_diag     : True
on device   : cytnx device: CPU
               -----------
              /           \
   _aux_L ____| 25     25 |____ _aux_R
              \           /
               -----------
-----------------------
tensor Name : U
tensor Rank : 4
block_form  : False
is_diag     : False
on device   : cytnx device: CPU
          ----------
         /          \
   0 ____| 5     25 |____ _aux_L
         |          |
   1 ____| 5        |
         |          |
   2 ____| 5        |
         \          /
          ----------
-----------------------
tensor Name : Vt
tensor Rank : 3
block_form  : False
is_diag     : False
on device   : cytnx device: CPU
               ----------
              /          \
   _aux_R ____| 25     5 |____ 3
              |          |
              |        5 |____ 4
              \          /
               ----------

When Svd() is called, the first three legs of T are automatically combined into one leg according to rowrank=3. After the SVD, the matrices U and Vt are automatically reshaped back into the index structure of the original tensor. This way, we recover the original UniTensor T if we contract \(U \cdot S \cdot Vt\):

  • In Python:

T_diff = T - cytnx.Contracts([U, S, Vt])
print(T_diff.Norm() / T.Norm())

Output >>

Total elem: 1
type  : Double (Float64)
cytnx device: CPU
Shape : (1)
[7.87947e-16 ]

If we contract \(U \cdot S \cdot Vt\), we get a tensor of the same shape as T, so we can subtract the two tensors. The relative error \(\frac{|T-U \cdot S \cdot Vt|}{|T|}\) is of the order of machine precision, as expected.
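
To see how the rowrank determines the matrix that Svd() acts on, consider the following minimal sketch (not part of the original example; it only uses the calls introduced above). The same data is wrapped with two different rowranks, which changes the effective matrix shape and hence the number of singular values:

A1 = cytnx.UniTensor(cytnx.ones([2,3,4,5]), rowrank = 1)  # viewed as a 2 x 60 matrix
A2 = cytnx.UniTensor(cytnx.ones([2,3,4,5]), rowrank = 2)  # viewed as a 6 x 20 matrix
S1, _, _ = cytnx.linalg.Svd(A1)
S2, _, _ = cytnx.linalg.Svd(A2)
S1.print_diagram()  # bond dimension min(2, 60) = 2
S2.print_diagram()  # bond dimension min(6, 20) = 6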

Next, we demonstrate the function Svd_truncate(), which appears frequently in tensor network algorithms for truncating the bond dimension. In this example we print the singular values obtained from Svd() and compare them to the result of Svd_truncate():

  • In Python:

T = cytnx.UniTensor(cytnx.ones([5,5,5,5,5]), rowrank = 3)
S1, _, _ = cytnx.linalg.Svd(Tin = T)
print(S1)
S2, _, _, err = cytnx.linalg.Svd_truncate(Tin = T, keepdim = 10, err = 1e-12, is_UvT = True, return_err = 1)
print(S2)
print(err)

Output >>

-------- start of print ---------
Tensor name:
is_diag    : True
contiguous : True

Total elem: 25
type  : Double (Float64)
cytnx device: CPU
Shape : (25)
[5.59017e+01 4.65490e-15 9.57650e-46 3.07595e-77 8.59824e-108 6.24040e-138 1.64397e-168 2.17184e-199 6.65227e-230 2.54214e-261 8.00572e-292 1.46017e-308 5.43472e-323 5.43472e-323 5.43472e-323 4.94066e-323 4.94066e-323 4.94066e-323 4.94066e-323 4.94066e-323 4.94066e-323 4.94066e-323 4.94066e-323 4.94066e-323 4.94066e-323 ]




-------- start of print ---------
Tensor name:
is_diag    : True
contiguous : True

Total elem: 1
type  : Double (Float64)
cytnx device: CPU
Shape : (1)
[5.59017e+01 ]




-------- start of print ---------
Tensor name:
is_diag    : False
contiguous : True

Total elem: 1
type  : Double (Float64)
cytnx device: CPU
Shape : (1)
[4.65490e-15 ]

We note that the singular values obtained from Svd_truncate() are truncated according to our err requirement; the keepdim argument additionally specifies the maximum desired bond dimension. Finally, by setting return_err = 1 we obtain the largest truncated singular value; it is also possible to obtain all truncated values by passing any integer larger than 1.
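
As a minimal sketch (not part of the original example; it reuses the same T and the calls shown above), one can reconstruct the tensor from the truncated factors and compare the reconstruction error with the reported truncation error:

T = cytnx.UniTensor(cytnx.ones([5,5,5,5,5]), rowrank = 3)
S, U, Vt, err = cytnx.linalg.Svd_truncate(Tin = T, keepdim = 10, err = 1e-12, is_UvT = True, return_err = 1)
# Reconstruct the tensor from the truncated factors
T_approx = cytnx.Contracts([U, S, Vt])
# Relative reconstruction error; comparable in magnitude to the reported err
print((T - T_approx).Norm() / T.Norm())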

Note

Svd() and Svd_truncate() call the xgesdd routines from LAPACKE internally. For other algorithms, such as the xgesvd routines, one may consider the functions Gesvd() and Gesvd_truncate().
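
A minimal sketch of using the alternative routine, assuming that Gesvd() follows the same calling convention and return order as Svd():

T = cytnx.UniTensor(cytnx.ones([5,5,5,5,5]), rowrank = 3)
# Assumption: Gesvd() returns S, U, Vt in the same order as Svd()
S, U, Vt = cytnx.linalg.Gesvd(T)
S.print_diagram()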

11.1.2. Eigenvalue decomposition
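
An eigenvalue decomposition of a square matrix T can be performed with Eig(). In the following example, we decompose T as \(T = V D V^{-1}\) and verify the result by contracting the UniTensors corresponding to V, D and \(V^{-1}\):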

  • In Python:

# Create a rank-2 Tensor which represents a square matrix
T = cytnx.arange(4*4).reshape(4,4)
# Eigenvalue decomposition
eigvals, V = cytnx.linalg.Eig(T)
# Create UniTensors corresponding to V, D, Inv(V), T
uV = cytnx.UniTensor(V, labels=["a","b"], name="uV")
uD = cytnx.UniTensor(eigvals, is_diag=True, labels=["b","c"], name="uD")
uV_inv = cytnx.UniTensor(cytnx.linalg.InvM(V), labels=["c","d"], name="Inv(uV)")
uT = cytnx.UniTensor(T, labels=["a","d"], name="uT")
# Compare uT with uV * uD * uV_inv
diff = cytnx.Contracts([uV, uD, uV_inv]) - uT
print(diff.Norm()) # 1.71516e-14

11.1.3. QR decomposition

The QR decomposition decomposes a matrix M into the form M = QR, where Q has orthonormal columns (Q^T Q = I) and R is an upper triangular matrix. One can perform a QR decomposition by using Qr().

Qr(Tin, is_tau)
Parameters:
  • Tin (cytnx.UniTensor) – input tensor

  • is_tau (bool) – If is_tau=True, the function additionally returns a one-dimensional tensor tau containing the scaling factors of the Householder reflectors that generate Q. See [ABB+99] for details. Default: is_tau=False. A sketch of this option is given after the example below.

Here is an example of a QR decomposition:

  • In Python:

uT = cytnx.UniTensor(cytnx.ones([5,5,5,5,5]), rowrank = 3, name="uT")
Q, R = cytnx.linalg.Qr(uT)
Q.set_name("Q")
R.set_name("R")

Q.print_diagram()
R.print_diagram()

# Verify the recomposition
print((cytnx.Contract(Q,R)-uT).Norm())

Output >>

-----------------------
tensor Name : Q
tensor Rank : 4
block_form  : False
is_diag     : False
on device   : cytnx device: CPU
          ----------
         /          \
   0 ____| 5     25 |____ _aux_
         |          |
   1 ____| 5        |
         |          |
   2 ____| 5        |
         \          /
          ----------
-----------------------
tensor Name : R
tensor Rank : 3
block_form  : False
is_diag     : False
on device   : cytnx device: CPU
              ----------
             /          \
   _aux_ ____| 25     5 |____ 3
             |          |
             |        5 |____ 4
             \          /
              ----------

Total elem: 1
type  : Double (Float64)
cytnx device: CPU
Shape : (1)
[3.64214e-14 ]
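
As mentioned in the parameter list above, passing is_tau=True additionally returns the Householder scaling factors. A minimal sketch, assuming the extra tau tensor is returned after Q and R:

uT = cytnx.UniTensor(cytnx.ones([5,5,5,5,5]), rowrank = 3, name="uT")
# Assumption: with is_tau=True, the scaling factors tau of the Householder
# reflectors are returned as a third output after Q and R
Q, R, tau = cytnx.linalg.Qr(uT, is_tau = True)
print(tau)
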
[ABB+99]

E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen. LAPACK Users' Guide. Society for Industrial and Applied Mathematics, Philadelphia, PA, third edition, 1999. ISBN 0-89871-447-8 (paperback). https://www.netlib.org/lapack/lug/node69.html.