10.1. LinOp class

Cytnx provides the ability to define customized linear operators using the LinOp class.

Before diving into the LinOp class, let’s take a look at a simple example:

In linear algebra, a linear operator can be represented as a matrix \(\boldsymbol{\hat{H}}\) that operates on a vector \(\boldsymbol{x}\), resulting in an output vector \(\boldsymbol{y}\) as

\[\boldsymbol{y} = \boldsymbol{\hat{H}} \boldsymbol{x}\]

If we consider \(\boldsymbol{\hat{H}}\) to be a matrix, this matrix-vector multiplication can simply be achieved by using linalg.Dot.

As an example, we multiply a matrix \(\boldsymbol{\hat{H}}\) with shape (4,4) and a vector \(\boldsymbol{x}\) with 4 elements:

  • In Python:

x = cytnx.ones(4)
H = cytnx.arange(16).reshape(4,4)

y = cytnx.linalg.Dot(H,x)

print(x)
print(H)
print(y)
  • In C++:

auto x = cytnx::ones(4);
auto H = cytnx::arange(16).reshape(4, 4);

auto y = cytnx::linalg::Dot(H, x);

cout << x << endl;
cout << H << endl;
cout << y << endl;

Output >>

Total elem: 4
type  : Double (Float64)
cytnx device: CPU
Shape : (4)
[1.00000e+00 1.00000e+00 1.00000e+00 1.00000e+00 ]

Total elem: 16
type  : Double (Float64)
cytnx device: CPU
Shape : (4,4)
[[0.00000e+00 1.00000e+00 2.00000e+00 3.00000e+00 ]
 [4.00000e+00 5.00000e+00 6.00000e+00 7.00000e+00 ]
 [8.00000e+00 9.00000e+00 1.00000e+01 1.10000e+01 ]
 [1.20000e+01 1.30000e+01 1.40000e+01 1.50000e+01 ]]

Total elem: 4
type  : Double (Float64)
cytnx device: CPU
Shape : (4)
[6.00000e+00 2.20000e+01 3.80000e+01 5.40000e+01 ]
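
The same arithmetic can be checked in plain Python, independent of Cytnx: each output element is \(y_i = \sum_j H_{ij} x_j\). The following sketch reproduces the result of linalg.Dot for this example by hand:

```python
# Plain-Python check of the matrix-vector product above.
# H[i][j] = 4*i + j reproduces the values of arange(16).reshape(4,4);
# x is a vector of four ones, as in the Cytnx example.
H = [[4 * i + j for j in range(4)] for i in range(4)]
x = [1.0] * 4

# y_i = sum_j H_ij * x_j
y = [sum(H[i][j] * x[j] for j in range(4)) for i in range(4)]
print(y)  # [6.0, 22.0, 38.0, 54.0]
```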

Such an explicit matrix-vector multiplication works well for small matrices \(\boldsymbol{\hat{H}}\).

What if the matrix is very large but sparse? Or if more internal structure can be used?

Clearly, if most of the elements of the matrix are zero, using a dense structure is not only memory-inefficient but also leads to unnecessarily large computational costs as many zeros need to be multiplied and added. One way to solve this problem is to use a sparse representation with a pre-defined sparse structure. Indeed, most linear algebra libraries provide these standardized sparse data structures.

There is, however, a more general way to represent a linear operator \(\boldsymbol{\hat{H}}\). Instead of thinking of it as a matrix and making use of various different internal data structures, we can think of this linear operator \(\boldsymbol{\hat{H}}\) as a linear function that maps an input vector \(\boldsymbol{x}\) to an output vector \(\boldsymbol{y}\).

This is precisely what the LinOp class is designed to do. Users can define the mapping operation from an input vector \(\boldsymbol{x}\) to an output vector \(\boldsymbol{y}\) inside the LinOp class.
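
To illustrate the idea independently of Cytnx, a linear operator can be written as a plain function that maps an input vector to an output vector, with no matrix ever stored. As a minimal sketch (a hypothetical cyclic-shift operator, not part of the Cytnx API):

```python
# A linear operator expressed as a function rather than a matrix.
# Hypothetical example: a cyclic shift, y[i] = x[(i+1) % n].
def shift(x):
    return x[1:] + x[:1]

y = shift([1.0, 2.0, 3.0, 4.0])
print(y)  # [2.0, 3.0, 4.0, 1.0]
```

The equivalent dense matrix would be a 4x4 permutation matrix, but the function form needs no storage at all. This is the style of mapping the LinOp class lets users plug in.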

Hint

This functionality can also be helpful if the matrix has a known internal structure which can be used to speed up the algorithm. For example, in typical tensor network algorithms, the linear operator is often defined by the contraction of a tensor network. Instead of explicitly doing all the contractions and storing the result in a possibly large matrix \(\boldsymbol{\hat{H}}\), it can be much more efficient to contract the tensor network directly with the input tensor \(\boldsymbol{x}\). Then, the order of the index summations can be chosen in a way that minimizes the number of operations needed. An example of this is given in the DMRG algorithm.

10.1.1. Inherit the LinOp class

Cytnx exposes the interface LinOp.matvec, which provides a way to implement any linear mapping from an input to an output vector. This can be achieved with inheritance from the LinOp class.

Let’s demonstrate this in a simple example of an operator that acts on an input vector \(\boldsymbol{x}\) with 4 elements. It interchanges the 1st and 4th element. Additionally, a constant, which is an external parameter, is added to the 2nd and 3rd element. The output then is again a dim=4 vector \(\boldsymbol{y}\).

First, let’s create a class that inherits from LinOp, with a class member AddConst.

  • In Python:

class MyOp(cytnx.LinOp):
    AddConst = 1  # class member.

    def __init__(self, aconst):
        # here, we fix nx=4, dtype=double on CPU,
        # so the constructor only takes the external argument 'aconst'

        ## Remember to init the mother class.
        ## Here, we don't specify custom_f!
        cytnx.LinOp.__init__(self, "mv", 4, cytnx.Type.Double,
                                            cytnx.Device.cpu)

        self.AddConst = aconst
  • In C++:

using namespace cytnx;
class MyOp : public LinOp {
 public:
  double AddConst;

  MyOp(double aconst)
      :  // invoke base class constructor!
        LinOp("mv", 4, Type.Double, Device.cpu) {
    this->AddConst = aconst;
  }
};

Next, we need to overload the matvec member function, as it defines the mapping from input \(\boldsymbol{x}\) to the output \(\boldsymbol{y}\).

  • In Python:

class MyOp(cytnx.LinOp):
    AddConst = 1  # class member.

    def __init__(self, aconst):
        # here, we fix nx=4, dtype=double on CPU,
        # so the constructor only takes the external argument 'aconst'

        ## Remember to init the mother class.
        ## Here, we don't specify custom_f!
        cytnx.LinOp.__init__(self, "mv", 4, cytnx.Type.Double,
                                            cytnx.Device.cpu)

        self.AddConst = aconst

    def matvec(self, v):
        out = v.clone()
        out[0], out[3] = v[3], v[0]  # swap
        out[1] += self.AddConst  # add constant
        out[2] += self.AddConst  # add constant
        return out
  • In C++:

using namespace cytnx;
class MyOp : public LinOp {
 public:
  double AddConst;

  MyOp(double aconst) : LinOp("mv", 4, Type.Double, Device.cpu) {  // invoke base class constructor!
    this->AddConst = aconst;
  }

  Tensor matvec(const Tensor& v) override {
    auto out = v.clone();
    out(0) = v(3);  // swap
    out(3) = v(0);  // swap
    out(1) += this->AddConst;  // add const
    out(2) += this->AddConst;  // add const
    return out;
  }
};

Now, the class can be used. We demonstrate this in the following and set the constant to be added to 7:

  • In Python:

myop = MyOp(7)
x = cytnx.arange(4)
y = myop.matvec(x)

print(x)
print(y)
  • In C++:

auto myop = MyOp(7);
auto x = cytnx::arange(4);
auto y = myop.matvec(x);

cout << x << endl;
cout << y << endl;

Output >>

Total elem: 4
type  : Double (Float64)
cytnx device: CPU
Shape : (4)
[0.00000e+00 1.00000e+00 2.00000e+00 3.00000e+00 ]

Total elem: 4
type  : Double (Float64)
cytnx device: CPU
Shape : (4)
[3.00000e+00 8.00000e+00 9.00000e+00 0.00000e+00 ]
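
The output can be verified by hand: with x = [0, 1, 2, 3] and AddConst = 7, the mapping produces [x[3], x[1]+7, x[2]+7, x[0]] = [3, 8, 9, 0]. A plain-Python sketch of the same mapping, using a list instead of a Cytnx Tensor:

```python
# Plain-Python version of MyOp.matvec for a length-4 vector.
def matvec(v, add_const=7.0):
    out = list(v)
    out[0], out[3] = v[3], v[0]  # swap 1st and 4th element
    out[1] += add_const          # add constant
    out[2] += add_const          # add constant
    return out

print(matvec([0.0, 1.0, 2.0, 3.0]))  # [3.0, 8.0, 9.0, 0.0]
```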

10.1.2. Example: sparse data structure with mapping function

With the flexibility provided by overloading the matvec member function, users can define their own sparse data structure for an operator.

As an example, we want to define a sparse matrix \(\boldsymbol{A}\) with shape=(1000,1000) and ONLY two non-zero elements: A[1,100]=4 and A[100,1]=7. All other elements are zero. We do not have to construct a dense tensor with \(10^6\) elements. Instead, we can simply use the LinOp class:

  • In Python:

class Oper(cytnx.LinOp):

    def __init__(self):
        cytnx.LinOp.__init__(self, "mv", 1000)

        self.Loc = []
        self.Val = []

        self.Loc.append([1, 100])
        self.Val.append(4.)

        self.Loc.append([100, 1])
        self.Val.append(7.)

    def matvec(self, v):
        out = cytnx.zeros(v.shape(), v.dtype(), v.device())
        for i in range(len(self.Loc)):
            out[self.Loc[i][0]] += v[self.Loc[i][1]] * self.Val[i]
        return out


A = Oper()
x = cytnx.arange(1000)
y = A.matvec(x)

print(x[1].item(), x[100].item())
print(y[1].item(), y[100].item())

Output >>

1.0 100.0
400.0 7.0
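
The numbers are easy to verify: y[1] = A[1,100] * x[100] = 4 * 100 = 400, and y[100] = A[100,1] * x[1] = 7 * 1 = 7. The storage scheme used in the Oper class above is essentially a coordinate-list (COO) format, which can be sketched in plain Python:

```python
# Coordinate-list (COO) sparse matvec, as in the Oper class above:
# only the non-zero elements (row, col, value) are stored.
loc = [(1, 100), (100, 1)]
val = [4.0, 7.0]

x = [float(i) for i in range(1000)]
y = [0.0] * 1000
for (r, c), a in zip(loc, val):
    y[r] += a * x[c]  # out[row] += value * in[col]

print(y[1], y[100])  # 400.0 7.0
```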

Hint

In this example, we use the Python API. The C++ API can be used similarly.

10.1.3. Prestore/preconstruct sparse elements

In the previous example, we showed how to construct a linear operator by overloading the matvec member function of the LinOp class. This is straightforward and simple, but in cases where the custom mapping contains many for-loops, handling them in Python is not optimal for performance reasons.

Since v0.6.3a, the option “mv_elem” is available in the constructor of the LinOp class. It allows users to pre-store the indices and values of the non-zero elements, similar to the standard sparse storage structure. If this is used, Cytnx handles the internal structure and optimizes the matvec performance. Again, let’s use the previous example: a sparse matrix \(\boldsymbol{A}\) with shape=(1000,1000) and ONLY two non-zero elements A[1,100]=4 and A[100,1]=7.

  • In Python:

class Oper(cytnx.LinOp):

    def __init__(self):
        cytnx.LinOp.__init__(self, "mv_elem", 1000)

        self.set_elem(1, 100, 4.)
        self.set_elem(100, 1, 7.)


A = Oper()
x = cytnx.arange(1000)
y = A.matvec(x)

print(x[1].item(), x[100].item())
print(y[1].item(), y[100].item())

Output >>

1.0 100.0
400.0 7.0

Notice that instead of overloading the matvec function, we use the set_elem member function in the LinOp class to set the indices and values of the elements. This information is then stored internally in the LinOp class, and we let the LinOp class provide and optimize matvec.

In the section on the Lanczos solver, we will see how we can benefit from the LinOp class by passing such an object to Cytnx's iterative solver. This way, the eigenvalue problem can be solved efficiently with our customized linear operator.