3.4. Tensor arithmetic

In Cytnx, arithmetic operations such as +, -, *, /, +=, -=, *=, /= can be performed between a Tensor and either another Tensor or a scalar, just as in standard Python. See also Linear algebra for a list of arithmetic operations and further linear algebra functions.

3.4.1. Type promotion

Arithmetic operations in Cytnx follow a type promotion pattern similar to that of standard C++/Python. When an arithmetic operation is performed between a Tensor and another Tensor or a scalar, the output Tensor has the dtype of the input with the stronger type, as illustrated in the sketch after the list below.

The Type order from strong to weak is:

  • Type.ComplexDouble

  • Type.ComplexFloat

  • Type.Double

  • Type.Float

  • Type.Int64

  • Type.Uint64

  • Type.Int32

  • Type.Uint32

  • Type.Int16

  • Type.Uint16

  • Type.Bool
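For example, adding an Int64 Tensor to a Double Tensor yields a Double Tensor, since Double is the stronger type. A minimal sketch in Python (assuming, as in the Python API used below, that ones accepts a dtype argument and that Tensor provides dtype_str()):

import cytnx

# Int64 is weaker than Double, so the result is promoted to Double
A = cytnx.ones([2, 2], dtype=cytnx.Type.Int64)
B = cytnx.ones([2, 2])  # default dtype: Double
C = A + B
print(C.dtype_str())  # expected: Double (Float64)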

3.4.2. Tensor-scalar arithmetic

Several arithmetic operations between a Tensor and a scalar are possible. For example:

  • In Python:

import cytnx

A = cytnx.ones([3,4])
print(A)

B = A + 4
print(B)

C = A - 7j  # type promotion
print(C)
  • In C++:

auto A = cytnx::ones({3, 4});
cout << A << endl;

auto B = A + 4;
cout << B << endl;

auto C = A - std::complex<double>(0, 7);  // type promotion
cout << C << endl;

Output >>

Total elem: 12
type  : Double (Float64)
cytnx device: CPU
Shape : (3,4)
[[1.00000e+00 1.00000e+00 1.00000e+00 1.00000e+00 ]
 [1.00000e+00 1.00000e+00 1.00000e+00 1.00000e+00 ]
 [1.00000e+00 1.00000e+00 1.00000e+00 1.00000e+00 ]]




Total elem: 12
type  : Double (Float64)
cytnx device: CPU
Shape : (3,4)
[[5.00000e+00 5.00000e+00 5.00000e+00 5.00000e+00 ]
 [5.00000e+00 5.00000e+00 5.00000e+00 5.00000e+00 ]
 [5.00000e+00 5.00000e+00 5.00000e+00 5.00000e+00 ]]




Total elem: 12
type  : Complex Double (Complex Float64)
cytnx device: CPU
Shape : (3,4)
[[1.00000e+00-7.00000e+00j 1.00000e+00-7.00000e+00j 1.00000e+00-7.00000e+00j 1.00000e+00-7.00000e+00j ]
 [1.00000e+00-7.00000e+00j 1.00000e+00-7.00000e+00j 1.00000e+00-7.00000e+00j 1.00000e+00-7.00000e+00j ]
 [1.00000e+00-7.00000e+00j 1.00000e+00-7.00000e+00j 1.00000e+00-7.00000e+00j 1.00000e+00-7.00000e+00j ]]
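The compound assignment operators from the introduction work analogously and modify the Tensor in place. A minimal sketch in Python, using only the operators shown above:

import cytnx

A = cytnx.ones([3, 4])
A += 4  # in-place addition: every element becomes 5.0
A *= 2  # in-place multiplication: every element becomes 10.0
print(A)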

3.4.3. Tensor-Tensor arithmetic

Arithmetic operations between two Tensors of the same shape are possible. For example:

  • In Python:

A = cytnx.arange(12).reshape(3,4)
print(A)

B = cytnx.ones([3,4])*4
print(B)

C = A * B
print(C)
  • In C++:

auto A = cytnx::arange(12).reshape(3, 4);
cout << A << endl;

auto B = cytnx::ones({3, 4}) * 4;
cout << B << endl;

auto C = A * B;
cout << C << endl;

Output >>

Total elem: 12
type  : Double (Float64)
cytnx device: CPU
Shape : (3,4)
[[0.00000e+00 1.00000e+00 2.00000e+00 3.00000e+00 ]
 [4.00000e+00 5.00000e+00 6.00000e+00 7.00000e+00 ]
 [8.00000e+00 9.00000e+00 1.00000e+01 1.10000e+01 ]]




Total elem: 12
type  : Double (Float64)
cytnx device: CPU
Shape : (3,4)
[[4.00000e+00 4.00000e+00 4.00000e+00 4.00000e+00 ]
 [4.00000e+00 4.00000e+00 4.00000e+00 4.00000e+00 ]
 [4.00000e+00 4.00000e+00 4.00000e+00 4.00000e+00 ]]




Total elem: 12
type  : Double (Float64)
cytnx device: CPU
Shape : (3,4)
[[0.00000e+00 4.00000e+00 8.00000e+00 1.20000e+01 ]
 [1.60000e+01 2.00000e+01 2.40000e+01 2.80000e+01 ]
 [3.20000e+01 3.60000e+01 4.00000e+01 4.40000e+01 ]]

Note

The operator ‘*’ performs an elementwise multiplication. For tensor contractions or matrix multiplications, see Contraction.

3.4.4. Equivalent APIs (C++ only)

Cytnx also provides equivalent APIs for users who are familiar with or coming from PyTorch and similar libraries. For example, there are two different ways to perform the + operation: Tensor.Add()/Tensor.Add_() and linalg.Add().

  • In C++:

auto A = cytnx::ones({3, 4});
auto B = cytnx::arange(12).reshape(3, 4);

// these two are equivalent to C = A + B;
auto C = A.Add(B);
auto D = cytnx::linalg::Add(A, B);

// this is equivalent to A += B;
A.Add_(B);

Note

  1. All arithmetic functions such as Add, Sub, Mul, Div, …, as well as the linear algebra functions, start with a capital letter. Beware: in PyTorch, they all start with lower-case letters.

  2. All arithmetic operations with an underscore suffix (such as Add_, Sub_, Mul_, Div_) are in-place versions that modify the current instance.

Hint

  1. If both inputs are of the same type and that type is ComplexDouble, ComplexFloat, Double, or Float, the arithmetic operations internally call BLAS/cuBLAS/MKL ?axpy routines (see the sketch after this hint).

  2. Arithmetic operations involving other types (including mixed types) are accelerated with OpenMP on the CPU. On a GPU, custom kernels are used to perform the operations.
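To stay on the fast same-type path described in the first hint, operands can be cast to a common dtype before the operation. A minimal sketch in Python (assuming Tensor.astype(), which returns a Tensor converted to the given dtype):

import cytnx

A = cytnx.arange(12).reshape(3, 4)               # Double by default
B = cytnx.ones([3, 4]).astype(cytnx.Type.Float)  # Float Tensor

# cast B to Double so both operands share a BLAS-supported type
C = A + B.astype(cytnx.Type.Double)
print(C.dtype_str())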