Cytnx v1.0.0
Cytnx

Features:

Python x C++

Benefit from both sides: prototype quickly in Python, then transfer to C++ with little effort!

// c++ version:
#include "cytnx.hpp"
cytnx::Tensor A({3,4,5},cytnx::Type.Double,cytnx::Device.cpu);
# python version:
import cytnx
A = cytnx.Tensor([3,4,5],dtype=cytnx.Type.Double,device=cytnx.Device.cpu)

1. Multiple type support: all Storage and Tensor objects can hold multiple data types.

Available types are (please refer to Type):

cytnx type       | c++ type             | Type object
cytnx_double     | double               | Type.Double
cytnx_float      | float                | Type.Float
cytnx_uint64     | uint64_t             | Type.Uint64
cytnx_uint32     | uint32_t             | Type.Uint32
cytnx_uint16     | uint16_t             | Type.Uint16
cytnx_int64      | int64_t              | Type.Int64
cytnx_int32      | int32_t              | Type.Int32
cytnx_int16      | int16_t              | Type.Int16
cytnx_complex128 | std::complex<double> | Type.ComplexDouble
cytnx_complex64  | std::complex<float>  | Type.ComplexFloat
cytnx_bool       | bool                 | Type.Bool

2. Multiple device support.

  • Simple moving between CPU and GPU (see Device and below)

Objects:

linear algebra functions:

See cytnx::linalg for further details

func inplace CPU GPU callby Tensor Tensor UniTensor
Add Add_ Add Add Add
Sub Sub_ Sub Sub Sub
Mul Mul_ Mul Mul Mul
Div Div_ Div Div Div
Mod x Mod Mod Mod
Cpr x Cpr Cpr x
+,+= += += +,+= +,+=
-,-= -= -= -,-= -,-=
*,*= *= *= *,*= *,*=
/,/= /= /= /,/= /,/=
Svd x Svd Svd Svd
Gesvd x x Gesvd Gesvd
Svd_truncate x x Svd_truncate Svd_truncate
Gesvd_truncate x x Gesvd_truncate Gesvd_truncate
InvM InvM_ InvM InvM InvM
Inv Inv_ Inv Inv x
Conj Conj_ Conj Conj Conj
Exp Exp_ Exp Exp x
Expf Expf_ x Expf x
Eigh x Eigh Eigh Eigh
ExpH x x ExpH ExpH
ExpM x x x ExpM ExpM
Matmul x x Matmul x
Diag x x Diag x
Tensordot x x Tensordot x
Outer x x Outer x
Vectordot x x Vectordot x
Tridiag x x Tridiag x
Kron x x Kron x
Norm x Norm Norm Norm
Dot x x Dot x
Eig x x x Eig Eig
Pow Pow_ Pow Pow Pow
Abs Abs_ Abs Abs x
Qr x x Qr Qr
Qdr x x x Qdr Qdr
Det x x Det x
Min x Min Min x
Max x Max Max x
Sum x x Sum x
Trace x x Trace Trace Trace
Matmul_dg x x Matmul_dg x
Tensordot_dg x x x Tensordot_dg x
Lstsq x x x Lstsq x
Axpy Axpy_ x x Axpy x
Ger x x Ger x
Gemm Gemm_ x Gemm x
Gemm_Batch x x Gemm_Batch x
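Many of the routines in the table have direct NumPy counterparts, which is convenient when prototyping before moving to cytnx. As a rough analogy only (cytnx's own call would be cytnx.linalg.Svd; NumPy is used here purely for illustration), Svd factors a matrix so that it can be reconstructed from its factors:

```python
import numpy as np

# NumPy analogy for cytnx.linalg.Svd: factor A into U, S, Vt.
A = np.arange(12, dtype=np.float64).reshape(3, 4)
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Reconstruction recovers A up to floating-point error.
A_rec = U @ np.diag(S) @ Vt
print(np.allclose(A, A_rec))  # True
```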

iterative solver:

func CPU GPU Tensor UniTensor
Lanczos Lanczos Lanczos
Lanczos_Exp x x Lanczos_Exp
Arnoldi x Arnoldi Arnoldi

Container Generators

Tensor: zeros(), ones(), arange(), identity(), eye()

Physics Category

Tensor: spin(), pauli()

Random

See cytnx::random for further details

func UniTensor Tensor Storage CPU GPU
^normal x normal x
^uniform x uniform x
*normal_ normal_ normal_ normal_
*uniform_ uniform_ uniform_ uniform_

* this is an initializer

^ this is a generator

Note
An initializer fills an existing Tensor in place, while a generator allocates and returns a new Tensor.
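The initializer/generator distinction can be sketched with NumPy (an analogy only; in cytnx the in-place initializers are normal_/uniform_ and the generators are normal/uniform):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Initializer" style: fill an existing buffer in place.
A = np.empty(5)
A[:] = rng.normal(size=5)   # A's memory is reused

# "Generator" style: a brand-new array is allocated and returned.
B = rng.normal(size=5)

print(A.shape, B.shape)  # (5,) (5,)
```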

conda install

[Currently Linux only]

without CUDA

  • python 3.6/3.7/3.8: conda install -c kaihsinwu cytnx

with CUDA

  • python 3.6/3.7/3.8: conda install -c kaihsinwu cytnx_cuda

Some snippets:

Storage

  • Memory container with GPU/CPU support. Type conversion (casting between Storages) and moving between devices are straightforward.
  • Generic type object; the behavior is very similar to Python.
    Storage A(400,Type.Double);
    for(int i=0;i<400;i++){
        A.at<double>(i) = i;
    }
    Storage B = A; // A and B share the same memory, similar to Python assignment
    Storage C = A.to(Device.cuda+0); // C is on GPU with gpu-id=0
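The reference-sharing semantics of Storage (assignment shares memory; moving to another device gives an independent buffer) mirror NumPy array semantics. A sketch of the same idea, with NumPy as an analogy and a plain copy standing in for the device move:

```python
import numpy as np

A = np.arange(400, dtype=np.float64)
B = A            # B shares A's memory, like Storage B = A in cytnx
C = A.copy()     # an independent buffer, like moving to another device

A[0] = -1.0
print(B[0])  # -1.0  (shared memory sees the change)
print(C[0])  #  0.0  (independent copy is unaffected)
```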

Tensor

  • A tensor whose API is very similar to numpy and pytorch.
  • Simple moving between CPU and GPU:
    Tensor A({3,4},Type.Double,Device.cpu); // create tensor on CPU (default)
    Tensor B({3,4},Type.Double,Device.cuda+0); // create tensor on GPU with gpu-id=0
    Tensor C = B; // C and B share same memory.
    // move A to GPU
    Tensor D = A.to(Device.cuda+0);
    // inplace move A to GPU
    A.to_(Device.cuda+0);
  • Type conversion possible:
    Tensor A({3,4},Type.Double);
    Tensor B = A.astype(Type.Uint64); // cast double to uint64_t
  • Virtual swap and permute: permute and swap operations do not move the underlying memory immediately, minimizing the cost of moving elements.
  • Call contiguous() when an actually contiguous memory layout is needed.
    Tensor A({3,4,5,2},Type.Double);
    A.permute_(0,3,1,2); // changes only the shape/stride info, not the memory.
    cout << A.is_contiguous() << endl; // false
    A.contiguous_(); // actually rearranges the memory, in place.
    cout << A.is_contiguous() << endl; // true
  • Access single element using .at
    Tensor A({3,4,5},Type.Double);
    double val = A.at<double>(0,2,2);
  • Access elements similar to Python slices:
    typedef Accessor ac;
    Tensor A({3,4,5},Type.Double);
    Tensor out = A(0,":","1:4");
    // equivalent to Python: out = A[0,:,1:4]
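The lazy permute and the accessor-style slicing above both have close NumPy analogues (an analogy only; the cytnx calls are permute_/contiguous_ and Accessor):

```python
import numpy as np

A = np.zeros((3, 4, 5, 2))
B = np.transpose(A, (0, 3, 1, 2))  # a view: no data moved, like permute_
print(B.flags['C_CONTIGUOUS'])     # False
B = np.ascontiguousarray(B)        # actually moves the memory, like contiguous_()
print(B.flags['C_CONTIGUOUS'])     # True

# Accessor-style slicing: A(0, ":", "1:4") in cytnx ~ A2[0, :, 1:4] in numpy
A2 = np.zeros((3, 4, 5))
out = A2[0, :, 1:4]
print(out.shape)                   # (4, 3)
```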

UniTensor

  • Extension of Tensor, designed specifically for tensor network simulations.
  • A UniTensor is a Tensor with additional information such as Bonds, Symmetries and labels. With this information, tensor contractions can be implemented easily.
    Tensor A({3,4,5},Type.Double);
    UniTensor tA = UniTensor(A); // convert directly.
    UniTensor tB = UniTensor({Bond(3),Bond(4),Bond(5)},{}); // init from scratch.
    // Relabel the tensor and then contract.
    tA.relabels_({"common_1", "common_2", "out_a"});
    tB.relabels_({"common_1", "common_2", "out_b"});
    UniTensor out = cytnx::Contract(tA,tB);
    tA.print_diagram();
    tB.print_diagram();
    out.print_diagram();
    Output:
    -----------------------
    tensor Name :
    tensor Rank : 3
    block_form : False
    is_diag : False
    on device : cytnx device: CPU
    ---------
    / \
    common_1 ____| 3 4 |____ common_2
    | |
    | 5 |____ out_a
    \ /
    ---------
    -----------------------
    tensor Name :
    tensor Rank : 3
    block_form : False
    is_diag : False
    on device : cytnx device: CPU
    ---------
    / \
    common_1 ____| 3 4 |____ common_2
    | |
    | 5 |____ out_b
    \ /
    ---------
    -----------------------
    tensor Name :
    tensor Rank : 2
    block_form : False
    is_diag : False
    on device : cytnx device: CPU
    --------
    / \
    | 5 |____ out_a
    | |
    | 5 |____ out_b
    \ /
    --------
  • UniTensor supports Block form, which is useful if the physical system has a symmetry. See user guide for more details.
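The label-driven contraction above sums over the two shared bonds (common_1 and common_2), leaving a rank-2 result. With plain tensors the same contraction can be written with tensordot; this NumPy sketch is an analogy for Contract(tA, tB), assuming the common bonds are the first two axes:

```python
import numpy as np

tA = np.random.rand(3, 4, 5)   # axes: common_1, common_2, out_a
tB = np.random.rand(3, 4, 5)   # axes: common_1, common_2, out_b

# Contract over the two common bonds -> rank-2 result (out_a, out_b)
out = np.tensordot(tA, tB, axes=([0, 1], [0, 1]))
print(out.shape)  # (5, 5)
```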

Developers & Maintainers

Creator and Project manager Affiliation Email
Kai-Hsin Wu Boston Univ., USA kaihsinwu@gmail.com


Developers Affiliation Roles
Chang-Teng Lin NTU, Taiwan major maintainer and developer
Ke Hsu NTU, Taiwan major maintainer and developer
Ivana Gyro NTU, Taiwan major maintainer and developer
Hao-Ti Hung NTU, Taiwan documentation and linalg
Ying-Jer Kao NTU, Taiwan setuptool, cmake

Contributors

Contributors Affiliation
PoChung Chen NTHU, Taiwan
Chia-Min Chung NSYSU, Taiwan
Ian McCulloch NTHU, Taiwan
Manuel Schneider NYCU, Taiwan
Yen-Hsin Wu NTU, Taiwan
Po-Kwan Wu OSU, USA
Wen-Han Kao UMN, USA
Yu-Hsueh Chen NTU, Taiwan
Yu-Cheng Lin NTU, Taiwan
