Cytnx v0.9.3
Cytnx

Current Version:

v0.9.1

Feature:

Python x C++

Benefit from both sides: prototype quickly in Python, then port to C++ with little effort!

// c++ version:
#include "cytnx.hpp"
cytnx::Tensor A({3,4,5},cytnx::Type.Double,cytnx::Device.cpu);

# python version:
import cytnx
A = cytnx.Tensor([3,4,5],dtype=cytnx.Type.Double,device=cytnx.Device.cpu)

1. All Storage and Tensor objects support multiple data types.

The available types are:

cytnx type        c++ type              Type object
cytnx_double      double                Type.Double
cytnx_float       float                 Type.Float
cytnx_uint64      uint64_t              Type.Uint64
cytnx_uint32      uint32_t              Type.Uint32
cytnx_uint16      uint16_t              Type.Uint16
cytnx_int64       int64_t               Type.Int64
cytnx_int32       int32_t               Type.Int32
cytnx_int16       int16_t               Type.Int16
cytnx_complex128  std::complex<double>  Type.ComplexDouble
cytnx_complex64   std::complex<float>   Type.ComplexFloat
cytnx_bool        bool                  Type.Bool

2. Multiple devices support.

  • simple data movement between CPU and GPU (see below)

Objects:

linear algebra functions:

See cytnx::linalg for further details

func           inplace  CPU  GPU  callby tn
Add            x        Y    Y    Y
Sub            x        Y    Y    Y
Mul            x        Y    Y    Y
Div            x        Y    Y    Y
Cpr            x        Y    Y    Y
Mod            x        Y    Y    Y
+,+=[tn]       x        Y    Y    Y (Tensor.Add_)
-,-=[tn]       x        Y    Y    Y (Tensor.Sub_)
*,*=[tn]       x        Y    Y    Y (Tensor.Mul_)
/,/=[tn]       x        Y    Y    Y (Tensor.Div_)
== [tn]        x        Y    Y    Y (Tensor.Cpr_)
Svd            x        Y    Y    Y
*Svd_truncate  x        Y    Y    N
InvM           InvM_    Y    Y    Y
Inv            Inv_     Y    Y    Y
Conj           Conj_    Y    Y    Y
Exp            Exp_     Y    Y    Y
Expf           Expf_    Y    Y    Y
*ExpH          x        Y    Y    N
*ExpM          x        Y    Y    N
Eigh           x        Y    Y    Y
Matmul         x        Y    Y    N
Diag           x        Y    Y    N
*Tensordot     x        Y    Y    N
Outer          x        Y    Y    N
Kron           x        Y    N    N
Norm           x        Y    Y    Y
Vectordot      x        Y    .Y   N
Tridiag        x        Y    N    N
*Dot           x        Y    Y    N
Eig            x        Y    N    Y
Pow            Pow_     Y    Y    Y
Abs            Abs_     Y    N    Y
Qr             x        Y    N    N
Qdr            x        Y    N    N
Min            x        Y    N    Y
Max            x        Y    N    Y
*Trace         x        Y    N    N
iterative solvers:

  • Lanczos_ER (see cytnx::linalg::Lanczos_ER)


* this is a high-level linalg function

^ this is temporarily disabled

. this supports floating-point types only

Container Generators

Tensor: zeros(), ones(), arange(), identity(), eye()

Physics Category

Tensor: spin(), pauli()

Random

See cytnx::random for further details

func          Tn  Stor  CPU  GPU
*Make_normal  Y   Y     Y    Y
^normal       Y   x     Y    Y

* this is an initializer

^ this is a generator

[Note] The difference between an initializer and a generator: an initializer fills an existing Tensor in place, while a generator allocates and returns a new Tensor.

conda install

[Currently Linux only]

without CUDA

  • python 3.6/3.7/3.8: conda install -c kaihsinwu cytnx

with CUDA

  • python 3.6/3.7/3.8: conda install -c kaihsinwu cytnx_cuda

Some snippets:

Storage

  • Memory container with GPU/CPU support. It handles type conversion (type casting between Storages) and moving data between devices.
  • Generic type object; its behavior is very similar to Python.
    Storage A(400,Type.Double);
    for(int i=0;i<400;i++){
        A.at<double>(i) = i;
    }
    Storage B = A; // A and B share the same memory, similar to assignment in python
    Storage C = A.to(Device.cuda+0);

Tensor

  • A tensor object with an API very similar to numpy and pytorch.
  • simple data movement between CPU and GPU:
    Tensor A({3,4},Type.Double,Device.cpu); // create tensor on CPU (default)
    Tensor B({3,4},Type.Double,Device.cuda+0); // create tensor on GPU with gpu-id=0
    Tensor C = B; // C and B share same memory.
    // move A to gpu
    Tensor D = A.to(Device.cuda+0);
    // inplace move A to gpu
    A.to_(Device.cuda+0);
  • Type conversion between any two available types:
    Tensor A({3,4},Type.Double);
    Tensor B = A.astype(Type.Uint64); // cast double to uint64_t
  • virtual swap and permute: permute and swap operations do not move the underlying memory, only the shape/stride meta-data.
  • call contiguous() when you need to actually rearrange the memory layout:
    Tensor A({3,4,5,2},Type.Double);
    A.permute_(0,3,1,2); // this will not move the memory, only the shape info is changed.
    cout << A.is_contiguous() << endl; // this will be false!
    A.contiguous_(); // call contiguous_() to actually move the memory.
    cout << A.is_contiguous() << endl; // this will be true!
  • access single element using .at
    Tensor A({3,4,5},Type.Double);
    double val = A.at<double>(0,2,2);
  • access elements with python-like slicing:
    typedef Accessor ac;
    Tensor A({3,4,5},Type.Double);
    Tensor out = A(0,":","1:4");
    // equivalent to python: out = A[0,:,1:4]

Fast Examples

See test.cpp for C++ usage.
See test.py for Python usage.

Developers & Maintainers

[Creator and Project manager]
Kai-Hsin Wu (Boston Univ.) kaihsinwu@gmail.com

Chang Teng Lin (NTU, Taiwan): major maintainer and developer
Ke Hsu (NTU, Taiwan): major maintainer and developer
Hao Ti (NTU, Taiwan): documentation and linalg
Ying-Jer Kao (NTU, Taiwan): setuptool, cmake

Contributors

Yen-Hsin Wu (NTU, Taiwan)
Po-Kwan Wu (OSU)
Wen-Han Kao (UMN, USA)
Yu-Hsueh Chen (NTU, Taiwan)
PoChung Chen  (NCHU, Taiwan)

References:

* example/DMRG:
    https://www.tensors.net/dmrg