Cytnx v0.9.1
A tensor (multi-dimensional array).
#include <Tensor.hpp>
Public Member Functions | |
| void | Save (const std::string &fname) const |
| Save current Tensor to file. | |
| void | Save (const char *fname) const |
| void | Tofile (const std::string &fname) const |
| void | Tofile (const char *fname) const |
| void | Tofile (std::fstream &f) const |
| void | Init (const std::vector< cytnx_uint64 > &shape, const unsigned int &dtype=Type.Double, const int &device=-1, const bool &init_zero=true) |
| initialize a Tensor | |
| Tensor (const std::vector< cytnx_uint64 > &shape, const unsigned int &dtype=Type.Double, const int &device=-1, const bool &init_zero=1) | |
| Construct a new Tensor object. | |
| unsigned int | dtype () const |
| the dtype-id of the Tensor | |
| int | device () const |
| the device-id of the Tensor | |
| std::string | dtype_str () const |
| the dtype (in string) of the Tensor | |
| std::string | device_str () const |
| the device (in string) of the Tensor | |
| const std::vector< cytnx_uint64 > & | shape () const |
| the shape of the Tensor | |
| cytnx_uint64 | rank () const |
| the rank of the Tensor | |
| Tensor | clone () const |
| return a clone of the current Tensor. | |
| Tensor | to (const int &device) const |
| copy a tensor to new device | |
| void | to_ (const int &device) |
| move the current Tensor to the device. | |
| const bool & | is_contiguous () const |
| Tensor | permute_ (const std::vector< cytnx_uint64 > &rnks) |
| Tensor | permute (const std::vector< cytnx_uint64 > &rnks) const |
| perform tensor permute on the cytnx::Tensor and return a new instance. | |
| Tensor | contiguous () const |
| Make the Tensor contiguous by coalescing the memory (storage). | |
| Tensor | contiguous_ () |
| Make the Tensor contiguous by coalescing the memory (storage), in-place. | |
| void | reshape_ (const std::vector< cytnx_int64 > &new_shape) |
| reshape the Tensor, in-place | |
| Tensor | reshape (const std::vector< cytnx_int64 > &new_shape) const |
| return a new Tensor that is reshaped. | |
| Tensor | reshape (const std::vector< cytnx_uint64 > &new_shape) const |
| Tensor | reshape (const std::initializer_list< cytnx_int64 > &new_shape) const |
| Tensor | astype (const int &new_type) const |
| return a new Tensor that cast to different dtype. | |
| template<class T > | |
| T & | at (const std::vector< cytnx_uint64 > &locator) |
| [C++ only] get an element at specific location. | |
| template<class T > | |
| const T & | at (const std::vector< cytnx_uint64 > &locator) const |
| template<class T > | |
| T & | item () |
| get an element from a rank-0 Tensor | |
| Tensor | get (const std::vector< cytnx::Accessor > &accessors) const |
| get elements using Accessor (C++ API) / slices (python API) | |
| void | set (const std::vector< cytnx::Accessor > &accessors, const Tensor &rhs) |
| set elements with the input Tensor using Accessor (C++ API) / slices (python API) | |
| template<class T > | |
| void | set (const std::vector< cytnx::Accessor > &accessors, const T &rc) |
| set elements with the input constant using Accessor (C++ API) / slices (python API) | |
| Storage & | storage () const |
| return the storage of current Tensor. | |
| template<class T > | |
| void | fill (const T &val) |
| fill all the elements of the current Tensor with the value. | |
| bool | equiv (const Tensor &rhs) |
| compare the shape of two tensors. | |
| Tensor | real () |
| return the real part of the tensor. | |
| Tensor | imag () |
| return the imaginary part of the tensor. | |
| template<class T > | |
| Tensor & | operator+= (const T &rc) |
| addition assignment operator with a Tensor or a scalar. | |
| template<class T > | |
| Tensor & | operator-= (const T &rc) |
| subtraction assignment operator with a Tensor or a scalar. | |
| template<class T > | |
| Tensor & | operator*= (const T &rc) |
| multiplication assignment operator with a Tensor or a scalar. | |
| template<class T > | |
| Tensor & | operator/= (const T &rc) |
| division assignment operator with a Tensor or a scalar. | |
| template<class T > | |
| Tensor | Add (const T &rhs) |
| Addition function with a Tensor or a scalar. Same as cytnx::operator+(const Tensor &self, const T &rhs). | |
| template<class T > | |
| Tensor & | Add_ (const T &rhs) |
| Addition function with a Tensor or a scalar, in-place. Same as operator+=(const T &rhs). | |
| template<class T > | |
| Tensor | Sub (const T &rhs) |
| Subtraction function with a Tensor or a scalar. Same as cytnx::operator-(const Tensor &self, const T &rhs). | |
| template<class T > | |
| Tensor & | Sub_ (const T &rhs) |
| Subtraction function with a Tensor or a scalar, in-place. Same as operator-=(const T &rhs). | |
| template<class T > | |
| Tensor | Mul (const T &rhs) |
| Multiplication function with a Tensor or a scalar. Same as cytnx::operator*(const Tensor &self, const T &rhs). | |
| template<class T > | |
| Tensor & | Mul_ (const T &rhs) |
| Multiplication function with a Tensor or a scalar, in-place. Same as operator*=(const T &rhs). | |
| template<class T > | |
| Tensor | Div (const T &rhs) |
| Division function with a Tensor or a scalar. Same as cytnx::operator/(const Tensor &self, const T &rhs). | |
| template<class T > | |
| Tensor & | Div_ (const T &rhs) |
| Division function with a Tensor or a scalar, in-place. Same as operator/=(const T &rhs). | |
| template<class T > | |
| Tensor | Cpr (const T &rhs) |
| The comparison function. | |
| template<class T > | |
| Tensor | Mod (const T &rhs) |
| Tensor | operator- () |
| The negation function. | |
| Tensor | flatten () const |
| The flatten function. | |
| void | flatten_ () |
| The flatten function, in-place. | |
| void | append (const Tensor &rhs) |
| the append function. | |
| void | append (const Storage &srhs) |
| the append function of the Storage. | |
| template<class T > | |
| void | append (const T &rhs) |
| the append function of the scalar. | |
| bool | same_data (const Tensor &rhs) const |
| std::vector< Tensor > | Svd (const bool &is_UvT=true) const |
the SVD member function. Same as cytnx::linalg::Svd(const Tensor &Tin, const bool &is_UvT) , where Tin is the current Tensor. | |
| std::vector< Tensor > | Eigh (const bool &is_V=true, const bool &row_v=false) const |
the Eigh member function. Same as cytnx::linalg::Eigh(const Tensor &Tin, const bool &is_V, const bool &row_v) , where Tin is the current Tensor. | |
| Tensor & | InvM_ () |
the InvM_ member function. Same as cytnx::linalg::InvM_(Tensor &Tin), where Tin is the current Tensor. | |
| Tensor | InvM () const |
the InvM member function. Same as cytnx::linalg::InvM(const Tensor &Tin), where Tin is the current Tensor. | |
| Tensor & | Inv_ (const double &clip) |
| the Inv_ member function. Same as cytnx::linalg::Inv_(Tensor &Tin, const double &clip) | |
| Tensor | Inv (const double &clip) const |
| the Inv member function. Same as cytnx::linalg::Inv(const Tensor &Tin, const double &clip) | |
| Tensor & | Conj_ () |
the Conj_ member function. Same as cytnx::linalg::Conj_(Tensor &Tin), where Tin is the current Tensor. | |
| Tensor | Conj () const |
the Conj member function. Same as cytnx::linalg::Conj(const Tensor &Tin), where Tin is the current Tensor. | |
| Tensor & | Exp_ () |
the Exp_ member function. Same as linalg::Exp_(Tensor &Tin), where Tin is the current Tensor. | |
| Tensor | Exp () const |
the Exp member function. Same as linalg::Exp(const Tensor &Tin), where Tin is the current Tensor. | |
| Tensor | Norm () const |
the Norm member function. Same as linalg::Norm(const Tensor &Tin), where Tin is the current Tensor. | |
| Tensor | Pow (const cytnx_double &p) const |
the Pow member function. Same as linalg::Pow(const Tensor &Tin, const cytnx_double &p), where Tin is the current Tensor. | |
| Tensor & | Pow_ (const cytnx_double &p) |
the Pow_ member function. Same as linalg::Pow_(Tensor &Tin, const cytnx_double &p), where Tin is the current Tensor. | |
| Tensor | Trace (const cytnx_uint64 &a=0, const cytnx_uint64 &b=1) const |
the Trace member function. Same as linalg::Trace(const Tensor &Tin, const cytnx_uint64 &a, const cytnx_uint64 &b), where Tin is the current Tensor. | |
| Tensor | Abs () const |
the Abs member function. Same as linalg::Abs(const Tensor &Tin), where Tin is the current Tensor. | |
| Tensor & | Abs_ () |
the Abs_ member function. Same as linalg::Abs_(Tensor &Tin), where Tin is the current Tensor. | |
| Tensor | Max () const |
the Max member function. Same as linalg::Max(const Tensor &Tin), where Tin is the current Tensor. | |
| Tensor | Min () const |
the Min member function. Same as linalg::Min(const Tensor &Tin), where Tin is the current Tensor. | |
Static Public Member Functions | |
| static Tensor | Load (const std::string &fname) |
| Load current Tensor from file. | |
| static Tensor | Load (const char *fname) |
| static Tensor | Fromfile (const std::string &fname, const unsigned int &dtype, const cytnx_int64 &count=-1) |
| static Tensor | Fromfile (const char *fname, const unsigned int &dtype, const cytnx_int64 &count=-1) |
| static Tensor | from_storage (const Storage &in) |
A tensor (multi-dimensional array).
Construct a new Tensor object.
This is the constructor of Tensor. It will call cytnx::Tensor::Init() to initialize the Tensor.
| [in] | shape | the shape of tensor |
| [in] | dtype | the dtype of tensor. This can be any of type defined in cytnx::Type. |
| [in] | device | the device that tensor to be created. This can be cytnx::Device.cpu or cytnx::Device.cuda+<gpuid>, see cytnx::Device for more detail. |
| [in] | init_zero | if true, the content of the Tensor will be initialized to zero. If false, the content of the Tensor will be uninitialized. |
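A minimal construction sketch (not from the original page; it assumes the umbrella header cytnx.hpp and that a Tensor can be streamed to std::cout, as the example outputs on this page suggest):

    #include "cytnx.hpp"
    #include <iostream>
    using namespace cytnx;

    int main() {
      // (3,4,5) double-precision Tensor on the CPU, zero-initialized by default.
      Tensor A({3, 4, 5});

      // Same shape, unsigned 64-bit integers, content left uninitialized.
      Tensor B({3, 4, 5}, Type.Uint64, Device.cpu, /*init_zero=*/false);

      std::cout << A.dtype_str() << " " << B.dtype_str() << std::endl;
      return 0;
    }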
| Tensor cytnx::Tensor::Abs | ( | ) | const |
the Abs member function. Same as linalg::Abs(const Tensor &Tin), where Tin is the current Tensor.
| Tensor & cytnx::Tensor::Abs_ | ( | ) |
the Abs_ member function. Same as linalg::Abs_(Tensor &Tin), where Tin is the current Tensor.
Addition function with a Tensor or a scalar. Same as cytnx::operator+(const Tensor &self, const T &rhs).
| [in] | rhs | the added Tensor or scalar. |
Addition function with a Tensor or a scalar, in-place. Same as operator+=(const T &rhs).
| [in] | rhs | the added Tensor or scalar. |
the append function of the Storage.
This function appends a Storage. It behaves the same as the append(const Tensor &rhs) function, except that rhs is a Storage.
| [in] | srhs | the appended Storage. |
the append function of the scalar.
This function appends a scalar. It can only append a scalar to a rank-1 Tensor.
| [in] | rhs | the appended scalar. |
the append function.
This function appends the rhs tensor to the current tensor. The rhs tensor must have the same shape as the current tensor with the first dimension removed. For example, if the current tensor is \(A(i,j,k)\) and the rhs tensor is \(B(j,k)\), then the output tensor is \(C(i,j,k)\) where
\[ C(i,j,k) = \begin{cases} A(i,j,k) & \text{if } i \neq N \\ B(j,k) & \text{if } i = N \end{cases} \]
where \(N\) is the number of the first dimension of the current tensor.
| [in] | rhs | the appended tensor. |
If the dtype of rhs is different from that of the current tensor, rhs will be cast to the dtype of the current tensor.
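A hedged sketch of append(); the creation helpers cytnx::zeros and cytnx::ones are assumptions, not documented on this page:

    #include "cytnx.hpp"
    #include <iostream>
    using namespace cytnx;

    int main() {
      Tensor A = zeros({2, 3});  // shape (2,3)                          [assumed helper]
      Tensor B = ones({3});      // shape (3): A's shape without the first dimension

      A.append(B);               // A becomes shape (3,3); the new row holds B
      std::cout << A.shape()[0] << std::endl;  // prints 3
      return 0;
    }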
return a new Tensor that cast to different dtype.
| [in] | new_type | the new dtype. It can be any type defined in cytnx::Type |
Example output (C++ and Python): the original (3,4,5) Double (Float64) Tensor of zeros, followed by the Uint64 Tensor returned by astype().
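A small sketch of astype(), assuming the cytnx.hpp umbrella header:

    #include "cytnx.hpp"
    #include <iostream>
    using namespace cytnx;

    int main() {
      Tensor A({3, 4, 5});                 // Double (Float64) zeros
      Tensor B = A.astype(Type.Uint64);    // new Tensor cast to Uint64; A is unchanged

      std::cout << A.dtype_str() << " " << B.dtype_str() << std::endl;
      return 0;
    }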
[C++ only] get an element at specific location.
| [in] | locator | the location of the element |
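A sketch of at<T>(); the template type must match the Tensor dtype (double for the default Double dtype):

    #include "cytnx.hpp"
    #include <iostream>
    using namespace cytnx;

    int main() {
      Tensor A({3, 4, 5});             // Double zeros
      A.at<double>({0, 1, 2}) = 7.5;   // at<T>() returns a reference, so it is writable
      std::cout << A.at<double>({0, 1, 2}) << std::endl;  // prints 7.5
      return 0;
    }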
return a clone of the current Tensor.
In the C++ API, the assignment operator is designed to behave the same as in Python: it shares the underlying data rather than copying it. To obtain a copy of the current tensor, call clone().
Example output (C++ and Python): same_data() returns true (1/True) for a Tensor obtained by assignment and false (0/False) for one obtained by clone().
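A sketch contrasting assignment (shared data) with clone():

    #include "cytnx.hpp"
    #include <iostream>
    using namespace cytnx;

    int main() {
      Tensor A({3, 4, 5});
      Tensor B = A;           // assignment shares the underlying storage (Python-like)
      Tensor C = A.clone();   // clone() makes an independent copy

      std::cout << A.same_data(B) << " " << A.same_data(C) << std::endl;  // 1 0
      return 0;
    }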
| Tensor cytnx::Tensor::Conj | ( | ) | const |
the Conj member function. Same as cytnx::linalg::Conj(const Tensor &Tin), where Tin is the current Tensor.
| Tensor & cytnx::Tensor::Conj_ | ( | ) |
the Conj_ member function. Same as cytnx::linalg::Conj_(Tensor &Tin), where Tin is the current Tensor.
Make the Tensor contiguous by coalescing the memory (storage).
Example output (C++ and Python): permute changes the shape from [3, 4, 5] to [3, 5, 4]; is_contiguous() is false on the permuted Tensor and true on the Tensor returned by contiguous().
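A sketch of permute() followed by contiguous():

    #include "cytnx.hpp"
    #include <iostream>
    using namespace cytnx;

    int main() {
      Tensor A({3, 4, 5});
      Tensor B = A.permute({0, 2, 1});   // shape (3,5,4); storage layout unchanged

      std::cout << B.is_contiguous() << std::endl;  // 0: permute only changes the view
      Tensor C = B.contiguous();                    // coalesce the memory into the new order
      std::cout << C.is_contiguous() << std::endl;  // 1
      return 0;
    }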
Make the Tensor contiguous by coalescing the memory (storage), in-place.
The comparison function.
This function is the comparison function. Same as cytnx::operator==(const Tensor &self, const T &rhs).
| [in] | rhs | the compared object. |
the device (in string) of the Tensor
Division function with a Tensor or a scalar. Same as cytnx::operator/(const Tensor &self, const T &rhs).
| [in] | rhs | the divisor Tensor or scalar. Note: rhs cannot be zero. |
Division function with a Tensor or a scalar, in-place. Same as operator/=(const T &rhs).
| [in] | rhs | the divisor Tensor or scalar. Note: rhs cannot be zero. |
the dtype (in string) of the Tensor
| std::vector< Tensor > cytnx::Tensor::Eigh | ( | const bool &is_V = true, const bool &row_v = false | ) | const |
the Eigh member function. Same as cytnx::linalg::Eigh(const Tensor &Tin, const bool &is_V, const bool &row_v) , where Tin is the current Tensor.
compare the shape of two tensors.
| [in] | rhs | the tensor to be compared. |
| Tensor cytnx::Tensor::Exp | ( | ) | const |
the Exp member function. Same as linalg::Exp(const Tensor &Tin), where Tin is the current Tensor.
| Tensor & cytnx::Tensor::Exp_ | ( | ) |
the Exp_ member function. Same as linalg::Exp_(Tensor &Tin), where Tin is the current Tensor.
fill all the elements of the current Tensor with the value.
| [in] | val | the assigned value |
Example output (C++ and Python): a (3,4,5) Tensor holding 0..59 before fill(), and the same Tensor with every element equal to 999 after fill().
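A sketch of fill(); cytnx::arange is an assumed creation helper, not documented on this page:

    #include "cytnx.hpp"
    #include <iostream>
    using namespace cytnx;

    int main() {
      Tensor A = arange(60).reshape({3, 4, 5});  // values 0..59          [assumed helper]
      A.fill(999.0);                             // every element becomes 999
      std::cout << A.at<double>({2, 3, 4}) << std::endl;  // prints 999
      return 0;
    }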
The flatten function.
This function clones (deep copies) the current tensor, makes the copy contiguous, and reshapes it to a rank-1 tensor.
The flatten function, in-place.
This function makes the current tensor contiguous and reshapes it to a rank-1 tensor, in-place.
get elements using Accessor (C++ API) / slices (python API)
| [in] | accessors | the Accessor (C++ API) / slices (python API) to get the elements. |
One can also use the more intuitive [] operator to get a slice.
Example output (C++ and Python): the original (3,4,5) Tensor holding 0..59, and the (4,3) slice returned by get().
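A sketch of get() with Accessor; Accessor::all(), Accessor::range() and the single-index constructor are assumed from the Accessor class, and cytnx::arange is an assumed helper:

    #include "cytnx.hpp"
    #include <iostream>
    using namespace cytnx;

    int main() {
      Tensor A = arange(60).reshape({3, 4, 5});  // [assumed helper]

      // roughly A[2, :, 2:5] in Python slicing terms
      Tensor B = A.get({Accessor(2), Accessor::all(), Accessor::range(2, 5, 1)});
      std::cout << B.shape()[0] << " " << B.shape()[1] << std::endl;  // 4 3
      return 0;
    }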
| Tensor cytnx::Tensor::imag | ( | ) |
return the imaginary part of the tensor.
initialize a Tensor
| [in] | shape | the shape of tensor. |
| [in] | dtype | the dtype of tensor. This can be any of type defined in cytnx::Type |
| [in] | device | the device on which the tensor is created. This can be cytnx::Device.cpu or cytnx::Device.cuda+<gpuid>, see cytnx::Device for more detail. |
| [in] | init_zero | if true, the content of the Tensor will be initialized to zero. If false, the content of the Tensor will be uninitialized. |
Example output (C++ and Python): a (3,4,5) Double (Float64) Tensor of zeros and a (3,4,5) Uint64 Tensor of zeros on the CPU; the Python example additionally shows a (3,4,5) Tensor created on CUDA/GPU-id:0.
| Tensor cytnx::Tensor::Inv | ( | const double & | clip | ) | const |
the Inv member function. Same as cytnx::linalg::Inv(const Tensor &Tin, const double &clip)
| Tensor & cytnx::Tensor::Inv_ | ( | const double & | clip | ) |
the Inv_ member function. Same as cytnx::linalg::Inv_(Tensor &Tin, const double &clip)
| Tensor cytnx::Tensor::InvM | ( | ) | const |
the InvM member function. Same as cytnx::linalg::InvM(const Tensor &Tin), where Tin is the current Tensor.
| Tensor & cytnx::Tensor::InvM_ | ( | ) |
the InvM_ member function. Same as cytnx::linalg::InvM_(Tensor &Tin), where Tin is the current Tensor.
get an element from a rank-0 Tensor
Example output (C++ and Python): a one-element Uint64 Tensor holding 1, and the value 1 returned by item().
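A sketch of item() on a single-element Tensor; cytnx::zeros with an element count is an assumed helper:

    #include "cytnx.hpp"
    #include <iostream>
    using namespace cytnx;

    int main() {
      Tensor A = zeros(1, Type.Uint64);  // one-element Uint64 Tensor     [assumed helper]
      A += 1;                            // now holds the value 1
      std::cout << A.item<cytnx_uint64>() << std::endl;  // prints 1
      return 0;
    }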
| Tensor cytnx::Tensor::Max | ( | ) | const |
the Max member function. Same as linalg::Max(const Tensor &Tin), where Tin is the current Tensor.
| Tensor cytnx::Tensor::Min | ( | ) | const |
the Min member function. Same as linalg::Min(const Tensor &Tin), where Tin is the current Tensor.
Multiplication function with a Tensor or a scalar. Same as cytnx::operator*(const Tensor &self, const T &rhs).
| [in] | rhs | the multiplied Tensor or scalar. |
Multiplication function with a Tensor or a scalar, in-place. Same as operator*=(const T &rhs).
| [in] | rhs | the multiplied Tensor or scalar. |
| Tensor cytnx::Tensor::Norm | ( | ) | const |
the Norm member function. Same as linalg::Norm(const Tensor &Tin), where Tin is the current Tensor.
| Tensor & cytnx::Tensor::operator*= | ( | const T & | rc | ) |
multiplication assignment operator with a Tensor or a scalar.
This function multiplies the current tensor by the argument, in-place. The argument can be either a scalar or a tensor. If it is a scalar, the scalar is multiplied with every element of the current tensor. If it is a tensor, its shape must be the same as that of the current tensor. The supported argument types are Tensor, Scalar, or any scalar type (see cytnx_complex128, cytnx_complex64, cytnx_double, cytnx_float, cytnx_int64, cytnx_int32, cytnx_int16, cytnx_uint64, cytnx_uint32, cytnx_uint16, cytnx_bool).
| [in] | rc | the multiplied Tensor or scalar. |
| Tensor & cytnx::Tensor::operator+= | ( | const T & | rc | ) |
addition assignment operator with a Tensor or a scalar.
This function adds the argument to the current tensor, in-place. The argument can be either a scalar or a tensor. If it is a scalar, the scalar is added to every element of the current tensor. If it is a tensor, its shape must be the same as that of the current tensor. The supported argument types are Tensor, Scalar, or any scalar type (see cytnx_complex128, cytnx_complex64, cytnx_double, cytnx_float, cytnx_int64, cytnx_int32, cytnx_int16, cytnx_uint64, cytnx_uint32, cytnx_uint16, cytnx_bool).
| [in] | rc | the added Tensor or scalar. |
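A sketch of the compound-assignment operators; cytnx::arange is an assumed helper:

    #include "cytnx.hpp"
    #include <iostream>
    using namespace cytnx;

    int main() {
      Tensor A = arange(6).reshape({2, 3});  // [assumed helper]
      Tensor B = A.clone();

      A += 1.0;   // scalar: added to every element
      A *= B;     // tensor: shapes must match, element-wise multiply
      A /= 2.0;   // the divisor must be non-zero
      std::cout << A << std::endl;
      return 0;
    }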
The negation function.
This function is the negation function. Namely, if the current tensor is \(A\), then the output tensor is \(-A\).
| Tensor & cytnx::Tensor::operator-= | ( | const T & | rc | ) |
subtraction assignment operator with a Tensor or a scalar.
This function subtracts the argument from the current tensor, in-place. The argument can be either a scalar or a tensor. If it is a scalar, the scalar is subtracted from every element of the current tensor. If it is a tensor, its shape must be the same as that of the current tensor. The supported argument types are Tensor, Scalar, or any scalar type (see cytnx_complex128, cytnx_complex64, cytnx_double, cytnx_float, cytnx_int64, cytnx_int32, cytnx_int16, cytnx_uint64, cytnx_uint32, cytnx_uint16, cytnx_bool).
| [in] | rc | the subtracted Tensor or scalar. |
| Tensor & cytnx::Tensor::operator/= | ( | const T & | rc | ) |
division assignment operator with a Tensor or a scalar.
This function divides the current tensor by the argument, in-place. The argument can be either a scalar or a tensor. If it is a scalar, every element of the current tensor is divided by the scalar. If it is a tensor, its shape must be the same as that of the current tensor. The supported argument types are Tensor, Scalar, or any scalar type (see cytnx_complex128, cytnx_complex64, cytnx_double, cytnx_float, cytnx_int64, cytnx_int32, cytnx_int16, cytnx_uint64, cytnx_uint32, cytnx_uint16, cytnx_bool).
| [in] | rc | the divided Tensor or scalar. |
rc cannot be zero.
perform tensor permute on the cytnx::Tensor and return a new instance.
| [in] | rnks | the permute indices, should have No. of elements equal to the rank of tensor. |
rnks cannot contain duplicated elements.
Example output (C++ and Python): permute changes the shape from [3, 4, 5] to [3, 5, 4]; is_contiguous() is false for the permuted Tensor and true for the original.
| Tensor cytnx::Tensor::Pow | ( | const cytnx_double & | p | ) | const |
the Pow member function. Same as linalg::Pow(const Tensor &Tin, const cytnx_double &p), where Tin is the current Tensor.
| Tensor & cytnx::Tensor::Pow_ | ( | const cytnx_double & | p | ) |
the Pow_ member function. Same as linalg::Pow_(Tensor &Tin, const cytnx_double &p), where Tin is the current Tensor.
| Tensor cytnx::Tensor::real | ( | ) |
return the real part of the tensor.
return a new Tensor that is reshaped.
| [in] | new_shape | the new shape of the Tensor. |
new_shape cannot be empty. An element of new_shape can be set to -1 if that dimension is to be determined automatically.
Example output (C++ and Python): the original rank-1 Tensor of 60 elements, and the (5,12) Tensor returned by reshape().
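A sketch of reshape() and its in-place variant reshape_(); cytnx::arange is an assumed helper:

    #include "cytnx.hpp"
    #include <iostream>
    using namespace cytnx;

    int main() {
      Tensor A = arange(60);           // rank-1, 60 elements             [assumed helper]
      Tensor B = A.reshape({5, 12});   // new instance with shape (5,12); A keeps shape (60)
      A.reshape_({3, 4, 5});           // reshapes A itself

      std::cout << B.shape()[1] << " " << A.shape().size() << std::endl;  // 12 3
      return 0;
    }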
reshape the Tensor, in-place
| [in] | new_shape | the new shape of the Tensor. |
new_shape cannot be empty.
Example output (C++ and Python): a rank-1 Tensor of 60 elements, then the same Tensor after reshape_ to (5,12); the C++ example further reshapes it to (5,4,3).
| bool cytnx::Tensor::same_data | ( | const Tensor & | rhs | ) | const |
| void cytnx::Tensor::Save | ( | const char * | fname | ) | const |
| void cytnx::Tensor::Save | ( | const std::string & | fname | ) | const |
Save current Tensor to file.
| [in] | fname | file name (without file extension) |
The Tensor is saved to the file path given by fname, with the postfix ".cytn" appended.
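A sketch of Save()/Load(); whether Load() expects the ".cytn" postfix in the file name is an assumption here:

    #include "cytnx.hpp"
    using namespace cytnx;

    int main() {
      Tensor A({3, 4, 5});
      A.Save("my_tensor");                        // writes "my_tensor.cytn"

      // assumption: the saved file name, including the ".cytn" postfix, is passed to Load()
      Tensor B = Tensor::Load("my_tensor.cytn");
      return 0;
    }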
set elements with the input constant using Accessor (C++ API) / slices (python API)
| [in] | accessors | the list(vector) of accessors. |
| [in] | rc | the constant to assign to the selected elements. |
Example output (C++ and Python): the original (3,4,5) Tensor holding 0..59, followed by the same Tensor after the selected slice has been overwritten with a constant (0 and then 999).
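A sketch of set() with a constant and with a Tensor; the Accessor usage and the cytnx::arange/cytnx::zeros helpers are assumptions, as above:

    #include "cytnx.hpp"
    #include <iostream>
    using namespace cytnx;

    int main() {
      Tensor A = arange(60).reshape({3, 4, 5});  // [assumed helper]

      // overwrite the slice A[2, :, 2:5] with a constant
      A.set({Accessor(2), Accessor::all(), Accessor::range(2, 5, 1)}, 999.0);

      // overwrite the same slice with a (4,3) Tensor
      Tensor Z = zeros({4, 3});                  // [assumed helper]
      A.set({Accessor(2), Accessor::all(), Accessor::range(2, 5, 1)}, Z);

      std::cout << A.at<double>({2, 0, 2}) << std::endl;  // prints 0
      return 0;
    }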
set elements with the input Tensor using Accessor (C++ API) / slices (python API)
| [in] | accessors | the list(vector) of accessors. |
| [in] | rhs | the Tensor whose elements are assigned to the selected slice. |
Example output (C++ and Python): the original (3,4,5) Tensor holding 0..59, a (4,3) Tensor of zeros, and the original Tensor after the selected slice has been overwritten with it (and then with the constant 999).
return the storage of current Tensor.
Subtraction function with a Tensor or a scalar. Same as cytnx::operator-(const Tensor &self, const T &rhs).
| [in] | rhs | the subtracted Tensor or scalar. |
Subtraction function with a Tensor or a scalar, in-place. Same as operator-=(const T &rhs).
| [in] | rhs | the subtracted Tensor or scalar. |
| std::vector< Tensor > cytnx::Tensor::Svd | ( | const bool & | is_UvT = true | ) | const |
the SVD member function. Same as cytnx::linalg::Svd(const Tensor &Tin, const bool &is_UvT) , where Tin is the current Tensor.
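A sketch of the Svd() member; the exact ordering of the returned factors follows cytnx::linalg::Svd and is not spelled out on this page:

    #include "cytnx.hpp"
    #include <iostream>
    #include <vector>
    using namespace cytnx;

    int main() {
      Tensor A = arange(12).reshape({3, 4});   // [assumed helper]
      std::vector<Tensor> out = A.Svd();       // same as linalg::Svd(A, true)

      // with is_UvT = true the result contains the singular values plus U and vT
      std::cout << out.size() << std::endl;    // 3
      return 0;
    }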
copy a tensor to new device
| [in] | device | the device-id that is moving to. it can be any device defined in cytnx::Device |
description:
if the device-id is the same as current Tensor's device, then return self.
otherwise, return a copy of the instance located on the target device.
see also: Tensor.to_
Example output: the returned copy reports device CUDA/GPU-id:0 while the original remains on the CPU.
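A sketch of to() and to_(); it assumes a CUDA-enabled build with at least one visible GPU:

    #include "cytnx.hpp"
    #include <iostream>
    using namespace cytnx;

    int main() {
      Tensor A({3, 4, 5});                       // created on the CPU

      Tensor B = A.to(Device.cuda + 0);          // copy on GPU 0; A itself stays on the CPU
      std::cout << A.device_str() << " " << B.device_str() << std::endl;

      A.to_(Device.cuda + 0);                    // moves A itself
      std::cout << A.device_str() << std::endl;
      return 0;
    }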
move the current Tensor to the device.
| [in] | device | the device-id that is moving to. it can be any device defined in cytnx::Device |
description:
see also: Tensor.to
Example output: after to_(), the Tensor reports device CUDA/GPU-id:0.
| void cytnx::Tensor::Tofile | ( | const char * | fname | ) | const |
| void cytnx::Tensor::Tofile | ( | const std::string & | fname | ) | const |
| void cytnx::Tensor::Tofile | ( | std::fstream & | f | ) | const |
| Tensor cytnx::Tensor::Trace | ( | const cytnx_uint64 &a = 0, const cytnx_uint64 &b = 1 | ) | const |
the Trace member function. Same as linalg::Trace(const Tensor &Tin, const cytnx_uint64 &a, const cytnx_uint64 &b), where Tin is the current Tensor.