4.1. Creating a Storage

A Storage can be created in a similar way as a Tensor. Note that a Storage has no concept of shape; it behaves essentially like a C++ vector.

To create a Storage with dtype=Type.Double on the CPU:

  • In Python:

A = cytnx.Storage(10, dtype=cytnx.Type.Double, device=cytnx.Device.cpu)
A.set_zeros()

print(A)
  • In C++:

auto A = cytnx::Storage(10, cytnx::Type.Double, cytnx::Device.cpu);
A.set_zeros();

cout << A << endl;

Output >>

dtype : Double (Float64)
device: cytnx device: CPU
size  : 10
[ 0.00000e+00 0.00000e+00 0.00000e+00 0.00000e+00 0.00000e+00 0.00000e+00 0.00000e+00 0.00000e+00 0.00000e+00 0.00000e+00 ]

Note

[Deprecated] Storage by itself only allocates memory (using malloc) without initializing its elements.

[v0.6.6+] Storage behaves like a vector and initializes all elements to zero.

Tip

  1. Use Storage.set_zeros() to set all the elements to zero, or Storage.fill() to set them all to some arbitrary number.

  2. For a Storage with a complex dtype, you can use .real() and .imag() to get the real and imaginary parts of the data. Both tips are illustrated in the sketch after this list.
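
A minimal sketch illustrating both tips (output omitted; the variable names are illustrative):

import cytnx

A = cytnx.Storage(4)
A.fill(3.14)    # set every element to an arbitrary number

C = cytnx.Storage(4, dtype=cytnx.Type.ComplexDouble)
C.set_zeros()
R = C.real()    # Storage holding the real parts
I = C.imag()    # Storage holding the imaginary parts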

4.1.1. Type conversion

A Storage can be converted to a different data type. Just like for a Tensor, we call Storage.astype() to perform the conversion.

The available data types are the same as for a Tensor; see Tensor with different dtype and device.

  • In Python:

A = cytnx.Storage(10)
A.set_zeros()

B = A.astype(cytnx.Type.ComplexDouble)

print(A)
print(B)
  • In C++:

auto A = cytnx::Storage(10);
A.set_zeros();

auto B = A.astype(cytnx::Type.ComplexDouble);

cout << A << endl;
cout << B << endl;

Output >>

dtype : Double (Float64)
device: cytnx device: CPU
size  : 10
[ 0.00000e+00 0.00000e+00 0.00000e+00 0.00000e+00 0.00000e+00 0.00000e+00 0.00000e+00 0.00000e+00 0.00000e+00 0.00000e+00 ]


dtype : Complex Double (Complex Float64)
device: cytnx device: CPU
size  : 10
[ 0.00000e+00+0.00000e+00j 0.00000e+00+0.00000e+00j 0.00000e+00+0.00000e+00j 0.00000e+00+0.00000e+00j 0.00000e+00+0.00000e+00j 0.00000e+00+0.00000e+00j 0.00000e+00+0.00000e+00j 0.00000e+00+0.00000e+00j 0.00000e+00+0.00000e+00j 0.00000e+00+0.00000e+00j  ]

4.1.2. Transfer between devices

We can also transfer a Storage between different devices. Similarly to a Tensor, we use Storage.to().

  • In Python:

A = cytnx.Storage(4)
B = A.to(cytnx.Device.cuda)

print(A.device_str())
print(B.device_str())

A.to_(cytnx.Device.cuda)
print(A.device_str())
  • In C++:

auto A = cytnx::Storage(4);

auto B = A.to(cytnx::Device.cuda);
cout << A.device_str() << endl;
cout << B.device_str() << endl;

A.to_(cytnx::Device.cuda);
cout << A.device_str() << endl;

Output >>

cytnx device: CPU
cytnx device: CUDA/GPU-id:0
cytnx device: CUDA/GPU-id:0

Hint

  1. Like for a Tensor, .device_str() returns the device string, while .device() returns the device ID (cpu=-1); see the sketch after this hint.

  2. .to() returns a copy on the target device. Use .to_() instead to move the current instance to a target device.
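
A minimal sketch of the first point (this assumes a CUDA-enabled build; the first GPU carries ID 0, as in the output above):

import cytnx

A = cytnx.Storage(4)
print(A.device())        # -1, the device ID of the CPU
print(A.device_str())    # cytnx device: CPU

B = A.to(cytnx.Device.cuda)
print(B.device())        # ID of the target GPU, 0 for the first one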

4.1.3. Get Storage of Tensor

Internally, the data of a Tensor is stored in a Storage. We can get the Storage of a Tensor by using Tensor.storage().

  • In Python:

A = cytnx.arange(10).reshape(2,5)
B = A.storage()

print(A)
print(B)
  • In C++:

auto A = cytnx::arange(10).reshape(2, 5);
auto B = A.storage();

cout << A << endl;
cout << B << endl;

Output >>

Total elem: 10
type  : Double (Float64)
cytnx device: CPU
Shape : (2,5)
[[0.00000e+00 1.00000e+00 2.00000e+00 3.00000e+00 4.00000e+00 ]
 [5.00000e+00 6.00000e+00 7.00000e+00 8.00000e+00 9.00000e+00 ]]



dtype : Double (Float64)
device: cytnx device: CPU
size  : 10
[ 0.00000e+00 1.00000e+00 2.00000e+00 3.00000e+00 4.00000e+00 5.00000e+00 6.00000e+00 7.00000e+00 8.00000e+00 9.00000e+00 ]

Note

The return value is a reference to the Tensor’s internal storage. This implies that any modification to this Storage will modify the Tensor accordingly, as demonstrated in the sketch after this note.

[Important] For a Tensor in a non-contiguous state, the meta-data is detached from the memory handled by its storage. In this case, calling Tensor.storage() returns the current memory layout, not the element ordering implied by the Tensor indices in the meta-data.
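
A minimal sketch of the reference behavior described above (using Storage.fill() from the earlier tip; output omitted):

import cytnx

A = cytnx.arange(4)
B = A.storage()    # B references the memory of A
B.fill(7)          # writes through to the shared memory
print(A)           # the Tensor A now contains all 7s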

We demonstrate the non-contiguous behavior using the Python API. The C++ API can be used in a similar way.

  • In Python:

A = cytnx.arange(8).reshape(2,2,2)
print(A.storage())

# Let's make it non-contiguous
A.permute_(0,2,1)
print(A.is_contiguous())

# Note that the storage is not changed
print(A.storage())

# Now let's make it contiguous,
# so the elements are moved
A.contiguous_()
print(A.is_contiguous())

# Note that the storage has now changed
print(A.storage())

Output >>

dtype : Double (Float64)
device: cytnx device: CPU
size  : 8
[ 0.00000e+00 1.00000e+00 2.00000e+00 3.00000e+00 4.00000e+00 5.00000e+00 6.00000e+00 7.00000e+00 ]


False
dtype : Double (Float64)
device: cytnx device: CPU
size  : 8
[ 0.00000e+00 1.00000e+00 2.00000e+00 3.00000e+00 4.00000e+00 5.00000e+00 6.00000e+00 7.00000e+00 ]


True
dtype : Double (Float64)
device: cytnx device: CPU
size  : 8
[ 0.00000e+00 2.00000e+00 1.00000e+00 3.00000e+00 4.00000e+00 6.00000e+00 5.00000e+00 7.00000e+00 ]