3.2. Manipulating Tensors¶
Next, let’s look at the operations that are commonly used to manipulate Tensor objects.
3.2.1. reshape¶
Suppose we want to create a rank-3 Tensor with shape=(2,3,4), starting from a rank-1 Tensor with shape=(24) initialized using arange().
This operation is called reshape, and we can use the Tensor.reshape function to perform it.
In Python:
A = cytnx.arange(24)
B = A.reshape(2,3,4)
print(A)
print(B)
In C++:
auto A = cytnx::arange(24);
auto B = A.reshape(2, 3, 4);
cout << A << endl;
cout << B << endl;
Output >>
Total elem: 24
type : Double (Float64)
cytnx device: CPU
Shape : (24)
[0.00000e+00 1.00000e+00 2.00000e+00 3.00000e+00 4.00000e+00 5.00000e+00 6.00000e+00 7.00000e+00 8.00000e+00 9.00000e+00 1.00000e+01 1.10000e+01 1.20000e+01 1.30000e+01 1.40000e+01 1.50000e+01 1.60000e+01 1.70000e+01 1.80000e+01 1.90000e+01 2.00000e+01 2.10000e+01 2.20000e+01 2.30000e+01 ]
Total elem: 24
type : Double (Float64)
cytnx device: CPU
Shape : (2,3,4)
[[[0.00000e+00 1.00000e+00 2.00000e+00 3.00000e+00 ]
[4.00000e+00 5.00000e+00 6.00000e+00 7.00000e+00 ]
[8.00000e+00 9.00000e+00 1.00000e+01 1.10000e+01 ]]
[[1.20000e+01 1.30000e+01 1.40000e+01 1.50000e+01 ]
[1.60000e+01 1.70000e+01 1.80000e+01 1.90000e+01 ]
[2.00000e+01 2.10000e+01 2.20000e+01 2.30000e+01 ]]]
Notice that calling reshape() returns a new object B; the shape of the original object A is not changed by the call.
The function Tensor.reshape_ (with an underscore) performs a reshape as well, but instead of returning a new reshaped object, it reshapes the calling instance in place. For example:
In Python:
A = cytnx.arange(24)
print(A)
A.reshape_(2,3,4)
print(A)
In C++:
auto A = cytnx::arange(24);
cout << A << endl;
A.reshape_(2, 3, 4);
cout << A << endl;
Output >>
Total elem: 24
type : Double (Float64)
cytnx device: CPU
Shape : (24)
[0.00000e+00 1.00000e+00 2.00000e+00 3.00000e+00 4.00000e+00 5.00000e+00 6.00000e+00 7.00000e+00 8.00000e+00 9.00000e+00 1.00000e+01 1.10000e+01 1.20000e+01 1.30000e+01 1.40000e+01 1.50000e+01 1.60000e+01 1.70000e+01 1.80000e+01 1.90000e+01 2.00000e+01 2.10000e+01 2.20000e+01 2.30000e+01 ]
Total elem: 24
type : Double (Float64)
cytnx device: CPU
Shape : (2,3,4)
[[[0.00000e+00 1.00000e+00 2.00000e+00 3.00000e+00 ]
[4.00000e+00 5.00000e+00 6.00000e+00 7.00000e+00 ]
[8.00000e+00 9.00000e+00 1.00000e+01 1.10000e+01 ]]
[[1.20000e+01 1.30000e+01 1.40000e+01 1.50000e+01 ]
[1.60000e+01 1.70000e+01 1.80000e+01 1.90000e+01 ]
[2.00000e+01 2.10000e+01 2.20000e+01 2.30000e+01 ]]]
Thus, we see that using the underscore version modifies the original Tensor itself.
Note
In general, all functions in Cytnx that end with an underscore _ are either in-place functions that modify the calling instance, or return a reference to a class member.
Hint
You can use Tensor.shape() to get the shape of a Tensor.
3.2.2. permute¶
Let’s consider the same rank-3 Tensor with shape=(2,3,4) as an example. This time we want to permute the order of the Tensor indices according to (0,1,2)->(1,2,0).
This can be achieved with Tensor.permute:
In Python:
A = cytnx.arange(24).reshape(2,3,4)
B = A.permute(1,2,0)
print(A)
print(B)
In C++:
auto A = cytnx::arange(24).reshape(2, 3, 4);
auto B = A.permute(1, 2, 0);
cout << A << endl;
cout << B << endl;
Output >>
Total elem: 24
type : Double (Float64)
cytnx device: CPU
Shape : (2,3,4)
[[[0.00000e+00 1.00000e+00 2.00000e+00 3.00000e+00 ]
[4.00000e+00 5.00000e+00 6.00000e+00 7.00000e+00 ]
[8.00000e+00 9.00000e+00 1.00000e+01 1.10000e+01 ]]
[[1.20000e+01 1.30000e+01 1.40000e+01 1.50000e+01 ]
[1.60000e+01 1.70000e+01 1.80000e+01 1.90000e+01 ]
[2.00000e+01 2.10000e+01 2.20000e+01 2.30000e+01 ]]]
Total elem: 24
type : Double (Float64)
cytnx device: CPU
Shape : (3,4,2)
[[[0.00000e+00 1.20000e+01 ]
[1.00000e+00 1.30000e+01 ]
[2.00000e+00 1.40000e+01 ]
[3.00000e+00 1.50000e+01 ]]
[[4.00000e+00 1.60000e+01 ]
[5.00000e+00 1.70000e+01 ]
[6.00000e+00 1.80000e+01 ]
[7.00000e+00 1.90000e+01 ]]
[[8.00000e+00 2.00000e+01 ]
[9.00000e+00 2.10000e+01 ]
[1.00000e+01 2.20000e+01 ]
[1.10000e+01 2.30000e+01 ]]]
Note
Just like before, there is an equivalent Tensor.permute_, ending with an underscore, that performs an in-place permute on the calling instance.
In Cytnx, the permute operation does not move the elements in memory immediately; only the meta-data seen by the user is changed. This avoids redundant movement of elements. The same approach is used by numpy.array and torch.tensor.
After a permute, the meta-data no longer corresponds to the memory order. When the meta-data is detached from the actual memory layout in this way, we call the Tensor non-contiguous. We can use Tensor.is_contiguous() to check whether the current Tensor is contiguous.
You can force the Tensor to become contiguous by calling Tensor.contiguous() or Tensor.contiguous_(). The memory is then rearranged according to the shape of the Tensor. Generally you do not have to worry about the contiguous status, as Cytnx handles it for you automatically.
In Python:
A = cytnx.arange(24).reshape(2,3,4)
print(A.is_contiguous())
print(A)

A.permute_(1,0,2)
print(A.is_contiguous())
print(A)

A.contiguous_()
print(A.is_contiguous())
In C++:
auto A = cytnx::arange(24).reshape(2, 3, 4);
cout << A.is_contiguous() << endl;
cout << A << endl;

A.permute_(1, 0, 2);
cout << A.is_contiguous() << endl;
cout << A << endl;

A.contiguous_();
cout << A.is_contiguous() << endl;
Output >>
True
Total elem: 24
type : Double (Float64)
cytnx device: CPU
Shape : (2,3,4)
[[[0.00000e+00 1.00000e+00 2.00000e+00 3.00000e+00 ]
[4.00000e+00 5.00000e+00 6.00000e+00 7.00000e+00 ]
[8.00000e+00 9.00000e+00 1.00000e+01 1.10000e+01 ]]
[[1.20000e+01 1.30000e+01 1.40000e+01 1.50000e+01 ]
[1.60000e+01 1.70000e+01 1.80000e+01 1.90000e+01 ]
[2.00000e+01 2.10000e+01 2.20000e+01 2.30000e+01 ]]]
False
Total elem: 24
type : Double (Float64)
cytnx device: CPU
Shape : (3,2,4)
[[[0.00000e+00 1.00000e+00 2.00000e+00 3.00000e+00 ]
[1.20000e+01 1.30000e+01 1.40000e+01 1.50000e+01 ]]
[[4.00000e+00 5.00000e+00 6.00000e+00 7.00000e+00 ]
[1.60000e+01 1.70000e+01 1.80000e+01 1.90000e+01 ]]
[[8.00000e+00 9.00000e+00 1.00000e+01 1.10000e+01 ]
[2.00000e+01 2.10000e+01 2.20000e+01 2.30000e+01 ]]]
True
Tip
Generally, you don’t have to worry about contiguity issues. You can access elements and call linalg functions as if the contiguous/non-contiguous property did not exist.
In cases where a function does require the user to manually make the Tensor contiguous, a warning will be issued, and you can simply add a Tensor.contiguous() or Tensor.contiguous_() call before the function call.
Making a Tensor contiguous involves copying elements in memory and can slow down an algorithm. Unnecessary calls of Tensor.contiguous() or Tensor.contiguous_() should therefore be avoided.
See Contiguous for more details about the contiguous status.
Note
As mentioned before, Tensor.contiguous_() (with underscore) makes the current instance contiguous, while Tensor.contiguous() returns a new object in contiguous status. If the current instance is already contiguous, calling contiguous() returns the instance itself, and no new object is created.
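Since NumPy follows the same convention, this "no copy if already contiguous" behavior can be sketched with np.ascontiguousarray (assuming NumPy is available; this is an analogy, not the Cytnx API itself):

```python
import numpy as np

A = np.arange(24).reshape(2, 3, 4)   # already contiguous
B = np.ascontiguousarray(A)
print(B is A)                        # the very same object is returned, no copy

C = A.transpose(1, 0, 2)             # non-contiguous view
D = np.ascontiguousarray(C)
print(D is C)                        # a new, rearranged copy is created
```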