dtorch.jtensors
- class jtensors.JTensors
The tensor class wrapping a numpy array to handle autograd.
- grad: JTensors
Accesses the gradients of the tensor after the backward() method has been called on a result of an operation involving this tensor.
- require_grads: bool
If set to True, the gradients of this tensor will be computed when the backward() method is called.
- ndims: int
Number of dimensions of the tensor’s shape.
- shape: Tuple[int]
The shape of the tensor.
- dtype: numpy.dtype
The type of the tensor’s elements.
- itemsize: int
Size of each element in bytes.
- size: int
Number of elements in the tensor.
- stride: int
The current stride of the tensor, in elements (multiply by itemsize to get the stride in bytes).
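Since JTensors wraps a numpy array, these attributes mirror the corresponding numpy ones. A minimal sketch of the same properties, shown directly on a plain ndarray (plain numpy here, not dtorch):

```python
import numpy as np

# A 2x3 array of 64-bit floats.
a = np.zeros((2, 3), dtype=np.float64)

print(a.ndim)      # number of dimensions: 2
print(a.shape)     # (2, 3)
print(a.dtype)     # float64
print(a.itemsize)  # 8 bytes per float64 element
print(a.size)      # 6 elements in total
print(a.strides)   # byte strides (24, 8): element strides (3, 1) * itemsize
```

Note that numpy reports strides in bytes, whereas the description above suggests JTensors reports them in elements.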
- backward(base_tensor, forced=False) → None
- Parameters:
base_tensor (JTensors) – The base tensor the gradients will be accumulated on top of.
forced (bool) – Only useful internally; use with caution.
Backpropagates through the network to compute the gradients of the linked tensors. The
require_grads
attribute must be set to True to backpropagate.
- numpy() → numpy.ndarray
Returns the tensor’s data as a numpy.ndarray.
- reshape(*shape) → JTensors
- Parameters:
shape (int) – The dimensions of the new tensor. Ex:
tensor.reshape(1, 2, 3)
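Since the tensor wraps a numpy array, reshape follows the usual numpy rule: the total number of elements must be unchanged. A sketch on a plain ndarray (plain numpy, not dtorch):

```python
import numpy as np

# Six elements can be reshaped into any shape whose sizes multiply to 6.
a = np.arange(6)         # shape (6,)
b = a.reshape(1, 2, 3)   # shape (1, 2, 3): 1 * 2 * 3 == 6
print(b.shape)           # (1, 2, 3)
```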
- sum(axis: Tuple[int] | None = None, keepdims: bool = False) → JTensors
- Parameters:
axis (Optional[Tuple[int]]) – The axes to sum over. If not provided, all axes are summed.
keepdims (bool) – Keep the summed dimensions as size-1 dimensions.
Returns a tensor containing the sum over the given axes, or over all axes if none are provided.
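The axis and keepdims parameters behave like their numpy counterparts. A sketch of the three common cases on a plain ndarray (plain numpy, not dtorch):

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])

print(a.sum())                          # 21: sum over all axes
print(a.sum(axis=(0,)))                 # [5 7 9]: sum down the columns
print(a.sum(axis=(1,), keepdims=True))  # [[6] [15]]: shape (2, 1), axis 1 kept as size 1
```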
- rearrange(pattern: str) → JTensors
- Parameters:
pattern (str) – The pattern to rearrange with.
This method is similar to the einops rearrange method. Ex:
>> import dtorch.jtensors as dt
>> u = dt.tensor([[4, 2, 4], [5, 2, 6]])
>> dt.rearrange(u, 'ab->ba')
jtensor([[4 5]
         [2 2]
         [4 6]])
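In einops notation, the pattern 'ab->ba' swaps the two axes, which for a 2-D array is a transpose. A sketch of the equivalent operation on a plain ndarray (plain numpy, not dtorch):

```python
import numpy as np

u = np.array([[4, 2, 4],
              [5, 2, 6]])

# 'ab->ba' on a 2-D array is the same as swapping its two axes.
v = u.transpose()
print(v)
# [[4 5]
#  [2 2]
#  [4 6]]
```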
- detach() → JTensors
Returns a copy of the tensor that is detached from the previous autograd graph. Useful for truncated backpropagation through time (TBPTT).