torch.nn

Parameters
class torch.nn.Parameter(data=None, requires_grad=True)[source]

A kind of Tensor that is to be considered a module parameter.

Parameters are Tensor subclasses that have a very special property when used with Modules: when they are assigned as Module attributes, they are automatically added to the list of the module's parameters and will appear, e.g., in the parameters() iterator. Assigning a plain Tensor has no such effect. This is because one might want to cache some temporary state, such as the last hidden state of an RNN, in the model. If there were no such class as Parameter, these temporaries would get registered too.

Parameters
data (Tensor) – parameter tensor.
requires_grad (bool, optional) – whether the parameter requires gradient. See Excluding subgraphs from backward for more details. Default: True
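For illustration, a minimal sketch of the registration behavior described above (the module and attribute names are made up for this example):

import torch
import torch.nn as nn

class Scale(nn.Module):
    def __init__(self):
        super(Scale, self).__init__()
        # Assigned Parameter: registered automatically
        self.weight = nn.Parameter(torch.ones(3))
        # Plain Tensor attribute: cached state, NOT registered as a parameter
        self.cache = torch.zeros(3)

m = Scale()
print([name for name, _ in m.named_parameters()])  # ['weight']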
Containers

Module
class torch.nn.Module[source]

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
Submodules assigned in this way will be registered, and their parameters will be converted too when you call to(), etc.

Variables
training (bool) – Boolean representing whether this module is in training or evaluation mode.
add_module(name: str, module: Optional[torch.nn.modules.module.Module]) → None[source]

Adds a child module to the current module.
The module can be accessed as an attribute using the given name.
Parameters
name (string) – name of the child module. The child module can be accessed from this module using the given name.
module (Module) – child module to be added to the module.
apply(fn: Callable[[torch.nn.modules.module.Module], None]) → T[source]

Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init).

Parameters
fn (Module -> None) – function to be applied to each submodule

Returns
self

Return type
Module
Example:
>>> @torch.no_grad()
>>> def init_weights(m):
>>>     print(m)
>>>     if type(m) == nn.Linear:
>>>         m.weight.fill_(1.0)
>>>         print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
bfloat16() → T[source]

Casts all floating point parameters and buffers to bfloat16 datatype.

Returns
self

Return type
Module
buffers(recurse: bool = True) → Iterator[torch.Tensor][source]

Returns an iterator over module buffers.

Parameters
recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.

Yields
torch.Tensor – module buffer
Example:
>>> for buf in model.buffers():
>>>     print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
children() → Iterator[torch.nn.modules.module.Module][source]

Returns an iterator over immediate children modules.

Yields
Module – a child module
cuda(device: Optional[Union[int, torch.device]] = None) → T[source]

Moves all model parameters and buffers to the GPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on GPU while being optimized.
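A short sketch of the ordering this implies (assuming a CUDA device is available; the model here is arbitrary):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)
if torch.cuda.is_available():
    model.cuda()  # move parameters/buffers first ...
# ... then construct the optimizer over the (now GPU) parameters
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)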
double() → T[source]

Casts all floating point parameters and buffers to double datatype.

Returns
self

Return type
Module
dump_patches: bool = False

This allows better BC support for load_state_dict(). In state_dict(), the version number will be saved as in the attribute _metadata of the returned state dict, and thus pickled. _metadata is a dictionary with keys that follow the naming convention of state dict. See _load_from_state_dict on how to use this information in loading.

If new parameters/buffers are added/removed from a module, this number shall be bumped, and the module's _load_from_state_dict method can compare the version number and make appropriate changes if the state dict is from before the change.
eval() → T[source]

Sets the module in evaluation mode.

This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

This is equivalent to self.train(False).

Returns
self

Return type
Module
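A brief sketch of the mode switch, using Dropout as an example of an affected module:

import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.5))
model.eval()           # Dropout becomes a no-op
print(model.training)  # False
model.train()          # back to training mode
print(model.training)  # True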
extra_repr() → str[source]

Set the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
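A minimal sketch of such an override (the class and its field are hypothetical):

import torch.nn as nn

class Noise(nn.Module):
    def __init__(self, std=0.1):
        super(Noise, self).__init__()
        self.std = std

    def extra_repr(self):
        # Shown inside the parentheses when the module is printed
        return 'std={}'.format(self.std)

print(Noise())  # Noise(std=0.1)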
float() → T[source]

Casts all floating point parameters and buffers to float datatype.

Returns
self

Return type
Module
forward(*input: Any) → None

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
half() → T[source]

Casts all floating point parameters and buffers to half datatype.

Returns
self

Return type
Module
load_state_dict(state_dict: OrderedDict[str, Tensor], strict: bool = True)[source]

Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module's state_dict() function.

Parameters
state_dict (dict) – a dict containing parameters and persistent buffers.
strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module's state_dict() function. Default: True

Returns
missing_keys is a list of str containing the missing keys
unexpected_keys is a list of str containing the unexpected keys

Return type
NamedTuple with missing_keys and unexpected_keys fields
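As a sketch of the return value with strict=False (the mismatched modules here are contrived):

import torch.nn as nn

src = nn.Linear(4, 4)
dst = nn.Sequential(nn.Linear(4, 4))

# dst expects keys '0.weight'/'0.bias'; src provides 'weight'/'bias',
# so strict=False reports the mismatch instead of raising an error.
result = dst.load_state_dict(src.state_dict(), strict=False)
print(result.missing_keys)     # ['0.weight', '0.bias']
print(result.unexpected_keys)  # ['weight', 'bias']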
modules() → Iterator[torch.nn.modules.module.Module][source]

Returns an iterator over all modules in the network.

Yields
Module – a module in the network

Note

Duplicate modules are returned only once. In the following example, l will be returned only once.

Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
        print(idx, '->', m)

0 -> Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
named_buffers(prefix: str = '', recurse: bool = True) → Iterator[Tuple[str, torch.Tensor]][source]

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

Parameters
prefix (str) – prefix to prepend to all buffer names.
recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.

Yields
(string, torch.Tensor) – Tuple containing the name and buffer
Example:
>>> for name, buf in self.named_buffers():
>>>     if name in ['running_var']:
>>>         print(buf.size())
named_children() → Iterator[Tuple[str, torch.nn.modules.module.Module]][source]

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

Yields
(string, Module) – Tuple containing a name and child module
Example:
>>> for name, module in model.named_children():
>>>     if name in ['conv4', 'conv5']:
>>>         print(module)
named_modules(memo: Optional[Set[torch.nn.modules.module.Module]] = None, prefix: str = '')[source]

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

Yields
(string, Module) – Tuple of name and module

Note

Duplicate modules are returned only once. In the following example, l will be returned only once.

Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
        print(idx, '->', m)

0 -> ('', Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
named_parameters(prefix: str = '', recurse: bool = True) → Iterator[Tuple[str, torch.nn.parameter.Parameter]][source]

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

Parameters
prefix (str) – prefix to prepend to all parameter names.
recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

Yields
(string, Parameter) – Tuple containing the name and parameter
Example:
>>> for name, param in self.named_parameters():
>>>     if name in ['bias']:
>>>         print(param.size())
parameters(recurse: bool = True) → Iterator[torch.nn.parameter.Parameter][source]

Returns an iterator over module parameters.

This is typically passed to an optimizer.

Parameters
recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

Yields
Parameter – module parameter
Example:
>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
register_backward_hook(hook: Callable[[torch.nn.modules.module.Module, Union[Tuple[torch.Tensor, ...], torch.Tensor], Union[Tuple[torch.Tensor, ...], torch.Tensor]], Union[None, torch.Tensor]]) → torch.utils.hooks.RemovableHandle[source]

Registers a backward hook on the module.

This function is deprecated in favor of nn.Module.register_full_backward_hook() and the behavior of this function will change in future versions.

Returns
a handle that can be used to remove the added hook by calling handle.remove()

Return type
torch.utils.hooks.RemovableHandle
register_buffer(name: str, tensor: Optional[torch.Tensor], persistent: bool = True) → None[source]

Adds a buffer to the module.

This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm's running_mean is not a parameter, but is part of the module's state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module's state_dict.

Buffers can be accessed as attributes using given names.

Parameters
name (string) – name of the buffer. The buffer can be accessed from this module using the given name.
tensor (Tensor) – buffer to be registered.
persistent (bool) – whether the buffer is part of this module's state_dict.
Example:
>>> self.register_buffer('running_mean', torch.zeros(num_features))
register_forward_hook(hook: Callable[..., None]) → torch.utils.hooks.RemovableHandle[source]

Registers a forward hook on the module.

The hook will be called every time after forward() has computed an output. It should have the following signature:

hook(module, input, output) -> None or modified output

The input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks and only to the forward. The hook can modify the output. It can also modify the input in-place, but that will have no effect on forward, since the hook is called after forward() has run.

Returns
a handle that can be used to remove the added hook by calling handle.remove()

Return type
torch.utils.hooks.RemovableHandle
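A small sketch of registering and later removing such a hook (the hook body is illustrative):

import torch
import torch.nn as nn

def log_shapes(module, input, output):
    # input is the tuple of positional args; output is forward()'s result
    print(type(module).__name__, input[0].shape, '->', output.shape)

m = nn.Linear(3, 5)
handle = m.register_forward_hook(log_shapes)
m(torch.randn(2, 3))  # prints: Linear torch.Size([2, 3]) -> torch.Size([2, 5])
handle.remove()       # the hook no longer fires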
register_forward_pre_hook(hook: Callable[..., None]) → torch.utils.hooks.RemovableHandle[source]

Registers a forward pre-hook on the module.

The hook will be called every time before forward() is invoked. It should have the following signature:

hook(module, input) -> None or modified input

The input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks and only to the forward. The hook can modify the input. The user can return either a tuple or a single modified value from the hook. We will wrap the value into a tuple if a single value is returned (unless that value is already a tuple).

Returns
a handle that can be used to remove the added hook by calling handle.remove()

Return type
torch.utils.hooks.RemovableHandle
register_full_backward_hook(hook: Callable[[torch.nn.modules.module.Module, Union[Tuple[torch.Tensor, ...], torch.Tensor], Union[Tuple[torch.Tensor, ...], torch.Tensor]], Union[None, torch.Tensor]]) → torch.utils.hooks.RemovableHandle[source]

Registers a backward hook on the module.

The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature:

hook(module, grad_input, grad_output) -> tuple(Tensor) or None

The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments; all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments.

Warning

Modifying inputs or outputs in-place is not allowed when using backward hooks and will raise an error.

Returns
a handle that can be used to remove the added hook by calling handle.remove()

Return type
torch.utils.hooks.RemovableHandle
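A hedged sketch of returning a replacement grad_input (halving the gradient is an arbitrary choice for illustration):

import torch
import torch.nn as nn

def scale_grad(module, grad_input, grad_output):
    # Return a new tuple with the same structure as grad_input
    return tuple(g * 0.5 if g is not None else None for g in grad_input)

m = nn.Linear(3, 3)
handle = m.register_full_backward_hook(scale_grad)
x = torch.randn(2, 3, requires_grad=True)
m(x).sum().backward()  # the gradient flowing back to x is halved
handle.remove()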
register_parameter(name: str, param: Optional[torch.nn.parameter.Parameter]) → None[source]

Adds a parameter to the module.

The parameter can be accessed as an attribute using the given name.

Parameters
name (string) – name of the parameter. The parameter can be accessed from this module using the given name.
param (Parameter) – parameter to be added to the module.
requires_grad_(requires_grad: bool = True) → T[source]

Change whether autograd should record operations on parameters in this module.

This method sets the parameters' requires_grad attributes in-place.

This method is helpful for freezing part of the module for finetuning, or for training parts of a model individually (e.g., GAN training).
state_dict(destination: T_destination, prefix: str = '', keep_vars: bool = False) → T_destination[source]
state_dict(prefix: str = '', keep_vars: bool = False) → Dict[str, torch.Tensor]

Returns a dictionary containing a whole state of the module.

Both parameters and persistent buffers (e.g. running averages) are included. Keys are the corresponding parameter and buffer names.

Returns
a dictionary containing a whole state of the module

Return type
dict
Example:
>>> module.state_dict().keys()
['bias', 'weight']
to(device: Optional[Union[int, torch.device]] = ..., dtype: Optional[Union[torch.dtype, str]] = ..., non_blocking: bool = ...) → T[source]
to(dtype: Union[torch.dtype, str], non_blocking: bool = ...) → T
to(tensor: torch.Tensor, non_blocking: bool = ...) → T

Moves and/or casts the parameters and buffers.

This can be called in any of the forms shown above. Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.

See below for examples.
Note

This method modifies the module in-place.

Parameters
device (torch.device) – the desired device of the parameters and buffers in this module
dtype (torch.dtype) – the desired floating point or complex dtype of the parameters and buffers in this module
tensor (torch.Tensor) – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module
memory_format (torch.memory_format) – the desired memory format for 4D parameters and buffers in this module (keyword only argument)

Returns
self

Return type
Module
Examples:
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)

>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
train(mode: bool = True) → T[source]

Sets the module in training mode.

This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

Parameters
mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.

Returns
self

Return type
Module
xpu(device: Optional[Union[int, torch.device]] = None) → T[source]

Moves all model parameters and buffers to the XPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on XPU while being optimized.
zero_grad(set_to_none: bool = False) → None[source]

Sets gradients of all model parameters to zero. See the similar function under torch.optim.Optimizer for more context.

Parameters
set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details.
Sequential

class torch.nn.Sequential(*args: torch.nn.modules.module.Module)[source]
class torch.nn.Sequential(arg: OrderedDict[str, Module])

A sequential container. Modules will be added to it in the order they are passed in the constructor. Alternatively, an ordered dict of modules can also be passed in.

To make it easier to understand, here is a small example:
# Example of using Sequential
model = nn.Sequential(
    nn.Conv2d(1, 20, 5),
    nn.ReLU(),
    nn.Conv2d(20, 64, 5),
    nn.ReLU()
)

# Example of using Sequential with OrderedDict
model = nn.Sequential(OrderedDict([
    ('conv1', nn.Conv2d(1, 20, 5)),
    ('relu1', nn.ReLU()),
    ('conv2', nn.Conv2d(20, 64, 5)),
    ('relu2', nn.ReLU())
]))
ModuleList

class torch.nn.ModuleList(modules: Optional[Iterable[torch.nn.modules.module.Module]] = None)[source]

Holds submodules in a list.

ModuleList can be indexed like a regular Python list, but modules it contains are properly registered, and will be visible to all Module methods.

Parameters
modules (iterable, optional) – an iterable of modules to add
Example:
class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.linears = nn.ModuleList([nn.Linear(10, 10) for i in range(10)])

    def forward(self, x):
        # ModuleList can act as an iterable, or be indexed using ints
        for i, l in enumerate(self.linears):
            x = self.linears[i // 2](x) + l(x)
        return x
append(module: torch.nn.modules.module.Module) → torch.nn.modules.container.ModuleList[source]

Appends a given module to the end of the list.

Parameters
module (nn.Module) – module to append
ModuleDict

class torch.nn.ModuleDict(modules: Optional[Mapping[str, torch.nn.modules.module.Module]] = None)[source]

Holds submodules in a dictionary.

ModuleDict can be indexed like a regular Python dictionary, but modules it contains are properly registered, and will be visible to all Module methods.

ModuleDict is an ordered dictionary that respects

the order of insertion, and
in update(), the order of the merged OrderedDict, dict (starting from Python 3.6) or another ModuleDict (the argument to update()).

Note that update() with other unordered mapping types (e.g., Python's plain dict before Python version 3.6) does not preserve the order of the merged mapping.

Parameters
modules (iterable, optional) – a mapping (dictionary) of (string: module) or an iterable of key-value pairs of type (string, module)
Example:
class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.choices = nn.ModuleDict({
            'conv': nn.Conv2d(10, 10, 3),
            'pool': nn.MaxPool2d(3)
        })
        self.activations = nn.ModuleDict([
            ['lrelu', nn.LeakyReLU()],
            ['prelu', nn.PReLU()]
        ])

    def forward(self, x, choice, act):
        x = self.choices[choice](x)
        x = self.activations[act](x)
        return x
items() → Iterable[Tuple[str, torch.nn.modules.module.Module]][source]

Return an iterable of the ModuleDict key/value pairs.
pop(key: str) → torch.nn.modules.module.Module[source]

Remove key from the ModuleDict and return its module.

Parameters
key (string) – key to pop from the ModuleDict
update(modules: Mapping[str, torch.nn.modules.module.Module]) → None[source]

Update the ModuleDict with the key-value pairs from a mapping or an iterable, overwriting existing keys.

Note

If modules is an OrderedDict, a ModuleDict, or an iterable of key-value pairs, the order of new elements in it is preserved.
ParameterList

class torch.nn.ParameterList(parameters: Optional[Iterable[Parameter]] = None)[source]

Holds parameters in a list.

ParameterList can be indexed like a regular Python list, but parameters it contains are properly registered, and will be visible to all Module methods.

Parameters
parameters (iterable, optional) – an iterable of Parameter to add
Example:
class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.params = nn.ParameterList([nn.Parameter(torch.randn(10, 10)) for i in range(10)])

    def forward(self, x):
        # ParameterList can act as an iterable, or be indexed using ints
        for i, p in enumerate(self.params):
            x = self.params[i // 2].mm(x) + p.mm(x)
        return x
append(parameter: Parameter) → ParameterList[source]

Appends a given parameter to the end of the list.

Parameters
parameter (nn.Parameter) – parameter to append
extend(parameters: Iterable[Parameter]) → ParameterList[source]

Appends parameters from a Python iterable to the end of the list.

Parameters
parameters (iterable) – iterable of parameters to append
ParameterDict

class torch.nn.ParameterDict(parameters: Optional[Mapping[str, Parameter]] = None)[source]

Holds parameters in a dictionary.

ParameterDict can be indexed like a regular Python dictionary, but parameters it contains are properly registered, and will be visible to all Module methods.

ParameterDict is an ordered dictionary that respects

the order of insertion, and
in update(), the order of the merged OrderedDict or another ParameterDict (the argument to update()).

Note that update() with other unordered mapping types (e.g., Python's plain dict) does not preserve the order of the merged mapping.

Parameters
parameters (iterable, optional) – a mapping (dictionary) of (string : Parameter) or an iterable of key-value pairs of type (string, Parameter)
Example:
class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.params = nn.ParameterDict({
            'left': nn.Parameter(torch.randn(5, 10)),
            'right': nn.Parameter(torch.randn(5, 10))
        })

    def forward(self, x, choice):
        x = self.params[choice].mm(x)
        return x
items() → Iterable[Tuple[str, Parameter]][source]

Return an iterable of the ParameterDict key/value pairs.
pop(key: str) → Parameter[source]

Remove key from the ParameterDict and return its parameter.

Parameters
key (string) – key to pop from the ParameterDict
update(parameters: Mapping[str, Parameter]) → None[source]

Update the ParameterDict with the key-value pairs from a mapping or an iterable, overwriting existing keys.

Note

If parameters is an OrderedDict, a ParameterDict, or an iterable of key-value pairs, the order of new elements in it is preserved.
Convolution layers

Conv1d

class torch.nn.Conv1d(in_channels: int, out_channels: int, kernel_size: Union[T, Tuple[T]], stride: Union[T, Tuple[T]] = 1, padding: Union[T, Tuple[T]] = 0, dilation: Union[T, Tuple[T]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros')[source]

Applies a 1D convolution over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C_{\text{in}}, L)\) and output \((N, C_{\text{out}}, L_{\text{out}})\) can be precisely described as:
\[\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{\text{in}} - 1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)\]

where \(\star\) is the valid cross-correlation operator, \(N\) is a batch size, \(C\) denotes a number of channels, and \(L\) is the length of the signal sequence.
This module supports TensorFloat32.
stride controls the stride for the cross-correlation, a single number or a one-element tuple.

padding controls the amount of implicit padding on both sides for padding number of points.

dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.

groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated.
At groups=in_channels, each input channel is convolved with its own set of filters (of size \(\frac{\text{out\_channels}}{\text{in\_channels}}\)).
Note
When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also known as a “depthwise convolution”.
In other words, for an input of size \((N, C_{in}, L_{in})\), a depthwise convolution with a depthwise multiplier K can be performed with the arguments \((C_\text{in}=C_\text{in}, C_\text{out}=C_\text{in} \times \text{K}, ..., \text{groups}=C_\text{in})\).
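As a concrete sketch of the depthwise case described in this note (the channel counts are arbitrary):

import torch
import torch.nn as nn

# groups == in_channels and out_channels == K * in_channels with K = 2
m = nn.Conv1d(in_channels=4, out_channels=8, kernel_size=3, groups=4)
x = torch.randn(1, 4, 10)
print(m(x).shape)  # torch.Size([1, 8, 8]); each input channel gets its own 2 filters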
Note
In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information.

Parameters
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
padding_mode (string, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True

Shape:
Input: \((N, C_{in}, L_{in})\)
Output: \((N, C_{out}, L_{out})\) where
\[L_{out} = \left\lfloor\frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1}{\text{stride}} + 1\right\rfloor \]
Variables
~Conv1d.weight (Tensor) – the learnable weights of the module of shape \((\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}}, \text{kernel\_size})\). The values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{in} * \text{kernel\_size}}\)
~Conv1d.bias (Tensor) – the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{in} * \text{kernel\_size}}\)
Examples:
>>> m = nn.Conv1d(16, 33, 3, stride=2)
>>> input = torch.randn(20, 16, 50)
>>> output = m(input)
Conv2d

class torch.nn.Conv2d(in_channels: int, out_channels: int, kernel_size: Union[T, Tuple[T, T]], stride: Union[T, Tuple[T, T]] = 1, padding: Union[T, Tuple[T, T]] = 0, dilation: Union[T, Tuple[T, T]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros')[source]

Applies a 2D convolution over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C_{\text{in}}, H, W)\) and output \((N, C_{\text{out}}, H_{\text{out}}, W_{\text{out}})\) can be precisely described as:
\[\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{\text{in}} - 1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)\]

where \(\star\) is the valid 2D cross-correlation operator, \(N\) is a batch size, \(C\) denotes a number of channels, \(H\) is the height of the input planes in pixels, and \(W\) is the width in pixels.
This module supports TensorFloat32.
stride controls the stride for the cross-correlation, a single number or a tuple.

padding controls the amount of implicit padding on both sides for padding number of points for each dimension.

dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.

groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated.
At groups=in_channels, each input channel is convolved with its own set of filters (of size \(\frac{\text{out\_channels}}{\text{in\_channels}}\)).
The parameters kernel_size, stride, padding, dilation can either be:

a single int – in which case the same value is used for the height and width dimension
a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
Note
When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also known as a “depthwise convolution”.
In other words, for an input of size \((N, C_{in}, H_{in}, W_{in})\), a depthwise convolution with a depthwise multiplier K can be performed with the arguments \((C_\text{in}=C_\text{in}, C_\text{out}=C_\text{in} \times \text{K}, ..., \text{groups}=C_\text{in})\).
Note
In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information.

Parameters
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
padding_mode (string, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True

Shape:
Input: \((N, C_{in}, H_{in}, W_{in})\)
Output: \((N, C_{out}, H_{out}, W_{out})\) where
\[H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor\]

\[W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor\]
Variables
~Conv2d.weight (Tensor) – the learnable weights of the module of shape \((\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}},\) \(\text{kernel\_size[0]}, \text{kernel\_size[1]})\). The values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{in} * \prod_{i=0}^{1}\text{kernel\_size}[i]}\)
~Conv2d.bias (Tensor) – the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{in} * \prod_{i=0}^{1}\text{kernel\_size}[i]}\)
Examples:

>>> # With square kernels and equal stride
>>> m = nn.Conv2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> # non-square kernels and unequal stride and with padding and dilation
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
>>> input = torch.randn(20, 16, 50, 100)
>>> output = m(input)
Conv3d

class torch.nn.Conv3d(in_channels: int, out_channels: int, kernel_size: Union[T, Tuple[T, T, T]], stride: Union[T, Tuple[T, T, T]] = 1, padding: Union[T, Tuple[T, T, T]] = 0, dilation: Union[T, Tuple[T, T, T]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros')[source]

Applies a 3D convolution over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C_{in}, D, H, W)\) and output \((N, C_{out}, D_{out}, H_{out}, W_{out})\) can be precisely described as:
\[out(N_i, C_{out_j}) = bias(C_{out_j}) + \sum_{k = 0}^{C_{in} - 1} weight(C_{out_j}, k) \star input(N_i, k)\]

where \(\star\) is the valid 3D cross-correlation operator.
This module supports TensorFloat32.
stride controls the stride for the cross-correlation.

padding controls the amount of implicit padding on both sides for padding number of points for each dimension.

dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.

groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated.
At groups=in_channels, each input channel is convolved with its own set of filters (of size \(\frac{\text{out\_channels}}{\text{in\_channels}}\)).
The parameters kernel_size, stride, padding, dilation can either be:

a single int – in which case the same value is used for the depth, height and width dimension
a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension
Note
When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also known as a “depthwise convolution”.
In other words, for an input of size \((N, C_{in}, D_{in}, H_{in}, W_{in})\), a depthwise convolution with a depthwise multiplier K can be performed with the arguments \((C_\text{in}=C_\text{in}, C_\text{out}=C_\text{in} \times \text{K}, ..., \text{groups}=C_\text{in})\).
Note
In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information.

Parameters
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Zero-padding added to all three sides of the input. Default: 0
padding_mode (string, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True

Shape:
Input: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)
Output: \((N, C_{out}, D_{out}, H_{out}, W_{out})\) where
\[D_{out} = \left\lfloor\frac{D_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor\]

\[H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor\]

\[W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[2] - \text{dilation}[2] \times (\text{kernel\_size}[2] - 1) - 1}{\text{stride}[2]} + 1\right\rfloor\]
Variables
~Conv3d.weight (Tensor) – the learnable weights of the module of shape \((\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}},\) \(\text{kernel\_size[0]}, \text{kernel\_size[1]}, \text{kernel\_size[2]})\). The values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{in} * \prod_{i=0}^{2}\text{kernel\_size}[i]}\)
~Conv3d.bias (Tensor) – the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{in} * \prod_{i=0}^{2}\text{kernel\_size}[i]}\)
Examples:
>>> # With square kernels and equal stride
>>> m = nn.Conv3d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(4, 2, 0))
>>> input = torch.randn(20, 16, 10, 50, 100)
>>> output = m(input)
ConvTranspose1d

class torch.nn.ConvTranspose1d(in_channels: int, out_channels: int, kernel_size: Union[T, Tuple[T]], stride: Union[T, Tuple[T]] = 1, padding: Union[T, Tuple[T]] = 0, output_padding: Union[T, Tuple[T]] = 0, groups: int = 1, bias: bool = True, dilation: Union[T, Tuple[T]] = 1, padding_mode: str = 'zeros')[source]

Applies a 1D transposed convolution operator over an input image composed of several input planes.
This module can be seen as the gradient of Conv1d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation).
This module supports TensorFloat32.
stride controls the stride for the cross-correlation.

padding controls the amount of implicit zero padding on both sides for dilation * (kernel_size - 1) - padding number of points. See note below for details.

output_padding controls the additional size added to one side of the output shape. See note below for details.

dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.

groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated.
At groups=in_channels, each input channel is convolved with its own set of filters (of size \(\frac{\text{out\_channels}}{\text{in\_channels}}\)).
Note
The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sides of the input. This is set so that when a Conv1d and a ConvTranspose1d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, Conv1d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find the output shape, but does not actually add zero-padding to the output.

Note
In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. Please see the notes on Reproducibility for background.

Parameters
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of the input. Default: 0
output_padding (int or tuple, optional) – Additional size added to one side of the output shape. Default: 0
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

Shape:
Input: \((N, C_{in}, L_{in})\)
Output: \((N, C_{out}, L_{out})\) where
\[L_{out} = (L_{in} - 1) \times \text{stride} - 2 \times \text{padding} + \text{dilation} \times (\text{kernel\_size} - 1) + \text{output\_padding} + 1 \]
Variables
~ConvTranspose1d.weight (Tensor) – the learnable weights of the module of shape \((\text{in\_channels}, \frac{\text{out\_channels}}{\text{groups}},\) \(\text{kernel\_size})\). The values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{out} * \text{kernel\_size}}\)
~ConvTranspose1d.bias (Tensor) – the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{out} * \text{kernel\_size}}\)
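Example (a sketch of the shape inversion discussed in the note above; layer sizes are arbitrary):

import torch
import torch.nn as nn

down = nn.Conv1d(16, 16, 3, stride=2, padding=1)
up = nn.ConvTranspose1d(16, 16, 3, stride=2, padding=1)

x = torch.randn(1, 16, 12)
h = down(x)
print(h.size())  # torch.Size([1, 16, 6])
# output_size resolves the stride > 1 ambiguity (internally via output_padding)
y = up(h, output_size=x.size())
print(y.size())  # torch.Size([1, 16, 12])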
ConvTranspose2d

class torch.nn.ConvTranspose2d(in_channels: int, out_channels: int, kernel_size: Union[T, Tuple[T, T]], stride: Union[T, Tuple[T, T]] = 1, padding: Union[T, Tuple[T, T]] = 0, output_padding: Union[T, Tuple[T, T]] = 0, groups: int = 1, bias: bool = True, dilation: int = 1, padding_mode: str = 'zeros')[source]

Applies a 2D transposed convolution operator over an input image composed of several input planes.
This module can be seen as the gradient of Conv2d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation).
This module supports TensorFloat32.
stride controls the stride for the cross-correlation.

padding controls the amount of implicit zero padding on both sides for dilation * (kernel_size - 1) - padding number of points. See note below for details.

output_padding controls the additional size added to one side of the output shape. See note below for details.

dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.

groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated.
At groups=in_channels, each input channel is convolved with its own set of filters (of size \(\frac{\text{out\_channels}}{\text{in\_channels}}\)).
The parameters kernel_size, stride, padding, output_padding can either be:

a single int – in which case the same value is used for the height and width dimensions
a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
Note
The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sides of the input. This is set so that when a Conv2d and a ConvTranspose2d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, Conv2d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find the output shape, but does not actually add zero-padding to the output.

Note
In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information.

Parameters
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Default: 0
output_padding (int or tuple, optional) – Additional size added to one side of each dimension in the output shape. Default: 0
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

Shape:
Input: \((N, C_{in}, H_{in}, W_{in})\)
Output: \((N, C_{out}, H_{out}, W_{out})\) where
\[H_{out} = (H_{in} - 1) \times \text{stride}[0] - 2 \times \text{padding}[0] + \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) + \text{output\_padding}[0] + 1\]

\[W_{out} = (W_{in} - 1) \times \text{stride}[1] - 2 \times \text{padding}[1] + \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) + \text{output\_padding}[1] + 1\]
Variables
~ConvTranspose2d.weight (Tensor) – the learnable weights of the module of shape \((\text{in\_channels}, \frac{\text{out\_channels}}{\text{groups}},\) \(\text{kernel\_size[0]}, \text{kernel\_size[1]})\). The values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{out} * \prod_{i=0}^{1}\text{kernel\_size}[i]}\)
~ConvTranspose2d.bias (Tensor) – the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{out} * \prod_{i=0}^{1}\text{kernel\_size}[i]}\)
Examples:
>>> # With square kernels and equal stride
>>> m = nn.ConvTranspose2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.ConvTranspose2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> input = torch.randn(20, 16, 50, 100)
>>> output = m(input)
>>> # exact output size can be also specified as an argument
>>> input = torch.randn(1, 16, 12, 12)
>>> downsample = nn.Conv2d(16, 16, 3, stride=2, padding=1)
>>> upsample = nn.ConvTranspose2d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(input)
>>> h.size()
torch.Size([1, 16, 6, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
torch.Size([1, 16, 12, 12])
ConvTranspose3d

class torch.nn.ConvTranspose3d(in_channels: int, out_channels: int, kernel_size: Union[T, Tuple[T, T, T]], stride: Union[T, Tuple[T, T, T]] = 1, padding: Union[T, Tuple[T, T, T]] = 0, output_padding: Union[T, Tuple[T, T, T]] = 0, groups: int = 1, bias: bool = True, dilation: Union[T, Tuple[T, T, T]] = 1, padding_mode: str = 'zeros')[source]

Applies a 3D transposed convolution operator over an input image composed of several input planes. The transposed convolution operator multiplies each input value element-wise by a learnable kernel, and sums over the outputs from all input feature planes.
This module can be seen as the gradient of Conv3d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation).
This module supports TensorFloat32.
stride controls the stride for the cross-correlation.

padding controls the amount of implicit zero padding on both sides for dilation * (kernel_size - 1) - padding number of points. See note below for details.

output_padding controls the additional size added to one side of the output shape. See note below for details.

dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.

groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated.
At groups=in_channels, each input channel is convolved with its own set of filters (of size \(\frac{\text{out\_channels}}{\text{in\_channels}}\)).
The parameters kernel_size, stride, padding, output_padding can either be:

a single int – in which case the same value is used for the depth, height and width dimensions
a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension
Note
The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sides of the input. This is set so that when a Conv3d and a ConvTranspose3d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, Conv3d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find the output shape, but does not actually add zero-padding to the output.

Note
In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information.

Parameters
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Default: 0
output_padding (int or tuple, optional) – Additional size added to one side of each dimension in the output shape. Default: 0
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

Shape:
Input: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)
Output: \((N, C_{out}, D_{out}, H_{out}, W_{out})\) where
\[D_{out} = (D_{in} - 1) \times \text{stride}[0] - 2 \times \text{padding}[0] + \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) + \text{output\_padding}[0] + 1\]

\[H_{out} = (H_{in} - 1) \times \text{stride}[1] - 2 \times \text{padding}[1] + \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) + \text{output\_padding}[1] + 1\]

\[W_{out} = (W_{in} - 1) \times \text{stride}[2] - 2 \times \text{padding}[2] + \text{dilation}[2] \times (\text{kernel\_size}[2] - 1) + \text{output\_padding}[2] + 1\]
Variables
~ConvTranspose3d.weight (Tensor) – the learnable weights of the module of shape \((\text{in\_channels}, \frac{\text{out\_channels}}{\text{groups}},\) \(\text{kernel\_size[0]}, \text{kernel\_size[1]}, \text{kernel\_size[2]})\). The values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{out} * \prod_{i=0}^{2}\text{kernel\_size}[i]}\)
~ConvTranspose3d.bias (Tensor) – the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{out} * \prod_{i=0}^{2}\text{kernel\_size}[i]}\)
Examples:
>>> # With square kernels and equal stride
>>> m = nn.ConvTranspose3d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.ConvTranspose3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(0, 4, 2))
>>> input = torch.randn(20, 16, 10, 50, 100)
>>> output = m(input)
Unfold

class torch.nn.Unfold(kernel_size: Union[T, Tuple[T, ...]], dilation: Union[T, Tuple[T, ...]] = 1, padding: Union[T, Tuple[T, ...]] = 0, stride: Union[T, Tuple[T, ...]] = 1)[source]

Extracts sliding local blocks from a batched input tensor.
Consider a batched input tensor of shape \((N, C, *)\), where \(N\) is the batch dimension, \(C\) is the channel dimension, and \(*\) represents arbitrary spatial dimensions. This operation flattens each sliding kernel_size-sized block within the spatial dimensions of input into a column (i.e., last dimension) of a 3-D output tensor of shape \((N, C \times \prod(\text{kernel\_size}), L)\), where \(C \times \prod(\text{kernel\_size})\) is the total number of values within each block (a block has \(\prod(\text{kernel\_size})\) spatial locations each containing a \(C\)-channeled vector), and \(L\) is the total number of such blocks:

\[L = \prod_d \left\lfloor\frac{\text{spatial\_size}[d] + 2 \times \text{padding}[d] - \text{dilation}[d] \times (\text{kernel\_size}[d] - 1) - 1}{\text{stride}[d]} + 1\right\rfloor,\]

where \(\text{spatial\_size}\) is formed by the spatial dimensions of input (\(*\) above), and \(d\) is over all spatial dimensions.

Therefore, indexing output at the last dimension (column dimension) gives all values within a certain block.

The padding, stride and dilation arguments specify how the sliding blocks are retrieved.

stride controls the stride for the sliding blocks.
padding controls the amount of implicit zero padding on both sides for padding number of points for each dimension before reshaping.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
Parameters
kernel_size (int or tuple) – the size of the sliding blocks
stride (int or tuple, optional) – the stride of the sliding blocks in the input spatial dimensions. Default: 1
padding (int or tuple, optional) – implicit zero padding to be added on both sides of input. Default: 0
dilation (int or tuple, optional) – a parameter that controls the stride of elements within the neighborhood. Default: 1

If kernel_size, dilation, padding or stride is an int or a tuple of length 1, its value will be replicated across all spatial dimensions.

For the case of two input spatial dimensions this operation is sometimes called im2col.
Note

Fold calculates each combined value in the resulting large tensor by summing all values from all containing blocks. Unfold extracts the values in the local blocks by copying from the large tensor. So, if the blocks overlap, they are not inverses of each other.

In general, folding and unfolding operations are related as follows. Consider Fold and Unfold instances created with the same parameters:

>>> fold_params = dict(kernel_size=..., dilation=..., padding=..., stride=...)
>>> fold = nn.Fold(output_size=..., **fold_params)
>>> unfold = nn.Unfold(**fold_params)
Then for any (supported)
input
tensor the following equality holds:fold(unfold(input)) == divisor * input
where
divisor
is a tensor that depends only on the shape and dtype of theinput
:>>> input_ones = torch.ones(input.shape, dtype=input.dtype) >>> divisor = fold(unfold(input_ones))
When the
divisor
tensor contains no zero elements, thenfold
andunfold
operations are inverses of each other (up to constant divisor).Warning
Currently, only 4-D input tensors (batched image-like tensors) are supported.
- Shape:
Input: \((N, C, *)\)
Output: \((N, C \times \prod(\text{kernel\_size}), L)\) as described above
Examples:
>>> unfold = nn.Unfold(kernel_size=(2, 3)) >>> input = torch.randn(2, 5, 3, 4) >>> output = unfold(input) >>> # each patch contains 30 values (2x3=6 vectors, each of 5 channels) >>> # 4 blocks (2x3 kernels) in total in the 3x4 input >>> output.size() torch.Size([2, 30, 4]) >>> # Convolution is equivalent with Unfold + Matrix Multiplication + Fold (or view to output shape) >>> inp = torch.randn(1, 3, 10, 12) >>> w = torch.randn(2, 3, 4, 5) >>> inp_unf = torch.nn.functional.unfold(inp, (4, 5)) >>> out_unf = inp_unf.transpose(1, 2).matmul(w.view(w.size(0), -1).t()).transpose(1, 2) >>> out = torch.nn.functional.fold(out_unf, (7, 8), (1, 1)) >>> # or equivalently (and avoiding a copy), >>> # out = out_unf.view(1, 2, 7, 8) >>> (torch.nn.functional.conv2d(inp, w) - out).abs().max() tensor(1.9073e-06)
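To make the fold/unfold relationship from the note above concrete, here is a small hedged sketch (shapes chosen for illustration) that reconstructs an input from its unfolded blocks using the divisor identity:

>>> # Sketch: fold(unfold(x)) == divisor * x, where divisor counts block overlaps
>>> fold_params = dict(kernel_size=(2, 2), stride=1)
>>> unfold = nn.Unfold(**fold_params)
>>> fold = nn.Fold(output_size=(4, 4), **fold_params)
>>> x = torch.randn(1, 3, 4, 4)
>>> divisor = fold(unfold(torch.ones_like(x)))
>>> torch.allclose(fold(unfold(x)), divisor * x)
True
>>> # divisor has no zeros here, so dividing recovers x exactly
>>> torch.allclose(fold(unfold(x)) / divisor, x)
True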
Fold¶
-
class
torch.nn.
Fold
(output_size: Union[T, Tuple[T, …]], kernel_size: Union[T, Tuple[T, …]], dilation: Union[T, Tuple[T, …]] = 1, padding: Union[T, Tuple[T, …]] = 0, stride: Union[T, Tuple[T, …]] = 1)[source]¶ Combines an array of sliding local blocks into a large containing tensor.
Consider a batched input tensor containing sliding local blocks, e.g., patches of images, of shape \((N, C \times \prod(\text{kernel\_size}), L)\), where \(N\) is the batch dimension, \(C \times \prod(\text{kernel\_size})\) is the number of values within a block (a block has \(\prod(\text{kernel\_size})\) spatial locations, each containing a \(C\)-channeled vector), and \(L\) is the total number of blocks. (This is exactly the same specification as the output shape of Unfold.) This operation combines these local blocks into the large output tensor of shape \((N, C, \text{output\_size}[0], \text{output\_size}[1], \dots)\) by summing the overlapping values. Similar to Unfold, the arguments must satisfy
\[L = \prod_d \left\lfloor\frac{\text{output\_size}[d] + 2 \times \text{padding}[d] - \text{dilation}[d] \times (\text{kernel\_size}[d] - 1) - 1}{\text{stride}[d]} + 1\right\rfloor, \]
where \(d\) ranges over all spatial dimensions.
output_size describes the spatial shape of the large containing tensor of the sliding local blocks. It is useful to resolve the ambiguity when multiple input shapes map to the same number of sliding blocks, e.g., with stride > 0.
The padding, stride and dilation arguments specify how the sliding blocks are retrieved.
stride controls the stride for the sliding blocks.
padding controls the amount of implicit zero-padding on both sides for padding number of points for each dimension before reshaping.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
- Parameters
output_size (int or tuple) – the shape of the spatial dimensions of the output (i.e., output.sizes()[2:])
kernel_size (int or tuple) – the size of the sliding blocks
stride (int or tuple) – the stride of the sliding blocks in the input spatial dimensions. Default: 1
padding (int or tuple, optional) – implicit zero padding to be added on both sides of input. Default: 0
dilation (int or tuple, optional) – a parameter that controls the stride of elements within the neighborhood. Default: 1
If output_size, kernel_size, dilation, padding or stride is an int or a tuple of length 1, their values will be replicated across all spatial dimensions.
For the case of two output spatial dimensions this operation is sometimes called col2im.
Note
Fold calculates each combined value in the resulting large tensor by summing all values from all containing blocks. Unfold extracts the values in the local blocks by copying from the large tensor. So, if the blocks overlap, they are not inverses of each other.
In general, folding and unfolding operations are related as follows. Consider Fold and Unfold instances created with the same parameters:
>>> fold_params = dict(kernel_size=..., dilation=..., padding=..., stride=...)
>>> fold = nn.Fold(output_size=..., **fold_params)
>>> unfold = nn.Unfold(**fold_params)
Then for any (supported) input tensor the following equality holds:
fold(unfold(input)) == divisor * input
where divisor is a tensor that depends only on the shape and dtype of the input:
>>> input_ones = torch.ones(input.shape, dtype=input.dtype)
>>> divisor = fold(unfold(input_ones))
When the divisor tensor contains no zero elements, then fold and unfold operations are inverses of each other (up to constant divisor).
Warning
Currently, only 4-D output tensors (batched image-like tensors) are supported.
- Shape:
Input: \((N, C \times \prod(\text{kernel\_size}), L)\)
Output: \((N, C, \text{output\_size}[0], \text{output\_size}[1], \dots)\) as described above
Examples:
>>> fold = nn.Fold(output_size=(4, 5), kernel_size=(2, 2)) >>> input = torch.randn(1, 3 * 2 * 2, 12) >>> output = fold(input) >>> output.size() torch.Size([1, 3, 4, 5])
Pooling layers¶
MaxPool1d¶
-
class
torch.nn.
MaxPool1d
(kernel_size: Union[T, Tuple[T, …]], stride: Optional[Union[T, Tuple[T, …]]] = None, padding: Union[T, Tuple[T, …]] = 0, dilation: Union[T, Tuple[T, …]] = 1, return_indices: bool = False, ceil_mode: bool = False)[source]¶ Applies a 1D max pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C, L)\) and output \((N, C, L_{out})\) can be precisely described as:
\[out(N_i, C_j, k) = \max_{m=0, \ldots, \text{kernel\_size} - 1} input(N_i, C_j, stride \times k + m) \]
If padding is non-zero, then the input is implicitly padded with negative infinity on both sides for padding number of points. dilation is the stride between the elements within the sliding window. This link has a nice visualization of the pooling parameters.
Note
When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.
- Parameters
kernel_size – The size of the sliding window, must be > 0.
stride – The stride of the sliding window, must be > 0. Default value is kernel_size.
padding – Implicit negative infinity padding to be added on both sides, must be >= 0 and <= kernel_size / 2.
dilation – The stride between elements within a sliding window, must be > 0.
return_indices – If True, will return the argmax along with the max values. Useful for torch.nn.MaxUnpool1d later.
ceil_mode – If True, will use ceil instead of floor to compute the output shape. This ensures that every element in the input tensor is covered by a sliding window.
- Shape:
Input: \((N, C, L_{in})\)
Output: \((N, C, L_{out})\), where
\[L_{out} = \left\lfloor \frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1}{\text{stride}} + 1\right\rfloor \]
Examples:
>>> # pool of size=3, stride=2 >>> m = nn.MaxPool1d(3, stride=2) >>> input = torch.randn(20, 16, 50) >>> output = m(input)
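As a hedged illustration (a sketch, not from the original examples), the shape formula above can be checked directly against the module:

>>> # Sketch: floor((L_in + 2*padding - dilation*(kernel_size-1) - 1) / stride + 1)
>>> m = nn.MaxPool1d(3, stride=2, padding=1, dilation=1)
>>> input = torch.randn(20, 16, 50)
>>> expected = (50 + 2 * 1 - 1 * (3 - 1) - 1) // 2 + 1  # = 25
>>> m(input).shape[-1] == expected
True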
MaxPool2d¶
-
class
torch.nn.
MaxPool2d
(kernel_size: Union[T, Tuple[T, …]], stride: Optional[Union[T, Tuple[T, …]]] = None, padding: Union[T, Tuple[T, …]] = 0, dilation: Union[T, Tuple[T, …]] = 1, return_indices: bool = False, ceil_mode: bool = False)[source]¶ Applies a 2D max pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C, H, W)\), output \((N, C, H_{out}, W_{out})\) and kernel_size \((kH, kW)\) can be precisely described as:
\[\begin{aligned} out(N_i, C_j, h, w) ={} & \max_{m=0, \ldots, kH-1} \max_{n=0, \ldots, kW-1} \\ & \text{input}(N_i, C_j, \text{stride[0]} \times h + m, \text{stride[1]} \times w + n) \end{aligned} \]
If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. dilation controls the spacing between the kernel points. It is harder to describe, but this link has a nice visualization of what dilation does.
Note
When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.
The parameters kernel_size, stride, padding, dilation can either be:
a single int – in which case the same value is used for the height and width dimension
a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
- Parameters
kernel_size – the size of the window to take a max over
stride – the stride of the window. Default value is kernel_size
padding – implicit zero padding to be added on both sides
dilation – a parameter that controls the stride of elements in the window
return_indices – if True, will return the max indices along with the outputs. Useful for torch.nn.MaxUnpool2d later
ceil_mode – when True, will use ceil instead of floor to compute the output shape
- Shape:
Input: \((N, C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\), where
\[H_{out} = \left\lfloor\frac{H_{in} + 2 * \text{padding[0]} - \text{dilation[0]} \times (\text{kernel\_size[0]} - 1) - 1}{\text{stride[0]}} + 1\right\rfloor \]\[W_{out} = \left\lfloor\frac{W_{in} + 2 * \text{padding[1]} - \text{dilation[1]} \times (\text{kernel\_size[1]} - 1) - 1}{\text{stride[1]}} + 1\right\rfloor \]
Examples:
>>> # pool of square window of size=3, stride=2 >>> m = nn.MaxPool2d(3, stride=2) >>> # pool of non-square window >>> m = nn.MaxPool2d((3, 2), stride=(2, 1)) >>> input = torch.randn(20, 16, 50, 32) >>> output = m(input)
MaxPool3d¶
-
class
torch.nn.
MaxPool3d
(kernel_size: Union[T, Tuple[T, …]], stride: Optional[Union[T, Tuple[T, …]]] = None, padding: Union[T, Tuple[T, …]] = 0, dilation: Union[T, Tuple[T, …]] = 1, return_indices: bool = False, ceil_mode: bool = False)[source]¶ Applies a 3D max pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C, D, H, W)\), output \((N, C, D_{out}, H_{out}, W_{out})\) and kernel_size \((kD, kH, kW)\) can be precisely described as:
\[\begin{aligned} \text{out}(N_i, C_j, d, h, w) ={} & \max_{k=0, \ldots, kD-1} \max_{m=0, \ldots, kH-1} \max_{n=0, \ldots, kW-1} \\ & \text{input}(N_i, C_j, \text{stride[0]} \times d + k, \text{stride[1]} \times h + m, \text{stride[2]} \times w + n) \end{aligned} \]
If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. dilation controls the spacing between the kernel points. It is harder to describe, but this link has a nice visualization of what dilation does.
Note
When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.
The parameters kernel_size, stride, padding, dilation can either be:
a single int – in which case the same value is used for the depth, height and width dimension
a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension
- Parameters
kernel_size – the size of the window to take a max over
stride – the stride of the window. Default value is kernel_size
padding – implicit zero padding to be added on all three sides
dilation – a parameter that controls the stride of elements in the window
return_indices – if True, will return the max indices along with the outputs. Useful for torch.nn.MaxUnpool3d later
ceil_mode – when True, will use ceil instead of floor to compute the output shape
- Shape:
Input: \((N, C, D_{in}, H_{in}, W_{in})\)
Output: \((N, C, D_{out}, H_{out}, W_{out})\), where
\[D_{out} = \left\lfloor\frac{D_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor \]\[H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor \]\[W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[2] - \text{dilation}[2] \times (\text{kernel\_size}[2] - 1) - 1}{\text{stride}[2]} + 1\right\rfloor \]
Examples:
>>> # pool of square window of size=3, stride=2
>>> m = nn.MaxPool3d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.MaxPool3d((3, 2, 2), stride=(2, 1, 2))
>>> input = torch.randn(20, 16, 50, 44, 31)
>>> output = m(input)
MaxUnpool1d¶
-
class
torch.nn.
MaxUnpool1d
(kernel_size: Union[T, Tuple[T]], stride: Optional[Union[T, Tuple[T]]] = None, padding: Union[T, Tuple[T]] = 0)[source]¶ Computes a partial inverse of MaxPool1d.
MaxPool1d is not fully invertible, since the non-maximal values are lost. MaxUnpool1d takes in as input the output of MaxPool1d including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.
Note
MaxPool1d can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argument output_size in the forward call. See the Inputs and Example below.
- Parameters
kernel_size (int or tuple) – Size of the max pooling window.
stride (int or tuple) – Stride of the max pooling window. It is set to kernel_size by default.
padding (int or tuple) – Padding that was added to the input
- Inputs:
input: the input Tensor to invert
indices: the indices given out by MaxPool1d
output_size (optional): the targeted output size
- Shape:
Input: \((N, C, H_{in})\)
Output: \((N, C, H_{out})\), where
\[H_{out} = (H_{in} - 1) \times \text{stride}[0] - 2 \times \text{padding}[0] + \text{kernel\_size}[0] \]
or as given by output_size in the call operator
Example:
>>> pool = nn.MaxPool1d(2, stride=2, return_indices=True) >>> unpool = nn.MaxUnpool1d(2, stride=2) >>> input = torch.tensor([[[1., 2, 3, 4, 5, 6, 7, 8]]]) >>> output, indices = pool(input) >>> unpool(output, indices) tensor([[[ 0., 2., 0., 4., 0., 6., 0., 8.]]]) >>> # Example showcasing the use of output_size >>> input = torch.tensor([[[1., 2, 3, 4, 5, 6, 7, 8, 9]]]) >>> output, indices = pool(input) >>> unpool(output, indices, output_size=input.size()) tensor([[[ 0., 2., 0., 4., 0., 6., 0., 8., 0.]]]) >>> unpool(output, indices) tensor([[[ 0., 2., 0., 4., 0., 6., 0., 8.]]])
MaxUnpool2d¶
-
class
torch.nn.
MaxUnpool2d
(kernel_size: Union[T, Tuple[T, T]], stride: Optional[Union[T, Tuple[T, T]]] = None, padding: Union[T, Tuple[T, T]] = 0)[source]¶ Computes a partial inverse of MaxPool2d.
MaxPool2d is not fully invertible, since the non-maximal values are lost. MaxUnpool2d takes in as input the output of MaxPool2d including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.
Note
MaxPool2d can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argument output_size in the forward call. See the Inputs and Example below.
- Parameters
kernel_size (int or tuple) – Size of the max pooling window.
stride (int or tuple) – Stride of the max pooling window. It is set to kernel_size by default.
padding (int or tuple) – Padding that was added to the input
- Inputs:
input: the input Tensor to invert
indices: the indices given out by MaxPool2d
output_size (optional): the targeted output size
- Shape:
Input: \((N, C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\), where
\[H_{out} = (H_{in} - 1) \times \text{stride[0]} - 2 \times \text{padding[0]} + \text{kernel\_size[0]} \]\[W_{out} = (W_{in} - 1) \times \text{stride[1]} - 2 \times \text{padding[1]} + \text{kernel\_size[1]} \]
or as given by output_size in the call operator
Example:
>>> pool = nn.MaxPool2d(2, stride=2, return_indices=True) >>> unpool = nn.MaxUnpool2d(2, stride=2) >>> input = torch.tensor([[[[ 1., 2, 3, 4], [ 5, 6, 7, 8], [ 9, 10, 11, 12], [13, 14, 15, 16]]]]) >>> output, indices = pool(input) >>> unpool(output, indices) tensor([[[[ 0., 0., 0., 0.], [ 0., 6., 0., 8.], [ 0., 0., 0., 0.], [ 0., 14., 0., 16.]]]]) >>> # specify a different output size than input size >>> unpool(output, indices, output_size=torch.Size([1, 1, 5, 5])) tensor([[[[ 0., 0., 0., 0., 0.], [ 6., 0., 8., 0., 0.], [ 0., 0., 0., 14., 0.], [ 16., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0.]]]])
MaxUnpool3d¶
-
class
torch.nn.
MaxUnpool3d
(kernel_size: Union[T, Tuple[T, T, T]], stride: Optional[Union[T, Tuple[T, T, T]]] = None, padding: Union[T, Tuple[T, T, T]] = 0)[source]¶ Computes a partial inverse of MaxPool3d.
MaxPool3d is not fully invertible, since the non-maximal values are lost. MaxUnpool3d takes in as input the output of MaxPool3d including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.
Note
MaxPool3d can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argument output_size in the forward call. See the Inputs section below.
- Parameters
kernel_size (int or tuple) – Size of the max pooling window.
stride (int or tuple) – Stride of the max pooling window. It is set to kernel_size by default.
padding (int or tuple) – Padding that was added to the input
- Inputs:
input: the input Tensor to invert
indices: the indices given out by MaxPool3d
output_size (optional): the targeted output size
- Shape:
Input: \((N, C, D_{in}, H_{in}, W_{in})\)
Output: \((N, C, D_{out}, H_{out}, W_{out})\), where
\[D_{out} = (D_{in} - 1) \times \text{stride[0]} - 2 \times \text{padding[0]} + \text{kernel\_size[0]} \]\[H_{out} = (H_{in} - 1) \times \text{stride[1]} - 2 \times \text{padding[1]} + \text{kernel\_size[1]} \]\[W_{out} = (W_{in} - 1) \times \text{stride[2]} - 2 \times \text{padding[2]} + \text{kernel\_size[2]} \]
or as given by output_size in the call operator
Example:
>>> # pool of square window of size=3, stride=2 >>> pool = nn.MaxPool3d(3, stride=2, return_indices=True) >>> unpool = nn.MaxUnpool3d(3, stride=2) >>> output, indices = pool(torch.randn(20, 16, 51, 33, 15)) >>> unpooled_output = unpool(output, indices) >>> unpooled_output.size() torch.Size([20, 16, 51, 33, 15])
AvgPool1d¶
-
class
torch.nn.
AvgPool1d
(kernel_size: Union[T, Tuple[T]], stride: Optional[Union[T, Tuple[T]]] = None, padding: Union[T, Tuple[T]] = 0, ceil_mode: bool = False, count_include_pad: bool = True)[source]¶ Applies a 1D average pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C, L)\), output \((N, C, L_{out})\) and kernel_size \(k\) can be precisely described as:
\[\text{out}(N_i, C_j, l) = \frac{1}{k} \sum_{m=0}^{k-1} \text{input}(N_i, C_j, \text{stride} \times l + m)\]
If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.
Note
When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.
The parameters kernel_size, stride, padding can each be an int or a one-element tuple.
- Parameters
kernel_size – the size of the window
stride – the stride of the window. Default value is kernel_size
padding – implicit zero padding to be added on both sides
ceil_mode – when True, will use ceil instead of floor to compute the output shape
count_include_pad – when True, will include the zero-padding in the averaging calculation
- Shape:
Input: \((N, C, L_{in})\)
Output: \((N, C, L_{out})\), where
\[L_{out} = \left\lfloor \frac{L_{in} + 2 \times \text{padding} - \text{kernel\_size}}{\text{stride}} + 1\right\rfloor \]
Examples:
>>> # pool with window of size=3, stride=2 >>> m = nn.AvgPool1d(3, stride=2) >>> m(torch.tensor([[[1.,2,3,4,5,6,7]]])) tensor([[[ 2., 4., 6.]]])
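The effect of count_include_pad can be seen in a small hedged sketch (assuming the semantics described above): with zero-padding included, edge windows divide by the full kernel size; excluded, they divide only by the number of real elements.

>>> # Sketch: kernel 2, stride 1, padding 1 over [1, 2, 3]
>>> x = torch.tensor([[[1., 2., 3.]]])
>>> nn.AvgPool1d(2, stride=1, padding=1, count_include_pad=True)(x)
tensor([[[0.5000, 1.5000, 2.5000, 1.5000]]])
>>> nn.AvgPool1d(2, stride=1, padding=1, count_include_pad=False)(x)
tensor([[[1.0000, 1.5000, 2.5000, 3.0000]]])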
AvgPool2d¶
-
class
torch.nn.
AvgPool2d
(kernel_size: Union[T, Tuple[T, T]], stride: Optional[Union[T, Tuple[T, T]]] = None, padding: Union[T, Tuple[T, T]] = 0, ceil_mode: bool = False, count_include_pad: bool = True, divisor_override: Optional[int] = None)[source]¶ Applies a 2D average pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C, H, W)\), output \((N, C, H_{out}, W_{out})\) and kernel_size \((kH, kW)\) can be precisely described as:
\[out(N_i, C_j, h, w) = \frac{1}{kH * kW} \sum_{m=0}^{kH-1} \sum_{n=0}^{kW-1} input(N_i, C_j, stride[0] \times h + m, stride[1] \times w + n)\]
If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.
Note
When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.
The parameters kernel_size, stride, padding can either be:
a single int – in which case the same value is used for the height and width dimension
a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
- Parameters
kernel_size – the size of the window
stride – the stride of the window. Default value is kernel_size
padding – implicit zero padding to be added on both sides
ceil_mode – when True, will use ceil instead of floor to compute the output shape
count_include_pad – when True, will include the zero-padding in the averaging calculation
divisor_override – if specified, it will be used as the divisor, otherwise kernel_size will be used
- Shape:
Input: \((N, C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\), where
\[H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[0] - \text{kernel\_size}[0]}{\text{stride}[0]} + 1\right\rfloor \]\[W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[1] - \text{kernel\_size}[1]}{\text{stride}[1]} + 1\right\rfloor \]
Examples:
>>> # pool of square window of size=3, stride=2 >>> m = nn.AvgPool2d(3, stride=2) >>> # pool of non-square window >>> m = nn.AvgPool2d((3, 2), stride=(2, 1)) >>> input = torch.randn(20, 16, 50, 32) >>> output = m(input)
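divisor_override replaces the default divisor; for instance, setting it to 1 turns average pooling into sum pooling over each window (a hedged sketch of the behavior described above):

>>> x = torch.ones(1, 1, 2, 2)
>>> nn.AvgPool2d(2)(x)
tensor([[[[1.]]]])
>>> nn.AvgPool2d(2, divisor_override=1)(x)
tensor([[[[4.]]]])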
AvgPool3d¶
-
class
torch.nn.
AvgPool3d
(kernel_size: Union[T, Tuple[T, T, T]], stride: Optional[Union[T, Tuple[T, T, T]]] = None, padding: Union[T, Tuple[T, T, T]] = 0, ceil_mode: bool = False, count_include_pad: bool = True, divisor_override: Optional[int] = None)[source]¶ Applies a 3D average pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C, D, H, W)\), output \((N, C, D_{out}, H_{out}, W_{out})\) and kernel_size \((kD, kH, kW)\) can be precisely described as:
\[\begin{aligned} \text{out}(N_i, C_j, d, h, w) ={} & \sum_{k=0}^{kD-1} \sum_{m=0}^{kH-1} \sum_{n=0}^{kW-1} \\ & \frac{\text{input}(N_i, C_j, \text{stride}[0] \times d + k, \text{stride}[1] \times h + m, \text{stride}[2] \times w + n)} {kD \times kH \times kW} \end{aligned} \]
If padding is non-zero, then the input is implicitly zero-padded on all three sides for padding number of points.
Note
When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.
The parameters kernel_size, stride can either be:
a single int – in which case the same value is used for the depth, height and width dimension
a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension
- Parameters
kernel_size – the size of the window
stride – the stride of the window. Default value is kernel_size
padding – implicit zero padding to be added on all three sides
ceil_mode – when True, will use ceil instead of floor to compute the output shape
count_include_pad – when True, will include the zero-padding in the averaging calculation
divisor_override – if specified, it will be used as the divisor, otherwise kernel_size will be used
- Shape:
Input: \((N, C, D_{in}, H_{in}, W_{in})\)
Output: \((N, C, D_{out}, H_{out}, W_{out})\), where
\[D_{out} = \left\lfloor\frac{D_{in} + 2 \times \text{padding}[0] - \text{kernel\_size}[0]}{\text{stride}[0]} + 1\right\rfloor \]\[H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[1] - \text{kernel\_size}[1]}{\text{stride}[1]} + 1\right\rfloor \]\[W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[2] - \text{kernel\_size}[2]}{\text{stride}[2]} + 1\right\rfloor \]
Examples:
>>> # pool of square window of size=3, stride=2
>>> m = nn.AvgPool3d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.AvgPool3d((3, 2, 2), stride=(2, 1, 2))
>>> input = torch.randn(20, 16, 50, 44, 31)
>>> output = m(input)
FractionalMaxPool2d¶
-
class
torch.nn.
FractionalMaxPool2d
(kernel_size: Union[T, Tuple[T, T]], output_size: Optional[Union[T, Tuple[T, T]]] = None, output_ratio: Optional[Union[T, Tuple[T, T]]] = None, return_indices: bool = False, _random_samples=None)[source]¶ Applies a 2D fractional max pooling over an input signal composed of several input planes.
Fractional MaxPooling is described in detail in the paper Fractional MaxPooling by Ben Graham
The max-pooling operation is applied in \(kH \times kW\) regions by a stochastic step size determined by the target output size. The number of output features is equal to the number of input planes.
- Parameters
kernel_size – the size of the window to take a max over. Can be a single number k (for a square kernel of k x k) or a tuple (kh, kw)
output_size – the target output size of the image of the form oH x oW. Can be a tuple (oH, oW) or a single number oH for a square image oH x oH
output_ratio – If one wants to have an output size as a ratio of the input size, this option can be given. This has to be a number or tuple in the range (0, 1)
return_indices – if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool2d(). Default: False
Examples
>>> # pool of square window of size=3, and target output size 13x12 >>> m = nn.FractionalMaxPool2d(3, output_size=(13, 12)) >>> # pool of square window and target output size being half of input image size >>> m = nn.FractionalMaxPool2d(3, output_ratio=(0.5, 0.5)) >>> input = torch.randn(20, 16, 50, 32) >>> output = m(input)
LPPool1d¶
-
class
torch.nn.
LPPool1d
(norm_type: float, kernel_size: Union[T, Tuple[T, …]], stride: Optional[Union[T, Tuple[T, …]]] = None, ceil_mode: bool = False)[source]¶ Applies a 1D power-average pooling over an input signal composed of several input planes.
On each window, the function computed is:
\[f(X) = \sqrt[p]{\sum_{x \in X} x^{p}} \]At p = \(\infty\), one gets Max Pooling
At p = 1, one gets Sum Pooling (which is proportional to Average Pooling)
Note
If the sum to the power of p is zero, the gradient of this function is not defined. This implementation will set the gradient to zero in this case.
- Parameters
kernel_size – a single int, the size of the window
stride – a single int, the stride of the window. Default value is kernel_size
ceil_mode – when True, will use ceil instead of floor to compute the output shape
- Shape:
Input: \((N, C, L_{in})\)
Output: \((N, C, L_{out})\), where
\[L_{out} = \left\lfloor\frac{L_{in} - \text{kernel\_size}}{\text{stride}} + 1\right\rfloor \]
Examples:
>>> # power-2 pool of window of length 3, with stride 2. >>> m = nn.LPPool1d(2, 3, stride=2) >>> input = torch.randn(20, 16, 50) >>> output = m(input)
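As a hedged check of the power-average formula (a sketch, not part of the official examples): with p = 2 and a window covering the whole signal, the output is the Euclidean norm of the window.

>>> x = torch.randn(1, 1, 3)
>>> m = nn.LPPool1d(2, 3)  # p=2, kernel covers all 3 elements
>>> torch.allclose(m(x), x.pow(2).sum(dim=-1, keepdim=True).sqrt())
True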
LPPool2d¶
-
class
torch.nn.
LPPool2d
(norm_type: float, kernel_size: Union[T, Tuple[T, …]], stride: Optional[Union[T, Tuple[T, …]]] = None, ceil_mode: bool = False)[source]¶ Applies a 2D power-average pooling over an input signal composed of several input planes.
On each window, the function computed is:
\[f(X) = \sqrt[p]{\sum_{x \in X} x^{p}} \]At p = \(\infty\), one gets Max Pooling
At p = 1, one gets Sum Pooling (which is proportional to average pooling)
The parameters kernel_size, stride can either be:
a single int – in which case the same value is used for the height and width dimension
a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
Note
If the sum to the power of p is zero, the gradient of this function is not defined. This implementation will set the gradient to zero in this case.
- Parameters
kernel_size – the size of the window
stride – the stride of the window. Default value is kernel_size
ceil_mode – when True, will use ceil instead of floor to compute the output shape
- Shape:
Input: \((N, C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\), where
\[H_{out} = \left\lfloor\frac{H_{in} - \text{kernel\_size}[0]}{\text{stride}[0]} + 1\right\rfloor \]\[W_{out} = \left\lfloor\frac{W_{in} - \text{kernel\_size}[1]}{\text{stride}[1]} + 1\right\rfloor \]
Examples:
>>> # power-2 pool of square window of size=3, stride=2 >>> m = nn.LPPool2d(2, 3, stride=2) >>> # pool of non-square window of power 1.2 >>> m = nn.LPPool2d(1.2, (3, 2), stride=(2, 1)) >>> input = torch.randn(20, 16, 50, 32) >>> output = m(input)
AdaptiveMaxPool1d¶
-
class
torch.nn.
AdaptiveMaxPool1d
(output_size: Union[T, Tuple[T, …]], return_indices: bool = False)[source]¶ Applies a 1D adaptive max pooling over an input signal composed of several input planes.
The output size is H, for any input size. The number of output features is equal to the number of input planes.
- Parameters
output_size – the target output size H
return_indices – if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool1d. Default: False
Examples
>>> # target output size of 5 >>> m = nn.AdaptiveMaxPool1d(5) >>> input = torch.randn(1, 64, 8) >>> output = m(input)
AdaptiveMaxPool2d¶
-
class
torch.nn.
AdaptiveMaxPool2d
(output_size: Union[T, Tuple[T, …]], return_indices: bool = False)[source]¶ Applies a 2D adaptive max pooling over an input signal composed of several input planes.
The output is of size H x W, for any input size. The number of output features is equal to the number of input planes.
- Parameters
output_size – the target output size of the image of the form H x W. Can be a tuple (H, W) or a single H for a square image H x H. H and W can be either an int, or None, which means the size will be the same as that of the input.
return_indices – if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool2d. Default: False
Examples
>>> # target output size of 5x7 >>> m = nn.AdaptiveMaxPool2d((5,7)) >>> input = torch.randn(1, 64, 8, 9) >>> output = m(input) >>> # target output size of 7x7 (square) >>> m = nn.AdaptiveMaxPool2d(7) >>> input = torch.randn(1, 64, 10, 9) >>> output = m(input) >>> # target output size of 10x7 >>> m = nn.AdaptiveMaxPool2d((None, 7)) >>> input = torch.randn(1, 64, 10, 9) >>> output = m(input)
AdaptiveMaxPool3d¶
-
class
torch.nn.
AdaptiveMaxPool3d
(output_size: Union[T, Tuple[T, …]], return_indices: bool = False)[source]¶ Applies a 3D adaptive max pooling over an input signal composed of several input planes.
The output is of size D x H x W, for any input size. The number of output features is equal to the number of input planes.
- Parameters
output_size – the target output size of the image of the form D x H x W. Can be a tuple (D, H, W) or a single D for a cube D x D x D. D, H and W can be either an int, or None, which means the size will be the same as that of the input.
return_indices – if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool3d. Default: False
Examples
>>> # target output size of 5x7x9 >>> m = nn.AdaptiveMaxPool3d((5,7,9)) >>> input = torch.randn(1, 64, 8, 9, 10) >>> output = m(input) >>> # target output size of 7x7x7 (cube) >>> m = nn.AdaptiveMaxPool3d(7) >>> input = torch.randn(1, 64, 10, 9, 8) >>> output = m(input) >>> # target output size of 7x9x8 >>> m = nn.AdaptiveMaxPool3d((7, None, None)) >>> input = torch.randn(1, 64, 10, 9, 8) >>> output = m(input)
AdaptiveAvgPool1d¶
-
class
torch.nn.
AdaptiveAvgPool1d
(output_size: Union[T, Tuple[T, …]])[source]¶ Applies a 1D adaptive average pooling over an input signal composed of several input planes.
The output size is H, for any input size. The number of output features is equal to the number of input planes.
- Parameters
output_size – the target output size H
Examples
>>> # target output size of 5 >>> m = nn.AdaptiveAvgPool1d(5) >>> input = torch.randn(1, 64, 8) >>> output = m(input)
AdaptiveAvgPool2d¶
-
class
torch.nn.
AdaptiveAvgPool2d
(output_size: Union[T, Tuple[T, …]])[source]¶ Applies a 2D adaptive average pooling over an input signal composed of several input planes.
The output is of size H x W, for any input size. The number of output features is equal to the number of input planes.
- Parameters
output_size – the target output size of the image of the form H x W. Can be a tuple (H, W) or a single H for a square image H x H. H and W can be either an int, or None, which means the size will be the same as that of the input.
Examples
>>> # target output size of 5x7 >>> m = nn.AdaptiveAvgPool2d((5,7)) >>> input = torch.randn(1, 64, 8, 9) >>> output = m(input) >>> # target output size of 7x7 (square) >>> m = nn.AdaptiveAvgPool2d(7) >>> input = torch.randn(1, 64, 10, 9) >>> output = m(input) >>> # target output size of 10x7 >>> m = nn.AdaptiveAvgPool2d((None, 7)) >>> input = torch.randn(1, 64, 10, 9) >>> output = m(input)
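A common use is global average pooling before a classifier head; with output size (1, 1) the module simply averages over the spatial dimensions (a hedged sketch):

>>> x = torch.randn(2, 64, 7, 7)
>>> m = nn.AdaptiveAvgPool2d((1, 1))  # global average pooling
>>> torch.allclose(m(x), x.mean(dim=(2, 3), keepdim=True))
True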
AdaptiveAvgPool3d¶
-
class
torch.nn.
AdaptiveAvgPool3d
(output_size: Union[T, Tuple[T, …]])[source]¶ Applies a 3D adaptive average pooling over an input signal composed of several input planes.
The output is of size D x H x W, for any input size. The number of output features is equal to the number of input planes.
- Parameters
output_size – the target output size of the form D x H x W. Can be a tuple (D, H, W) or a single number D for a cube D x D x D. D, H and W can be either an int, or None, which means the size will be the same as that of the input.
Examples
>>> # target output size of 5x7x9 >>> m = nn.AdaptiveAvgPool3d((5,7,9)) >>> input = torch.randn(1, 64, 8, 9, 10) >>> output = m(input) >>> # target output size of 7x7x7 (cube) >>> m = nn.AdaptiveAvgPool3d(7) >>> input = torch.randn(1, 64, 10, 9, 8) >>> output = m(input) >>> # target output size of 7x9x8 >>> m = nn.AdaptiveAvgPool3d((7, None, None)) >>> input = torch.randn(1, 64, 10, 9, 8) >>> output = m(input)
Padding layers¶
ReflectionPad1d¶
-
class
torch.nn.
ReflectionPad1d
(padding: Union[T, Tuple[T, T]])[source]¶ Pads the input tensor using the reflection of the input boundary.
For N-dimensional padding, use torch.nn.functional.pad().
- Parameters
padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 2-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\))
- Shape:
Input: \((N, C, W_{in})\)
Output: \((N, C, W_{out})\) where
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ReflectionPad1d(2) >>> input = torch.arange(8, dtype=torch.float).reshape(1, 2, 4) >>> input tensor([[[0., 1., 2., 3.], [4., 5., 6., 7.]]]) >>> m(input) tensor([[[2., 1., 0., 1., 2., 3., 2., 1.], [6., 5., 4., 5., 6., 7., 6., 5.]]]) >>> # using different paddings for different sides >>> m = nn.ReflectionPad1d((3, 1)) >>> m(input) tensor([[[3., 2., 1., 0., 1., 2., 3., 2.], [7., 6., 5., 4., 5., 6., 7., 6.]]])
ReflectionPad2d¶
-
class
torch.nn.
ReflectionPad2d
(padding: Union[T, Tuple[T, T, T, T]])[source]¶ Pads the input tensor using the reflection of the input boundary.
For N-dimensional padding, use torch.nn.functional.pad().
- Parameters
padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 4-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\), \(\text{padding\_top}\), \(\text{padding\_bottom}\))
- Shape:
Input: \((N, C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\) where
\(H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}\)
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ReflectionPad2d(2) >>> input = torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3) >>> input tensor([[[[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]]]]) >>> m(input) tensor([[[[8., 7., 6., 7., 8., 7., 6.], [5., 4., 3., 4., 5., 4., 3.], [2., 1., 0., 1., 2., 1., 0.], [5., 4., 3., 4., 5., 4., 3.], [8., 7., 6., 7., 8., 7., 6.], [5., 4., 3., 4., 5., 4., 3.], [2., 1., 0., 1., 2., 1., 0.]]]]) >>> # using different paddings for different sides >>> m = nn.ReflectionPad2d((1, 1, 2, 0)) >>> m(input) tensor([[[[7., 6., 7., 8., 7.], [4., 3., 4., 5., 4.], [1., 0., 1., 2., 1.], [4., 3., 4., 5., 4.], [7., 6., 7., 8., 7.]]]])
ReplicationPad1d¶
-
class
torch.nn.
ReplicationPad1d
(padding: Union[T, Tuple[T, T]])[source]¶ Pads the input tensor using replication of the input boundary.
For N-dimensional padding, use torch.nn.functional.pad().
- Parameters
padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 2-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\))
- Shape:
Input: \((N, C, W_{in})\)
Output: \((N, C, W_{out})\) where
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ReplicationPad1d(2) >>> input = torch.arange(8, dtype=torch.float).reshape(1, 2, 4) >>> input tensor([[[0., 1., 2., 3.], [4., 5., 6., 7.]]]) >>> m(input) tensor([[[0., 0., 0., 1., 2., 3., 3., 3.], [4., 4., 4., 5., 6., 7., 7., 7.]]]) >>> # using different paddings for different sides >>> m = nn.ReplicationPad1d((3, 1)) >>> m(input) tensor([[[0., 0., 0., 0., 1., 2., 3., 3.], [4., 4., 4., 4., 5., 6., 7., 7.]]])
ReplicationPad2d¶
-
class
torch.nn.
ReplicationPad2d
(padding: Union[T, Tuple[T, T, T, T]])[source]¶ Pads the input tensor using replication of the input boundary.
For N-dimensional padding, use torch.nn.functional.pad().
- Parameters
padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 4-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\), \(\text{padding\_top}\), \(\text{padding\_bottom}\))
- Shape:
Input: \((N, C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\) where
\(H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}\)
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ReplicationPad2d(2) >>> input = torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3) >>> input tensor([[[[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]]]]) >>> m(input) tensor([[[[0., 0., 0., 1., 2., 2., 2.], [0., 0., 0., 1., 2., 2., 2.], [0., 0., 0., 1., 2., 2., 2.], [3., 3., 3., 4., 5., 5., 5.], [6., 6., 6., 7., 8., 8., 8.], [6., 6., 6., 7., 8., 8., 8.], [6., 6., 6., 7., 8., 8., 8.]]]]) >>> # using different paddings for different sides >>> m = nn.ReplicationPad2d((1, 1, 2, 0)) >>> m(input) tensor([[[[0., 0., 1., 2., 2.], [0., 0., 1., 2., 2.], [0., 0., 1., 2., 2.], [3., 3., 4., 5., 5.], [6., 6., 7., 8., 8.]]]])
ReplicationPad3d¶
-
class
torch.nn.
ReplicationPad3d
(padding: Union[T, Tuple[T, T, T, T, T, T]])[source]¶ Pads the input tensor using replication of the input boundary.
For N-dimensional padding, use torch.nn.functional.pad().
- Parameters
padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 6-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\), \(\text{padding\_top}\), \(\text{padding\_bottom}\), \(\text{padding\_front}\), \(\text{padding\_back}\))
- Shape:
Input: \((N, C, D_{in}, H_{in}, W_{in})\)
Output: \((N, C, D_{out}, H_{out}, W_{out})\) where
\(D_{out} = D_{in} + \text{padding\_front} + \text{padding\_back}\)
\(H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}\)
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ReplicationPad3d(3) >>> input = torch.randn(16, 3, 8, 320, 480) >>> output = m(input) >>> # using different paddings for different sides >>> m = nn.ReplicationPad3d((3, 3, 6, 6, 1, 1)) >>> output = m(input)
ZeroPad2d¶
-
class
torch.nn.
ZeroPad2d
(padding: Union[T, Tuple[T, T, T, T]])[source]¶ Pads the input tensor boundaries with zero.
For N-dimensional padding, use torch.nn.functional.pad().
- Parameters
padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 4-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\), \(\text{padding\_top}\), \(\text{padding\_bottom}\))
- Shape:
Input: \((N, C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\) where
\(H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}\)
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ZeroPad2d(2) >>> input = torch.randn(1, 1, 3, 3) >>> input tensor([[[[-0.1678, -0.4418, 1.9466], [ 0.9604, -0.4219, -0.5241], [-0.9162, -0.5436, -0.6446]]]]) >>> m(input) tensor([[[[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, -0.1678, -0.4418, 1.9466, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.9604, -0.4219, -0.5241, 0.0000, 0.0000], [ 0.0000, 0.0000, -0.9162, -0.5436, -0.6446, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]]) >>> # using different paddings for different sides >>> m = nn.ZeroPad2d((1, 1, 2, 0)) >>> m(input) tensor([[[[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000], [ 0.0000, -0.1678, -0.4418, 1.9466, 0.0000], [ 0.0000, 0.9604, -0.4219, -0.5241, 0.0000], [ 0.0000, -0.9162, -0.5436, -0.6446, 0.0000]]]])
ConstantPad1d¶
-
class
torch.nn.
ConstantPad1d
(padding: Union[T, Tuple[T, T]], value: float)[source]¶ Pads the input tensor boundaries with a constant value.
For N-dimensional padding, use torch.nn.functional.pad().
- Parameters
padding (int, tuple) – the size of the padding. If int, uses the same padding in both boundaries. If a 2-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\))
- Shape:
Input: \((N, C, W_{in})\)
Output: \((N, C, W_{out})\) where
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ConstantPad1d(2, 3.5) >>> input = torch.randn(1, 2, 4) >>> input tensor([[[-1.0491, -0.7152, -0.0749, 0.8530], [-1.3287, 1.8966, 0.1466, -0.2771]]]) >>> m(input) tensor([[[ 3.5000, 3.5000, -1.0491, -0.7152, -0.0749, 0.8530, 3.5000, 3.5000], [ 3.5000, 3.5000, -1.3287, 1.8966, 0.1466, -0.2771, 3.5000, 3.5000]]]) >>> m = nn.ConstantPad1d(2, 3.5) >>> input = torch.randn(1, 2, 3) >>> input tensor([[[ 1.6616, 1.4523, -1.1255], [-3.6372, 0.1182, -1.8652]]]) >>> m(input) tensor([[[ 3.5000, 3.5000, 1.6616, 1.4523, -1.1255, 3.5000, 3.5000], [ 3.5000, 3.5000, -3.6372, 0.1182, -1.8652, 3.5000, 3.5000]]]) >>> # using different paddings for different sides >>> m = nn.ConstantPad1d((3, 1), 3.5) >>> m(input) tensor([[[ 3.5000, 3.5000, 3.5000, 1.6616, 1.4523, -1.1255, 3.5000], [ 3.5000, 3.5000, 3.5000, -3.6372, 0.1182, -1.8652, 3.5000]]])
ConstantPad2d¶
-
class
torch.nn.
ConstantPad2d
(padding: Union[T, Tuple[T, T, T, T]], value: float)[source]¶ Pads the input tensor boundaries with a constant value.
For N-dimensional padding, use torch.nn.functional.pad().
- Parameters
padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 4-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\), \(\text{padding\_top}\), \(\text{padding\_bottom}\))
- Shape:
Input: \((N, C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\) where
\(H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}\)
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ConstantPad2d(2, 3.5) >>> input = torch.randn(1, 2, 2) >>> input tensor([[[ 1.6585, 0.4320], [-0.8701, -0.4649]]]) >>> m(input) tensor([[[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000], [ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000], [ 3.5000, 3.5000, 1.6585, 0.4320, 3.5000, 3.5000], [ 3.5000, 3.5000, -0.8701, -0.4649, 3.5000, 3.5000], [ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000], [ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000]]]) >>> # using different paddings for different sides >>> m = nn.ConstantPad2d((3, 0, 2, 1), 3.5) >>> m(input) tensor([[[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000], [ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000], [ 3.5000, 3.5000, 3.5000, 1.6585, 0.4320], [ 3.5000, 3.5000, 3.5000, -0.8701, -0.4649], [ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000]]])
ConstantPad3d¶
-
class
torch.nn.
ConstantPad3d
(padding: Union[T, Tuple[T, T, T, T, T, T]], value: float)[source]¶ Pads the input tensor boundaries with a constant value.
For N-dimensional padding, use torch.nn.functional.pad().
- Parameters
padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 6-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\), \(\text{padding\_top}\), \(\text{padding\_bottom}\), \(\text{padding\_front}\), \(\text{padding\_back}\))
- Shape:
Input: \((N, C, D_{in}, H_{in}, W_{in})\)
Output: \((N, C, D_{out}, H_{out}, W_{out})\) where
\(D_{out} = D_{in} + \text{padding\_front} + \text{padding\_back}\)
\(H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}\)
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ConstantPad3d(3, 3.5) >>> input = torch.randn(16, 3, 10, 20, 30) >>> output = m(input) >>> # using different paddings for different sides >>> m = nn.ConstantPad3d((3, 3, 6, 6, 0, 1), 3.5) >>> output = m(input)
Non-linear activations (weighted sum, nonlinearity)¶
ELU¶
-
class
torch.nn.
ELU
(alpha: float = 1.0, inplace: bool = False)[source]¶ Applies the element-wise function:
\[\text{ELU}(x) = \begin{cases} x, & \text{ if } x > 0\\ \alpha * (\exp(x) - 1), & \text{ if } x \leq 0 \end{cases} \]- Parameters
alpha – the \(\alpha\) value for the ELU formulation. Default: 1.0
inplace – can optionally do the operation in-place. Default: False
- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.ELU() >>> input = torch.randn(2) >>> output = m(input)
Hardshrink¶
-
class
torch.nn.
Hardshrink
(lambd: float = 0.5)[source]¶ Applies the hard shrinkage function element-wise:
\[\text{HardShrink}(x) = \begin{cases} x, & \text{ if } x > \lambda \\ x, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases} \]- Parameters
lambd – the \(\lambda\) value for the Hardshrink formulation. Default: 0.5
- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Hardshrink() >>> input = torch.randn(2) >>> output = m(input)
Hardtanh¶
-
class
torch.nn.
Hardtanh
(min_val: float = -1.0, max_val: float = 1.0, inplace: bool = False, min_value: Optional[float] = None, max_value: Optional[float] = None)[source]¶ Applies the HardTanh function element-wise.
HardTanh is defined as:
\[\text{HardTanh}(x) = \begin{cases} 1 & \text{ if } x > 1 \\ -1 & \text{ if } x < -1 \\ x & \text{ otherwise } \\ \end{cases} \]
The range of the linear region \([-1, 1]\) can be adjusted using min_val and max_val.
- Parameters
min_val – minimum value of the linear region range. Default: -1
max_val – maximum value of the linear region range. Default: 1
inplace – can optionally do the operation in-place. Default: False
Keyword arguments min_value and max_value have been deprecated in favor of min_val and max_val.
- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Hardtanh(-2, 2) >>> input = torch.randn(2) >>> output = m(input)
LeakyReLU¶
-
class
torch.nn.
LeakyReLU
(negative_slope: float = 0.01, inplace: bool = False)[source]¶ Applies the element-wise function:
\[\text{LeakyReLU}(x) = \max(0, x) + \text{negative\_slope} * \min(0, x) \]
or
\[\text{LeakyReLU}(x) = \begin{cases} x, & \text{ if } x \geq 0 \\ \text{negative\_slope} \times x, & \text{ otherwise } \end{cases} \]
- Parameters
negative_slope – Controls the angle of the negative slope. Default: 1e-2
inplace – can optionally do the operation in-place. Default: False
- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.LeakyReLU(0.1) >>> input = torch.randn(2) >>> output = m(input)
LogSigmoid¶
-
class
torch.nn.
LogSigmoid
[source]¶ Applies the element-wise function:
\[\text{LogSigmoid}(x) = \log\left(\frac{ 1 }{ 1 + \exp(-x)}\right) \]- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.LogSigmoid() >>> input = torch.randn(2) >>> output = m(input)
PReLU¶
-
class
torch.nn.
PReLU
(num_parameters: int = 1, init: float = 0.25)[source]¶ Applies the element-wise function:
\[\text{PReLU}(x) = \max(0,x) + a * \min(0,x) \]or
\[\text{PReLU}(x) = \begin{cases} x, & \text{ if } x \geq 0 \\ ax, & \text{ otherwise } \end{cases} \]Here \(a\) is a learnable parameter. When called without arguments, nn.PReLU() uses a single parameter \(a\) across all input channels. If called with nn.PReLU(nChannels), a separate \(a\) is used for each input channel.
Note
weight decay should not be used when learning \(a\) for good performance.
Note
Channel dim is the 2nd dim of input. When input has dims < 2, then there is no channel dim and the number of channels = 1.
- Parameters
num_parameters (int) – number of \(a\) to learn. Although it takes an int as input, there are only two legitimate values: 1, or the number of channels at input. Default: 1
init (float) – the initial value of \(a\). Default: 0.25
- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
- Variables
~PReLU.weight (Tensor) – the learnable weights of shape (num_parameters).
Examples:
>>> m = nn.PReLU() >>> input = torch.randn(2) >>> output = m(input)
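A hedged sketch of the per-channel form: nn.PReLU(nChannels) learns one \(a\) per channel (the channel dim being dim 1 of the input), broadcast across the remaining dimensions.

>>> m = nn.PReLU(3)  # one learnable a per channel
>>> m.weight.shape
torch.Size([3])
>>> input = torch.randn(1, 3, 4)
>>> output = m(input)  # a is broadcast along the channel dimension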
ReLU¶
-
class
torch.nn.
ReLU
(inplace: bool = False)[source]¶ Applies the rectified linear unit function element-wise:
\(\text{ReLU}(x) = (x)^+ = \max(0, x)\)
- Parameters
inplace – can optionally do the operation in-place. Default: False
- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.ReLU()
>>> input = torch.randn(2)
>>> output = m(input)

>>> # An implementation of CReLU - https://arxiv.org/abs/1603.05201
>>> m = nn.ReLU()
>>> input = torch.randn(2).unsqueeze(0)
>>> output = torch.cat((m(input), m(-input)))
ReLU6¶
-
class
torch.nn.
ReLU6
(inplace: bool = False)[source]¶ Applies the element-wise function:
\[\text{ReLU6}(x) = \min(\max(0,x), 6) \]- Parameters
inplace – can optionally do the operation in-place. Default: False
- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.ReLU6() >>> input = torch.randn(2) >>> output = m(input)
RReLU¶
-
class
torch.nn.
RReLU
(lower: float = 0.125, upper: float = 0.3333333333333333, inplace: bool = False)[source]¶ Applies the randomized leaky rectified linear unit function, element-wise, as described in the paper:
Empirical Evaluation of Rectified Activations in Convolutional Network.
The function is defined as:
\[\text{RReLU}(x) = \begin{cases} x & \text{if } x \geq 0 \\ ax & \text{ otherwise } \end{cases} \]where \(a\) is randomly sampled from uniform distribution \(\mathcal{U}(\text{lower}, \text{upper})\).
- Parameters
lower – lower bound of the uniform distribution. Default: \(\frac{1}{8}\)
upper – upper bound of the uniform distribution. Default: \(\frac{1}{3}\)
inplace – can optionally do the operation in-place. Default: False
- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.RReLU(0.1, 0.3) >>> input = torch.randn(2) >>> output = m(input)
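Note that \(a\) is only random during training; in evaluation mode the implementation uses the fixed value \(\frac{\text{lower} + \text{upper}}{2}\). A hedged sketch of this behavior (assuming that averaging rule):

>>> m = nn.RReLU(0.1, 0.3)
>>> m = m.eval()  # switch off the randomness
>>> x = torch.tensor([-1.0, 1.0])
>>> torch.allclose(m(x), torch.tensor([-0.2, 1.0]))  # a = (0.1 + 0.3) / 2
True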
SELU¶
-
class
torch.nn.
SELU
(inplace: bool = False)[source]¶ Applied element-wise, as:
\[\text{SELU}(x) = \text{scale} * (\max(0,x) + \min(0, \alpha * (\exp(x) - 1))) \]with \(\alpha = 1.6732632423543772848170429916717\) and \(\text{scale} = 1.0507009873554804934193349852946\).
More details can be found in the paper Self-Normalizing Neural Networks .
- Parameters
inplace (bool, optional) – can optionally do the operation in-place. Default: False
- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.SELU() >>> input = torch.randn(2) >>> output = m(input)
CELU¶
-
class
torch.nn.
CELU
(alpha: float = 1.0, inplace: bool = False)[source]¶ Applies the element-wise function:
\[\text{CELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x/\alpha) - 1)) \]More details can be found in the paper Continuously Differentiable Exponential Linear Units .
- Parameters
alpha – the \(\alpha\) value for the CELU formulation. Default: 1.0
inplace – can optionally do the operation in-place. Default: False
- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.CELU() >>> input = torch.randn(2) >>> output = m(input)
Sigmoid¶
-
class
torch.nn.
Sigmoid
[source]¶ Applies the element-wise function:
\[\text{Sigmoid}(x) = \sigma(x) = \frac{1}{1 + \exp(-x)} \]- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Sigmoid() >>> input = torch.randn(2) >>> output = m(input)
Softplus¶
-
class
torch.nn.
Softplus
(beta: int = 1, threshold: int = 20)[source]¶ Applies the element-wise function:
\[\text{Softplus}(x) = \frac{1}{\beta} * \log(1 + \exp(\beta * x)) \]SoftPlus is a smooth approximation to the ReLU function and can be used to constrain the output of a machine to always be positive.
For numerical stability the implementation reverts to the linear function when \(input \times \beta > threshold\).
- Parameters
beta – the \(\beta\) value for the Softplus formulation. Default: 1
threshold – values above this revert to a linear function. Default: 20
- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Softplus() >>> input = torch.randn(2) >>> output = m(input)
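A hedged sketch of the numerical-stability revert described above: once \(input \times \beta\) exceeds threshold, the module passes inputs through as the linear function.

>>> m = nn.Softplus()  # beta=1, threshold=20
>>> x = torch.tensor([30.0])
>>> torch.allclose(m(x), x)  # beyond the threshold the module is linear
True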
Softshrink¶
-
class
torch.nn.
Softshrink
(lambd: float = 0.5)[source]¶ Applies the soft shrinkage function elementwise:
\[\text{SoftShrinkage}(x) = \begin{cases} x - \lambda, & \text{ if } x > \lambda \\ x + \lambda, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases} \]- Parameters
lambd – the \(\lambda\) (must be no less than zero) value for the Softshrink formulation. Default: 0.5
- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Softshrink() >>> input = torch.randn(2) >>> output = m(input)
Softsign¶
-
class
torch.nn.
Softsign
[source]¶ Applies the element-wise function:
\[\text{SoftSign}(x) = \frac{x}{ 1 + |x|} \]- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Softsign() >>> input = torch.randn(2) >>> output = m(input)
Tanh¶
-
class
torch.nn.
Tanh
[source]¶ Applies the element-wise function:
\[\text{Tanh}(x) = \tanh(x) = \frac{\exp(x) - \exp(-x)} {\exp(x) + \exp(-x)} \]- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Tanh() >>> input = torch.randn(2) >>> output = m(input)
Tanhshrink¶
-
class
torch.nn.
Tanhshrink
[source]¶ Applies the element-wise function:
\[\text{Tanhshrink}(x) = x - \tanh(x) \]- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Tanhshrink() >>> input = torch.randn(2) >>> output = m(input)
Threshold¶
-
class
torch.nn.
Threshold
(threshold: float, value: float, inplace: bool = False)[source]¶ Thresholds each element of the input Tensor.
Threshold is defined as:
\[y = \begin{cases} x, &\text{ if } x > \text{threshold} \\ \text{value}, &\text{ otherwise } \end{cases} \]- Parameters
threshold – The value to threshold at
value – The value to replace with
inplace – can optionally do the operation in-place. Default: False
- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Threshold(0.1, 20) >>> input = torch.randn(2) >>> output = m(input)
Non-linear activations (other)¶
Softmin¶
-
class
torch.nn.
Softmin
(dim: Optional[int] = None)[source]¶ Applies the Softmin function to an n-dimensional input Tensor rescaling them so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1.
Softmin is defined as:
\[\text{Softmin}(x_{i}) = \frac{\exp(-x_i)}{\sum_j \exp(-x_j)} \]- Shape:
Input: \((*)\) where * means any number of additional dimensions
Output: \((*)\), same shape as the input
- Parameters
dim (int) – A dimension along which Softmin will be computed (so every slice along dim will sum to 1).
- Returns
a Tensor of the same dimension and shape as the input, with values in the range [0, 1]
Examples:
>>> m = nn.Softmin(dim=1)
>>> input = torch.randn(2, 3)
>>> output = m(input)
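Since the exponent is negated, Softmin(x) is the same as Softmax(-x); a quick check (a sketch, not part of the official examples):
>>> x = torch.randn(2, 3)
>>> torch.allclose(nn.Softmin(dim=1)(x), nn.Softmax(dim=1)(-x))
True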
Softmax¶
-
class
torch.nn.
Softmax
(dim: Optional[int] = None)[source]¶ Applies the Softmax function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0,1] and sum to 1.
Softmax is defined as:
\[\text{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)} \]When the input Tensor is a sparse tensor, the unspecified values are treated as
-inf
.- Shape:
Input: \((*)\) where * means any number of additional dimensions
Output: \((*)\), same shape as the input
- Returns
a Tensor of the same dimension and shape as the input with values in the range [0, 1]
- Parameters
dim (int) – A dimension along which Softmax will be computed (so every slice along dim will sum to 1).
Note
This module doesn’t work directly with NLLLoss, which expects log-probabilities as its input; the Log has to be computed between the Softmax and the loss. Use LogSoftmax instead (it’s faster and has better numerical properties).
Examples:
>>> m = nn.Softmax(dim=1)
>>> input = torch.randn(2, 3)
>>> output = m(input)
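Following the note above, a minimal sketch of the intended pairing of LogSoftmax with NLLLoss (shapes are illustrative):
>>> m = nn.LogSoftmax(dim=1)
>>> loss_fn = nn.NLLLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.tensor([1, 0, 4])
>>> loss_fn(m(input), target).backward()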
Softmax2d¶
-
class
torch.nn.
Softmax2d
[source]¶ Applies SoftMax over features to each spatial location.
When given an image of
Channels x Height x Width
, it will apply Softmax to each location \((Channels, h_i, w_j)\)- Shape:
Input: \((N, C, H, W)\)
Output: \((N, C, H, W)\) (same shape as input)
- Returns
a Tensor of the same dimension and shape as the input with values in the range [0, 1]
Examples:
>>> m = nn.Softmax2d()
>>> # softmax is applied over the channel dimension (the 2nd dimension)
>>> input = torch.randn(2, 3, 12, 13)
>>> output = m(input)
LogSoftmax¶
-
class
torch.nn.
LogSoftmax
(dim: Optional[int] = None)[source]¶ Applies the \(\log(\text{Softmax}(x))\) function to an n-dimensional input Tensor. The LogSoftmax formulation can be simplified as:
\[\text{LogSoftmax}(x_{i}) = \log\left(\frac{\exp(x_i) }{ \sum_j \exp(x_j)} \right) \]- Shape:
Input: \((*)\) where * means any number of additional dimensions
Output: \((*)\), same shape as the input
- Parameters
dim (int) – A dimension along which LogSoftmax will be computed.
- Returns
a Tensor of the same dimension and shape as the input with values in the range [-inf, 0)
Examples:
>>> m = nn.LogSoftmax(dim=1)
>>> input = torch.randn(2, 3)
>>> output = m(input)
AdaptiveLogSoftmaxWithLoss¶
-
class
torch.nn.
AdaptiveLogSoftmaxWithLoss
(in_features: int, n_classes: int, cutoffs: Sequence[int], div_value: float = 4.0, head_bias: bool = False)[source]¶ Efficient softmax approximation as described in Efficient softmax approximation for GPUs by Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou.
Adaptive softmax is an approximate strategy for training models with large output spaces. It is most effective when the label distribution is highly imbalanced, for example in natural language modelling, where the word frequency distribution approximately follows Zipf’s law.
Adaptive softmax partitions the labels into several clusters, according to their frequency. These clusters may contain different numbers of targets each. Additionally, clusters containing less frequent labels assign lower-dimensional embeddings to those labels, which speeds up the computation. For each minibatch, only clusters for which at least one target is present are evaluated.
The idea is that clusters which are accessed frequently (like the first one, containing the most frequent labels) should also be cheap to compute – that is, contain a small number of assigned labels.
We highly recommend taking a look at the original paper for more details.
cutoffs
should be an ordered Sequence of integers sorted in increasing order. It controls the number of clusters and the partitioning of targets into clusters. For example, setting cutoffs = [10, 100, 1000]
means that the first 10 targets will be assigned to the ‘head’ of the adaptive softmax, targets 11, 12, …, 100 will be assigned to the first cluster, and targets 101, 102, …, 1000 will be assigned to the second cluster, while targets 1001, 1002, …, n_classes - 1 will be assigned to the last, third cluster.div_value
is used to compute the size of each additional cluster, which is given as \(\left\lfloor\frac{\texttt{in\_features}}{\texttt{div\_value}^{idx}}\right\rfloor\), where \(idx\) is the cluster index (with clusters for less frequent words having larger indices, and indices starting from \(1\)).head_bias
if set to True, adds a bias term to the ‘head’ of the adaptive softmax. See paper for details. Set to False in the official implementation.
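For instance, a minimal usage sketch (the sizes below are illustrative assumptions, not taken from the paper):
>>> asm = nn.AdaptiveLogSoftmaxWithLoss(64, 1000, cutoffs=[10, 100, 500])
>>> input = torch.randn(128, 64)
>>> target = torch.randint(0, 1000, (128,))
>>> out, loss = asm(input, target)
>>> out.shape, loss.shape
(torch.Size([128]), torch.Size([]))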
Warning
Labels passed as inputs to this module should be sorted according to their frequency. This means that the most frequent label should be represented by the index 0, and the least frequent label should be represented by the index n_classes - 1.
Note
This module returns a
NamedTuple
withoutput
andloss
fields. See further documentation for details.Note
To compute log-probabilities for all classes, the
log_prob
method can be used.- Parameters
in_features (int) – Number of features in the input tensor
n_classes (int) – Number of classes in the dataset
cutoffs (Sequence) – Cutoffs used to assign targets to their buckets
div_value (float, optional) – value used as an exponent to compute sizes of the clusters. Default: 4.0
head_bias (bool, optional) – If
True
, adds a bias term to the ‘head’ of the adaptive softmax. Default:False
- Returns
output is a Tensor of size
N
containing computed target log probabilities for each exampleloss is a Scalar representing the computed negative log likelihood loss
- Return type
NamedTuple
withoutput
andloss
fields
- Shape:
input: \((N, \texttt{in\_features})\)
target: \((N)\) where each value satisfies \(0 <= \texttt{target[i]} < \texttt{n\_classes}\)
output1: \((N)\)
output2:
Scalar
-
log_prob
(input: torch.Tensor) → torch.Tensor[source]¶ Computes log probabilities for all \(\texttt{n\_classes}\)
- Parameters
input (Tensor) – a minibatch of examples
- Returns
log-probabilities for each class \(c\) in the range \(0 <= c < \texttt{n\_classes}\), where \(\texttt{n\_classes}\) is a parameter passed to
AdaptiveLogSoftmaxWithLoss
constructor.
- Shape:
Input: \((N, \texttt{in\_features})\)
Output: \((N, \texttt{n\_classes})\)
-
predict
(input: torch.Tensor) → torch.Tensor[source]¶ This is equivalent to self.log_prob(input).argmax(dim=1), but is more efficient in some cases.
- Parameters
input (Tensor) – a minibatch of examples
- Returns
a class with the highest probability for each example
- Return type
output (Tensor)
- Shape:
Input: \((N, \texttt{in\_features})\)
Output: \((N)\)
Normalization layers¶
BatchNorm1d¶
-
class
torch.nn.
BatchNorm1d
(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)[source]¶ Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]The mean and standard-deviation are calculated per-dimension over the mini-batches and \(\gamma\) and \(\beta\) are learnable parameter vectors of size C (where C is the input size). By default, the elements of \(\gamma\) are set to 1 and the elements of \(\beta\) are set to 0. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).
Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default
momentum
of 0.1.If
track_running_stats
is set toFalse
, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.Note
This
momentum
argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.Because the Batch Normalization is done over the C dimension, computing statistics on (N, L) slices, it’s common terminology to call this Temporal Batch Normalization.
- Parameters
num_features – \(C\) from an expected input of size \((N, C, L)\) or \(L\) from input of size \((N, L)\)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Can be set to
None
for cumulative moving average (i.e. simple average). Default: 0.1affine – a boolean value that when set to
True
, this module has learnable affine parameters. Default:True
track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set toFalse
, this module does not track such statistics, and initializes statistics buffersrunning_mean
andrunning_var
asNone
. When these buffers areNone
, this module always uses batch statistics in both training and eval modes. Default: True
- Shape:
Input: \((N, C)\) or \((N, C, L)\)
Output: \((N, C)\) or \((N, C, L)\) (same shape as input)
Examples:
>>> # With Learnable Parameters
>>> m = nn.BatchNorm1d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm1d(100, affine=False)
>>> input = torch.randn(20, 100)
>>> output = m(input)
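To make the momentum update rule in the note above concrete (a sketch; running_mean starts at zero and the batch mean here is 2.0):
>>> m = nn.BatchNorm1d(1, momentum=0.1)
>>> _ = m(torch.full((4, 1), 2.0))
>>> m.running_mean  # (1 - 0.1) * 0 + 0.1 * 2.0
tensor([0.2000])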
BatchNorm2d¶
-
class
torch.nn.
BatchNorm2d
(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)[source]¶ Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]The mean and standard-deviation are calculated per-dimension over the mini-batches and \(\gamma\) and \(\beta\) are learnable parameter vectors of size C (where C is the input size). By default, the elements of \(\gamma\) are set to 1 and the elements of \(\beta\) are set to 0. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).
Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default
momentum
of 0.1.If
track_running_stats
is set toFalse
, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.Note
This
momentum
argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.Because the Batch Normalization is done over the C dimension, computing statistics on (N, H, W) slices, it’s common terminology to call this Spatial Batch Normalization.
- Parameters
num_features – \(C\) from an expected input of size \((N, C, H, W)\)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Can be set to
None
for cumulative moving average (i.e. simple average). Default: 0.1affine – a boolean value that when set to
True
, this module has learnable affine parameters. Default:True
track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set toFalse
, this module does not track such statistics, and initializes statistics buffersrunning_mean
andrunning_var
asNone
. When these buffers areNone
, this module always uses batch statistics in both training and eval modes. Default: True
- Shape:
Input: \((N, C, H, W)\)
Output: \((N, C, H, W)\) (same shape as input)
Examples:
>>> # With Learnable Parameters
>>> m = nn.BatchNorm2d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm2d(100, affine=False)
>>> input = torch.randn(20, 100, 35, 45)
>>> output = m(input)
BatchNorm3d¶
-
class
torch.nn.
BatchNorm3d
(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)[source]¶ Applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]The mean and standard-deviation are calculated per-dimension over the mini-batches and \(\gamma\) and \(\beta\) are learnable parameter vectors of size C (where C is the input size). By default, the elements of \(\gamma\) are set to 1 and the elements of \(\beta\) are set to 0. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).
Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default
momentum
of 0.1.If
track_running_stats
is set toFalse
, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.Note
This
momentum
argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.Because the Batch Normalization is done over the C dimension, computing statistics on (N, D, H, W) slices, it’s common terminology to call this Volumetric Batch Normalization or Spatio-temporal Batch Normalization.
- Parameters
num_features – \(C\) from an expected input of size \((N, C, D, H, W)\)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Can be set to
None
for cumulative moving average (i.e. simple average). Default: 0.1affine – a boolean value that when set to
True
, this module has learnable affine parameters. Default:True
track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set toFalse
, this module does not track such statistics, and initializes statistics buffersrunning_mean
andrunning_var
asNone
. When these buffers areNone
, this module always uses batch statistics in both training and eval modes. Default: True
- Shape:
Input: \((N, C, D, H, W)\)
Output: \((N, C, D, H, W)\) (same shape as input)
Examples:
>>> # With Learnable Parameters
>>> m = nn.BatchNorm3d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm3d(100, affine=False)
>>> input = torch.randn(20, 100, 35, 45, 10)
>>> output = m(input)
GroupNorm¶
-
class
torch.nn.
GroupNorm
(num_groups: int, num_channels: int, eps: float = 1e-05, affine: bool = True)[source]¶ Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta \]The input channels are separated into
num_groups
groups, each containingnum_channels / num_groups
channels. The mean and standard-deviation are calculated separately over each group. \(\gamma\) and \(\beta\) are learnable per-channel affine transform parameter vectors of sizenum_channels
ifaffine
isTrue
. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).This layer uses statistics computed from input data in both training and evaluation modes.
- Parameters
num_groups (int) – number of groups to separate the channels into
num_channels (int) – number of channels expected in input
eps – a value added to the denominator for numerical stability. Default: 1e-5
affine – a boolean value that when set to
True
, this module has learnable per-channel affine parameters initialized to ones (for weights) and zeros (for biases). Default:True
.
- Shape:
Input: \((N, C, *)\) where \(C=\text{num\_channels}\)
Output: \((N, C, *)\) (same shape as input)
Examples:
>>> input = torch.randn(20, 6, 10, 10)
>>> # Separate 6 channels into 3 groups
>>> m = nn.GroupNorm(3, 6)
>>> # Separate 6 channels into 6 groups (equivalent with InstanceNorm)
>>> m = nn.GroupNorm(6, 6)
>>> # Put all 6 channels into a single group (equivalent with LayerNorm)
>>> m = nn.GroupNorm(1, 6)
>>> # Activating the module
>>> output = m(input)
InstanceNorm1d¶
-
class
torch.nn.
InstanceNorm1d
(num_features: int, eps: float = 1e-05, momentum: float = 0.1, affine: bool = False, track_running_stats: bool = False)[source]¶ Applies Instance Normalization over a 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. \(\gamma\) and \(\beta\) are learnable parameter vectors of size C (where C is the input size) if
affine
isTrue
. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).By default, this layer uses instance statistics computed from input data in both training and evaluation modes.
If
track_running_stats
is set toTrue
, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a defaultmomentum
of 0.1.Note
This
momentum
argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.Note
InstanceNorm1d
andLayerNorm
are very similar, but have some subtle differences.InstanceNorm1d
is applied on each channel of channeled data like multidimensional time series, butLayerNorm
is usually applied on entire sample and often in NLP tasks. Additionally,LayerNorm
applies elementwise affine transform, whileInstanceNorm1d
usually don’t apply affine transform.- Parameters
num_features – \(C\) from an expected input of size \((N, C, L)\) or \(L\) from input of size \((N, L)\)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Default: 0.1
affine – a boolean value that when set to
True
, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default:False
.track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set toFalse
, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default:False
- Shape:
Input: \((N, C, L)\)
Output: \((N, C, L)\) (same shape as input)
Examples:
>>> # Without Learnable Parameters
>>> m = nn.InstanceNorm1d(100)
>>> # With Learnable Parameters
>>> m = nn.InstanceNorm1d(100, affine=True)
>>> input = torch.randn(20, 100, 40)
>>> output = m(input)
InstanceNorm2d¶
-
class
torch.nn.
InstanceNorm2d
(num_features: int, eps: float = 1e-05, momentum: float = 0.1, affine: bool = False, track_running_stats: bool = False)[source]¶ Applies Instance Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. \(\gamma\) and \(\beta\) are learnable parameter vectors of size C (where C is the input size) if
affine
isTrue
. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).By default, this layer uses instance statistics computed from input data in both training and evaluation modes.
If
track_running_stats
is set toTrue
, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a defaultmomentum
of 0.1.Note
This
momentum
argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.Note
InstanceNorm2d
andLayerNorm
are very similar, but have some subtle differences.InstanceNorm2d
is applied on each channel of channeled data like RGB images, butLayerNorm
is usually applied on entire sample and often in NLP tasks. Additionally,LayerNorm
applies elementwise affine transform, whileInstanceNorm2d
usually don’t apply affine transform.- Parameters
num_features – \(C\) from an expected input of size \((N, C, H, W)\)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Default: 0.1
affine – a boolean value that when set to
True
, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default:False
.track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set toFalse
, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default:False
- Shape:
Input: \((N, C, H, W)\)
Output: \((N, C, H, W)\) (same shape as input)
Examples:
>>> # Without Learnable Parameters
>>> m = nn.InstanceNorm2d(100)
>>> # With Learnable Parameters
>>> m = nn.InstanceNorm2d(100, affine=True)
>>> input = torch.randn(20, 100, 35, 45)
>>> output = m(input)
InstanceNorm3d¶
-
class
torch.nn.
InstanceNorm3d
(num_features: int, eps: float = 1e-05, momentum: float = 0.1, affine: bool = False, track_running_stats: bool = False)[source]¶ Applies Instance Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. \(\gamma\) and \(\beta\) are learnable parameter vectors of size C (where C is the input size) if
affine
isTrue
. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).By default, this layer uses instance statistics computed from input data in both training and evaluation modes.
If
track_running_stats
is set toTrue
, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a defaultmomentum
of 0.1.Note
This
momentum
argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.Note
InstanceNorm3d
andLayerNorm
are very similar, but have some subtle differences.InstanceNorm3d
is applied on each channel of channeled data like 3D models with RGB color, butLayerNorm
is usually applied on entire sample and often in NLP tasks. Additionally,LayerNorm
applies elementwise affine transform, whileInstanceNorm3d
usually don’t apply affine transform.- Parameters
num_features – \(C\) from an expected input of size \((N, C, D, H, W)\)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Default: 0.1
affine – a boolean value that when set to
True
, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default:False
.track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set toFalse
, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default:False
- Shape:
Input: \((N, C, D, H, W)\)
Output: \((N, C, D, H, W)\) (same shape as input)
Examples:
>>> # Without Learnable Parameters
>>> m = nn.InstanceNorm3d(100)
>>> # With Learnable Parameters
>>> m = nn.InstanceNorm3d(100, affine=True)
>>> input = torch.randn(20, 100, 35, 45, 10)
>>> output = m(input)
LayerNorm¶
-
class
torch.nn.
LayerNorm
(normalized_shape: Union[int, List[int], torch.Size], eps: float = 1e-05, elementwise_affine: bool = True)[source]¶ Applies Layer Normalization over a mini-batch of inputs as described in the paper Layer Normalization
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta \]The mean and standard-deviation are calculated separately over the last certain number dimensions which have to be of the shape specified by
normalized_shape
. \(\gamma\) and \(\beta\) are learnable affine transform parameters ofnormalized_shape
ifelementwise_affine
isTrue
. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).Note
Unlike Batch Normalization and Instance Normalization, which applies scalar scale and bias for each entire channel/plane with the
affine
option, Layer Normalization applies per-element scale and bias withelementwise_affine
.This layer uses statistics computed from input data in both training and evaluation modes.
- Parameters
normalized_shape (int or list or torch.Size) –
input shape from an expected input of size
\[[* \times \text{normalized\_shape}[0] \times \text{normalized\_shape}[1] \times \ldots \times \text{normalized\_shape}[-1]] \]If a single integer is used, it is treated as a singleton list, and this module will normalize over the last dimension which is expected to be of that specific size.
eps – a value added to the denominator for numerical stability. Default: 1e-5
elementwise_affine – a boolean value that when set to
True
, this module has learnable per-element affine parameters initialized to ones (for weights) and zeros (for biases). Default:True
.
- Shape:
Input: \((N, *)\)
Output: \((N, *)\) (same shape as input)
Examples:
>>> input = torch.randn(20, 5, 10, 10)
>>> # With Learnable Parameters
>>> m = nn.LayerNorm(input.size()[1:])
>>> # Without Learnable Parameters
>>> m = nn.LayerNorm(input.size()[1:], elementwise_affine=False)
>>> # Normalize over last two dimensions
>>> m = nn.LayerNorm([10, 10])
>>> # Normalize over last dimension of size 10
>>> m = nn.LayerNorm(10)
>>> # Activating the module
>>> output = m(input)
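A quick check of the formula, normalizing over the last dimension with the biased variance estimator (a sketch, not part of the official examples):
>>> x = torch.randn(2, 10)
>>> ln = nn.LayerNorm(10, elementwise_affine=False)
>>> ref = (x - x.mean(-1, keepdim=True)) / torch.sqrt(x.var(-1, unbiased=False, keepdim=True) + 1e-5)
>>> torch.allclose(ln(x), ref, atol=1e-6)
True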
LocalResponseNorm¶
-
class
torch.nn.
LocalResponseNorm
(size: int, alpha: float = 0.0001, beta: float = 0.75, k: float = 1.0)[source]¶ Applies local response normalization over an input signal composed of several input planes, where channels occupy the second dimension. Applies normalization across channels.
\[b_{c} = a_{c}\left(k + \frac{\alpha}{n} \sum_{c'=\max(0, c-n/2)}^{\min(N-1,c+n/2)}a_{c'}^2\right)^{-\beta} \]- Parameters
size – amount of neighbouring channels used for normalization
alpha – multiplicative factor. Default: 0.0001
beta – exponent. Default: 0.75
k – additive factor. Default: 1
- Shape:
Input: \((N, C, *)\)
Output: \((N, C, *)\) (same shape as input)
Examples:
>>> lrn = nn.LocalResponseNorm(2)
>>> signal_2d = torch.randn(32, 5, 24, 24)
>>> signal_4d = torch.randn(16, 5, 7, 7, 7, 7)
>>> output_2d = lrn(signal_2d)
>>> output_4d = lrn(signal_4d)
Recurrent layers¶
RNN¶
-
class
torch.nn.
RNN
(*args, **kwargs)[source]¶ Applies a multi-layer Elman RNN with \(\tanh\) or \(\text{ReLU}\) non-linearity to an input sequence.
For each element in the input sequence, each layer computes the following function:
\[h_t = \tanh(W_{ih} x_t + b_{ih} + W_{hh} h_{(t-1)} + b_{hh}) \]where \(h_t\) is the hidden state at time t, \(x_t\) is the input at time t, and \(h_{(t-1)}\) is the hidden state of the previous layer at time t-1 or the initial hidden state at time 0. If
nonlinearity
is'relu'
, then \(\text{ReLU}\) is used instead of \(\tanh\).- Parameters
input_size – The number of expected features in the input x
hidden_size – The number of features in the hidden state h
num_layers – Number of recurrent layers. E.g., setting
num_layers=2
would mean stacking two RNNs together to form a stacked RNN, with the second RNN taking in outputs of the first RNN and computing the final results. Default: 1nonlinearity – The non-linearity to use. Can be either
'tanh'
or'relu'
. Default:'tanh'
bias – If
False
, then the layer does not use bias weights b_ih and b_hh. Default:True
batch_first – If
True
, then the input and output tensors are provided as (batch, seq, feature). Default:False
dropout – If non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last layer, with dropout probability equal to
dropout
. Default: 0bidirectional – If
True
, becomes a bidirectional RNN. Default:False
- Inputs: input, h_0
input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See
torch.nn.utils.rnn.pack_padded_sequence()
ortorch.nn.utils.rnn.pack_sequence()
for details.h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1.
- Outputs: output, h_n
output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features (h_t) from the last layer of the RNN, for each t. If a
torch.nn.utils.rnn.PackedSequence
has been given as the input, the output will also be a packed sequence.For the unpacked case, the directions can be separated using
output.view(seq_len, batch, num_directions, hidden_size)
, with forward and backward being direction 0 and 1 respectively. Similarly, the directions can be separated in the packed case.h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len.
Like output, the layers can be separated using
h_n.view(num_layers, num_directions, batch, hidden_size)
.
- Shape:
Input1: \((L, N, H_{in})\) tensor containing input features where \(H_{in}=\text{input\_size}\) and L represents a sequence length.
Input2: \((S, N, H_{out})\) tensor containing the initial hidden state for each element in the batch, where \(S=\text{num\_layers} * \text{num\_directions}\) and \(H_{out}=\text{hidden\_size}\). Defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1.
Output1: \((L, N, H_{all})\) where \(H_{all}=\text{num\_directions} * \text{hidden\_size}\)
Output2: \((S, N, H_{out})\) tensor containing the next hidden state for each element in the batch
- Variables
~RNN.weight_ih_l[k] – the learnable input-hidden weights of the k-th layer, of shape (hidden_size, input_size) for k = 0. Otherwise, the shape is (hidden_size, num_directions * hidden_size)
~RNN.weight_hh_l[k] – the learnable hidden-hidden weights of the k-th layer, of shape (hidden_size, hidden_size)
~RNN.bias_ih_l[k] – the learnable input-hidden bias of the k-th layer, of shape (hidden_size)
~RNN.bias_hh_l[k] – the learnable hidden-hidden bias of the k-th layer, of shape (hidden_size)
Note
All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{hidden\_size}}\)
Examples:
>>> rnn = nn.RNN(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)
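As the shapes above imply, a bidirectional stacked RNN doubles the feature dimension of output, while h_n gains a direction axis (a sketch; h_0 defaults to zeros when omitted):
>>> rnn = nn.RNN(10, 20, num_layers=2, bidirectional=True)
>>> output, hn = rnn(torch.randn(5, 3, 10))
>>> output.shape, hn.shape
(torch.Size([5, 3, 40]), torch.Size([4, 3, 20]))
>>> directions = output.view(5, 3, 2, 20)  # forward is index 0, backward is index 1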
LSTM¶
-
class
torch.nn.
LSTM
(*args, **kwargs)[source]¶ Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.
For each element in the input sequence, each layer computes the following function:
\[\begin{array}{ll} \\ i_t = \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{t-1} + b_{hi}) \\ f_t = \sigma(W_{if} x_t + b_{if} + W_{hf} h_{t-1} + b_{hf}) \\ g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{t-1} + b_{hg}) \\ o_t = \sigma(W_{io} x_t + b_{io} + W_{ho} h_{t-1} + b_{ho}) \\ c_t = f_t \odot c_{t-1} + i_t \odot g_t \\ h_t = o_t \odot \tanh(c_t) \\ \end{array} \]where \(h_t\) is the hidden state at time t, \(c_t\) is the cell state at time t, \(x_t\) is the input at time t, \(h_{t-1}\) is the hidden state of the layer at time t-1 or the initial hidden state at time 0, and \(i_t\), \(f_t\), \(g_t\), \(o_t\) are the input, forget, cell, and output gates, respectively. \(\sigma\) is the sigmoid function, and \(\odot\) is the Hadamard product.
In a multilayer LSTM, the input \(x^{(l)}_t\) of the \(l\) -th layer (\(l >= 2\)) is the hidden state \(h^{(l-1)}_t\) of the previous layer multiplied by dropout \(\delta^{(l-1)}_t\) where each \(\delta^{(l-1)}_t\) is a Bernoulli random variable which is \(0\) with probability
dropout
.If
proj_size > 0
is specified, LSTM with projections will be used. This changes the LSTM cell in the following way. First, the dimension of \(h_t\) will be changed fromhidden_size
toproj_size
(dimensions of \(W_{hi}\) will be changed accordingly). Second, the output hidden state of each layer will be multiplied by a learnable projection matrix: \(h_t = W_{hr}h_t\). Note that as a consequence of this, the output of the LSTM network will have a different shape as well. See the Inputs/Outputs sections below for exact dimensions of all variables. You can find more details in https://arxiv.org/abs/1402.1128.- Parameters
input_size – The number of expected features in the input x
hidden_size – The number of features in the hidden state h
num_layers – Number of recurrent layers. E.g., setting
num_layers=2
would mean stacking two LSTMs together to form a stacked LSTM, with the second LSTM taking in outputs of the first LSTM and computing the final results. Default: 1bias – If
False
, then the layer does not use bias weights b_ih and b_hh. Default:True
batch_first – If
True
, then the input and output tensors are provided as (batch, seq, feature). Default:False
dropout – If non-zero, introduces a Dropout layer on the outputs of each LSTM layer except the last layer, with dropout probability equal to
dropout
. Default: 0bidirectional – If
True
, becomes a bidirectional LSTM. Default:False
proj_size – If
> 0
, will use LSTM with projections of corresponding size. Default: 0
- Inputs: input, (h_0, c_0)
input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See
torch.nn.utils.rnn.pack_padded_sequence()
ortorch.nn.utils.rnn.pack_sequence()
for details.h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. If the LSTM is bidirectional, num_directions should be 2, else it should be 1. If
proj_size > 0
was specified, the shape has to be (num_layers * num_directions, batch, proj_size).c_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial cell state for each element in the batch.
If (h_0, c_0) is not provided, both h_0 and c_0 default to zero.
- Outputs: output, (h_n, c_n)
output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features (h_t) from the last layer of the LSTM, for each t. If a
torch.nn.utils.rnn.PackedSequence
has been given as the input, the output will also be a packed sequence. Ifproj_size > 0
was specified, output shape will be (seq_len, batch, num_directions * proj_size).For the unpacked case, the directions can be separated using
output.view(seq_len, batch, num_directions, hidden_size)
, with forward and backward being direction 0 and 1 respectively. Similarly, the directions can be separated in the packed case.h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len. If
proj_size > 0
was specified,h_n
shape will be (num_layers * num_directions, batch, proj_size).Like output, the layers can be separated using
h_n.view(num_layers, num_directions, batch, hidden_size)
and similarly for c_n.c_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the cell state for t = seq_len.
- Variables
~LSTM.weight_ih_l[k] – the learnable input-hidden weights of the \(\text{k}^{th}\) layer (W_ii|W_if|W_ig|W_io), of shape (4*hidden_size, input_size) for k = 0. Otherwise, the shape is (4*hidden_size, num_directions * hidden_size)
~LSTM.weight_hh_l[k] – the learnable hidden-hidden weights of the \(\text{k}^{th}\) layer (W_hi|W_hf|W_hg|W_ho), of shape (4*hidden_size, hidden_size). If
proj_size > 0
was specified, the shape will be (4*hidden_size, proj_size).~LSTM.bias_ih_l[k] – the learnable input-hidden bias of the \(\text{k}^{th}\) layer (b_ii|b_if|b_ig|b_io), of shape (4*hidden_size)
~LSTM.bias_hh_l[k] – the learnable hidden-hidden bias of the \(\text{k}^{th}\) layer (b_hi|b_hf|b_hg|b_ho), of shape (4*hidden_size)
~LSTM.weight_hr_l[k] – the learnable projection weights of the \(\text{k}^{th}\) layer of shape (proj_size, hidden_size). Only present when
proj_size > 0
was specified.
Note
All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{hidden\_size}}\)
Examples:
>>> rnn = nn.LSTM(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> c0 = torch.randn(2, 3, 20)
>>> output, (hn, cn) = rnn(input, (h0, c0))
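A sketch of the proj_size behavior described above (illustrative sizes; proj_size must be smaller than hidden_size):
>>> rnn = nn.LSTM(10, 20, 2, proj_size=5)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 5)   # h_0 uses proj_size
>>> c0 = torch.randn(2, 3, 20)  # c_0 keeps hidden_size
>>> output, (hn, cn) = rnn(input, (h0, c0))
>>> output.shape
torch.Size([5, 3, 5])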
GRU¶
-
class
torch.nn.
GRU
(*args, **kwargs)[source]¶ Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.
For each element in the input sequence, each layer computes the following function:
\[\begin{array}{ll} r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\ z_t = \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\ n_t = \tanh(W_{in} x_t + b_{in} + r_t * (W_{hn} h_{(t-1)}+ b_{hn})) \\ h_t = (1 - z_t) * n_t + z_t * h_{(t-1)} \end{array} \]where \(h_t\) is the hidden state at time t, \(x_t\) is the input at time t, \(h_{(t-1)}\) is the hidden state of the layer at time t-1 or the initial hidden state at time 0, and \(r_t\), \(z_t\), \(n_t\) are the reset, update, and new gates, respectively. \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product.
In a multilayer GRU, the input \(x^{(l)}_t\) of the \(l\) -th layer (\(l >= 2\)) is the hidden state \(h^{(l-1)}_t\) of the previous layer multiplied by dropout \(\delta^{(l-1)}_t\) where each \(\delta^{(l-1)}_t\) is a Bernoulli random variable which is \(0\) with probability
dropout
.- Parameters
input_size – The number of expected features in the input x
hidden_size – The number of features in the hidden state h
num_layers – Number of recurrent layers. E.g., setting
num_layers=2
would mean stacking two GRUs together to form a stacked GRU, with the second GRU taking in outputs of the first GRU and computing the final results. Default: 1bias – If
False
, then the layer does not use bias weights b_ih and b_hh. Default:True
batch_first – If
True
, then the input and output tensors are provided as (batch, seq, feature). Default:False
dropout – If non-zero, introduces a Dropout layer on the outputs of each GRU layer except the last layer, with dropout probability equal to
dropout
. Default: 0bidirectional – If
True
, becomes a bidirectional GRU. Default:False
- Inputs: input, h_0
input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See
torch.nn.utils.rnn.pack_padded_sequence()
for details.h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1.
- Outputs: output, h_n
output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features h_t from the last layer of the GRU, for each t. If a
torch.nn.utils.rnn.PackedSequence
has been given as the input, the output will also be a packed sequence. For the unpacked case, the directions can be separated usingoutput.view(seq_len, batch, num_directions, hidden_size)
, with forward and backward being direction 0 and 1 respectively.Similarly, the directions can be separated in the packed case.
h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len
Like output, the layers can be separated using
h_n.view(num_layers, num_directions, batch, hidden_size)
.
- Shape:
Input1: \((L, N, H_{in})\) tensor containing input features where \(H_{in}=\text{input\_size}\) and L represents a sequence length.
Input2: \((S, N, H_{out})\) tensor containing the initial hidden state for each element in the batch, where \(S=\text{num\_layers} * \text{num\_directions}\) and \(H_{out}=\text{hidden\_size}\). Defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1.
Output1: \((L, N, H_{all})\) where \(H_{all}=\text{num\_directions} * \text{hidden\_size}\)
Output2: \((S, N, H_{out})\) tensor containing the next hidden state for each element in the batch
- Variables
~GRU.weight_ih_l[k] – the learnable input-hidden weights of the \(\text{k}^{th}\) layer (W_ir|W_iz|W_in), of shape (3*hidden_size, input_size) for k = 0. Otherwise, the shape is (3*hidden_size, num_directions * hidden_size)
~GRU.weight_hh_l[k] – the learnable hidden-hidden weights of the \(\text{k}^{th}\) layer (W_hr|W_hz|W_hn), of shape (3*hidden_size, hidden_size)
~GRU.bias_ih_l[k] – the learnable input-hidden bias of the \(\text{k}^{th}\) layer (b_ir|b_iz|b_in), of shape (3*hidden_size)
~GRU.bias_hh_l[k] – the learnable hidden-hidden bias of the \(\text{k}^{th}\) layer (b_hr|b_hz|b_hn), of shape (3*hidden_size)
Note
All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{hidden\_size}}\)
Examples:
>>> rnn = nn.GRU(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)
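One detail worth noting (a sketch): batch_first reorders input and output to (batch, seq, feature), but h_n keeps its (num_layers * num_directions, batch, hidden_size) layout:
>>> rnn = nn.GRU(10, 20, 2, batch_first=True)
>>> output, hn = rnn(torch.randn(3, 5, 10))  # (batch, seq, feature)
>>> output.shape, hn.shape
(torch.Size([3, 5, 20]), torch.Size([2, 3, 20]))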
RNNCell¶
-
class
torch.nn.
RNNCell
(input_size: int, hidden_size: int, bias: bool = True, nonlinearity: str = 'tanh')[source]¶ An Elman RNN cell with tanh or ReLU non-linearity.
\[h' = \tanh(W_{ih} x + b_{ih} + W_{hh} h + b_{hh})\]If
nonlinearity
is ‘relu’, then ReLU is used in place of tanh.- Parameters
input_size – The number of expected features in the input x
hidden_size – The number of features in the hidden state h
bias – If
False
, then the layer does not use bias weights b_ih and b_hh. Default:True
nonlinearity – The non-linearity to use. Can be either
'tanh'
or'relu'
. Default:'tanh'
- Inputs: input, hidden
input of shape (batch, input_size): tensor containing input features
hidden of shape (batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided.
- Outputs: h’
h’ of shape (batch, hidden_size): tensor containing the next hidden state for each element in the batch
- Shape:
Input1: \((N, H_{in})\) tensor containing input features where \(H_{in}\) = input_size
Input2: \((N, H_{out})\) tensor containing the initial hidden state for each element in the batch where \(H_{out}\) = hidden_size Defaults to zero if not provided.
Output: \((N, H_{out})\) tensor containing the next hidden state for each element in the batch
- Variables
~RNNCell.weight_ih (torch.Tensor) – the learnable input-hidden weights, of shape (hidden_size, input_size)
~RNNCell.weight_hh (torch.Tensor) – the learnable hidden-hidden weights, of shape (hidden_size, hidden_size)
~RNNCell.bias_ih – the learnable input-hidden bias, of shape (hidden_size)
~RNNCell.bias_hh – the learnable hidden-hidden bias, of shape (hidden_size)
Note
All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{hidden\_size}}\)
Examples:
>>> rnn = nn.RNNCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
...     hx = rnn(input[i], hx)
...     output.append(hx)
LSTMCell¶
-
class
torch.nn.
LSTMCell
(input_size: int, hidden_size: int, bias: bool = True)[source]¶ A long short-term memory (LSTM) cell.
\[\begin{array}{ll} i = \sigma(W_{ii} x + b_{ii} + W_{hi} h + b_{hi}) \\ f = \sigma(W_{if} x + b_{if} + W_{hf} h + b_{hf}) \\ g = \tanh(W_{ig} x + b_{ig} + W_{hg} h + b_{hg}) \\ o = \sigma(W_{io} x + b_{io} + W_{ho} h + b_{ho}) \\ c' = f * c + i * g \\ h' = o * \tanh(c') \\ \end{array}\]where \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product.
- Parameters
input_size – The number of expected features in the input x
hidden_size – The number of features in the hidden state h
bias – If
False
, then the layer does not use bias weights b_ih and b_hh. Default:True
- Inputs: input, (h_0, c_0)
input of shape (batch, input_size): tensor containing input features
h_0 of shape (batch, hidden_size): tensor containing the initial hidden state for each element in the batch.
c_0 of shape (batch, hidden_size): tensor containing the initial cell state for each element in the batch.
If (h_0, c_0) is not provided, both h_0 and c_0 default to zero.
- Outputs: (h_1, c_1)
h_1 of shape (batch, hidden_size): tensor containing the next hidden state for each element in the batch
c_1 of shape (batch, hidden_size): tensor containing the next cell state for each element in the batch
- Variables
~LSTMCell.weight_ih (torch.Tensor) – the learnable input-hidden weights, of shape (4*hidden_size, input_size)
~LSTMCell.weight_hh (torch.Tensor) – the learnable hidden-hidden weights, of shape (4*hidden_size, hidden_size)
~LSTMCell.bias_ih – the learnable input-hidden bias, of shape (4*hidden_size)
~LSTMCell.bias_hh – the learnable hidden-hidden bias, of shape (4*hidden_size)
Note
All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{hidden\_size}}\)
Examples:
>>> rnn = nn.LSTMCell(10, 20)
>>> input = torch.randn(6, 3, 10)  # (time_steps, batch, input_size)
>>> hx = torch.randn(3, 20)
>>> cx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
...     hx, cx = rnn(input[i], (hx, cx))
...     output.append(hx)
GRUCell¶
-
class
torch.nn.
GRUCell
(input_size: int, hidden_size: int, bias: bool = True)[source]¶ A gated recurrent unit (GRU) cell
\[\begin{array}{ll} r = \sigma(W_{ir} x + b_{ir} + W_{hr} h + b_{hr}) \\ z = \sigma(W_{iz} x + b_{iz} + W_{hz} h + b_{hz}) \\ n = \tanh(W_{in} x + b_{in} + r * (W_{hn} h + b_{hn})) \\ h' = (1 - z) * n + z * h \end{array}\]where \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product.
- Parameters
input_size – The number of expected features in the input x
hidden_size – The number of features in the hidden state h
bias – If
False
, then the layer does not use bias weights b_ih and b_hh. Default:True
- Inputs: input, hidden
input of shape (batch, input_size): tensor containing input features
hidden of shape (batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided.
- Outputs: h’
h’ of shape (batch, hidden_size): tensor containing the next hidden state for each element in the batch
- Shape:
Input1: \((N, H_{in})\) tensor containing input features where \(H_{in}\) = input_size
Input2: \((N, H_{out})\) tensor containing the initial hidden state for each element in the batch where \(H_{out}\) = hidden_size Defaults to zero if not provided.
Output: \((N, H_{out})\) tensor containing the next hidden state for each element in the batch
- Variables
~GRUCell.weight_ih (torch.Tensor) – the learnable input-hidden weights, of shape (3*hidden_size, input_size)
~GRUCell.weight_hh (torch.Tensor) – the learnable hidden-hidden weights, of shape (3*hidden_size, hidden_size)
~GRUCell.bias_ih – the learnable input-hidden bias, of shape (3*hidden_size)
~GRUCell.bias_hh – the learnable hidden-hidden bias, of shape (3*hidden_size)
Note
All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{hidden\_size}}\)
Examples:
>>> rnn = nn.GRUCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
...     hx = rnn(input[i], hx)
...     output.append(hx)
Linear layers¶
Linear¶
-
class
torch.nn.
Linear
(in_features: int, out_features: int, bias: bool = True)[source]¶ Applies a linear transformation to the incoming data: \(y = xA^T + b\)
This module supports TensorFloat32.
- Parameters
in_features – size of each input sample
out_features – size of each output sample
bias – If set to
False
, the layer will not learn an additive bias. Default:True
- Shape:
Input: \((N, *, H_{in})\) where \(*\) means any number of additional dimensions and \(H_{in} = \text{in\_features}\)
Output: \((N, *, H_{out})\) where all but the last dimension are the same shape as the input and \(H_{out} = \text{out\_features}\).
- Variables
~Linear.weight (torch.Tensor) – the learnable weights of the module of shape \((\text{out\_features}, \text{in\_features})\). The values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where \(k = \frac{1}{\text{in\_features}}\)
~Linear.bias – the learnable bias of the module of shape \((\text{out\_features})\). If
bias
isTrue
, the values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{in\_features}}\)
Examples:
>>> m = nn.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
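The transformation can be checked against the stored weights directly (a sketch, not part of the official examples):
>>> m = nn.Linear(20, 30)
>>> x = torch.randn(128, 20)
>>> torch.allclose(m(x), x @ m.weight.t() + m.bias, atol=1e-6)
True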
Bilinear¶
-
class
torch.nn.
Bilinear
(in1_features: int, in2_features: int, out_features: int, bias: bool = True)[source]¶ Applies a bilinear transformation to the incoming data: \(y = x_1^T A x_2 + b\)
- Parameters
in1_features – size of each first input sample
in2_features – size of each second input sample
out_features – size of each output sample
bias – If set to False, the layer will not learn an additive bias. Default:
True
- Shape:
Input1: \((N, *, H_{in1})\) where \(H_{in1}=\text{in1\_features}\) and \(*\) means any number of additional dimensions. All but the last dimension of the inputs should be the same.
Input2: \((N, *, H_{in2})\) where \(H_{in2}=\text{in2\_features}\).
Output: \((N, *, H_{out})\) where \(H_{out}=\text{out\_features}\) and all but the last dimension are the same shape as the input.
- Variables
~Bilinear.weight (torch.Tensor) – the learnable weights of the module of shape \((\text{out\_features}, \text{in1\_features}, \text{in2\_features})\). The values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where \(k = \frac{1}{\text{in1\_features}}\)
~Bilinear.bias – the learnable bias of the module of shape \((\text{out\_features})\). If
bias
isTrue
, the values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where \(k = \frac{1}{\text{in1\_features}}\)
Examples:
>>> m = nn.Bilinear(20, 30, 40)
>>> input1 = torch.randn(128, 20)
>>> input2 = torch.randn(128, 30)
>>> output = m(input1, input2)
>>> print(output.size())
torch.Size([128, 40])
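Equivalently, in terms of the weight tensor of shape (out_features, in1_features, in2_features) (a sketch using einsum):
>>> m = nn.Bilinear(20, 30, 40)
>>> x1, x2 = torch.randn(128, 20), torch.randn(128, 30)
>>> ref = torch.einsum('bi,oij,bj->bo', x1, m.weight, x2) + m.bias
>>> torch.allclose(m(x1, x2), ref, atol=1e-4)
True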
Dropout layers¶
Dropout¶
-
class
torch.nn.
Dropout
(p: float = 0.5, inplace: bool = False)[source]¶ During training, randomly zeroes some of the elements of the input tensor with probability
p
using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.This has proven to be an effective technique for regularization and preventing the co-adaptation of neurons as described in the paper Improving neural networks by preventing co-adaptation of feature detectors .
Furthermore, the outputs are scaled by a factor of \(\frac{1}{1-p}\) during training. This means that during evaluation the module simply computes an identity function.
- Parameters
p – probability of an element to be zeroed. Default: 0.5
inplace – If set to
True
, will do this operation in-place. Default:False
- Shape:
Input: \((*)\). Input can be of any shape
Output: \((*)\). Output is of the same shape as input
Examples:
>>> m = nn.Dropout(p=0.2)
>>> input = torch.randn(20, 16)
>>> output = m(input)
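The 1/(1-p) training-time scaling and the eval-time identity can be seen directly (a sketch; which positions are zeroed is random):
>>> m = nn.Dropout(p=0.5)
>>> x = torch.ones(5)
>>> m(x)  # kept elements are scaled to 1 / (1 - 0.5) = 2
tensor([2., 0., 2., 2., 0.])  # one possible draw
>>> m.eval()
Dropout(p=0.5, inplace=False)
>>> m(x)  # identity during evaluation
tensor([1., 1., 1., 1., 1.])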
Dropout2d¶
-
class
torch.nn.
Dropout2d
(p: float = 0.5, inplace: bool = False)[source]¶ Randomly zero out entire channels (a channel is a 2D feature map, e.g., the \(j\)-th channel of the \(i\)-th sample in the batched input is a 2D tensor \(\text{input}[i, j]\)). Each channel will be zeroed out independently on every forward call with probability
p
using samples from a Bernoulli distribution.Usually the input comes from
nn.Conv2d
modules.As described in the paper Efficient Object Localization Using Convolutional Networks , if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease.
In this case,
nn.Dropout2d()
will help promote independence between feature maps and should be used instead.- Parameters
- Shape:
Input: \((N, C, H, W)\)
Output: \((N, C, H, W)\) (same shape as input)
Examples:
>>> m = nn.Dropout2d(p=0.2)
>>> input = torch.randn(20, 16, 32, 32)
>>> output = m(input)
Dropout3d¶
-
class
torch.nn.
Dropout3d
(p: float = 0.5, inplace: bool = False)[source]¶ Randomly zero out entire channels (a channel is a 3D feature map, e.g., the \(j\)-th channel of the \(i\)-th sample in the batched input is a 3D tensor \(\text{input}[i, j]\)). Each channel will be zeroed out independently on every forward call with probability
p
using samples from a Bernoulli distribution.Usually the input comes from
nn.Conv3d
modules.As described in the paper Efficient Object Localization Using Convolutional Networks , if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease.
In this case,
nn.Dropout3d()
will help promote independence between feature maps and should be used instead.- Parameters
- Shape:
Input: \((N, C, D, H, W)\)
Output: \((N, C, D, H, W)\) (same shape as input)
Examples:
>>> m = nn.Dropout3d(p=0.2)
>>> input = torch.randn(20, 16, 4, 32, 32)
>>> output = m(input)
AlphaDropout¶
-
class
torch.nn.
AlphaDropout
(p: float = 0.5, inplace: bool = False)[source]¶ Applies Alpha Dropout over the input.
Alpha Dropout is a type of Dropout that maintains the self-normalizing property. For an input with zero mean and unit standard deviation, the output of Alpha Dropout maintains the original mean and standard deviation of the input. Alpha Dropout goes hand-in-hand with SELU activation function, which ensures that the outputs have zero mean and unit standard deviation.
During training, it randomly masks some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. The elements to be masked are randomized on every forward call, and scaled and shifted to maintain zero mean and unit standard deviation.
During evaluation the module simply computes an identity function.
More details can be found in the paper Self-Normalizing Neural Networks .
- Parameters
p (float) – probability of an element to be dropped. Default: 0.5
inplace (bool, optional) – If set to True, will do this operation in-place. Default: False
- Shape:
Input: \((*)\). Input can be of any shape
Output: \((*)\). Output is of the same shape as input
Examples:
>>> m = nn.AlphaDropout(p=0.2)
>>> input = torch.randn(20, 16)
>>> output = m(input)
Sparse layers¶
Embedding¶
-
class
torch.nn.
Embedding
(num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None, max_norm: Optional[float] = None, norm_type: float = 2.0, scale_grad_by_freq: bool = False, sparse: bool = False, _weight: Optional[torch.Tensor] = None)[source]¶ A simple lookup table that stores embeddings of a fixed dictionary and size.
This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings.
- Parameters
num_embeddings (int) – size of the dictionary of embeddings
embedding_dim (int) – the size of each embedding vector
padding_idx (int, optional) – If given, pads the output with the embedding vector at
padding_idx
(initialized to zeros) whenever it encounters the index.max_norm (float, optional) – If given, each embedding vector with norm larger than
max_norm
is renormalized to have normmax_norm
.norm_type (float, optional) – The p of the p-norm to compute for the
max_norm
option. Default2
.scale_grad_by_freq (boolean, optional) – If given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default
False
.sparse (bool, optional) – If
True
, gradient w.r.t.weight
matrix will be a sparse tensor. See Notes for more details regarding sparse gradients.
- Variables
~Embedding.weight (Tensor) – the learnable weights of the module of shape (num_embeddings, embedding_dim) initialized from \(\mathcal{N}(0, 1)\)
- Shape:
Input: \((*)\), IntTensor or LongTensor of arbitrary shape containing the indices to extract
Output: \((*, H)\), where * is the input shape and \(H=\text{embedding\_dim}\)
Note
Keep in mind that only a limited number of optimizers support sparse gradients: currently it’s
optim.SGD
(CUDA and CPU),optim.SparseAdam
(CUDA and CPU) andoptim.Adagrad
(CPU)Note
With
padding_idx
set, the embedding vector atpadding_idx
is initialized to all zeros. However, note that this vector can be modified afterwards, e.g., using a customized initialization method, and thus changing the vector used to pad the output. The gradient for this vector fromEmbedding
is always zero.Note
When
max_norm
is notNone
,Embedding
’s forward method will modify theweight
tensor in-place. Since tensors needed for gradient computations cannot be modified in-place, performing a differentiable operation onEmbedding.weight
before callingEmbedding
’s forward method requires cloningEmbedding.weight
whenmax_norm
is notNone
. For example:n, d, m = 3, 5, 7 embedding = nn.Embedding(n, d, max_norm=True) W = torch.randn((m, d), requires_grad=True) idx = torch.tensor([1, 2]) a = embedding.weight.clone() @ W.t() # weight must be cloned for this to be differentiable b = embedding(idx) @ W.t() # modifies weight in-place out = (a.unsqueeze(0) + b.unsqueeze(1)) loss = out.sigmoid().prod() loss.backward()
Examples:
>>> # an Embedding module containing 10 tensors of size 3 >>> embedding = nn.Embedding(10, 3) >>> # a batch of 2 samples of 4 indices each >>> input = torch.LongTensor([[1,2,4,5],[4,3,2,9]]) >>> embedding(input) tensor([[[-0.0251, -1.6902, 0.7172], [-0.6431, 0.0748, 0.6969], [ 1.4970, 1.3448, -0.9685], [-0.3677, -2.7265, -0.1685]], [[ 1.4970, 1.3448, -0.9685], [ 0.4362, -0.4004, 0.9400], [-0.6431, 0.0748, 0.6969], [ 0.9124, -2.3616, 1.1151]]]) >>> # example with padding_idx >>> embedding = nn.Embedding(10, 3, padding_idx=0) >>> input = torch.LongTensor([[0,2,0,5]]) >>> embedding(input) tensor([[[ 0.0000, 0.0000, 0.0000], [ 0.1535, -2.0309, 0.9315], [ 0.0000, 0.0000, 0.0000], [-0.1655, 0.9897, 0.0635]]])
-
classmethod
from_pretrained
(embeddings, freeze=True, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False)[source]¶ Creates Embedding instance from given 2-dimensional FloatTensor.
- Parameters
embeddings (Tensor) – FloatTensor containing weights for the Embedding. First dimension is being passed to Embedding as
num_embeddings
, second asembedding_dim
.freeze (boolean, optional) – If
True
, the tensor does not get updated in the learning process. Equivalent toembedding.weight.requires_grad = False
. Default:True
padding_idx (int, optional) – See module initialization documentation.
max_norm (float, optional) – See module initialization documentation.
norm_type (float, optional) – See module initialization documentation. Default
2
.scale_grad_by_freq (boolean, optional) – See module initialization documentation. Default
False
.sparse (bool, optional) – See module initialization documentation.
Examples:
>>> # FloatTensor containing pretrained weights >>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]]) >>> embedding = nn.Embedding.from_pretrained(weight) >>> # Get embeddings for index 1 >>> input = torch.LongTensor([1]) >>> embedding(input) tensor([[ 4.0000, 5.1000, 6.3000]])
EmbeddingBag¶
-
class
torch.nn.
EmbeddingBag
(num_embeddings: int, embedding_dim: int, max_norm: Optional[float] = None, norm_type: float = 2.0, scale_grad_by_freq: bool = False, mode: str = 'mean', sparse: bool = False, _weight: Optional[torch.Tensor] = None, include_last_offset: bool = False)[source]¶ Computes sums or means of ‘bags’ of embeddings, without instantiating the intermediate embeddings.
For bags of constant length, no per_sample_weights, and 2D inputs, this class
with mode="sum" is equivalent to Embedding followed by torch.sum(dim=1),
with mode="mean" is equivalent to Embedding followed by torch.mean(dim=1),
with mode="max" is equivalent to Embedding followed by torch.max(dim=1).
However, EmbeddingBag is much more time and memory efficient than using a chain of these operations.
EmbeddingBag also supports per-sample weights as an argument to the forward pass. This scales the output of the Embedding before performing a weighted reduction as specified by
mode
. If per_sample_weights
is passed, the only supportedmode
is"sum"
, which computes a weighted sum according toper_sample_weights
.- Parameters
num_embeddings (int) – size of the dictionary of embeddings
embedding_dim (int) – the size of each embedding vector
max_norm (float, optional) – If given, each embedding vector with norm larger than
max_norm
is renormalized to have normmax_norm
.norm_type (float, optional) – The p of the p-norm to compute for the
max_norm
option. Default2
.scale_grad_by_freq (boolean, optional) – if given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default
False
. Note: this option is not supported whenmode="max"
.mode (string, optional) –
"sum"
,"mean"
or"max"
. Specifies the way to reduce the bag."sum"
computes the weighted sum, takingper_sample_weights
into consideration."mean"
computes the average of the values in the bag,"max"
computes the max value over each bag. Default:"mean"
sparse (bool, optional) – if
True
, gradient w.r.t.weight
matrix will be a sparse tensor. See Notes for more details regarding sparse gradients. Note: this option is not supported whenmode="max"
.include_last_offset (bool, optional) – if
True
,offsets
has one additional element, where the last element is equivalent to the size of indices. This matches the CSR format.
- Variables
~EmbeddingBag.weight (Tensor) – the learnable weights of the module of shape (num_embeddings, embedding_dim) initialized from \(\mathcal{N}(0, 1)\).
- Inputs:
input
(IntTensor or LongTensor),offsets
(IntTensor or LongTensor, optional), and per_sample_weights
(Tensor, optional)input
andoffsets
have to be of the same type, either int or longIf
input
is 2D of shape (B, N),it will be treated as
B
bags (sequences) each of fixed lengthN
, and this will returnB
values aggregated in a way depending on themode
.offsets
is ignored and required to beNone
in this case.If
input
is 1D of shape (N),it will be treated as a concatenation of multiple bags (sequences).
offsets
is required to be a 1D tensor containing the starting index positions of each bag ininput
. Therefore, foroffsets
of shape (B),input
will be viewed as havingB
bags. Empty bags (i.e., having 0-length) will have returned vectors filled by zeros.
- per_sample_weights (Tensor, optional): a tensor of float / double weights, or None
to indicate all weights should be taken to be
1
. If specified,per_sample_weights
must have exactly the same shape as input and is treated as having the sameoffsets
, if those are notNone
. Only supported formode='sum'
.
Output shape: (B, embedding_dim)
Examples:
>>> # an Embedding module containing 10 tensors of size 3 >>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum') >>> # a batch of 2 samples of 4 indices each >>> input = torch.LongTensor([1,2,4,5,4,3,2,9]) >>> offsets = torch.LongTensor([0,4]) >>> embedding_sum(input, offsets) tensor([[-0.8861, -5.4350, -0.0523], [ 1.1306, -2.5798, -1.0044]])
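The following is a minimal sketch, not part of the original reference, illustrating the Embedding-plus-reduction equivalence and the per_sample_weights behavior described above (all values are illustrative):

>>> # EmbeddingBag with mode='sum' behaves like Embedding followed by torch.sum(dim=1)
>>> emb = nn.Embedding(10, 3)
>>> bag = nn.EmbeddingBag.from_pretrained(emb.weight, mode='sum')
>>> idx = torch.LongTensor([[1, 2], [4, 5]])  # constant-length 2D input
>>> torch.allclose(bag(idx), emb(idx).sum(dim=1))
True
>>> # per_sample_weights scales each embedding before the 'sum' reduction
>>> weights = torch.tensor([[0.5, 2.0], [1.0, 1.0]])
>>> torch.allclose(bag(idx, per_sample_weights=weights),
...                (emb(idx) * weights.unsqueeze(-1)).sum(dim=1))
True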
-
classmethod
from_pretrained
(embeddings: torch.Tensor, freeze: bool = True, max_norm: Optional[float] = None, norm_type: float = 2.0, scale_grad_by_freq: bool = False, mode: str = 'mean', sparse: bool = False, include_last_offset: bool = False) → torch.nn.modules.sparse.EmbeddingBag[source]¶ Creates EmbeddingBag instance from given 2-dimensional FloatTensor.
- Parameters
embeddings (Tensor) – FloatTensor containing weights for the EmbeddingBag. First dimension is being passed to EmbeddingBag as ‘num_embeddings’, second as ‘embedding_dim’.
freeze (boolean, optional) – If
True
, the tensor does not get updated in the learning process. Equivalent toembeddingbag.weight.requires_grad = False
. Default:True
max_norm (float, optional) – See module initialization documentation. Default:
None
norm_type (float, optional) – See module initialization documentation. Default
2
.scale_grad_by_freq (boolean, optional) – See module initialization documentation. Default
False
.mode (string, optional) – See module initialization documentation. Default:
"mean"
sparse (bool, optional) – See module initialization documentation. Default:
False
.include_last_offset (bool, optional) – See module initialization documentation. Default:
False
.
Examples:
>>> # FloatTensor containing pretrained weights >>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]]) >>> embeddingbag = nn.EmbeddingBag.from_pretrained(weight) >>> # Get embeddings for index 1 >>> input = torch.LongTensor([[1, 0]]) >>> embeddingbag(input) tensor([[ 2.5000, 3.7000, 4.6500]])
Distance functions¶
CosineSimilarity¶
-
class
torch.nn.
CosineSimilarity
(dim: int = 1, eps: float = 1e-08)[source]¶ Returns cosine similarity between \(x_1\) and \(x_2\), computed along dim.
\[\text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)}. \]- Parameters
dim (int, optional) – Dimension where cosine similarity is computed. Default: 1
eps (float, optional) – Small value to avoid division by zero. Default: 1e-8
- Shape:
Input1: \((\ast_1, D, \ast_2)\) where D is at position dim
Input2: \((\ast_1, D, \ast_2)\), same shape as the Input1
Output: \((\ast_1, \ast_2)\)
Examples:
>>> input1 = torch.randn(100, 128) >>> input2 = torch.randn(100, 128) >>> cos = nn.CosineSimilarity(dim=1, eps=1e-6) >>> output = cos(input1, input2)
PairwiseDistance¶
-
class
torch.nn.
PairwiseDistance
(p: float = 2.0, eps: float = 1e-06, keepdim: bool = False)[source]¶ Computes the batchwise pairwise distance between vectors \(v_1\), \(v_2\) using the p-norm:
\[\Vert x \Vert _p = \left( \sum_{i=1}^n \vert x_i \vert ^ p \right) ^ {1/p}. \]- Parameters
p (real, optional) – the norm degree. Default: 2
eps (float, optional) – Small value to avoid division by zero. Default: 1e-6
keepdim (bool, optional) – Determines whether or not to keep the vector dimension. Default: False
- Shape:
Input1: \((N, D)\) where D = vector dimension
Input2: \((N, D)\), same shape as the Input1
Output: \((N)\). If
keepdim
isTrue
, then \((N, 1)\).
Examples:
>>> pdist = nn.PairwiseDistance(p=2) >>> input1 = torch.randn(100, 128) >>> input2 = torch.randn(100, 128) >>> output = pdist(input1, input2)
Loss functions¶
L1Loss¶
-
class
torch.nn.
L1Loss
(size_average=None, reduce=None, reduction: str = 'mean')[source]¶ Creates a criterion that measures the mean absolute error (MAE) between each element in the input \(x\) and target \(y\).
The unreduced (i.e. with
reduction
set to'none'
) loss can be described as:\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left| x_n - y_n \right|, \]where \(N\) is the batch size. If
reduction
is not'none'
(default'mean'
), then:\[\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases} \]\(x\) and \(y\) are tensors of arbitrary shapes with a total of \(n\) elements each.
The sum operation still operates over all the elements, and divides by \(n\).
The division by \(n\) can be avoided if one sets
reduction = 'sum'
.Supports real-valued and complex-valued inputs.
- Parameters
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
- Shape:
Input: \((N, *)\) where \(*\) means any number of additional dimensions
Target: \((N, *)\), same shape as the input
Output: scalar. If
reduction
is'none'
, then \((N, *)\), same shape as the input
Examples:
>>> loss = nn.L1Loss() >>> input = torch.randn(3, 5, requires_grad=True) >>> target = torch.randn(3, 5) >>> output = loss(input, target) >>> output.backward()
MSELoss¶
-
class
torch.nn.
MSELoss
(size_average=None, reduce=None, reduction: str = 'mean')[source]¶ Creates a criterion that measures the mean squared error (squared L2 norm) between each element in the input \(x\) and target \(y\).
The unreduced (i.e. with
reduction
set to'none'
) loss can be described as:\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left( x_n - y_n \right)^2, \]where \(N\) is the batch size. If
reduction
is not'none'
(default'mean'
), then:\[\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases} \]\(x\) and \(y\) are tensors of arbitrary shapes with a total of \(n\) elements each.
The mean operation still operates over all the elements, and divides by \(n\).
The division by \(n\) can be avoided if one sets
reduction = 'sum'
.- Parameters
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
- Shape:
Input: \((N, *)\) where \(*\) means any number of additional dimensions
Target: \((N, *)\), same shape as the input
Examples:
>>> loss = nn.MSELoss() >>> input = torch.randn(3, 5, requires_grad=True) >>> target = torch.randn(3, 5) >>> output = loss(input, target) >>> output.backward()
CrossEntropyLoss¶
-
class
torch.nn.
CrossEntropyLoss
(weight: Optional[torch.Tensor] = None, size_average=None, ignore_index: int = - 100, reduce=None, reduction: str = 'mean')[source]¶ This criterion combines
LogSoftmax
andNLLLoss
in one single class.It is useful when training a classification problem with C classes. If provided, the optional argument
weight
should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.The input is expected to contain raw, unnormalized scores for each class.
input has to be a Tensor of size either \((minibatch, C)\) or \((minibatch, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\) for the K-dimensional case (described later).
This criterion expects a class index in the range \([0, C-1]\) as the target for each value of a 1D tensor of size minibatch; if ignore_index is specified, this criterion also accepts this class index (this index may not necessarily be in the class range).
The loss can be described as:
\[\text{loss}(x, class) = -\log\left(\frac{\exp(x[class])}{\sum_j \exp(x[j])}\right) = -x[class] + \log\left(\sum_j \exp(x[j])\right) \]or in the case of the
weight
argument being specified:\[\text{loss}(x, class) = weight[class] \left(-x[class] + \log\left(\sum_j \exp(x[j])\right)\right) \]The losses are averaged across observations for each minibatch. If the
weight
argument is specified then this is a weighted average:\[\text{loss} = \frac{\sum^{N}_{i=1} loss(i, class[i])}{\sum^{N}_{i=1} weight[class[i]]} \]Can also be used for higher dimension inputs, such as 2D images, by providing an input of size \((minibatch, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\), where \(K\) is the number of dimensions, and a target of appropriate shape (see below).
- Parameters
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When
size_average
isTrue
, the loss is averaged over non-ignored targets.reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the weighted mean of the output is taken,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
- Shape:
Input: \((N, C)\) where C = number of classes, or \((N, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.
Target: \((N)\) where each value is \(0 \leq \text{targets}[i] \leq C-1\), or \((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.
Output: scalar. If
reduction
is'none'
, then the same size as the target: \((N)\), or \((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.
Examples:
>>> loss = nn.CrossEntropyLoss() >>> input = torch.randn(3, 5, requires_grad=True) >>> target = torch.empty(3, dtype=torch.long).random_(5) >>> output = loss(input, target) >>> output.backward()
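The example above covers only the \((N, C)\) case. Below is a minimal sketch, not part of the original reference, of the K-dimensional case described earlier, using image-like inputs (all sizes are illustrative):

>>> # input (N, C, d_1, d_2); target (N, d_1, d_2) of class indices
>>> loss = nn.CrossEntropyLoss()
>>> N, C = 5, 4
>>> input = torch.randn(N, C, 8, 8, requires_grad=True)
>>> target = torch.empty(N, 8, 8, dtype=torch.long).random_(0, C)
>>> output = loss(input, target)
>>> output.backward()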
CTCLoss¶
-
class
torch.nn.
CTCLoss
(blank: int = 0, reduction: str = 'mean', zero_infinity: bool = False)[source]¶ The Connectionist Temporal Classification loss.
Calculates loss between a continuous (unsegmented) time series and a target sequence. CTCLoss sums over the probability of possible alignments of input to target, producing a loss value which is differentiable with respect to each input node. The alignment of input to target is assumed to be “many-to-one”, which limits the length of the target sequence such that it must be \(\leq\) the input length.
- Parameters
blank (int, optional) – blank label. Default \(0\).
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none': no reduction will be applied, 'mean': the output losses will be divided by the target lengths and then the mean over the batch is taken, 'sum': the output losses will be summed. Default: 'mean'
zero_infinity (bool, optional) – Whether to zero infinite losses and the associated gradients. Default:
False. Infinite losses mainly occur when the inputs are too short to be aligned to the targets.
- Shape:
Log_probs: Tensor of size \((T, N, C)\), where \(T = \text{input length}\), \(N = \text{batch size}\), and \(C = \text{number of classes (including blank)}\). The logarithmized probabilities of the outputs (e.g. obtained with
torch.nn.functional.log_softmax()
Targets: Tensor of size \((N, S)\) or \((\operatorname{sum}(\text{target\_lengths}))\), where \(N = \text{batch size}\) and \(S = \text{max target length, if shape is } (N, S)\). It represents the target sequences. Each element in the target sequence is a class index, and the target index cannot be blank (default=0). In the \((N, S)\) form, targets are padded to the length of the longest sequence, and stacked. In the \((\operatorname{sum}(\text{target\_lengths}))\) form, the targets are assumed to be un-padded and concatenated within 1 dimension.
Input_lengths: Tuple or tensor of size \((N)\), where \(N = \text{batch size}\). It represents the lengths of the inputs (each must be \(\leq T\)). The lengths are specified for each sequence to achieve masking under the assumption that sequences are padded to equal lengths.
Target_lengths: Tuple or tensor of size \((N)\), where \(N = \text{batch size}\). It represents the lengths of the targets. Lengths are specified for each sequence to achieve masking under the assumption that sequences are padded to equal lengths. If target shape is \((N,S)\), target_lengths are effectively the stop index \(s_n\) for each target sequence, such that
target_n = targets[n,0:s_n]
for each target in a batch. Lengths must each be \(\leq S\). If the targets are given as a 1D tensor that is the concatenation of individual targets, the target_lengths must add up to the total length of the tensor.Output: scalar. If
reduction
is'none'
, then \((N)\), where \(N = \text{batch size}\).
Examples:
>>> # Target are to be padded >>> T = 50 # Input sequence length >>> C = 20 # Number of classes (including blank) >>> N = 16 # Batch size >>> S = 30 # Target sequence length of longest target in batch (padding length) >>> S_min = 10 # Minimum target length, for demonstration purposes >>> >>> # Initialize random batch of input vectors, for *size = (T,N,C) >>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_() >>> >>> # Initialize random batch of targets (0 = blank, 1:C = classes) >>> target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long) >>> >>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long) >>> target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long) >>> ctc_loss = nn.CTCLoss() >>> loss = ctc_loss(input, target, input_lengths, target_lengths) >>> loss.backward() >>> >>> >>> # Target are to be un-padded >>> T = 50 # Input sequence length >>> C = 20 # Number of classes (including blank) >>> N = 16 # Batch size >>> >>> # Initialize random batch of input vectors, for *size = (T,N,C) >>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_() >>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long) >>> >>> # Initialize random batch of targets (0 = blank, 1:C = classes) >>> target_lengths = torch.randint(low=1, high=T, size=(N,), dtype=torch.long) >>> target = torch.randint(low=1, high=C, size=(sum(target_lengths),), dtype=torch.long) >>> ctc_loss = nn.CTCLoss() >>> loss = ctc_loss(input, target, input_lengths, target_lengths) >>> loss.backward()
- Reference:
A. Graves et al.: Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks: https://www.cs.toronto.edu/~graves/icml_2006.pdf
Note
In order to use CuDNN, the following must be satisfied:
targets
must be in concatenated format, all input_lengths must be T, \(blank=0\), target_lengths \(\leq 256\), and the integer arguments must be of dtype torch.int32. The regular implementation uses the (more common in PyTorch) torch.long dtype.
Note
In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting
torch.backends.cudnn.deterministic = True
. Please see the notes on Reproducibility for background.
NLLLoss¶
-
class
torch.nn.
NLLLoss
(weight: Optional[torch.Tensor] = None, size_average=None, ignore_index: int = - 100, reduce=None, reduction: str = 'mean')[source]¶ The negative log likelihood loss. It is useful to train a classification problem with C classes.
If provided, the optional argument
weight
should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.The input given through a forward call is expected to contain log-probabilities of each class. input has to be a Tensor of size either \((minibatch, C)\) or \((minibatch, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\) for the K-dimensional case (described later).
Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer in the last layer of your network. You may use CrossEntropyLoss instead, if you prefer not to add an extra layer.
The target that this loss expects should be a class index in the range \([0, C-1]\) where C = number of classes; if ignore_index is specified, this loss also accepts this class index (this index may not necessarily be in the class range).
The unreduced (i.e. with
reduction
set to'none'
) loss can be described as:\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_{y_n} x_{n,y_n}, \quad w_{c} = \text{weight}[c] \cdot \mathbb{1}\{c \not= \text{ignore\_index}\}, \]where \(x\) is the input, \(y\) is the target, \(w\) is the weight, and \(N\) is the batch size. If
reduction
is not'none'
(default'mean'
), then\[\ell(x, y) = \begin{cases} \sum_{n=1}^N \frac{1}{\sum_{n=1}^N w_{y_n}} l_n, & \text{if reduction} = \text{`mean';}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{`sum'.} \end{cases} \]Can also be used for higher dimension inputs, such as 2D images, by providing an input of size \((minibatch, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\), where \(K\) is the number of dimensions, and a target of appropriate shape (see below). In the case of images, it computes NLL loss per-pixel.
- Parameters
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones.
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When
size_average
isTrue
, the loss is averaged over non-ignored targets.reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the weighted mean of the output is taken,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
- Shape:
Input: \((N, C)\) where C = number of classes, or \((N, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.
Target: \((N)\) where each value is \(0 \leq \text{targets}[i] \leq C-1\), or \((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.
Output: scalar. If
reduction
is'none'
, then the same size as the target: \((N)\), or \((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.
Examples:
>>> m = nn.LogSoftmax(dim=1) >>> loss = nn.NLLLoss() >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = loss(m(input), target) >>> output.backward() >>> >>> >>> # 2D loss example (used, for example, with image inputs) >>> N, C = 5, 4 >>> loss = nn.NLLLoss() >>> # input is of size N x C x height x width >>> data = torch.randn(N, 16, 10, 10) >>> conv = nn.Conv2d(16, C, (3, 3)) >>> m = nn.LogSoftmax(dim=1) >>> # each element in target has to have 0 <= value < C >>> target = torch.empty(N, 8, 8, dtype=torch.long).random_(0, C) >>> output = loss(m(conv(data)), target) >>> output.backward()
PoissonNLLLoss¶
-
class
torch.nn.
PoissonNLLLoss
(log_input: bool = True, full: bool = False, size_average=None, eps: float = 1e-08, reduce=None, reduction: str = 'mean')[source]¶ Negative log likelihood loss with Poisson distribution of target.
The loss can be described as:
\[\text{target} \sim \mathrm{Poisson}(\text{input})\]
\[\text{loss}(\text{input}, \text{target}) = \text{input} - \text{target} * \log(\text{input}) + \log(\text{target!})\]
The last term can be omitted or approximated with the Stirling formula. The approximation is used for target values greater than 1; for targets less than or equal to 1, zeros are added to the loss.
- Parameters
log_input (bool, optional) – if
True
the loss is computed as \(\exp(\text{input}) - \text{target}*\text{input}\), ifFalse
the loss is \(\text{input} - \text{target}*\log(\text{input}+\text{eps})\).full (bool, optional) –
whether to compute the full loss, i.e., to add the Stirling approximation term
\[\text{target}*\log(\text{target}) - \text{target} + 0.5 * \log(2\pi\text{target}). \]size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
eps (float, optional) – Small value to avoid evaluation of \(\log(0)\) when
log_input = False
. Default: 1e-8reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
Examples:
>>> loss = nn.PoissonNLLLoss() >>> log_input = torch.randn(5, 2, requires_grad=True) >>> target = torch.randn(5, 2) >>> output = loss(log_input, target) >>> output.backward()
- Shape:
Input: \((N, *)\) where \(*\) means any number of additional dimensions
Target: \((N, *)\), same shape as the input
Output: scalar by default. If
reduction
is'none'
, then \((N, *)\), the same shape as the input
KLDivLoss¶
-
class
torch.nn.
KLDivLoss
(size_average=None, reduce=None, reduction: str = 'mean', log_target: bool = False)[source]¶ The Kullback-Leibler divergence loss measure.
Kullback-Leibler divergence is a useful distance measure for continuous distributions and is often useful when performing direct regression over the space of (discretely sampled) continuous output distributions.
As with
NLLLoss
, the input given is expected to contain log-probabilities and is not restricted to a 2D Tensor. The targets are interpreted as probabilities by default, but could be considered as log-probabilities withlog_target
set toTrue
.This criterion expects a target Tensor of the same size as the input Tensor.
The unreduced (i.e. with
reduction
set to'none'
) loss can be described as:\[l(x,y) = L = \{ l_1,\dots,l_N \}, \quad l_n = y_n \cdot \left( \log y_n - x_n \right) \]where the index \(N\) spans all dimensions of
input
and \(L\) has the same shape asinput
. Ifreduction
is not'none'
(default'mean'
), then:\[\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';} \\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases} \]In default
reduction
mode'mean'
, the losses are averaged for each minibatch over observations as well as over dimensions.'batchmean'
mode gives the correct KL divergence where losses are averaged over batch dimension only.'mean'
mode’s behavior will be changed to the same as'batchmean'
in the next major release.- Parameters
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'batchmean'
|'sum'
|'mean'
.'none'
: no reduction will be applied.'batchmean'
: the sum of the output will be divided by batchsize.'sum'
: the output will be summed.'mean'
: the output will be divided by the number of elements in the output. Default:'mean'
log_target (bool, optional) – Specifies whether target is passed in the log space. Default:
False
Note
size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
.Note
reduction
='mean'
doesn’t return the true kl divergence value, please usereduction
='batchmean'
which aligns with KL math definition. In the next major release,'mean'
will be changed to be the same as'batchmean'
.- Shape:
Input: \((N, *)\) where \(*\) means any number of additional dimensions
Target: \((N, *)\), same shape as the input
Output: scalar by default. If
reduction
is'none'
, then \((N, *)\), the same shape as the input
BCELoss¶
-
class
torch.nn.
BCELoss
(weight: Optional[torch.Tensor] = None, size_average=None, reduce=None, reduction: str = 'mean')[source]¶ Creates a criterion that measures the Binary Cross Entropy between the target and the output:
The unreduced (i.e. with
reduction
set to'none'
) loss can be described as:\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right], \]where \(N\) is the batch size. If
reduction
is not'none'
(default'mean'
), then\[\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases} \]This is used for measuring the error of a reconstruction in, for example, an auto-encoder. Note that the targets \(y\) should be numbers between 0 and 1.
Notice that if \(x_n\) is either 0 or 1, one of the log terms would be mathematically undefined in the above loss equation. PyTorch chooses to set \(\log (0) = -\infty\), since \(\lim_{x\to 0} \log (x) = -\infty\). However, an infinite term in the loss equation is not desirable for several reasons.
For one, if either \(y_n = 0\) or \((1 - y_n) = 0\), then we would be multiplying 0 with infinity. Secondly, if we have an infinite loss value, then we would also have an infinite term in our gradient, since \(\lim_{x\to 0} \frac{d}{dx} \log (x) = \infty\). This would make BCELoss’s backward method nonlinear with respect to \(x_n\), and using it for things like linear regression would not be straightforward.
Our solution is that BCELoss clamps its log function outputs to be greater than or equal to -100. This way, we can always have a finite loss value and a linear backward method.
- Parameters
weight (Tensor, optional) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size nbatch.
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
- Shape:
Input: \((N, *)\) where \(*\) means any number of additional dimensions
Target: \((N, *)\), same shape as the input
Output: scalar. If
reduction
is'none'
, then \((N, *)\), same shape as input.
Examples:
>>> m = nn.Sigmoid() >>> loss = nn.BCELoss() >>> input = torch.randn(3, requires_grad=True) >>> target = torch.empty(3).random_(2) >>> output = loss(m(input), target) >>> output.backward()
BCEWithLogitsLoss¶
-
class
torch.nn.
BCEWithLogitsLoss
(weight: Optional[torch.Tensor] = None, size_average=None, reduce=None, reduction: str = 'mean', pos_weight: Optional[torch.Tensor] = None)[source]¶ This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.
The unreduced (i.e. with
reduction
set to'none'
) loss can be described as:\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log \sigma(x_n) + (1 - y_n) \cdot \log (1 - \sigma(x_n)) \right], \]where \(N\) is the batch size. If
reduction
is not'none'
(default'mean'
), then\[\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases} \]This is used for measuring the error of a reconstruction in, for example, an auto-encoder. Note that the targets t[i] should be numbers between 0 and 1.
It’s possible to trade off recall and precision by adding weights to positive examples. In the case of multi-label classification the loss can be described as:
\[\ell_c(x, y) = L_c = \{l_{1,c},\dots,l_{N,c}\}^\top, \quad l_{n,c} = - w_{n,c} \left[ p_c y_{n,c} \cdot \log \sigma(x_{n,c}) + (1 - y_{n,c}) \cdot \log (1 - \sigma(x_{n,c})) \right], \]where \(c\) is the class number (\(c > 1\) for multi-label binary classification, \(c = 1\) for single-label binary classification), \(n\) is the number of the sample in the batch and \(p_c\) is the weight of the positive answer for the class \(c\).
\(p_c > 1\) increases the recall, \(p_c < 1\) increases the precision.
For example, if a dataset contains 100 positive and 300 negative examples of a single class, then pos_weight for the class should be equal to \(\frac{300}{100}=3\). The loss would act as if the dataset contains \(3\times 100=300\) positive examples.
Examples:
>>> target = torch.ones([10, 64], dtype=torch.float32) # 64 classes, batch size = 10 >>> output = torch.full([10, 64], 1.5) # A prediction (logit) >>> pos_weight = torch.ones([64]) # All weights are equal to 1 >>> criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight) >>> criterion(output, target) # -log(sigmoid(1.5)) tensor(0.2014)
- Parameters
weight (Tensor, optional) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size nbatch.
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
pos_weight (Tensor, optional) – a weight of positive examples. Must be a vector with length equal to the number of classes.
- Shape:
Input: \((N, *)\) where \(*\) means any number of additional dimensions
Target: \((N, *)\), same shape as the input
Output: scalar. If
reduction
is'none'
, then \((N, *)\), same shape as input.
Examples:
>>> loss = nn.BCEWithLogitsLoss() >>> input = torch.randn(3, requires_grad=True) >>> target = torch.empty(3).random_(2) >>> output = loss(input, target) >>> output.backward()
MarginRankingLoss¶
-
class
torch.nn.
MarginRankingLoss
(margin: float = 0.0, size_average=None, reduce=None, reduction: str = 'mean')[source]¶ Creates a criterion that measures the loss given inputs \(x1\), \(x2\), two 1D mini-batch Tensors, and a label 1D mini-batch tensor \(y\) (containing 1 or -1).
If \(y = 1\) then it is assumed the first input should be ranked higher (have a larger value) than the second input, and vice-versa for \(y = -1\).
The loss function for each pair of samples in the mini-batch is:
\[\text{loss}(x1, x2, y) = \max(0, -y * (x1 - x2) + \text{margin}) \]- Parameters
margin (float, optional) – Has a default value of \(0\).
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
- Shape:
Input1: \((N)\) where N is the batch size.
Input2: \((N)\), same shape as the Input1.
Target: \((N)\), same shape as the inputs.
Output: scalar. If
reduction
is'none'
, then \((N)\).
Examples:
>>> loss = nn.MarginRankingLoss() >>> input1 = torch.randn(3, requires_grad=True) >>> input2 = torch.randn(3, requires_grad=True) >>> target = torch.randn(3).sign() >>> output = loss(input1, input2, target) >>> output.backward()
HingeEmbeddingLoss¶
-
class
torch.nn.
HingeEmbeddingLoss
(margin: float = 1.0, size_average=None, reduce=None, reduction: str = 'mean')[source]¶ Measures the loss given an input tensor \(x\) and a labels tensor \(y\) (containing 1 or -1). This is usually used for measuring whether two inputs are similar or dissimilar, e.g. using the L1 pairwise distance as \(x\), and is typically used for learning nonlinear embeddings or semi-supervised learning.
The loss function for \(n\)-th sample in the mini-batch is
\[l_n = \begin{cases} x_n, & \text{if}\; y_n = 1,\\ \max \{0, \Delta - x_n\}, & \text{if}\; y_n = -1, \end{cases} \]and the total loss function is
\[\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases} \]where \(L = \{l_1,\dots,l_N\}^\top\).
- Parameters
margin (float, optional) – Has a default value of 1.
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
- Shape:
Input: \((*)\) where \(*\) means any number of dimensions. The sum operation operates over all the elements.
Target: \((*)\), same shape as the input
Output: scalar. If
reduction
is'none'
, then same shape as the input
MultiLabelMarginLoss¶
-
class
torch.nn.
MultiLabelMarginLoss
(size_average=None, reduce=None, reduction: str = 'mean')[source]¶ Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input \(x\) (a 2D mini-batch Tensor) and output \(y\) (which is a 2D Tensor of target class indices). For each sample in the mini-batch:
\[\text{loss}(x, y) = \sum_{ij}\frac{\max(0, 1 - (x[y[j]] - x[i]))}{\text{x.size}(0)} \]where \(x \in \left\{0, \; \cdots , \; \text{x.size}(0) - 1\right\}\), \(y \in \left\{0, \; \cdots , \; \text{y.size}(0) - 1\right\}\), \(0 \leq y[j] \leq \text{x.size}(0)-1\), and \(i \neq y[j]\) for all \(i\) and \(j\).
\(y\) and \(x\) must have the same size.
The criterion only considers a contiguous block of non-negative targets that starts at the front.
This allows for different samples to have variable amounts of target classes.
- Parameters
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
- Shape:
Input: \((C)\) or \((N, C)\) where N is the batch size and C is the number of classes.
Target: \((C)\) or \((N, C)\), label targets padded by -1 ensuring same shape as the input.
Output: scalar. If
reduction
is'none'
, then \((N)\).
Examples:
>>> loss = nn.MultiLabelMarginLoss() >>> x = torch.FloatTensor([[0.1, 0.2, 0.4, 0.8]]) >>> # for target y, only consider labels 3 and 0, not after label -1 >>> y = torch.LongTensor([[3, 0, -1, 1]]) >>> loss(x, y) >>> # 0.25 * ((1-(0.1-0.2)) + (1-(0.1-0.4)) + (1-(0.8-0.2)) + (1-(0.8-0.4))) tensor(0.8500)
SmoothL1Loss¶
-
class
torch.nn.
SmoothL1Loss
(size_average=None, reduce=None, reduction: str = 'mean', beta: float = 1.0)[source]¶ Creates a criterion that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise. It is less sensitive to outliers than the
torch.nn.MSELoss
and in some cases prevents exploding gradients (e.g. see Fast R-CNN paper by Ross Girshick). Omitting a scaling factor ofbeta
, this loss is also known as the Huber loss:\[\text{loss}(x, y) = \frac{1}{n} \sum_{i} z_{i} \]where \(z_{i}\) is given by:
\[z_{i} = \begin{cases} 0.5 (x_i - y_i)^2 / beta, & \text{if } |x_i - y_i| < beta \\ |x_i - y_i| - 0.5 * beta, & \text{otherwise } \end{cases} \]\(x\) and \(y\) have arbitrary shapes with a total of \(n\) elements each; the sum operation still operates over all the elements, and divides by \(n\).
beta
is an optional parameter that defaults to 1.Note: When
beta
is set to 0, this is equivalent toL1Loss
. Passing a negative value in forbeta
will result in an exception.The division by \(n\) can be avoided if sets
reduction = 'sum'
.- Parameters
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
beta (float, optional) – Specifies the threshold at which to change between L1 and L2 loss. This value defaults to 1.0.
- Shape:
Input: \((N, *)\) where \(*\) means any number of additional dimensions
Target: \((N, *)\), same shape as the input
Output: scalar. If
reduction
is'none'
, then \((N, *)\), same shape as the input
SoftMarginLoss¶
-
class
torch.nn.
SoftMarginLoss
(size_average=None, reduce=None, reduction: str = 'mean')[source]¶ Creates a criterion that optimizes a two-class classification logistic loss between input tensor \(x\) and target tensor \(y\) (containing 1 or -1).
\[\text{loss}(x, y) = \sum_i \frac{\log(1 + \exp(-y[i]*x[i]))}{\text{x.nelement}()} \]- Parameters
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
- Shape:
Input: \((*)\) where \(*\) means any number of additional dimensions
Target: \((*)\), same shape as the input
Output: scalar. If
reduction
is'none'
, then same shape as the input
MultiLabelSoftMarginLoss¶
-
class
torch.nn.
MultiLabelSoftMarginLoss
(weight: Optional[torch.Tensor] = None, size_average=None, reduce=None, reduction: str = 'mean')[source]¶ Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input \(x\) and target \(y\) of size \((N, C)\). For each sample in the minibatch:
\[loss(x, y) = - \frac{1}{C} * \sum_i y[i] * \log((1 + \exp(-x[i]))^{-1}) + (1-y[i]) * \log\left(\frac{\exp(-x[i])}{(1 + \exp(-x[i]))}\right) \]where \(i \in \left\{0, \; \cdots , \; \text{x.nElement}() - 1\right\}\), \(y[i] \in \left\{0, \; 1\right\}\).
- Parameters
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones.
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
- Shape:
Input: \((N, C)\) where N is the batch size and C is the number of classes.
Target: \((N, C)\), label targets must have the same shape as the input, with values in \(\{0, 1\}\).
Output: scalar. If
reduction
is'none'
, then \((N)\).
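A minimal usage sketch, not part of the original reference; each target row holds one binary label per class:

>>> loss = nn.MultiLabelSoftMarginLoss()
>>> input = torch.randn(3, 4, requires_grad=True)  # N = 3 samples, C = 4 classes
>>> target = torch.empty(3, 4).random_(2)          # binary labels in {0, 1}
>>> output = loss(input, target)
>>> output.backward()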
CosineEmbeddingLoss¶
-
class
torch.nn.
CosineEmbeddingLoss
(margin: float = 0.0, size_average=None, reduce=None, reduction: str = 'mean')[source]¶ Creates a criterion that measures the loss given input tensors \(x_1\), \(x_2\) and a Tensor label \(y\) with values 1 or -1. This is used for measuring whether two inputs are similar or dissimilar, using the cosine distance, and is typically used for learning nonlinear embeddings or semi-supervised learning.
The loss function for each sample is:
\[\text{loss}(x, y) = \begin{cases} 1 - \cos(x_1, x_2), & \text{if } y = 1 \\ \max(0, \cos(x_1, x_2) - \text{margin}), & \text{if } y = -1 \end{cases} \]- Parameters
margin (float, optional) – Should be a number from \(-1\) to \(1\), \(0\) to \(0.5\) is suggested. If
margin
is missing, the default value is \(0\).size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
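A minimal usage sketch (shapes are illustrative): y = 1 pulls a pair together, y = -1 pushes it apart beyond the margin.
>>> loss = nn.CosineEmbeddingLoss(margin=0.5)
>>> input1 = torch.randn(3, 128, requires_grad=True)
>>> input2 = torch.randn(3, 128, requires_grad=True)
>>> target = torch.tensor([1., -1., 1.])
>>> output = loss(input1, input2, target)
>>> output.backward()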
MultiMarginLoss¶
-
class
torch.nn.
MultiMarginLoss
(p: int = 1, margin: float = 1.0, weight: Optional[torch.Tensor] = None, size_average=None, reduce=None, reduction: str = 'mean')[source]¶ Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input \(x\) (a 2D mini-batch Tensor) and output \(y\) (which is a 1D tensor of target class indices, \(0 \leq y \leq \text{x.size}(1)-1\)):
For each mini-batch sample, the loss in terms of the 1D input \(x\) and scalar output \(y\) is:
\[\text{loss}(x, y) = \frac{\sum_i \max(0, \text{margin} - x[y] + x[i])^p}{\text{x.size}(0)} \]where \(i \in \left\{0, \; \cdots , \; \text{x.size}(0) - 1\right\}\) and \(i \neq y\).
Optionally, you can give non-equal weighting on the classes by passing a 1D
weight
tensor into the constructor.The loss function then becomes:
\[\text{loss}(x, y) = \frac{\sum_i \max(0, w[y] * (\text{margin} - x[y] + x[i]))^p}{\text{x.size}(0)} \]- Parameters
p (int, optional) – Has a default value of \(1\). \(1\) and \(2\) are the only supported values.
margin (float, optional) – Has a default value of \(1\).
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones.
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
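A small worked example with the defaults \(p = 1\) and \(\text{margin} = 1\): each wrong class \(i\) contributes \(\max(0, 1 - x[y] + x[i])\), and the sum is divided by the number of classes.
>>> loss = nn.MultiMarginLoss()
>>> x = torch.tensor([[0.1, 0.2, 0.4, 0.8]])
>>> y = torch.tensor([3])
>>> # ((1-0.8+0.1) + (1-0.8+0.2) + (1-0.8+0.4)) / 4 = 0.325
>>> loss(x, y)
tensor(0.3250)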
TripletMarginLoss¶
-
class
torch.nn.
TripletMarginLoss
(margin: float = 1.0, p: float = 2.0, eps: float = 1e-06, swap: bool = False, size_average=None, reduce=None, reduction: str = 'mean')[source]¶ Creates a criterion that measures the triplet loss given input tensors \(x1\), \(x2\), \(x3\) and a margin with a value greater than \(0\). This is used for measuring a relative similarity between samples. A triplet is composed of a, p and n (i.e., anchor, positive example and negative example, respectively). The shapes of all input tensors should be \((N, D)\).
The distance swap is described in detail in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al.
The loss function for each sample in the mini-batch is:
\[L(a, p, n) = \max \{d(a_i, p_i) - d(a_i, n_i) + {\rm margin}, 0\} \]where
\[d(x_i, y_i) = \left\lVert {\bf x}_i - {\bf y}_i \right\rVert_p \]See also
TripletMarginWithDistanceLoss
, which computes the triplet margin loss for input tensors using a custom distance function.- Parameters
margin (float, optional) – Default: \(1\).
p (int, optional) – The norm degree for pairwise distance. Default: \(2\).
swap (bool, optional) – The distance swap is described in detail in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al. Default:
False
.size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
- Shape:
Input: \((N, D)\) where \(D\) is the vector dimension.
- Output: A Tensor of shape \((N)\) if
reduction
is'none'
, or a scalar otherwise.
>>> triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)
>>> anchor = torch.randn(100, 128, requires_grad=True)
>>> positive = torch.randn(100, 128, requires_grad=True)
>>> negative = torch.randn(100, 128, requires_grad=True)
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()
Vision layers¶
PixelShuffle¶
-
class
torch.nn.
PixelShuffle
(upscale_factor: int)[source]¶ Rearranges elements in a tensor of shape \((*, C \times r^2, H, W)\) to a tensor of shape \((*, C, H \times r, W \times r)\), where r is an upscale factor.
This is useful for implementing efficient sub-pixel convolution with a stride of \(1/r\).
See the paper: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network by Shi et al. (2016) for more details.
- Parameters
upscale_factor (int) – factor to increase spatial resolution by
- Shape:
Input: \((*, C_{in}, H_{in}, W_{in})\), where * is zero or more batch dimensions
Output: \((*, C_{out}, H_{out}, W_{out})\), where
\[C_{out} = C_{in} \div \text{upscale\_factor}^2 \]\[H_{out} = H_{in} \times \text{upscale\_factor} \]\[W_{out} = W_{in} \times \text{upscale\_factor} \]Examples:
>>> pixel_shuffle = nn.PixelShuffle(3)
>>> input = torch.randn(1, 9, 4, 4)
>>> output = pixel_shuffle(input)
>>> print(output.size())
torch.Size([1, 1, 12, 12])
Upsample¶
-
class
torch.nn.
Upsample
(size: Optional[Union[T, Tuple[T, …]]] = None, scale_factor: Optional[Union[T, Tuple[T, …]]] = None, mode: str = 'nearest', align_corners: Optional[bool] = None)[source]¶ Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data.
The input data is assumed to be of the form minibatch x channels x [optional depth] x [optional height] x width. Hence, for spatial inputs, we expect a 4D Tensor and for volumetric inputs, we expect a 5D Tensor.
The algorithms available for upsampling are nearest neighbor and linear, bilinear, bicubic and trilinear for 3D, 4D and 5D input Tensor, respectively.
One can either give a
scale_factor
or the target outputsize
to calculate the output size. (You cannot give both, as it is ambiguous)- Parameters
size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int], optional) – output spatial sizes
scale_factor (float or Tuple[float] or Tuple[float, float] or Tuple[float, float, float], optional) – multiplier for spatial size. Has to match input size if it is a tuple.
mode (str, optional) – the upsampling algorithm: one of
'nearest'
,'linear'
,'bilinear'
,'bicubic'
and'trilinear'
. Default:'nearest'
align_corners (bool, optional) – if
True
, the corner pixels of the input and output tensors are aligned, and thus preserving the values at those pixels. This only has effect whenmode
is'linear'
,'bilinear'
, or'trilinear'
. Default:False
- Shape:
Input: \((N, C, W_{in})\), \((N, C, H_{in}, W_{in})\) or \((N, C, D_{in}, H_{in}, W_{in})\)
Output: \((N, C, W_{out})\), \((N, C, H_{out}, W_{out})\) or \((N, C, D_{out}, H_{out}, W_{out})\), where
\[D_{out} = \left\lfloor D_{in} \times \text{scale\_factor} \right\rfloor \]\[H_{out} = \left\lfloor H_{in} \times \text{scale\_factor} \right\rfloor \]\[W_{out} = \left\lfloor W_{in} \times \text{scale\_factor} \right\rfloor \]Warning
With
align_corners = True
, the linearly interpolating modes (linear, bilinear, bicubic, and trilinear) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior isalign_corners = False
. See below for concrete examples on how this affects the outputs.Note
If you want downsampling/general resizing, you should use
interpolate()
.Examples:
>>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> input
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])

>>> m = nn.Upsample(scale_factor=2, mode='nearest')
>>> m(input)
tensor([[[[ 1.,  1.,  2.,  2.],
          [ 1.,  1.,  2.,  2.],
          [ 3.,  3.,  4.,  4.],
          [ 3.,  3.,  4.,  4.]]]])

>>> m = nn.Upsample(scale_factor=2, mode='bilinear')  # align_corners=False
>>> m(input)
tensor([[[[ 1.0000,  1.2500,  1.7500,  2.0000],
          [ 1.5000,  1.7500,  2.2500,  2.5000],
          [ 2.5000,  2.7500,  3.2500,  3.5000],
          [ 3.0000,  3.2500,  3.7500,  4.0000]]]])

>>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
>>> m(input)
tensor([[[[ 1.0000,  1.3333,  1.6667,  2.0000],
          [ 1.6667,  2.0000,  2.3333,  2.6667],
          [ 2.3333,  2.6667,  3.0000,  3.3333],
          [ 3.0000,  3.3333,  3.6667,  4.0000]]]])

>>> # Try scaling the same data in a larger tensor
>>> input_3x3 = torch.zeros(3, 3).view(1, 1, 3, 3)
>>> input_3x3[:, :, :2, :2].copy_(input)
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])
>>> input_3x3
tensor([[[[ 1.,  2.,  0.],
          [ 3.,  4.,  0.],
          [ 0.,  0.,  0.]]]])

>>> m = nn.Upsample(scale_factor=2, mode='bilinear')  # align_corners=False
>>> # Notice that values in top left corner are the same with the small input (except at boundary)
>>> m(input_3x3)
tensor([[[[ 1.0000,  1.2500,  1.7500,  1.5000,  0.5000,  0.0000],
          [ 1.5000,  1.7500,  2.2500,  1.8750,  0.6250,  0.0000],
          [ 2.5000,  2.7500,  3.2500,  2.6250,  0.8750,  0.0000],
          [ 2.2500,  2.4375,  2.8125,  2.2500,  0.7500,  0.0000],
          [ 0.7500,  0.8125,  0.9375,  0.7500,  0.2500,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]]]])

>>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
>>> # Notice that values in top left corner are now changed
>>> m(input_3x3)
tensor([[[[ 1.0000,  1.4000,  1.8000,  1.6000,  0.8000,  0.0000],
          [ 1.8000,  2.2000,  2.6000,  2.2400,  1.1200,  0.0000],
          [ 2.6000,  3.0000,  3.4000,  2.8800,  1.4400,  0.0000],
          [ 2.4000,  2.7200,  3.0400,  2.5600,  1.2800,  0.0000],
          [ 1.2000,  1.3600,  1.5200,  1.2800,  0.6400,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]]]])
UpsamplingNearest2d¶
-
class
torch.nn.
UpsamplingNearest2d
(size: Optional[Union[T, Tuple[T, T]]] = None, scale_factor: Optional[Union[T, Tuple[T, T]]] = None)[source]¶ Applies a 2D nearest neighbor upsampling to an input signal composed of several input channels.
To specify the scale, it takes either the
size
or thescale_factor
as its constructor argument. When
size
is given, it is the output size of the image (h, w).
- Parameters
size (int or Tuple[int, int], optional) – output spatial sizes
scale_factor (float or Tuple[float, float], optional) – multiplier for spatial size
Warning
This class is deprecated in favor of
interpolate()
.- Shape:
Input: \((N, C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\) where
\[H_{out} = \left\lfloor H_{in} \times \text{scale\_factor} \right\rfloor \]\[W_{out} = \left\lfloor W_{in} \times \text{scale\_factor} \right\rfloor \]Examples:
>>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> input
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])

>>> m = nn.UpsamplingNearest2d(scale_factor=2)
>>> m(input)
tensor([[[[ 1.,  1.,  2.,  2.],
          [ 1.,  1.,  2.,  2.],
          [ 3.,  3.,  4.,  4.],
          [ 3.,  3.,  4.,  4.]]]])
UpsamplingBilinear2d¶
-
class
torch.nn.
UpsamplingBilinear2d
(size: Optional[Union[T, Tuple[T, T]]] = None, scale_factor: Optional[Union[T, Tuple[T, T]]] = None)[source]¶ Applies a 2D bilinear upsampling to an input signal composed of several input channels.
To specify the scale, it takes either the
size
or thescale_factor
as its constructor argument. When
size
is given, it is the output size of the image (h, w).
- Parameters
size (int or Tuple[int, int], optional) – output spatial sizes
scale_factor (float or Tuple[float, float], optional) – multiplier for spatial size
Warning
This class is deprecated in favor of
interpolate()
. It is equivalent tonn.functional.interpolate(..., mode='bilinear', align_corners=True)
.- Shape:
Input: \((N, C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\) where
\[H_{out} = \left\lfloor H_{in} \times \text{scale\_factor} \right\rfloor \]\[W_{out} = \left\lfloor W_{in} \times \text{scale\_factor} \right\rfloor \]Examples:
>>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> input
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])

>>> m = nn.UpsamplingBilinear2d(scale_factor=2)
>>> m(input)
tensor([[[[ 1.0000,  1.3333,  1.6667,  2.0000],
          [ 1.6667,  2.0000,  2.3333,  2.6667],
          [ 2.3333,  2.6667,  3.0000,  3.3333],
          [ 3.0000,  3.3333,  3.6667,  4.0000]]]])
DataParallel layers (multi-GPU, distributed)¶
DataParallel¶
-
class
torch.nn.
DataParallel
(module, device_ids=None, output_device=None, dim=0)[source]¶ Implements data parallelism at the module level.
This container parallelizes the application of the given
module
by splitting the input across the specified devices by chunking in the batch dimension (other objects will be copied once per device). In the forward pass, the module is replicated on each device, and each replica handles a portion of the input. During the backwards pass, gradients from each replica are summed into the original module.The batch size should be larger than the number of GPUs used.
Warning
It is recommended to use
DistributedDataParallel
, instead of this class, to do multi-GPU training, even if there is only a single node. See: cuda-nn-ddp-instead and ddp.Arbitrary positional and keyword inputs are allowed to be passed into DataParallel but some types are specially handled. Tensors will be scattered on the specified dim (default 0). tuple, list and dict types will be shallow copied. Other types will be shared among different threads and can be corrupted if written to in the model’s forward pass.
The parallelized
module
must have its parameters and buffers ondevice_ids[0]
before running thisDataParallel
module.Warning
In each forward,
module
is replicated on each device, so any updates to the running module inforward
will be lost. For example, ifmodule
has a counter attribute that is incremented in eachforward
, it will always stay at the initial value because the update is done on the replicas which are destroyed afterforward
. However,DataParallel
guarantees that the replica ondevice[0]
will have its parameters and buffers sharing storage with the base parallelizedmodule
. So in-place updates to the parameters or buffers ondevice[0]
will be recorded. E.g.,BatchNorm2d
andspectral_norm()
rely on this behavior to update the buffers.Warning
Forward and backward hooks defined on
module
and its submodules will be invokedlen(device_ids)
times, each with inputs located on a particular device. Particularly, the hooks are only guaranteed to be executed in correct order with respect to operations on corresponding devices. For example, it is not guaranteed that hooks set viaregister_forward_pre_hook()
be executed before alllen(device_ids)
forward()
calls, but that each such hook be executed before the correspondingforward()
call of that device.Warning
When
module
returns a scalar (i.e., 0-dimensional tensor) inforward()
, this wrapper will return a vector of length equal to number of devices used in data parallelism, containing the result from each device.Note
There is a subtlety in using the
pack sequence -> recurrent network -> unpack sequence
pattern in aModule
wrapped inDataParallel
. See My recurrent network doesn’t work with data parallelism section in FAQ for details.- Parameters
module (Module) – module to be parallelized
device_ids (list of python:int or torch.device) – CUDA devices (default: all devices)
output_device (int or torch.device) – device location of output (default: device_ids[0])
- Variables
~DataParallel.module (Module) – the module to be parallelized
Example:
>>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2]) >>> output = net(input_var) # input_var can be on any device, including CPU
DistributedDataParallel¶
-
class
torch.nn.parallel.
DistributedDataParallel
(module, device_ids=None, output_device=None, dim=0, broadcast_buffers=True, process_group=None, bucket_cap_mb=25, find_unused_parameters=False, check_reduction=False, gradient_as_bucket_view=False)[source]¶ Implements distributed data parallelism that is based on
torch.distributed
package at the module level.This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension. The module is replicated on each machine and each device, and each such replica handles a portion of the input. During the backwards pass, gradients from each node are averaged.
The batch size should be larger than the number of GPUs used locally.
See also: Basics and cuda-nn-ddp-instead. The same constraints on input as in
torch.nn.DataParallel
apply.Creation of this class requires that
torch.distributed
be already initialized, by calling torch.distributed.init_process_group()
.DistributedDataParallel
is proven to be significantly faster thantorch.nn.DataParallel
for single-node multi-GPU data parallel training.To use
DistributedDataParallel
on a host with N GPUs, you should spawn upN
processes, ensuring that each process exclusively works on a single GPU from 0 to N-1. This can be done by either settingCUDA_VISIBLE_DEVICES
for every process or by calling:>>> torch.cuda.set_device(i)
where i is from 0 to N-1. In each process, you should refer to the following to construct this module:
>>> torch.distributed.init_process_group(
>>>     backend='nccl', world_size=N, init_method='...'
>>> )
>>> model = DistributedDataParallel(model, device_ids=[i], output_device=i)
In order to spawn up multiple processes per node, you can use either
torch.distributed.launch
ortorch.multiprocessing.spawn
.Note
Please refer to PyTorch Distributed Overview for a brief introduction to all features related to distributed training.
Note
nccl
backend is currently the fastest and highly recommended backend when using GPUs. This applies to both single-node and multi-node distributed training.Note
This module also supports mixed-precision distributed training. This means that your model can have different types of parameters such as mixed types of
fp16
andfp32
; the gradient reduction on these mixed types of parameters will work fine.Note
If you use
torch.save
on one process to checkpoint the module, andtorch.load
on some other processes to recover it, make sure thatmap_location
is configured properly for every process. Withoutmap_location
,torch.load
would recover the module to devices where the module was saved from.Note
When a model is trained on
M
nodes withbatch=N
, the gradient will beM
times smaller when compared to the same model trained on a single node withbatch=M*N
if the loss is summed (NOT averaged as usual) across instances in a batch (because the gradients between different nodes are averaged). You should take this into consideration when you want to obtain a mathematically equivalent training process compared to the local training counterpart. But in most cases, you can just treat a DistributedDataParallel wrapped model, a DataParallel wrapped model and an ordinary model on a single GPU as the same (E.g. using the same learning rate for equivalent batch size).Note
Parameters are never broadcast between processes. The module performs an all-reduce step on gradients and assumes that they will be modified by the optimizer in all processes in the same way. Buffers (e.g. BatchNorm stats) are broadcast from the module in process of rank 0, to all other replicas in the system in every iteration.
Note
If you are using DistributedDataParallel in conjunction with the distributed-rpc-framework, you should always use
torch.distributed.autograd.backward()
to compute gradients andtorch.distributed.optim.DistributedOptimizer
for optimizing parameters.Example:
>>> import torch.distributed.autograd as dist_autograd
>>> import torch.distributed.rpc as rpc
>>> from torch.nn.parallel import DistributedDataParallel as DDP
>>> from torch import optim
>>> from torch.distributed.optim import DistributedOptimizer
>>> from torch.distributed.rpc import RRef
>>>
>>> t1 = torch.rand((3, 3), requires_grad=True)
>>> t2 = torch.rand((3, 3), requires_grad=True)
>>> rref = rpc.remote("worker1", torch.add, args=(t1, t2))
>>> ddp_model = DDP(my_model)  # my_model is a placeholder nn.Module
>>>
>>> # Setup optimizer
>>> optimizer_params = [rref]
>>> for param in ddp_model.parameters():
>>>     optimizer_params.append(RRef(param))
>>>
>>> dist_optim = DistributedOptimizer(
>>>     optim.SGD,
>>>     optimizer_params,
>>>     lr=0.05,
>>> )
>>>
>>> with dist_autograd.context() as context_id:
>>>     pred = ddp_model(rref.to_here())
>>>     loss = loss_func(pred, target)  # loss_func and target are placeholders
>>>     dist_autograd.backward(context_id, loss)
>>>     dist_optim.step()
Warning
Constructor, forward method, and differentiation of the output (or a function of the output of this module) are distributed synchronization points. Take that into account in case different processes might be executing different code.
Warning
This module assumes all parameters are registered in the model by the time it is created. No parameters should be added nor removed later. Same applies to buffers.
Warning
This module assumes all parameters are registered in the model of each distributed process in the same order. The module itself will conduct gradient
allreduce
following the reverse order of the registered parameters of the model. In other words, it is users’ responsibility to ensure that each distributed process has the exact same model and thus the exact same parameter registration order.Warning
This module allows parameters with non-rowmajor-contiguous strides. For example, your model may contain some parameters whose
torch.memory_format
istorch.contiguous_format
and others whose format istorch.channels_last
. However, corresponding parameters in different processes must have the same strides.Warning
This module doesn’t work with
torch.autograd.grad()
(i.e. it will only work if gradients are to be accumulated in.grad
attributes of parameters).Warning
If you plan on using this module with a
nccl
backend or agloo
backend (that uses Infiniband), together with a DataLoader that uses multiple workers, please change the multiprocessing start method toforkserver
(Python 3 only) orspawn
. Unfortunately Gloo (that uses Infiniband) and NCCL2 are not fork safe, and you will likely experience deadlocks if you don’t change this setting.Warning
Forward and backward hooks defined on
module
and its submodules won’t be invoked anymore, unless the hooks are initialized in theforward()
method.Warning
You should never try to change your model’s parameters after wrapping up your model with
DistributedDataParallel
, because when wrapping up your model with DistributedDataParallel
, the constructor ofDistributedDataParallel
will register the additional gradient reduction functions on all the parameters of the model itself at the time of construction. If you change the model’s parameters afterwards, gradient reduction functions will no longer match the correct set of parameters.Warning
Using
DistributedDataParallel
in conjunction with the distributed-rpc-framework is experimental and subject to change.Warning
The
gradient_as_bucket_view
mode does not yet work with Automatic Mixed Precision (AMP). AMP maintains stashed gradients that are used for unscaling gradients. Withgradient_as_bucket_view=True
, these stashed gradients will point to communication buckets in the first iteration. In the next iteration, the communication buckets are mutated and thus these stashed gradients will be unexpectedly mutated as well, which might lead to wrong results.- Parameters
module (Module) – module to be parallelized
device_ids (list of python:int or torch.device) – CUDA devices. This should only be provided when the input module resides on a single CUDA device. For single-device modules, the i’th
module
replica is placed ondevice_ids[i]
. For multi-device modules and CPU modules,device_ids
must beNone
or an empty list, and input data for the forward pass must be placed on the correct device. (default: all visible devices for single-device modules)output_device (int or torch.device) – Device location of output for single-device CUDA modules. For multi-device modules and CPU modules, it must be
None
, and the module itself dictates the output location. (default:device_ids[0]
for single-device modules)broadcast_buffers (bool) – Flag that enables syncing (broadcasting) buffers of the module at beginning of the
forward
function. (default:True
)process_group – The process group to be used for distributed data all-reduction. If
None
, the default process group, which is created bytorch.distributed.init_process_group()
, will be used. (default:None
)bucket_cap_mb –
DistributedDataParallel
will bucket parameters into multiple buckets so that gradient reduction of each bucket can potentially overlap with backward computation.bucket_cap_mb
controls the bucket size in MegaBytes (MB). (default: 25)find_unused_parameters (bool) – Traverse the autograd graph from all tensors contained in the return value of the wrapped module’s
forward
function. Parameters that don’t receive gradients as part of this graph are preemptively marked as being ready to be reduced. Note that allforward
outputs that are derived from module parameters must participate in calculating loss and later the gradient computation. If they don’t, this wrapper will hang waiting for autograd to produce gradients for those parameters. Any outputs derived from module parameters that are otherwise unused can be detached from the autograd graph usingtorch.Tensor.detach
. (default:False
)check_reduction – This argument is deprecated.
gradient_as_bucket_view (bool) – This is a prototype feature and subject to changes. When set to
True
, gradients will be views pointing to different offsets ofallreduce
communication buckets. This can reduce peak memory usage, where the saved memory size will be equal to the total gradients size. Moreover, it avoids the overhead of copying between gradients andallreduce
communication buckets. When gradients are views,detach_()
cannot be called on the gradients. If hitting such errors, please fix it by referring to thezero_grad()
function intorch/optim/optimizer.py
as a solution.
- Variables
~DistributedDataParallel.module (Module) – the module to be parallelized.
Example:
>>> torch.distributed.init_process_group(backend='nccl', world_size=4, init_method='...') >>> net = torch.nn.parallel.DistributedDataParallel(model, pg)
-
join
(divide_by_initial_world_size=True, enable=True)[source]¶ A context manager to be used in conjunction with an instance of
torch.nn.parallel.DistributedDataParallel
to be able to train with uneven inputs across participating processes.This context manager will keep track of already-joined DDP processes, and “shadow” the forward and backward passes by inserting collective communication operations to match with the ones created by non-joined DDP processes. This will ensure each collective call has a corresponding call by already-joined DDP processes, preventing hangs or errors that would otherwise happen when training with uneven inputs across processes.
Once all DDP processes have joined, the context manager will broadcast the model corresponding to the last joined process to all processes to ensure the model is the same across all processes (which is guaranteed by DDP).
To use this to enable training with uneven inputs across processes, simply wrap this context manager around your training loop. No further modifications to the model or data loading are required.
Warning
This module works only with the multi-process, single-device usage of
torch.nn.parallel.DistributedDataParallel
, which means that a single process works on a single GPU.Warning
This module currently does not support custom distributed collective operations in the forward pass, such as
SyncBatchNorm
or other custom defined collectives in the model’s forward pass.- Parameters
divide_by_initial_world_size (bool) – If
True
, will divide gradients by the initialworld_size
DDP training was launched with. IfFalse
, will compute the effective world size (number of ranks that have not depleted their inputs yet) and divide gradients by that during allreduce. Setdivide_by_initial_world_size=True
to ensure every input sample including the uneven inputs have equal weight in terms of how much they contribute to the global gradient. This is achieved by always dividing the gradient by the initialworld_size
even when we encounter uneven inputs. If you set this toFalse
, we divide the gradient by the remaining number of nodes. This ensures parity with training on a smallerworld_size
although it also means the uneven inputs would contribute more towards the global gradient. Typically, you would want to set this toTrue
for cases where the last few inputs of your training job are uneven. In extreme cases, where there is a large discrepancy in the number of inputs, setting this toFalse
might provide better results.enable (bool) – Whether to enable uneven input detection or not. Pass in
enable=False
to disable in cases where you know that inputs are even across participating processes. Default isTrue
.
Example:
>>> import torch
>>> import torch.distributed as dist
>>> import os
>>> import torch.multiprocessing as mp
>>> import torch.nn as nn
>>> # On each spawned worker
>>> def worker(rank):
>>>     dist.init_process_group("nccl", rank=rank, world_size=2)
>>>     torch.cuda.set_device(rank)
>>>     model = nn.Linear(1, 1, bias=False).to(rank)
>>>     model = torch.nn.parallel.DistributedDataParallel(
>>>         model, device_ids=[rank], output_device=rank
>>>     )
>>>     # Rank 1 gets one more input than rank 0.
>>>     inputs = [torch.tensor([1]).float() for _ in range(10 + rank)]
>>>     with model.join():
>>>         for _ in range(5):
>>>             for inp in inputs:
>>>                 loss = model(inp).sum()
>>>                 loss.backward()
>>>     # Without the join() API, the below synchronization will hang
>>>     # blocking for rank 1's allreduce to complete.
>>>     torch.cuda.synchronize(device=rank)
-
no_sync
()[source]¶ A context manager to disable gradient synchronizations across DDP processes. Within this context, gradients will be accumulated on module variables, which will later be synchronized in the first forward-backward pass exiting the context.
Example:
>>> ddp = torch.nn.parallel.DistributedDataParallel(model, pg)
>>> with ddp.no_sync():
>>>     for input in inputs:
>>>         ddp(input).backward()  # no synchronization, accumulate grads
>>> ddp(another_input).backward()  # synchronize grads
-
register_comm_hook
(state: object, hook: callable)[source]¶ Registers a communication hook which is an enhancement that provides a flexible hook to users where they can specify how DDP aggregates gradients across multiple workers.
This hook would be very useful for researchers to try out new ideas. For example, this hook can be used to implement several algorithms like GossipGrad and gradient compression which involve different communication strategies for parameter syncs while running Distributed DataParallel training.
- Parameters
state (object) –
Passed to the hook to maintain any state information during the training process. Examples include error feedback in gradient compression, peers to communicate with next in GossipGrad, etc.
It is locally stored by each worker and shared by all the gradient tensors on the worker.
hook (callable) –
Averages gradient tensors across workers; it is defined as:
hook(state: object, bucket: dist._GradBucket) -> torch.futures.Future
This function is called once the bucket is ready. The hook can perform whatever processing is needed and return a Future indicating completion of any async work (e.g., allreduce). If the hook doesn’t perform any communication, it can also just return a completed Future. The Future should hold the new value of the grad bucket’s tensors. Once a bucket is ready, the c10d reducer will call this hook, use the tensors returned by the Future, and copy the gradients to individual parameters.
We also provide an API called
get_future
to retrieve a Future associated with the completion ofc10d.ProcessGroup.work
.
Warning
The grad bucket’s tensors will not be predivided by world_size. Users are responsible for dividing by world_size when performing operations like allreduce.
Warning
DDP communication hook can only be registered once and should be registered before calling backward.
Warning
The Future object that hook returns should contain a result that has the same shape with the tensors inside grad bucket.
Warning
DDP communication hook does not support single-process multiple-device mode. Gradbucket tensors should consist of only a single tensor.
Warning
get_future
API supports only NCCL backend and will return atorch._C.Future
which is an internal type and should be used with caution. It can still be used byregister_comm_hook
API, but it is subject to some subtle differences compared totorch.futures.Future
.Warning
DDP communication hook is experimental and subject to change.
- Example::
Below is an example of a noop hook that returns the same tensors.
>>> def noop(state: object, bucket: dist._GradBucket) -> torch.futures.Future:
>>>     fut = torch.futures.Future()
>>>     fut.set_result(bucket.get_tensors())
>>>     return fut
>>> ddp.register_comm_hook(state = None, hook = noop)
- Example::
Below is an example of a Parallel SGD algorithm where gradients are encoded before allreduce, and then decoded after allreduce.
>>> def encode_and_decode(state: object, bucket: dist._GradBucket) -> torch.futures.Future:
>>>     tensors = [t / process_group.world_size for t in bucket.get_tensors()]
>>>     encoded_tensors = encode(tensors)  # encode gradients
>>>     fut = process_group.allreduce(encoded_tensors).get_future()
>>>     # Define the then callback to decode; named differently from the
>>>     # user-supplied decode() codec to avoid shadowing it.
>>>     def decode_fut(fut):
>>>         decoded_tensors = decode(fut.value())  # decode gradients
>>>         return decoded_tensors
>>>     return fut.then(decode_fut)
>>> ddp.register_comm_hook(state = None, hook = encode_and_decode)
DistributedDataParallelCPU¶
Utilities¶
clip_grad_norm_¶
-
torch.nn.utils.
clip_grad_norm_
(parameters: Union[torch.Tensor, Iterable[torch.Tensor]], max_norm: float, norm_type: float = 2.0) → torch.Tensor[source]¶ Clips gradient norm of an iterable of parameters.
The norm is computed over all gradients together, as if they were concatenated into a single vector. Gradients are modified in-place.
- Parameters
parameters (Iterable[Tensor] or Tensor) – an iterable of Tensors or a single Tensor that will have gradients normalized
max_norm (float or int) – max norm of the gradients
norm_type (float or int) – type of the used p-norm. Can be 'inf' for infinity norm.
- Returns
Total norm of the parameters (viewed as a single vector).
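A typical training-loop sketch (criterion, model and optimizer are placeholders): clip after backward() and before the optimizer step.
>>> loss = criterion(model(inputs), targets)
>>> loss.backward()
>>> total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
>>> optimizer.step()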
clip_grad_value_¶
-
torch.nn.utils.
clip_grad_value_
(parameters: Union[torch.Tensor, Iterable[torch.Tensor]], clip_value: float) → None[source]¶ Clips gradient of an iterable of parameters at specified value.
Gradients are modified in-place.
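A minimal sketch (model and optimizer are placeholders): every gradient entry is clamped to the range [-clip_value, clip_value] after backward().
>>> loss.backward()
>>> torch.nn.utils.clip_grad_value_(model.parameters(), clip_value=0.5)
>>> optimizer.step()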
parameters_to_vector¶
-
torch.nn.utils.
parameters_to_vector
(parameters: Iterable[torch.Tensor]) → torch.Tensor[source]¶ Convert parameters to one vector
- Parameters
parameters (Iterable[Tensor]) – an iterator of Tensors that are the parameters of a model.
- Returns
The parameters represented by a single vector
vector_to_parameters¶
-
torch.nn.utils.
vector_to_parameters
(vec: torch.Tensor, parameters: Iterable[torch.Tensor]) → None[source]¶ Convert one vector to the parameters
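A round-trip sketch covering both utilities on a concrete module:
>>> from torch.nn.utils import parameters_to_vector, vector_to_parameters
>>> m = nn.Linear(3, 2)
>>> vec = parameters_to_vector(m.parameters())
>>> vec.shape  # 3*2 weights + 2 biases
torch.Size([8])
>>> vector_to_parameters(vec * 0, m.parameters())  # write back, here zeroing all parameters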
weight_norm¶
-
torch.nn.utils.
weight_norm
(module: T_module, name: str = 'weight', dim: int = 0) → T_module[source]¶ Applies weight normalization to a parameter in the given module.
\[\mathbf{w} = g \dfrac{\mathbf{v}}{\|\mathbf{v}\|} \]Weight normalization is a reparameterization that decouples the magnitude of a weight tensor from its direction. This replaces the parameter specified by
name
(e.g.'weight'
) with two parameters: one specifying the magnitude (e.g.'weight_g'
) and one specifying the direction (e.g.'weight_v'
). Weight normalization is implemented via a hook that recomputes the weight tensor from the magnitude and direction before everyforward()
call.By default, with
dim=0
, the norm is computed independently per output channel/plane. To compute a norm over the entire weight tensor, usedim=None
.See https://arxiv.org/abs/1602.07868
- Parameters
module (Module) – containing module
name (str, optional) – name of weight parameter
dim (int, optional) – dimension over which to compute the norm
- Returns
The original module with the weight norm hook
Example:
>>> m = weight_norm(nn.Linear(20, 40), name='weight')
>>> m
Linear(in_features=20, out_features=40, bias=True)
>>> m.weight_g.size()
torch.Size([40, 1])
>>> m.weight_v.size()
torch.Size([40, 20])
remove_weight_norm¶
spectral_norm¶
-
torch.nn.utils.
spectral_norm
(module: T_module, name: str = 'weight', n_power_iterations: int = 1, eps: float = 1e-12, dim: Optional[int] = None) → T_module[source]¶ Applies spectral normalization to a parameter in the given module.
\[\mathbf{W}_{SN} = \dfrac{\mathbf{W}}{\sigma(\mathbf{W})}, \sigma(\mathbf{W}) = \max_{\mathbf{h}: \mathbf{h} \ne 0} \dfrac{\|\mathbf{W} \mathbf{h}\|_2}{\|\mathbf{h}\|_2} \]Spectral normalization stabilizes the training of discriminators (critics) in Generative Adversarial Networks (GANs) by rescaling the weight tensor with spectral norm \(\sigma\) of the weight matrix calculated using power iteration method. If the dimension of the weight tensor is greater than 2, it is reshaped to 2D in power iteration method to get spectral norm. This is implemented via a hook that calculates spectral norm and rescales weight before every
forward()
call.See Spectral Normalization for Generative Adversarial Networks .
- Parameters
module (nn.Module) – containing module
name (str, optional) – name of weight parameter
n_power_iterations (int, optional) – number of power iterations to calculate spectral norm
eps (float, optional) – epsilon for numerical stability in calculating norms
dim (int, optional) – dimension corresponding to number of outputs, the default is
0
, except for modules that are instances of ConvTranspose{1,2,3}d, when it is1
- Returns
The original module with the spectral norm hook
Example:
>>> m = spectral_norm(nn.Linear(20, 40))
>>> m
Linear(in_features=20, out_features=40, bias=True)
>>> m.weight_u.size()
torch.Size([40])
remove_spectral_norm¶
PackedSequence¶
-
torch.nn.utils.rnn.
PackedSequence
(data: torch.Tensor, batch_sizes: torch.Tensor = None, sorted_indices: Optional[torch.Tensor] = None, unsorted_indices: Optional[torch.Tensor] = None)[source]¶ Holds the data and list of
batch_sizes
of a packed sequence.All RNN modules accept packed sequences as inputs.
Note
Instances of this class should never be created manually. They are meant to be instantiated by functions like
pack_padded_sequence()
.Batch sizes represent the number of elements at each sequence step in the batch, not the varying sequence lengths passed to
pack_padded_sequence()
. For instance, given dataabc
andx
thePackedSequence
would contain dataaxbc
withbatch_sizes=[2,1,1]
.- Variables
~PackedSequence.data (Tensor) – Tensor containing packed sequence
~PackedSequence.batch_sizes (Tensor) – Tensor of integers holding information about the batch size at each sequence step
~PackedSequence.sorted_indices (Tensor, optional) – Tensor of integers holding how this
PackedSequence
is constructed from sequences.~PackedSequence.unsorted_indices (Tensor, optional) – Tensor of integers holding how to recover the original sequences in the correct order.
Note
data
can be on arbitrary device and of arbitrary dtype.sorted_indices
andunsorted_indices
must betorch.int64
tensors on the same device asdata
.However,
batch_sizes
should always be a CPUtorch.int64
tensor.This invariant is maintained throughout
PackedSequence
class, and by all functions that construct a PackedSequence in PyTorch (i.e., they only pass in tensors conforming to this constraint).
pack_padded_sequence¶
-
torch.nn.utils.rnn.
pack_padded_sequence
(input, lengths, batch_first=False, enforce_sorted=True)[source]¶ Packs a Tensor containing padded sequences of variable length.
input
can be of sizeT x B x *
where T is the length of the longest sequence (equal tolengths[0]
),B
is the batch size, and*
is any number of dimensions (including 0). Ifbatch_first
isTrue
,B x T x *
input
is expected.For unsorted sequences, use enforce_sorted = False. If
enforce_sorted
isTrue
, the sequences should be sorted by length in a decreasing order, i.e.input[:,0]
should be the longest sequence, andinput[:,B-1]
the shortest one. enforce_sorted = True is only necessary for ONNX export.Note
This function accepts any input that has at least two dimensions. You can apply it to pack the labels, and use the output of the RNN with them to compute the loss directly. A Tensor can be retrieved from a
PackedSequence
object by accessing its.data
attribute.- Parameters
input (Tensor) – padded batch of variable length sequences.
lengths (Tensor or list(int)) – list of sequence lengths of each batch element (must be on the CPU if provided as a tensor).
batch_first (bool, optional) – if
True
, the input is expected inB x T x *
format.enforce_sorted (bool, optional) – if
True
, the input is expected to contain sequences sorted by length in a decreasing order. IfFalse
, the input will get sorted unconditionally. Default:True
.
- Returns
a
PackedSequence
object
pad_packed_sequence¶
-
torch.nn.utils.rnn.
pad_packed_sequence
(sequence, batch_first=False, padding_value=0.0, total_length=None)[source]¶ Pads a packed batch of variable length sequences.
It is an inverse operation to
pack_padded_sequence()
.The returned Tensor’s data will be of size
T x B x *
, where T is the length of the longest sequence and B is the batch size. Ifbatch_first
is True, the data will be transposed intoB x T x *
format.Example
>>> from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
>>> seq = torch.tensor([[1,2,0], [3,0,0], [4,5,6]])
>>> lens = [2, 1, 3]
>>> packed = pack_padded_sequence(seq, lens, batch_first=True, enforce_sorted=False)
>>> packed
PackedSequence(data=tensor([4, 1, 3, 5, 2, 6]), batch_sizes=tensor([3, 2, 1]),
               sorted_indices=tensor([2, 0, 1]), unsorted_indices=tensor([1, 2, 0]))
>>> seq_unpacked, lens_unpacked = pad_packed_sequence(packed, batch_first=True)
>>> seq_unpacked
tensor([[1, 2, 0],
        [3, 0, 0],
        [4, 5, 6]])
>>> lens_unpacked
tensor([2, 1, 3])
Note
total_length
is useful to implement thepack sequence -> recurrent network -> unpack sequence
pattern in aModule
wrapped inDataParallel
. See this FAQ section for details.- Parameters
sequence (PackedSequence) – batch to pad
batch_first (bool, optional) – if
True
, the output will be inB x T x *
format.padding_value (float, optional) – values for padded elements.
total_length (int, optional) – if not
None
, the output will be padded to have lengthtotal_length
. This method will throwValueError
iftotal_length
is less than the max sequence length insequence
.
- Returns
Tuple of Tensor containing the padded sequence, and a Tensor containing the list of lengths of each sequence in the batch. Batch elements will be re-ordered as they were ordered originally when the batch was passed to
pack_padded_sequence
orpack_sequence
.
pad_sequence¶
-
torch.nn.utils.rnn.
pad_sequence
(sequences, batch_first=False, padding_value=0.0)[source]¶ Pad a list of variable length Tensors with
padding_value
pad_sequence
stacks a list of Tensors along a new dimension, and pads them to equal length. For example, if the input is a list of sequences with size L x *, the output is of size T x B x * if batch_first is False, and B x T x * otherwise. B is the batch size, equal to the number of elements in
sequences
. T is the length of the longest sequence. L is the length of each sequence. * is any number of trailing dimensions, including none.
>>> from torch.nn.utils.rnn import pad_sequence
>>> a = torch.ones(25, 300)
>>> b = torch.ones(22, 300)
>>> c = torch.ones(15, 300)
>>> pad_sequence([a, b, c]).size()
torch.Size([25, 3, 300])
Note
This function returns a Tensor of size
T x B x *
orB x T x *
where T is the length of the longest sequence. This function assumes trailing dimensions and type of all the Tensors in sequences are same.- Parameters
- Returns
Tensor of size
T x B x *
ifbatch_first
isFalse
. Tensor of sizeB x T x *
otherwise
pack_sequence¶
-
torch.nn.utils.rnn.
pack_sequence
(sequences, enforce_sorted=True)[source]¶ Packs a list of variable length Tensors
sequences
should be a list of Tensors of sizeL x *
, where L is the length of a sequence and * is any number of trailing dimensions, including zero.For unsorted sequences, use enforce_sorted = False. If
enforce_sorted
isTrue
, the sequences should be sorted in the order of decreasing length.enforce_sorted = True
is only necessary for ONNX export.Example
>>> from torch.nn.utils.rnn import pack_sequence
>>> a = torch.tensor([1,2,3])
>>> b = torch.tensor([4,5])
>>> c = torch.tensor([6])
>>> pack_sequence([a, b, c])
PackedSequence(data=tensor([ 1,  4,  6,  2,  5,  3]), batch_sizes=tensor([ 3,  2,  1]))
- Parameters
sequences (list[Tensor]) – a list of sequences of decreasing length
enforce_sorted (bool, optional) – if True, checks that the input contains sequences sorted by length in a decreasing order. If False, this condition is not checked. Default: True.
- Returns
a
PackedSequence
object
torch.nn.functional¶
Convolution functions¶
conv1d¶
-
torch.nn.functional.
conv1d
(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor¶ Applies a 1D convolution over an input signal composed of several input planes.
This operator supports TensorFloat32.
See
Conv1d
for details and output shape.Note
In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting
torch.backends.cudnn.deterministic = True
. See Reproducibility for more information.- Parameters
input – input tensor of shape \((\text{minibatch} , \text{in\_channels} , iW)\)
weight – filters of shape \((\text{out\_channels} , \frac{\text{in\_channels}}{\text{groups}} , kW)\)
bias – optional bias of shape \((\text{out\_channels})\). Default:
None
stride – the stride of the convolving kernel. Can be a single number or a one-element tuple (sW,). Default: 1
padding – implicit paddings on both sides of the input. Can be a single number or a one-element tuple (padW,). Default: 0
dilation – the spacing between kernel elements. Can be a single number or a one-element tuple (dW,). Default: 1
groups – split input into groups, \(\text{in\_channels}\) should be divisible by the number of groups. Default: 1
Examples:
>>> filters = torch.randn(33, 16, 3) >>> inputs = torch.randn(20, 16, 50) >>> F.conv1d(inputs, filters)
conv2d¶
-
torch.nn.functional.
conv2d
(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor¶ Applies a 2D convolution over an input image composed of several input planes.
This operator supports TensorFloat32.
See
Conv2d
for details and output shape.Note
In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting
torch.backends.cudnn.deterministic = True
. See Reproducibility for more information.- Parameters
input – input tensor of shape \((\text{minibatch} , \text{in\_channels} , iH , iW)\)
weight – filters of shape \((\text{out\_channels} , \frac{\text{in\_channels}}{\text{groups}} , kH , kW)\)
bias – optional bias tensor of shape \((\text{out\_channels})\). Default:
None
stride – the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1
padding – implicit paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0
dilation – the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1
groups – split input into groups, \(\text{in\_channels}\) should be divisible by the number of groups. Default: 1
Examples:
>>> # With square kernels and equal stride >>> filters = torch.randn(8,4,3,3) >>> inputs = torch.randn(1,4,5,5) >>> F.conv2d(inputs, filters, padding=1)
conv3d¶
-
torch.nn.functional.
conv3d
(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor¶ Applies a 3D convolution over an input image composed of several input planes.
This operator supports TensorFloat32.
See
Conv3d
for details and output shape.Note
In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting
torch.backends.cudnn.deterministic = True
. See Reproducibility for more information.- Parameters
input – input tensor of shape \((\text{minibatch} , \text{in\_channels} , iT , iH , iW)\)
weight – filters of shape \((\text{out\_channels} , \frac{\text{in\_channels}}{\text{groups}} , kT , kH , kW)\)
bias – optional bias tensor of shape \((\text{out\_channels})\). Default: None
stride – the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1
padding – implicit paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW). Default: 0
dilation – the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1
groups – split input into groups, \(\text{in\_channels}\) should be divisible by the number of groups. Default: 1
Examples:
>>> filters = torch.randn(33, 16, 3, 3, 3) >>> inputs = torch.randn(20, 16, 50, 10, 20) >>> F.conv3d(inputs, filters)
conv_transpose1d¶
-
torch.nn.functional.
conv_transpose1d
(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor¶ Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called “deconvolution”.
This operator supports TensorFloat32.
See
ConvTranspose1d
for details and output shape.Note
In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting
torch.backends.cudnn.deterministic = True
. See Reproducibility for more information.- Parameters
input – input tensor of shape \((\text{minibatch} , \text{in\_channels} , iW)\)
weight – filters of shape \((\text{in\_channels} , \frac{\text{out\_channels}}{\text{groups}} , kW)\)
bias – optional bias of shape \((\text{out\_channels})\). Default: None
stride – the stride of the convolving kernel. Can be a single number or a tuple
(sW,)
. Default: 1padding –
dilation * (kernel_size - 1) - padding
zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple(padW,)
. Default: 0output_padding – additional size added to one side of each dimension in the output shape. Can be a single number or a tuple
(out_padW)
. Default: 0groups – split input into groups, \(\text{in\_channels}\) should be divisible by the number of groups. Default: 1
dilation – the spacing between kernel elements. Can be a single number or a tuple
(dW,)
. Default: 1
Examples:
>>> inputs = torch.randn(20, 16, 50) >>> weights = torch.randn(16, 33, 5) >>> F.conv_transpose1d(inputs, weights)
conv_transpose2d¶
-
torch.nn.functional.
conv_transpose2d
(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor¶ Applies a 2D transposed convolution operator over an input image composed of several input planes, sometimes also called “deconvolution”.
This operator supports TensorFloat32.
See
ConvTranspose2d
for details and output shape.Note
In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting
torch.backends.cudnn.deterministic = True
. See Reproducibility for more information.- Parameters
input – input tensor of shape \((\text{minibatch} , \text{in\_channels} , iH , iW)\)
weight – filters of shape \((\text{in\_channels} , \frac{\text{out\_channels}}{\text{groups}} , kH , kW)\)
bias – optional bias of shape \((\text{out\_channels})\). Default: None
stride – the stride of the convolving kernel. Can be a single number or a tuple
(sH, sW)
. Default: 1padding –
dilation * (kernel_size - 1) - padding
zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple(padH, padW)
. Default: 0output_padding – additional size added to one side of each dimension in the output shape. Can be a single number or a tuple
(out_padH, out_padW)
. Default: 0groups – split input into groups, \(\text{in\_channels}\) should be divisible by the number of groups. Default: 1
dilation – the spacing between kernel elements. Can be a single number or a tuple
(dH, dW)
. Default: 1
Examples:
>>> # With square kernels and equal stride >>> inputs = torch.randn(1, 4, 5, 5) >>> weights = torch.randn(4, 8, 3, 3) >>> F.conv_transpose2d(inputs, weights, padding=1)
conv_transpose3d¶
-
torch.nn.functional.
conv_transpose3d
(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor¶ Applies a 3D transposed convolution operator over an input image composed of several input planes, sometimes also called “deconvolution”
This operator supports TensorFloat32.
See
ConvTranspose3d
for details and output shape.Note
In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting
torch.backends.cudnn.deterministic = True
. See Reproducibility for more information.- Parameters
input – input tensor of shape \((\text{minibatch} , \text{in\_channels} , iT , iH , iW)\)
weight – filters of shape \((\text{in\_channels} , \frac{\text{out\_channels}}{\text{groups}} , kT , kH , kW)\)
bias – optional bias of shape \((\text{out\_channels})\). Default: None
stride – the stride of the convolving kernel. Can be a single number or a tuple
(sT, sH, sW)
. Default: 1padding –
dilation * (kernel_size - 1) - padding
zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple(padT, padH, padW)
. Default: 0output_padding – additional size added to one side of each dimension in the output shape. Can be a single number or a tuple
(out_padT, out_padH, out_padW)
. Default: 0groups – split input into groups, \(\text{in\_channels}\) should be divisible by the number of groups. Default: 1
dilation – the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1
Examples:
>>> inputs = torch.randn(20, 16, 50, 10, 20) >>> weights = torch.randn(16, 33, 3, 3, 3) >>> F.conv_transpose3d(inputs, weights)
unfold¶
-
torch.nn.functional.
unfold
(input, kernel_size, dilation=1, padding=0, stride=1)[source]¶ Extracts sliding local blocks from a batched input tensor.
Warning
Currently, only 4-D input tensors (batched image-like tensors) are supported.
Warning
More than one element of the unfolded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensor, please clone it first.
See
torch.nn.Unfold
for details
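A shape-only sketch: for a 1 x 3 x 10 x 12 input and a 4 x 5 kernel, each column of the result stacks one 3 x 4 x 5 block.
>>> inp = torch.randn(1, 3, 10, 12)
>>> blocks = F.unfold(inp, kernel_size=(4, 5))
>>> blocks.shape  # (N, C * kH * kW, L) with L = (10-4+1) * (12-5+1)
torch.Size([1, 60, 56])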
fold¶
-
torch.nn.functional.
fold
(input, output_size, kernel_size, dilation=1, padding=0, stride=1)[source]¶ Combines an array of sliding local blocks into a large containing tensor.
Warning
Currently, only 3-D output tensors (unfolded batched image-like tensors) are supported.
See
torch.nn.Fold
for details
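The inverse-direction sketch: 12 blocks of size 3 x 2 x 2 are summed back into a 4 x 5 output, since (4-2+1) * (5-2+1) = 12 block positions.
>>> blocks = torch.randn(1, 3 * 2 * 2, 12)
>>> out = F.fold(blocks, output_size=(4, 5), kernel_size=(2, 2))
>>> out.shape
torch.Size([1, 3, 4, 5])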
Pooling functions¶
avg_pool1d¶
-
torch.nn.functional.
avg_pool1d
(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True) → Tensor¶ Applies a 1D average pooling over an input signal composed of several input planes.
See
AvgPool1d
for details and output shape.- Parameters
input – input tensor of shape \((\text{minibatch} , \text{in\_channels} , iW)\)
kernel_size – the size of the window. Can be a single number or a tuple (kW,)
stride – the stride of the window. Can be a single number or a tuple (sW,). Default:
kernel_size
padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padW,). Default: 0
ceil_mode – when True, will use ceil instead of floor to compute the output shape. Default:
False
count_include_pad – when True, will include the zero-padding in the averaging calculation. Default:
True
Examples:
>>> # pool of window of size=3, stride=2
>>> input = torch.tensor([[[1, 2, 3, 4, 5, 6, 7]]], dtype=torch.float32)
>>> F.avg_pool1d(input, kernel_size=3, stride=2)
tensor([[[ 2.,  4.,  6.]]])
avg_pool2d¶
-
torch.nn.functional.
avg_pool2d
(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) → Tensor¶ Applies 2D average-pooling operation in \(kH \times kW\) regions by step size \(sH \times sW\) steps. The number of output features is equal to the number of input planes.
See
AvgPool2d
for details and output shape.- Parameters
input – input tensor \((\text{minibatch} , \text{in\_channels} , iH , iW)\)
kernel_size – size of the pooling region. Can be a single number or a tuple (kH, kW)
stride – stride of the pooling operation. Can be a single number or a tuple (sH, sW). Default:
kernel_size
padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0
ceil_mode – when True, will use ceil instead of floor in the formula to compute the output shape. Default:
False
count_include_pad – when True, will include the zero-padding in the averaging calculation. Default:
True
divisor_override – if specified, it will be used as divisor, otherwise size of the pooling region will be used. Default: None
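A minimal illustrative example (a single \(2 \times 2\) window averaging four values):
Example:
>>> input = torch.tensor([[[[1., 2.], [3., 4.]]]])  # shape (1, 1, 2, 2)
>>> F.avg_pool2d(input, kernel_size=2)
tensor([[[[2.5000]]]])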
avg_pool3d¶
-
torch.nn.functional.
avg_pool3d
(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) → Tensor¶ Applies 3D average-pooling operation in \(kT \times kH \times kW\) regions by step size \(sT \times sH \times sW\) steps. The number of output features is equal to \(\lfloor\frac{\text{input planes}}{sT}\rfloor\).
See
AvgPool3d
for details and output shape.- Parameters
input – input tensor \((\text{minibatch} , \text{in\_channels} , iT , iH , iW)\)
kernel_size – size of the pooling region. Can be a single number or a tuple (kT, kH, kW)
stride – stride of the pooling operation. Can be a single number or a tuple (sT, sH, sW). Default:
kernel_size
padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW), Default: 0
ceil_mode – when True, will use ceil instead of floor in the formula to compute the output shape
count_include_pad – when True, will include the zero-padding in the averaging calculation
divisor_override – if specified, it will be used as divisor, otherwise size of the pooling region will be used. Default: None
max_pool1d¶
max_pool2d¶
max_pool3d¶
max_unpool1d¶
-
torch.nn.functional.
max_unpool1d
(input, indices, kernel_size, stride=None, padding=0, output_size=None)[source]¶ Computes a partial inverse of
MaxPool1d
.See
MaxUnpool1d
for details.
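An illustrative round trip (the indices returned by max_pool1d are required; non-maximal positions are restored as zeros):
Example:
>>> input = torch.tensor([[[1., 2., 3., 4., 5., 6., 7., 8.]]])
>>> output, indices = F.max_pool1d(input, kernel_size=2, return_indices=True)
>>> F.max_unpool1d(output, indices, kernel_size=2)
tensor([[[0., 2., 0., 4., 0., 6., 0., 8.]]])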
max_unpool2d¶
-
torch.nn.functional.
max_unpool2d
(input, indices, kernel_size, stride=None, padding=0, output_size=None)[source]¶ Computes a partial inverse of
MaxPool2d
.See
MaxUnpool2d
for details.
max_unpool3d¶
-
torch.nn.functional.
max_unpool3d
(input, indices, kernel_size, stride=None, padding=0, output_size=None)[source]¶ Computes a partial inverse of
MaxPool3d
.See
MaxUnpool3d
for details.
lp_pool1d¶
-
torch.nn.functional.
lp_pool1d
(input, norm_type, kernel_size, stride=None, ceil_mode=False)[source]¶ Applies a 1D power-average pooling over an input signal composed of several input planes. If the sum of all inputs to the power of p is zero, the gradient is set to zero as well.
See
LPPool1d
for details.
lp_pool2d¶
-
torch.nn.functional.
lp_pool2d
(input, norm_type, kernel_size, stride=None, ceil_mode=False)[source]¶ Applies a 2D power-average pooling over an input signal composed of several input planes. If the sum of all inputs to the power of p is zero, the gradient is set to zero as well.
See
LPPool2d
for details.
adaptive_max_pool1d¶
-
torch.nn.functional.
adaptive_max_pool1d
(*args, **kwargs)¶ Applies a 1D adaptive max pooling over an input signal composed of several input planes.
See
AdaptiveMaxPool1d
for details and output shape.- Parameters
output_size – the target output size (single integer)
return_indices – whether to return pooling indices. Default:
False
adaptive_max_pool2d¶
-
torch.nn.functional.
adaptive_max_pool2d
(*args, **kwargs)¶ Applies a 2D adaptive max pooling over an input signal composed of several input planes.
See
AdaptiveMaxPool2d
for details and output shape.- Parameters
output_size – the target output size (single integer or double-integer tuple)
return_indices – whether to return pooling indices. Default:
False
adaptive_max_pool3d¶
-
torch.nn.functional.
adaptive_max_pool3d
(*args, **kwargs)¶ Applies a 3D adaptive max pooling over an input signal composed of several input planes.
See
AdaptiveMaxPool3d
for details and output shape.- Parameters
output_size – the target output size (single integer or triple-integer tuple)
return_indices – whether to return pooling indices. Default:
False
adaptive_avg_pool1d¶
-
torch.nn.functional.
adaptive_avg_pool1d
(input, output_size) → Tensor¶ Applies a 1D adaptive average pooling over an input signal composed of several input planes.
See
AdaptiveAvgPool1d
for details and output shape.- Parameters
output_size – the target output size (single integer)
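A short sketch with arbitrary sizes (the output length is fixed regardless of the input length):
Example:
>>> input = torch.randn(1, 64, 8)
>>> F.adaptive_avg_pool1d(input, 4).shape
torch.Size([1, 64, 4])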
adaptive_avg_pool2d¶
-
torch.nn.functional.
adaptive_avg_pool2d
(input, output_size)[source]¶ Applies a 2D adaptive average pooling over an input signal composed of several input planes.
See
AdaptiveAvgPool2d
for details and output shape.- Parameters
output_size – the target output size (single integer or double-integer tuple)
adaptive_avg_pool3d¶
-
torch.nn.functional.
adaptive_avg_pool3d
(input, output_size)[source]¶ Applies a 3D adaptive average pooling over an input signal composed of several input planes.
See
AdaptiveAvgPool3d
for details and output shape.- Parameters
output_size – the target output size (single integer or triple-integer tuple)
Non-linear activation functions¶
threshold¶
-
torch.nn.functional.
threshold
(input: torch.Tensor, threshold: float, value: float, inplace: bool = False) → torch.Tensor¶ Thresholds each element of the input Tensor.
See
Threshold
for more details.
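For illustration, elements at or below the threshold are replaced by value:
Example:
>>> x = torch.tensor([-1.0, 0.5, 2.0])
>>> F.threshold(x, threshold=1.0, value=0.0)
tensor([0., 0., 2.])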
-
torch.nn.functional.
threshold_
(input, threshold, value) → Tensor¶ In-place version of
threshold()
.
relu¶
hardtanh¶
-
torch.nn.functional.
hardtanh
(input, min_val=- 1.0, max_val=1.0, inplace=False) → Tensor[source]¶ Applies the HardTanh function element-wise. See
Hardtanh
for more details.
-
torch.nn.functional.
hardtanh_
(input, min_val=- 1.0, max_val=1.0) → Tensor¶ In-place version of
hardtanh()
.
relu6¶
elu¶
-
torch.nn.functional.
elu
(input: torch.Tensor, alpha: float = 1.0, inplace: bool = False) → torch.Tensor[source]¶ Applies element-wise, \(\text{ELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x) - 1))\).
See
ELU
for more details.
selu¶
celu¶
leaky_relu¶
-
torch.nn.functional.
leaky_relu
(input, negative_slope=0.01, inplace=False) → Tensor[source]¶ Applies element-wise, \(\text{LeakyReLU}(x) = \max(0, x) + \text{negative\_slope} * \min(0, x)\)
See
LeakyReLU
for more details.
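A minimal example (negative inputs are scaled by negative_slope):
Example:
>>> x = torch.tensor([-1.0, 0.0, 2.0])
>>> F.leaky_relu(x, negative_slope=0.1)
tensor([-0.1000,  0.0000,  2.0000])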
-
torch.nn.functional.
leaky_relu_
(input, negative_slope=0.01) → Tensor¶ In-place version of
leaky_relu()
.
prelu¶
rrelu¶
glu¶
logsigmoid¶
-
torch.nn.functional.
logsigmoid
(input) → Tensor¶ Applies element-wise \(\text{LogSigmoid}(x_i) = \log \left(\frac{1}{1 + \exp(-x_i)}\right)\)
See
LogSigmoid
for more details.
hardshrink¶
-
torch.nn.functional.
hardshrink
(input, lambd=0.5) → Tensor[source]¶ Applies the hard shrinkage function element-wise
See
Hardshrink
for more details.
tanhshrink¶
-
torch.nn.functional.
tanhshrink
(input) → Tensor[source]¶ Applies element-wise, \(\text{Tanhshrink}(x) = x - \text{Tanh}(x)\)
See
Tanhshrink
for more details.
softsign¶
softplus¶
-
torch.nn.functional.
softplus
(input, beta=1, threshold=20) → Tensor¶ Applies element-wise, the function \(\text{Softplus}(x) = \frac{1}{\beta} * \log(1 + \exp(\beta * x))\).
For numerical stability the implementation reverts to the linear function when \(input \times \beta > threshold\).
See
Softplus
for more details.
softmin¶
-
torch.nn.functional.
softmin
(input: torch.Tensor, dim: Optional[int] = None, _stacklevel: int = 3, dtype: Optional[int] = None) → torch.Tensor[source]¶ Applies a softmin function.
Note that \(\text{Softmin}(x) = \text{Softmax}(-x)\). See softmax definition for mathematical formula.
See
Softmin
for more details.- Parameters
input (Tensor) – input
dim (int) – A dimension along which softmin will be computed (so every slice along dim will sum to 1).
dtype (
torch.dtype
, optional) – the desired data type of returned tensor. If specified, the input tensor is casted todtype
before the operation is performed. This is useful for preventing data type overflows. Default: None.
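A quick numerical check of the identity noted above:
Example:
>>> x = torch.randn(2, 3)
>>> torch.allclose(F.softmin(x, dim=1), F.softmax(-x, dim=1))
True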
softmax¶
-
torch.nn.functional.
softmax
(input: torch.Tensor, dim: Optional[int] = None, _stacklevel: int = 3, dtype: Optional[int] = None) → torch.Tensor[source]¶ Applies a softmax function.
Softmax is defined as:
\(\text{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}\)
It is applied to all slices along dim, and will re-scale them so that the elements lie in the range [0, 1] and sum to 1.
See
Softmax
for more details.- Parameters
input (Tensor) – input
dim (int) – A dimension along which softmax will be computed.
dtype (
torch.dtype
, optional) – the desired data type of returned tensor. If specified, the input tensor is casted todtype
before the operation is performed. This is useful for preventing data type overflows. Default: None.
Note
This function doesn’t work directly with NLLLoss, which expects the Log to be computed between the Softmax and itself. Use log_softmax instead (it’s faster and has better numerical properties).
softshrink¶
-
torch.nn.functional.
softshrink
(input, lambd=0.5) → Tensor¶ Applies the soft shrinkage function elementwise
See
Softshrink
for more details.
gumbel_softmax¶
-
torch.nn.functional.
gumbel_softmax
(logits: torch.Tensor, tau: float = 1, hard: bool = False, eps: float = 1e-10, dim: int = - 1) → torch.Tensor[source]¶ Samples from the Gumbel-Softmax distribution (Link 1 Link 2) and optionally discretizes.
- Parameters
logits – […, num_features] unnormalized log probabilities
tau – non-negative scalar temperature
hard – if
True
, the returned samples will be discretized as one-hot vectors, but will be differentiated as if it is the soft sample in autograddim (int) – A dimension along which softmax will be computed. Default: -1.
- Returns
Sampled tensor of same shape as logits from the Gumbel-Softmax distribution. If
hard=True
, the returned samples will be one-hot, otherwise they will be probability distributions that sum to 1 across dim.
Note
This function is here for legacy reasons and may be removed from nn.functional in the future.
Note
The main trick for hard is to do y_hard - y_soft.detach() + y_soft
It achieves two things:
- makes the output value exactly one-hot (since we add then subtract the y_soft value)
- makes the gradient equal to the y_soft gradient (since we strip all other gradients)
- Examples::
>>> logits = torch.randn(20, 32)
>>> # Sample soft categorical using reparametrization trick:
>>> F.gumbel_softmax(logits, tau=1, hard=False)
>>> # Sample hard categorical using "Straight-through" trick:
>>> F.gumbel_softmax(logits, tau=1, hard=True)
log_softmax¶
-
torch.nn.functional.
log_softmax
(input: torch.Tensor, dim: Optional[int] = None, _stacklevel: int = 3, dtype: Optional[int] = None) → torch.Tensor[source]¶ Applies a softmax followed by a logarithm.
While mathematically equivalent to log(softmax(x)), doing these two operations separately is slower, and numerically unstable. This function uses an alternative formulation to compute the output and gradient correctly.
See
LogSoftmax
for more details.- Parameters
input (Tensor) – input
dim (int) – A dimension along which log_softmax will be computed.
dtype (
torch.dtype
, optional) – the desired data type of returned tensor. If specified, the input tensor is casted todtype
before the operation is performed. This is useful for preventing data type overflows. Default: None.
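A quick numerical check of the equivalence (up to floating-point tolerance):
Example:
>>> x = torch.randn(2, 5)
>>> torch.allclose(F.log_softmax(x, dim=1), torch.log(F.softmax(x, dim=1)), atol=1e-6)
True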
tanh¶
sigmoid¶
Normalization functions¶
batch_norm¶
-
torch.nn.functional.
batch_norm
(input: torch.Tensor, running_mean: Optional[torch.Tensor], running_var: Optional[torch.Tensor], weight: Optional[torch.Tensor] = None, bias: Optional[torch.Tensor] = None, training: bool = False, momentum: float = 0.1, eps: float = 1e-05) → torch.Tensor[source]¶ Applies Batch Normalization for each channel across a batch of data.
See
BatchNorm1d
,BatchNorm2d
,BatchNorm3d
for details.
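A minimal sketch with arbitrary shapes; with training=True the running statistics are updated in place:
Example:
>>> x = torch.randn(4, 3, 5)  # (N, C, L)
>>> running_mean = torch.zeros(3)
>>> running_var = torch.ones(3)
>>> out = F.batch_norm(x, running_mean, running_var, training=True)
>>> out.shape
torch.Size([4, 3, 5])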
instance_norm¶
-
torch.nn.functional.
instance_norm
(input: torch.Tensor, running_mean: Optional[torch.Tensor] = None, running_var: Optional[torch.Tensor] = None, weight: Optional[torch.Tensor] = None, bias: Optional[torch.Tensor] = None, use_input_stats: bool = True, momentum: float = 0.1, eps: float = 1e-05) → torch.Tensor[source]¶ Applies Instance Normalization for each channel in each data sample in a batch.
See
InstanceNorm1d
,InstanceNorm2d
,InstanceNorm3d
for details.
layer_norm¶
-
torch.nn.functional.
layer_norm
(input: torch.Tensor, normalized_shape: List[int], weight: Optional[torch.Tensor] = None, bias: Optional[torch.Tensor] = None, eps: float = 1e-05) → torch.Tensor[source]¶ Applies Layer Normalization for last certain number of dimensions.
See
LayerNorm
for details.
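For illustration, normalizing over the last dimension of an arbitrary input:
Example:
>>> x = torch.randn(2, 5, 10)
>>> F.layer_norm(x, normalized_shape=[10]).shape
torch.Size([2, 5, 10])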
local_response_norm¶
-
torch.nn.functional.
local_response_norm
(input: torch.Tensor, size: int, alpha: float = 0.0001, beta: float = 0.75, k: float = 1.0) → torch.Tensor[source]¶ Applies local response normalization over an input signal composed of several input planes, where channels occupy the second dimension. Applies normalization across channels.
See
LocalResponseNorm
for details.
normalize¶
-
torch.nn.functional.
normalize
(input: torch.Tensor, p: float = 2, dim: int = 1, eps: float = 1e-12, out: Optional[torch.Tensor] = None) → torch.Tensor[source]¶ Performs \(L_p\) normalization of inputs over specified dimension.
For a tensor
input
of sizes \((n_0, ..., n_{dim}, ..., n_k)\), each \(n_{dim}\) -element vector \(v\) along dimensiondim
is transformed as\[v = \frac{v}{\max(\lVert v \rVert_p, \epsilon)}. \]With the default arguments it uses the Euclidean norm over vectors along dimension \(1\) for normalization.
- Parameters
input – input tensor of any shape
p (float) – the exponent value in the norm formulation. Default: 2
dim (int) – the dimension to reduce. Default: 1
eps (float) – small value to avoid division by zero. Default: 1e-12
out (Tensor, optional) – the output tensor. If
out
is used, this operation won’t be differentiable.
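A minimal example (the vector \((3, 4)\) has Euclidean norm 5):
Example:
>>> v = torch.tensor([[3.0, 4.0]])
>>> F.normalize(v, p=2, dim=1)
tensor([[0.6000, 0.8000]])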
Linear functions¶
linear¶
-
torch.nn.functional.
linear
(input: torch.Tensor, weight: torch.Tensor, bias: Optional[torch.Tensor] = None) → torch.Tensor[source]¶ Applies a linear transformation to the incoming data: \(y = xA^T + b\).
This operator supports TensorFloat32.
Shape:
Input: \((N, *, in\_features)\) N is the batch size, * means any number of additional dimensions
Weight: \((out\_features, in\_features)\)
Bias: \((out\_features)\)
Output: \((N, *, out\_features)\)
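A short sketch with arbitrary sizes:
Example:
>>> x = torch.randn(2, 3)
>>> weight = torch.randn(4, 3)
>>> bias = torch.randn(4)
>>> F.linear(x, weight, bias).shape
torch.Size([2, 4])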
bilinear¶
-
torch.nn.functional.
bilinear
(input1: torch.Tensor, input2: torch.Tensor, weight: torch.Tensor, bias: Optional[torch.Tensor] = None) → torch.Tensor[source]¶ Applies a bilinear transformation to the incoming data: \(y = x_1^T A x_2 + b\)
Shape:
input1: \((N, *, H_{in1})\) where \(H_{in1}=\text{in1\_features}\) and \(*\) means any number of additional dimensions. All but the last dimension of the inputs should be the same.
input2: \((N, *, H_{in2})\) where \(H_{in2}=\text{in2\_features}\)
weight: \((\text{out\_features}, \text{in1\_features}, \text{in2\_features})\)
bias: \((\text{out\_features})\)
output: \((N, *, H_{out})\) where \(H_{out}=\text{out\_features}\) and all but the last dimension are the same shape as the input.
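A short sketch with arbitrary sizes:
Example:
>>> x1 = torch.randn(8, 5)
>>> x2 = torch.randn(8, 6)
>>> weight = torch.randn(7, 5, 6)
>>> F.bilinear(x1, x2, weight).shape
torch.Size([8, 7])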
Dropout functions¶
dropout¶
-
torch.nn.functional.
dropout
(input: torch.Tensor, p: float = 0.5, training: bool = True, inplace: bool = False) → torch.Tensor[source]¶ During training, randomly zeroes some of the elements of the input tensor with probability
p
using samples from a Bernoulli distribution.See
Dropout
for details.- Parameters
p – probability of an element to be zeroed. Default: 0.5
training – apply dropout if is
True
. Default:True
inplace – If set to
True
, will do this operation in-place. Default:False
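For illustration (outputs are random; surviving elements are scaled by \(\frac{1}{1 - p}\) so the expected value is preserved):
Example:
>>> x = torch.ones(10)
>>> out = F.dropout(x, p=0.5, training=True)  # kept entries become 2.0, dropped entries 0.0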
alpha_dropout¶
-
torch.nn.functional.
alpha_dropout
(input: torch.Tensor, p: float = 0.5, training: bool = False, inplace: bool = False) → torch.Tensor[source]¶ Applies alpha dropout to the input.
See
AlphaDropout
for details.
dropout2d¶
-
torch.nn.functional.
dropout2d
(input: torch.Tensor, p: float = 0.5, training: bool = True, inplace: bool = False) → torch.Tensor[source]¶ Randomly zero out entire channels (a channel is a 2D feature map, e.g., the \(j\)-th channel of the \(i\)-th sample in the batched input is a 2D tensor \(\text{input}[i, j]\)) of the input tensor). Each channel will be zeroed out independently on every forward call with probability
p
using samples from a Bernoulli distribution.See
Dropout2d
for details.- Parameters
p – probability of a channel to be zeroed. Default: 0.5
training – apply dropout if is
True
. Default:True
inplace – If set to
True
, will do this operation in-place. Default:False
dropout3d¶
-
torch.nn.functional.
dropout3d
(input: torch.Tensor, p: float = 0.5, training: bool = True, inplace: bool = False) → torch.Tensor[source]¶ Randomly zero out entire channels (a channel is a 3D feature map, e.g., the \(j\)-th channel of the \(i\)-th sample in the batched input is a 3D tensor \(\text{input}[i, j]\)) of the input tensor). Each channel will be zeroed out independently on every forward call with probability
p
using samples from a Bernoulli distribution.See
Dropout3d
for details.- Parameters
p – probability of a channel to be zeroed. Default: 0.5
training – apply dropout if is
True
. Default:True
inplace – If set to
True
, will do this operation in-place. Default:False
Sparse functions¶
embedding¶
-
torch.nn.functional.
embedding
(input: torch.Tensor, weight: torch.Tensor, padding_idx: Optional[int] = None, max_norm: Optional[float] = None, norm_type: float = 2.0, scale_grad_by_freq: bool = False, sparse: bool = False) → torch.Tensor[source]¶ A simple lookup table that looks up embeddings in a fixed-size dictionary.
This function is often used to retrieve word embeddings using indices. The inputs are a tensor of indices and the embedding matrix; the output is the corresponding word embeddings.
See
torch.nn.Embedding
for more details.- Parameters
input (LongTensor) – Tensor containing indices into the embedding matrix
weight (Tensor) – The embedding matrix with number of rows equal to the maximum possible index + 1, and number of columns equal to the embedding size
padding_idx (int, optional) – If given, pads the output with the embedding vector at
padding_idx
(initialized to zeros) whenever it encounters the index.max_norm (float, optional) – If given, each embedding vector with norm larger than
max_norm
is renormalized to have normmax_norm
. Note: this will modifyweight
in-place.norm_type (float, optional) – The p of the p-norm to compute for the
max_norm
option. Default2
.scale_grad_by_freq (boolean, optional) – If given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default
False
.sparse (bool, optional) – If
True
, gradient w.r.t.weight
will be a sparse tensor. See Notes undertorch.nn.Embedding
for more details regarding sparse gradients.
- Shape:
Input: LongTensor of arbitrary shape containing the indices to extract
- Weight: Embedding matrix of floating point type with shape (V, embedding_dim),
where V = maximum index + 1 and embedding_dim = the embedding size
Output: (*, embedding_dim), where * is the input shape
Examples:
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.tensor([[1,2,4,5],[4,3,2,9]])
>>> # an embedding matrix containing 10 tensors of size 3
>>> embedding_matrix = torch.rand(10, 3)
>>> F.embedding(input, embedding_matrix)
tensor([[[ 0.8490,  0.9625,  0.6753],
         [ 0.9666,  0.7761,  0.6108],
         [ 0.6246,  0.9751,  0.3618],
         [ 0.4161,  0.2419,  0.7383]],

        [[ 0.6246,  0.9751,  0.3618],
         [ 0.0237,  0.7794,  0.0528],
         [ 0.9666,  0.7761,  0.6108],
         [ 0.3385,  0.8612,  0.1867]]])
>>> # example with padding_idx
>>> weights = torch.rand(10, 3)
>>> weights[0, :].zero_()
>>> embedding_matrix = weights
>>> input = torch.tensor([[0,2,0,5]])
>>> F.embedding(input, embedding_matrix, padding_idx=0)
tensor([[[ 0.0000,  0.0000,  0.0000],
         [ 0.5609,  0.5384,  0.8720],
         [ 0.0000,  0.0000,  0.0000],
         [ 0.6262,  0.2438,  0.7471]]])
embedding_bag¶
-
torch.nn.functional.
embedding_bag
(input: torch.Tensor, weight: torch.Tensor, offsets: Optional[torch.Tensor] = None, max_norm: Optional[float] = None, norm_type: float = 2, scale_grad_by_freq: bool = False, mode: str = 'mean', sparse: bool = False, per_sample_weights: Optional[torch.Tensor] = None, include_last_offset: bool = False) → torch.Tensor[source]¶ Computes sums, means or maxes of bags of embeddings, without instantiating the intermediate embeddings.
See
torch.nn.EmbeddingBag
for more details.Note
This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information.
- Parameters
input (LongTensor) – Tensor containing bags of indices into the embedding matrix
weight (Tensor) – The embedding matrix with number of rows equal to the maximum possible index + 1, and number of columns equal to the embedding size
offsets (LongTensor, optional) – Only used when
input
is 1D.offsets
determines the starting index position of each bag (sequence) ininput
.max_norm (float, optional) – If given, each embedding vector with norm larger than
max_norm
is renormalized to have normmax_norm
. Note: this will modifyweight
in-place.norm_type (float, optional) – The
p
in thep
-norm to compute for themax_norm
option. Default2
.scale_grad_by_freq (boolean, optional) – if given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default
False
. Note: this option is not supported whenmode="max"
.mode (string, optional) –
"sum"
,"mean"
or"max"
. Specifies the way to reduce the bag. Default:"mean"
sparse (bool, optional) – if
True
, gradient w.r.t.weight
will be a sparse tensor. See Notes undertorch.nn.Embedding
for more details regarding sparse gradients. Note: this option is not supported whenmode="max"
.per_sample_weights (Tensor, optional) – a tensor of float / double weights, or None to indicate all weights should be taken to be 1. If specified,
per_sample_weights
must have exactly the same shape as input and is treated as having the sameoffsets
, if those are not None.include_last_offset (bool, optional) – if
True
, the size of offsets is equal to the number of bags + 1. The last element is the size of the input, or the ending index position of the last bag (sequence).
Shape:
input
(LongTensor) andoffsets
(LongTensor, optional)If
input
is 2D of shape (B, N),it will be treated as
B
bags (sequences) each of fixed lengthN
, and this will returnB
values aggregated in a way depending on themode
.offsets
is ignored and required to beNone
in this case.If
input
is 1D of shape (N),it will be treated as a concatenation of multiple bags (sequences).
offsets
is required to be a 1D tensor containing the starting index positions of each bag ininput
. Therefore, foroffsets
of shape (B),input
will be viewed as havingB
bags. Empty bags (i.e., having 0-length) will have returned vectors filled by zeros.
weight
(Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim)per_sample_weights
(Tensor, optional). Has the same shape asinput
.output
: aggregated embedding values of shape (B, embedding_dim)
Examples:
>>> # an embedding matrix containing 10 tensors of size 3
>>> embedding_matrix = torch.rand(10, 3)
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.tensor([1,2,4,5,4,3,2,9])
>>> offsets = torch.tensor([0,4])
>>> F.embedding_bag(input, embedding_matrix, offsets)
tensor([[ 0.3397,  0.3552,  0.5545],
        [ 0.5893,  0.4386,  0.5882]])
Distance functions¶
pairwise_distance¶
-
torch.nn.functional.
pairwise_distance
(x1: torch.Tensor, x2: torch.Tensor, p: float = 2.0, eps: float = 1e-06, keepdim: bool = False) → torch.Tensor[source]¶ See
torch.nn.PairwiseDistance
for details
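A short sketch with arbitrary sizes (one distance per row pair):
Example:
>>> x1 = torch.randn(100, 128)
>>> x2 = torch.randn(100, 128)
>>> F.pairwise_distance(x1, x2).shape
torch.Size([100])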
cosine_similarity¶
-
torch.nn.functional.
cosine_similarity
(x1, x2, dim=1, eps=1e-08) → Tensor¶ Returns cosine similarity between x1 and x2, computed along dim.
\[\text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)} \]- Parameters
x1 (Tensor) – First input.
x2 (Tensor) – Second input.
dim (int, optional) – Dimension along which cosine similarity is computed. Default: 1
eps (float, optional) – Small value to avoid division by zero. Default: 1e-8
- Shape:
Input: \((\ast_1, D, \ast_2)\) where D is at position dim.
Output: \((\ast_1, \ast_2)\) where 1 is at position dim.
Example:
>>> input1 = torch.randn(100, 128)
>>> input2 = torch.randn(100, 128)
>>> output = F.cosine_similarity(input1, input2)
>>> print(output)
pdist¶
-
torch.nn.functional.
pdist
(input, p=2) → Tensor¶ Computes the p-norm distance between every pair of row vectors in the input. This is identical to the upper triangular portion, excluding the diagonal, of torch.norm(input[:, None] - input, dim=2, p=p). This function will be faster if the rows are contiguous.
If input has shape \(N \times M\) then the output will have shape \(\frac{1}{2} N (N - 1)\).
This function is equivalent to scipy.spatial.distance.pdist(input, ‘minkowski’, p=p) if \(p \in (0, \infty)\). When \(p = 0\) it is equivalent to scipy.spatial.distance.pdist(input, ‘hamming’) * M. When \(p = \infty\), the closest scipy function is scipy.spatial.distance.pdist(xn, lambda x, y: np.abs(x - y).max()).
- Parameters
input – input tensor of shape \(N \times M\).
p – p value for the p-norm distance to calculate between each vector pair \(\in [0, \infty]\).
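For illustration, 5 rows yield \(5 \cdot 4 / 2 = 10\) pairwise distances:
Example:
>>> x = torch.randn(5, 3)
>>> F.pdist(x, p=2).shape
torch.Size([10])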
Loss functions¶
binary_cross_entropy¶
-
torch.nn.functional.
binary_cross_entropy
(input: torch.Tensor, target: torch.Tensor, weight: Optional[torch.Tensor] = None, size_average: Optional[bool] = None, reduce: Optional[bool] = None, reduction: str = 'mean') → torch.Tensor[source]¶ Function that measures the Binary Cross Entropy between the target and the output.
See
BCELoss
for details.- Parameters
input – Tensor of arbitrary shape
target – Tensor of the same shape as input
weight (Tensor, optional) – a manual rescaling weight if provided it’s repeated to match input tensor shape
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored when reduce isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
Examples:
>>> input = torch.randn((3, 2), requires_grad=True)
>>> target = torch.rand((3, 2), requires_grad=False)
>>> loss = F.binary_cross_entropy(F.sigmoid(input), target)
>>> loss.backward()
binary_cross_entropy_with_logits¶
-
torch.nn.functional.
binary_cross_entropy_with_logits
(input: torch.Tensor, target: torch.Tensor, weight: Optional[torch.Tensor] = None, size_average: Optional[bool] = None, reduce: Optional[bool] = None, reduction: str = 'mean', pos_weight: Optional[torch.Tensor] = None) → torch.Tensor[source]¶ Function that measures Binary Cross Entropy between target and output logits.
See
BCEWithLogitsLoss
for details.- Parameters
input – Tensor of arbitrary shape
target – Tensor of the same shape as input
weight (Tensor, optional) – a manual rescaling weight if provided it’s repeated to match input tensor shape
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored when reduce isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
pos_weight (Tensor, optional) – a weight of positive examples. Must be a vector with length equal to the number of classes.
Examples:
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> loss = F.binary_cross_entropy_with_logits(input, target)
>>> loss.backward()
poisson_nll_loss¶
-
torch.nn.functional.
poisson_nll_loss
(input: torch.Tensor, target: torch.Tensor, log_input: bool = True, full: bool = False, size_average: Optional[bool] = None, eps: float = 1e-08, reduce: Optional[bool] = None, reduction: str = 'mean') → torch.Tensor[source]¶ Poisson negative log likelihood loss.
See
PoissonNLLLoss
for details.- Parameters
input – expectation of underlying Poisson distribution.
target – random sample \(target \sim \text{Poisson}(input)\).
log_input – if
True
the loss is computed as \(\exp(\text{input}) - \text{target} * \text{input}\), ifFalse
then loss is \(\text{input} - \text{target} * \log(\text{input}+\text{eps})\). Default:True
full – whether to compute full loss, i. e. to add the Stirling approximation term. Default:
False
\(\text{target} * \log(\text{target}) - \text{target} + 0.5 * \log(2 * \pi * \text{target})\).size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored when reduce isFalse
. Default:True
eps (float, optional) – Small value to avoid evaluation of \(\log(0)\) when
log_input=False
. Default: 1e-8reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
cosine_embedding_loss¶
-
torch.nn.functional.
cosine_embedding_loss
(input1, input2, target, margin=0, size_average=None, reduce=None, reduction='mean') → Tensor[source]¶ See
CosineEmbeddingLoss
for details.
cross_entropy¶
-
torch.nn.functional.
cross_entropy
(input: torch.Tensor, target: torch.Tensor, weight: Optional[torch.Tensor] = None, size_average: Optional[bool] = None, ignore_index: int = - 100, reduce: Optional[bool] = None, reduction: str = 'mean') → torch.Tensor[source]¶ This criterion combines log_softmax and nll_loss in a single function.
See
CrossEntropyLoss
for details.- Parameters
input (Tensor) – \((N, C)\) where C = number of classes or \((N, C, H, W)\) in case of 2D Loss, or \((N, C, d_1, d_2, ..., d_K)\) where \(K \geq 1\) in the case of K-dimensional loss.
target (Tensor) – \((N)\) where each value is \(0 \leq \text{targets}[i] \leq C-1\), or \((N, d_1, d_2, ..., d_K)\) where \(K \geq 1\) for K-dimensional loss.
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored when reduce isFalse
. Default:True
ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When
size_average
isTrue
, the loss is averaged over non-ignored targets. Default: -100reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
Examples:
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randint(5, (3,), dtype=torch.int64)
>>> loss = F.cross_entropy(input, target)
>>> loss.backward()
ctc_loss¶
-
torch.nn.functional.
ctc_loss
(log_probs: torch.Tensor, targets: torch.Tensor, input_lengths: torch.Tensor, target_lengths: torch.Tensor, blank: int = 0, reduction: str = 'mean', zero_infinity: bool = False) → torch.Tensor[source]¶ The Connectionist Temporal Classification loss.
See
CTCLoss
for details.Note
In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting
torch.backends.cudnn.deterministic = True
. See Reproducibility for more information.Note
This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information.
- Parameters
log_probs – \((T, N, C)\) where C = number of characters in alphabet including blank, T = input length, and N = batch size. The logarithmized probabilities of the outputs (e.g. obtained with
torch.nn.functional.log_softmax()
).targets – \((N, S)\) or (sum(target_lengths)). Targets cannot be blank. In the second form, the targets are assumed to be concatenated.
input_lengths – \((N)\). Lengths of the inputs (must each be \(\leq T\))
target_lengths – \((N)\). Lengths of the targets
blank (int, optional) – Blank label. Default \(0\).
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the output losses will be divided by the target lengths and then the mean over the batch is taken,'sum'
: the output will be summed. Default:'mean'
zero_infinity (bool, optional) – Whether to zero infinite losses and the associated gradients. Default:
False
Infinite losses mainly occur when the inputs are too short to be aligned to the targets.
Example:
>>> log_probs = torch.randn(50, 16, 20).log_softmax(2).detach().requires_grad_()
>>> targets = torch.randint(1, 20, (16, 30), dtype=torch.long)
>>> input_lengths = torch.full((16,), 50, dtype=torch.long)
>>> target_lengths = torch.randint(10, 30, (16,), dtype=torch.long)
>>> loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths)
>>> loss.backward()
hinge_embedding_loss¶
-
torch.nn.functional.
hinge_embedding_loss
(input, target, margin=1.0, size_average=None, reduce=None, reduction='mean') → Tensor[source]¶ See
HingeEmbeddingLoss
for details.
kl_div¶
-
torch.nn.functional.
kl_div
(input: torch.Tensor, target: torch.Tensor, size_average: Optional[bool] = None, reduce: Optional[bool] = None, reduction: str = 'mean', log_target: bool = False) → torch.Tensor[source]¶ The Kullback-Leibler divergence Loss
See
KLDivLoss
for details.- Parameters
input – Tensor of arbitrary shape
target – Tensor of the same shape as input
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored when reduce isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'batchmean'
|'sum'
|'mean'
.'none'
: no reduction will be applied'batchmean'
: the sum of the output will be divided by the batchsize'sum'
: the output will be summed'mean'
: the output will be divided by the number of elements in the output Default:'mean'
log_target (bool) – A flag indicating whether
target
is passed in the log space. It is recommended to pass certain distributions (likesoftmax
) in the log space to avoid numerical issues caused by explicitlog
. Default:False
Note
size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
.Note
reduction='mean' does not return the true KL divergence value; please use reduction='batchmean', which aligns with the mathematical definition of KL divergence. In the next major release, 'mean' will be changed to behave the same as 'batchmean'.
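A minimal sketch (the input is expected in log space; here the target is passed as probabilities, so log_target is left at False):
Example:
>>> input = F.log_softmax(torch.randn(3, 5), dim=1)
>>> target = F.softmax(torch.randn(3, 5), dim=1)
>>> loss = F.kl_div(input, target, reduction='batchmean')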
l1_loss¶
mse_loss¶
margin_ranking_loss¶
-
torch.nn.functional.
margin_ranking_loss
(input1, input2, target, margin=0, size_average=None, reduce=None, reduction='mean') → Tensor[source]¶ See
MarginRankingLoss
for details.
multilabel_margin_loss¶
-
torch.nn.functional.
multilabel_margin_loss
(input, target, size_average=None, reduce=None, reduction='mean') → Tensor[source]¶ See
MultiLabelMarginLoss
for details.
multilabel_soft_margin_loss¶
-
torch.nn.functional.
multilabel_soft_margin_loss
(input, target, weight=None, size_average=None) → Tensor[source]¶ See
MultiLabelSoftMarginLoss
for details.
multi_margin_loss¶
-
torch.nn.functional.
multi_margin_loss
(input: torch.Tensor, target: torch.Tensor, p: int = 1, margin: float = 1.0, weight: Optional[torch.Tensor] = None, size_average: Optional[bool] = None, reduce: Optional[bool] = None, reduction: str = 'mean') → torch.Tensor[source]¶
See
MultiMarginLoss
for details.
nll_loss¶
-
torch.nn.functional.
nll_loss
(input: torch.Tensor, target: torch.Tensor, weight: Optional[torch.Tensor] = None, size_average: Optional[bool] = None, ignore_index: int = - 100, reduce: Optional[bool] = None, reduction: str = 'mean') → torch.Tensor[source]¶ The negative log likelihood loss.
See
NLLLoss
for details.- Parameters
input – \((N, C)\) where C = number of classes or \((N, C, H, W)\) in case of 2D Loss, or \((N, C, d_1, d_2, ..., d_K)\) where \(K \geq 1\) in the case of K-dimensional loss.
target – \((N)\) where each value is \(0 \leq \text{targets}[i] \leq C-1\), or \((N, d_1, d_2, ..., d_K)\) where \(K \geq 1\) for K-dimensional loss.
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored when reduce isFalse
. Default:True
ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When
size_average
isTrue
, the loss is averaged over non-ignored targets. Default: -100reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
Example:
>>> # input is of size N x C = 3 x 5
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # each element in target has to have 0 <= value < C
>>> target = torch.tensor([1, 0, 4])
>>> output = F.nll_loss(F.log_softmax(input), target)
>>> output.backward()
smooth_l1_loss¶
-
torch.nn.functional.
smooth_l1_loss
(input: torch.Tensor, target: torch.Tensor, size_average: Optional[bool] = None, reduce: Optional[bool] = None, reduction: str = 'mean', beta: float = 1.0) → torch.Tensor[source]¶ Function that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise.
See
SmoothL1Loss
for details.
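A minimal example:
Example:
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> loss = F.smooth_l1_loss(input, target)
>>> loss.backward()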
soft_margin_loss¶
-
torch.nn.functional.
soft_margin_loss
(input, target, size_average=None, reduce=None, reduction='mean') → Tensor[source]¶ See
SoftMarginLoss
for details.
triplet_margin_loss¶
-
torch.nn.functional.
triplet_margin_loss
(anchor: torch.Tensor, positive: torch.Tensor, negative: torch.Tensor, margin: float = 1.0, p: float = 2, eps: float = 1e-06, swap: bool = False, size_average: Optional[bool] = None, reduce: Optional[bool] = None, reduction: str = 'mean') → torch.Tensor[source]¶ See
TripletMarginLoss
for details
Vision functions¶
pixel_shuffle¶
-
torch.nn.functional.
pixel_shuffle
(input, upscale_factor) → Tensor¶ Rearranges elements in a tensor of shape \((*, C \times r^2, H, W)\) to a tensor of shape \((*, C, H \times r, W \times r)\), where r is the
upscale_factor
.See
PixelShuffle
for details.- Parameters
input (Tensor) – the input tensor
upscale_factor (int) – factor to increase spatial resolution by
Examples:
>>> input = torch.randn(1, 9, 4, 4)
>>> output = torch.nn.functional.pixel_shuffle(input, 3)
>>> print(output.size())
torch.Size([1, 1, 12, 12])
pad¶
-
torch.nn.functional.
pad
(input: torch.Tensor, pad: List[int], mode: str = 'constant', value: float = 0) → torch.Tensor¶ Pads tensor.
- Padding size:
The padding size by which to pad some dimensions of
input
are described starting from the last dimension and moving forward. \(\left\lfloor\frac{\text{len(pad)}}{2}\right\rfloor\) dimensions ofinput
will be padded. For example, to pad only the last dimension of the input tensor, thenpad
has the form \((\text{padding\_left}, \text{padding\_right})\); to pad the last 2 dimensions of the input tensor, then use \((\text{padding\_left}, \text{padding\_right},\) \(\text{padding\_top}, \text{padding\_bottom})\); to pad the last 3 dimensions, use \((\text{padding\_left}, \text{padding\_right},\) \(\text{padding\_top}, \text{padding\_bottom}\) \(\text{padding\_front}, \text{padding\_back})\).- Padding mode:
See
torch.nn.ConstantPad2d
,torch.nn.ReflectionPad2d
, andtorch.nn.ReplicationPad2d
for concrete examples on how each of the padding modes works. Constant padding is implemented for arbitrary dimensions. Replicate padding is implemented for padding the last 3 dimensions of 5D input tensor, or the last 2 dimensions of 4D input tensor, or the last dimension of 3D input tensor. Reflect padding is only implemented for padding the last 2 dimensions of 4D input tensor, or the last dimension of 3D input tensor.
Note
When using the CUDA backend, this operation may induce nondeterministic behaviour in its backward pass that is not easily switched off. Please see the notes on Reproducibility for background.
- Parameters
input (Tensor) – N-dimensional tensor
pad (tuple) – m-elements tuple, where \(\frac{m}{2} \leq\) input dimensions and \(m\) is even
mode – 'constant', 'reflect', 'replicate' or 'circular'. Default: 'constant'
value – fill value for 'constant' padding. Default: 0
Examples:
>>> t4d = torch.empty(3, 3, 4, 2)
>>> p1d = (1, 1)  # pad last dim by 1 on each side
>>> out = F.pad(t4d, p1d, "constant", 0)  # effectively zero padding
>>> print(out.size())
torch.Size([3, 3, 4, 4])
>>> p2d = (1, 1, 2, 2)  # pad last dim by (1, 1) and 2nd to last by (2, 2)
>>> out = F.pad(t4d, p2d, "constant", 0)
>>> print(out.size())
torch.Size([3, 3, 8, 4])
>>> t4d = torch.empty(3, 3, 4, 2)
>>> p3d = (0, 1, 2, 1, 3, 3)  # pad by (0, 1), (2, 1), and (3, 3)
>>> out = F.pad(t4d, p3d, "constant", 0)
>>> print(out.size())
torch.Size([3, 9, 7, 3])
interpolate¶
-
torch.nn.functional.
interpolate
(input, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None)[source]¶ Down/up samples the input to either the given
size
or the givenscale_factor
The algorithm used for interpolation is determined by
mode
.Currently temporal, spatial and volumetric sampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape.
The input dimensions are interpreted in the form: mini-batch x channels x [optional depth] x [optional height] x width.
The modes available for resizing are: nearest, linear (3D-only), bilinear, bicubic (4D-only), trilinear (5D-only), area
- Parameters
input (Tensor) – the input tensor
size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) – output spatial size.
scale_factor (float or Tuple[float]) – multiplier for spatial size. Has to match input size if it is a tuple.
mode (str) – algorithm used for upsampling:
'nearest'
|'linear'
|'bilinear'
|'bicubic'
|'trilinear'
|'area'
. Default:'nearest'
align_corners (bool, optional) – Geometrically, we consider the pixels of the input and output as squares rather than points. If set to
True
, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set toFalse
, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation independent of input size whenscale_factor
is kept the same. This only has an effect whenmode
is'linear'
,'bilinear'
,'bicubic'
or'trilinear'
. Default:False
recompute_scale_factor (bool, optional) – recompute the scale_factor for use in the interpolation calculation. When scale_factor is passed as a parameter, it is used to compute the output_size. If recompute_scale_factor is
False
or not specified, the passed-in scale_factor will be used in the interpolation computation. Otherwise, a new scale_factor will be computed based on the output and input sizes for use in the interpolation computation (i.e. the computation will be identical to if the computed output_size were passed-in explicitly). Note that when scale_factor is floating-point, the recomputed scale_factor may differ from the one passed in due to rounding and precision issues.
Note
With
mode='bicubic'
, it’s possible to cause overshoot, in other words it can produce negative values or values greater than 255 for images. Explicitly callresult.clamp(min=0, max=255)
if you want to reduce the overshoot when displaying the image.Warning
With
align_corners = True
, the linearly interpolating modes (linear, bilinear, and trilinear) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior isalign_corners = False
. SeeUpsample
for concrete examples on how this affects the outputs.Warning
When scale_factor is specified, if recompute_scale_factor=True, scale_factor is used to compute the output_size which will then be used to infer new scales for the interpolation. The default behavior for recompute_scale_factor changed to False in 1.6.0, and scale_factor is used in the interpolation calculation.
Note
This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information.
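A short sketch with arbitrary sizes (doubling the spatial resolution of a 4-D input):
Example:
>>> x = torch.randn(1, 3, 8, 8)
>>> F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False).shape
torch.Size([1, 3, 16, 16])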
upsample¶
-
torch.nn.functional.
upsample
(input, size=None, scale_factor=None, mode='nearest', align_corners=None)[source]¶ Upsamples the input to either the given
size
or the givenscale_factor
Warning
This function is deprecated in favor of
torch.nn.functional.interpolate()
. This is equivalent to nn.functional.interpolate(...)
.Note
This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information.
The algorithm used for upsampling is determined by
mode
.Currently temporal, spatial and volumetric upsampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape.
The input dimensions are interpreted in the form: mini-batch x channels x [optional depth] x [optional height] x width.
The modes available for upsampling are: nearest, linear (3D-only), bilinear, bicubic (4D-only), trilinear (5D-only)
- Parameters
input (Tensor) – the input tensor
size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) – output spatial size.
scale_factor (float or Tuple[float]) – multiplier for spatial size. Has to match input size if it is a tuple.
mode (string) – algorithm used for upsampling:
'nearest'
|'linear'
|'bilinear'
|'bicubic'
|'trilinear'
. Default:'nearest'
align_corners (bool, optional) – Geometrically, we consider the pixels of the input and output as squares rather than points. If set to
True
, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set toFalse
, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation independent of input size whenscale_factor
is kept the same. This only has an effect whenmode
is'linear'
,'bilinear'
,'bicubic'
or'trilinear'
. Default:False
Note
With
mode='bicubic'
, it’s possible to cause overshoot, in other words it can produce negative values or values greater than 255 for images. Explicitly callresult.clamp(min=0, max=255)
if you want to reduce the overshoot when displaying the image.Warning
With
align_corners = True
, the linearly interpolating modes (linear, bilinear, and trilinear) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior isalign_corners = False
. SeeUpsample
for concrete examples on how this affects the outputs.
upsample_nearest¶
-
torch.nn.functional.
upsample_nearest
(input, size=None, scale_factor=None)[source]¶ Upsamples the input, using nearest neighbours’ pixel values.
Warning
This function is deprecated in favor of
torch.nn.functional.interpolate()
. This is equivalent to nn.functional.interpolate(..., mode='nearest')
.Currently spatial and volumetric upsampling are supported (i.e. expected inputs are 4 or 5 dimensional).
- Parameters
input (Tensor) – input
size (int or Tuple[int, int] or Tuple[int, int, int]) – output spatial size.
scale_factor (int) – multiplier for spatial size. Has to be an integer.
Note
This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information.
upsample_bilinear¶
-
torch.nn.functional.
upsample_bilinear
(input, size=None, scale_factor=None)[source]¶ Upsamples the input, using bilinear upsampling.
Warning
This function is deprecated in favor of
torch.nn.functional.interpolate()
. This is equivalent to nn.functional.interpolate(..., mode='bilinear', align_corners=True)
.Expected inputs are spatial (4 dimensional). Use upsample_trilinear for volumetric (5 dimensional) inputs.
- Parameters
input (Tensor) – input
size (int or Tuple[int, int]) – output spatial size.
scale_factor (int or Tuple[int, int]) – multiplier for spatial size.
Note
This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information.
grid_sample¶
-
torch.nn.functional.
grid_sample
(input: torch.Tensor, grid: torch.Tensor, mode: str = 'bilinear', padding_mode: str = 'zeros', align_corners: Optional[bool] = None) → torch.Tensor[source]¶ Given an
input
and a flow-fieldgrid
, computes theoutput
usinginput
values and pixel locations fromgrid
.Currently, only spatial (4-D) and volumetric (5-D)
input
are supported.In the spatial (4-D) case, for
input
with shape \((N, C, H_\text{in}, W_\text{in})\) andgrid
with shape \((N, H_\text{out}, W_\text{out}, 2)\), the output will have shape \((N, C, H_\text{out}, W_\text{out})\).For each output location
output[n, :, h, w]
, the size-2 vectorgrid[n, h, w]
specifiesinput
pixel locationsx
andy
, which are used to interpolate the output valueoutput[n, :, h, w]
. In the case of 5D inputs,grid[n, d, h, w]
specifies thex
,y
,z
pixel locations for interpolatingoutput[n, :, d, h, w]
.mode
argument specifiesnearest
orbilinear
interpolation method to sample the input pixels.grid
specifies the sampling pixel locations normalized by theinput
spatial dimensions. Therefore, it should have most values in the range of[-1, 1]
. For example, valuesx = -1, y = -1
is the left-top pixel ofinput
, and valuesx = 1, y = 1
is the right-bottom pixel ofinput
.If
grid
has values outside the range of[-1, 1]
, the corresponding outputs are handled as defined bypadding_mode
. Options arepadding_mode="zeros"
: use0
for out-of-bound grid locations,padding_mode="border"
: use border values for out-of-bound grid locations,padding_mode="reflection"
: use values at locations reflected by the border for out-of-bound grid locations. For location far away from the border, it will keep being reflected until becoming in bound, e.g., (normalized) pixel locationx = -3.5
reflects by border-1
and becomesx' = 1.5
, then reflects by border1
and becomesx'' = -0.5
.
Note
This function is often used in conjunction with
affine_grid()
to build Spatial Transformer Networks .Note
When using the CUDA backend, this operation may induce nondeterministic behaviour in its backward pass that is not easily switched off. Please see the notes on Reproducibility for background.
Note
NaN values in
grid
would be interpreted as-1
.- Parameters
input (Tensor) – input of shape \((N, C, H_\text{in}, W_\text{in})\) (4-D case) or \((N, C, D_\text{in}, H_\text{in}, W_\text{in})\) (5-D case)
grid (Tensor) – flow-field of shape \((N, H_\text{out}, W_\text{out}, 2)\) (4-D case) or \((N, D_\text{out}, H_\text{out}, W_\text{out}, 3)\) (5-D case)
mode (str) – interpolation mode to calculate output values
'bilinear'
|'nearest'
|'bicubic'
. Default:'bilinear'
Note:mode='bicubic'
supports only 4-D input. Whenmode='bilinear'
and the input is 5-D, the interpolation mode used internally will actually be trilinear. However, when the input is 4-D, the interpolation mode will legitimately be bilinear.padding_mode (str) – padding mode for outside grid values
'zeros'
|'border'
|'reflection'
. Default:'zeros'
align_corners (bool, optional) – Geometrically, we consider the pixels of the input as squares rather than points. If set to
True
, the extrema (-1
and1
) are considered as referring to the center points of the input’s corner pixels. If set toFalse
, they are instead considered as referring to the corner points of the input’s corner pixels, making the sampling more resolution agnostic. This option parallels thealign_corners
option ininterpolate()
, and so whichever option is used here should also be used there to resize the input image before grid sampling. Default:False
- Returns
output Tensor
- Return type
output (Tensor)
Warning
When
align_corners = True
, the grid positions depend on the pixel size relative to the input image size, and so the locations sampled bygrid_sample()
will differ for the same input given at different resolutions (that is, after being upsampled or downsampled). The default behavior up to version 1.2.0 wasalign_corners = True
. Since then, the default behavior has been changed toalign_corners = False
, in order to bring it in line with the default forinterpolate()
.Note
mode='bicubic'
is implemented using the cubic convolution algorithm with \(\alpha=-0.75\). The constant \(\alpha\) might be different from packages to packages. For example, PIL and OpenCV use -0.5 and -0.75 respectively. This algorithm may “overshoot” the range of values it’s interpolating. For example, it may produce negative values or values greater than 255 when interpolating input in [0, 255]. Clamp the results with :func: torch.clamp to ensure they are within the valid range.
affine_grid¶
-
torch.nn.functional.
affine_grid
(theta: torch.Tensor, size: List[int], align_corners: Optional[bool] = None) → torch.Tensor[source]¶ Generates a 2D or 3D flow field (sampling grid), given a batch of affine matrices theta.

Note

This function is often used in conjunction with grid_sample() to build Spatial Transformer Networks.

- Parameters
theta (Tensor) – input batch of affine matrices with shape (\(N \times 2 \times 3\)) for 2D or (\(N \times 3 \times 4\)) for 3D
size (torch.Size) – the target output image size. (\(N \times C \times H \times W\) for 2D or \(N \times C \times D \times H \times W\) for 3D) Example: torch.Size((32, 3, 24, 24))
align_corners (bool, optional) – if True, consider -1 and 1 to refer to the centers of the corner pixels rather than the image corners. Refer to grid_sample() for a more complete description. A grid generated by affine_grid() should be passed to grid_sample() with the same setting for this option. Default: False
- Returns
output Tensor of size (\(N \times H \times W \times 2\))
- Return type
output (Tensor)
Warning

When align_corners = True, the grid positions depend on the pixel size relative to the input image size, and so the locations sampled by grid_sample() will differ for the same input given at different resolutions (that is, after being upsampled or downsampled). The default behavior up to version 1.2.0 was align_corners = True. Since then, the default behavior has been changed to align_corners = False, in order to bring it in line with the default for interpolate().

Warning

When align_corners = True, 2D affine transforms on 1D data and 3D affine transforms on 2D data (that is, when one of the spatial dimensions has unit size) are ill-defined, and not an intended use case. This is not a problem when align_corners = False. Up to version 1.2.0, all grid points along a unit dimension were considered arbitrarily to be at -1. From version 1.3.0, under align_corners = True, all grid points along a unit dimension are considered to be at 0 (the center of the input image).
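Example (a minimal sketch, not part of the original reference, pairing affine_grid() with grid_sample() to rotate a batch of images; the angle and shapes are illustrative):

>>> import math
>>> import torch
>>> import torch.nn.functional as F
>>> img = torch.randn(1, 3, 24, 24)                   # N x C x H x W
>>> angle = math.pi / 2                               # 90-degree rotation (arbitrary choice)
>>> theta = torch.tensor([[[math.cos(angle), -math.sin(angle), 0.],
...                        [math.sin(angle),  math.cos(angle), 0.]]])  # N x 2 x 3
>>> grid = F.affine_grid(theta, size=(1, 3, 24, 24), align_corners=False)
>>> rotated = F.grid_sample(img, grid, align_corners=False)  # same align_corners for both calls
>>> rotated.shape
torch.Size([1, 3, 24, 24])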
DataParallel functions (multi-GPU, distributed)¶
data_parallel¶
-
torch.nn.parallel.
data_parallel
(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None)[source]¶ Evaluates module(input) in parallel across the GPUs given in device_ids.
This is the functional version of the DataParallel module.
- Parameters
module (Module) – the module to evaluate in parallel
inputs (Tensor) – inputs to the module
device_ids (list of int or torch.device) – GPU ids on which to replicate module

output_device (int or torch.device) – GPU location of the output. Use -1 to indicate the CPU. (default: device_ids[0])
- Returns
a Tensor containing the result of module(input) located on output_device
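Example (a minimal sketch, assuming at least two CUDA devices are available; the module and shapes are illustrative):

>>> import torch
>>> import torch.nn as nn
>>> from torch.nn.parallel import data_parallel
>>> module = nn.Linear(10, 5).cuda(0)             # parameters must live on device_ids[0]
>>> inputs = torch.randn(8, 10, device='cuda:0')  # the batch is scattered along dim=0
>>> out = data_parallel(module, inputs, device_ids=[0, 1])
>>> out.shape
torch.Size([8, 5])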
torch.nn.init¶
-
torch.nn.init.
calculate_gain
(nonlinearity, param=None)[source]¶ Return the recommended gain value for the given nonlinearity function. The values are as follows:
nonlinearity | gain
Linear / Identity | \(1\)
Conv{1,2,3}D | \(1\)
Sigmoid | \(1\)
Tanh | \(\frac{5}{3}\)
ReLU | \(\sqrt{2}\)
Leaky ReLU | \(\sqrt{\frac{2}{1 + \text{negative\_slope}^2}}\)
SELU | \(\frac{3}{4}\)
- Parameters
nonlinearity – the non-linear function (nn.functional name)
param – optional parameter for the non-linear function
Examples
>>> gain = nn.init.calculate_gain('leaky_relu', 0.2) # leaky_relu with negative_slope=0.2
-
torch.nn.init.
uniform_
(tensor: torch.Tensor, a: float = 0.0, b: float = 1.0) → torch.Tensor[source]¶ Fills the input Tensor with values drawn from the uniform distribution \(\mathcal{U}(a, b)\).
- Parameters
tensor – an n-dimensional torch.Tensor
a – the lower bound of the uniform distribution
b – the upper bound of the uniform distribution
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.uniform_(w)
-
torch.nn.init.
normal_
(tensor: torch.Tensor, mean: float = 0.0, std: float = 1.0) → torch.Tensor[source]¶ Fills the input Tensor with values drawn from the normal distribution \(\mathcal{N}(\text{mean}, \text{std}^2)\).
- Parameters
tensor – an n-dimensional torch.Tensor
mean – the mean of the normal distribution
std – the standard deviation of the normal distribution
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.normal_(w)
-
torch.nn.init.
constant_
(tensor: torch.Tensor, val: float) → torch.Tensor[source]¶ Fills the input Tensor with the value \(\text{val}\).
- Parameters
tensor – an n-dimensional torch.Tensor
val – the value to fill the tensor with
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.constant_(w, 0.3)
-
torch.nn.init.
eye_
(tensor)[source]¶ Fills the 2-dimensional input Tensor with the identity matrix. Preserves the identity of the inputs in Linear layers, where as many inputs are preserved as possible.
- Parameters
tensor – a 2-dimensional torch.Tensor
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.eye_(w)
-
torch.nn.init.
dirac_
(tensor, groups=1)[source]¶ Fills the {3, 4, 5}-dimensional input Tensor with the Dirac delta function. Preserves the identity of the inputs in Convolutional layers, where as many input channels are preserved as possible. In the case of groups > 1, each group of channels preserves identity.
- Parameters
tensor – a {3, 4, 5}-dimensional torch.Tensor
groups (optional) – number of groups in the conv layer (default: 1)
Examples
>>> w = torch.empty(3, 16, 5, 5)
>>> nn.init.dirac_(w)
>>> w = torch.empty(3, 24, 5, 5)
>>> nn.init.dirac_(w, 3)
-
torch.nn.init.
xavier_uniform_
(tensor: torch.Tensor, gain: float = 1.0) → torch.Tensor[source]¶ Fills the input Tensor with values according to the method described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a uniform distribution. The resulting tensor will have values sampled from \(\mathcal{U}(-a, a)\) where
\[a = \text{gain} \times \sqrt{\frac{6}{\text{fan\_in} + \text{fan\_out}}}\]

Also known as Glorot initialization.
- Parameters
tensor – an n-dimensional torch.Tensor
gain – an optional scaling factor
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.xavier_uniform_(w, gain=nn.init.calculate_gain('relu'))
-
torch.nn.init.
xavier_normal_
(tensor: torch.Tensor, gain: float = 1.0) → torch.Tensor[source]¶ Fills the input Tensor with values according to the method described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a normal distribution. The resulting tensor will have values sampled from \(\mathcal{N}(0, \text{std}^2)\) where
\[\text{std} = \text{gain} \times \sqrt{\frac{2}{\text{fan\_in} + \text{fan\_out}}}\]

Also known as Glorot initialization.
- Parameters
tensor – an n-dimensional torch.Tensor
gain – an optional scaling factor
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.xavier_normal_(w)
-
torch.nn.init.
kaiming_uniform_
(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu')[source]¶ Fills the input Tensor with values according to the method described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a uniform distribution. The resulting tensor will have values sampled from \(\mathcal{U}(-\text{bound}, \text{bound})\) where
\[\text{bound} = \text{gain} \times \sqrt{\frac{3}{\text{fan\_mode}}}\]

Also known as He initialization.
- Parameters
tensor – an n-dimensional torch.Tensor
a – the negative slope of the rectifier used after this layer (only used with 'leaky_relu')

mode – either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass. Choosing 'fan_out' preserves the magnitudes in the backwards pass.

nonlinearity – the non-linear function (nn.functional name), recommended to use only with 'relu' or 'leaky_relu' (default).
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.kaiming_uniform_(w, mode='fan_in', nonlinearity='relu')
-
torch.nn.init.
kaiming_normal_
(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu')[source]¶ Fills the input Tensor with values according to the method described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a normal distribution. The resulting tensor will have values sampled from \(\mathcal{N}(0, \text{std}^2)\) where
\[\text{std} = \frac{\text{gain}}{\sqrt{\text{fan\_mode}}}\]

Also known as He initialization.
- Parameters
tensor – an n-dimensional torch.Tensor
a – the negative slope of the rectifier used after this layer (only used with 'leaky_relu')

mode – either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass. Choosing 'fan_out' preserves the magnitudes in the backwards pass.

nonlinearity – the non-linear function (nn.functional name), recommended to use only with 'relu' or 'leaky_relu' (default).
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.kaiming_normal_(w, mode='fan_out', nonlinearity='relu')
-
torch.nn.init.
orthogonal_
(tensor, gain=1)[source]¶ Fills the input Tensor with a (semi) orthogonal matrix, as described in Exact solutions to the nonlinear dynamics of learning in deep linear neural networks - Saxe, A. et al. (2013). The input tensor must have at least 2 dimensions, and for tensors with more than 2 dimensions the trailing dimensions are flattened.
- Parameters
tensor – an n-dimensional torch.Tensor, where \(n \geq 2\)
gain – optional scaling factor
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.orthogonal_(w)
-
torch.nn.init.
sparse_
(tensor, sparsity, std=0.01)[source]¶ Fills the 2D input Tensor as a sparse matrix, where the non-zero elements will be drawn from the normal distribution \(\mathcal{N}(0, 0.01)\), as described in Deep learning via Hessian-free optimization - Martens, J. (2010).
- Parameters
tensor – an n-dimensional torch.Tensor
sparsity – The fraction of elements in each column to be set to zero
std – the standard deviation of the normal distribution used to generate the non-zero values
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.sparse_(w, sparsity=0.1)