imaginaire.third_party.upfirdn2d package

Submodules

imaginaire.third_party.upfirdn2d.setup module

imaginaire.third_party.upfirdn2d.upfirdn2d module

Custom PyTorch ops for efficient resampling of 2D images.

class imaginaire.third_party.upfirdn2d.upfirdn2d.Blur(kernel=(1, 3, 3, 1), pad=0, padding_mode='zeros')[source]

Bases: torch.nn.modules.module.Module

extra_repr()[source]

Sets the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself rather than this method, since the former takes care of running the registered hooks while the latter silently ignores them.

training = None
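
A minimal usage sketch of Blur, based only on the constructor signature shown above. The NCHW input layout is assumed from the functional ops documented below, the pad value is purely illustrative, and whether the layer needs the compiled CUDA extension or falls back to a reference path on CPU is not stated here.

import torch

from imaginaire.third_party.upfirdn2d import Blur

# Blur layer with the default separable [1, 3, 3, 1] kernel; pad=1 is an
# illustrative choice, and padding_mode keeps the documented default.
blur = Blur(kernel=(1, 3, 3, 1), pad=1, padding_mode='zeros')

x = torch.randn(4, 3, 64, 64)    # [batch, channels, H, W] layout (assumed)
y = blur(x)                      # call the module itself so registered hooks run
print(blur.extra_repr())         # customized extra information for debugging/repr
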
class imaginaire.third_party.upfirdn2d.upfirdn2d.BlurDownsample(kernel=(1, 3, 3, 1), factor=2, padding_mode='zeros')[source]

Bases: torch.nn.modules.module.Module

extra_repr()[source]

Sets the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself rather than this method, since the former takes care of running the registered hooks while the latter silently ignores them.

training = None
class imaginaire.third_party.upfirdn2d.upfirdn2d.BlurUpsample(kernel=(1, 3, 3, 1), factor=2, padding_mode='zeros')[source]

Bases: torch.nn.modules.module.Module

extra_repr()[source]

Sets the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself rather than this method, since the former takes care of running the registered hooks while the latter silently ignores them.

training = None
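
The two resampling layers pair the same blur kernel with a fixed integer factor. A sketch, again assuming NCHW inputs and that factor=2 halves or doubles the spatial resolution; both points are inferences from the constructor arguments rather than statements in this documentation.

import torch

from imaginaire.third_party.upfirdn2d import BlurDownsample, BlurUpsample

down = BlurDownsample(kernel=(1, 3, 3, 1), factor=2)   # anti-aliased 2x downsampling
up = BlurUpsample(kernel=(1, 3, 3, 1), factor=2)       # 2x upsampling with the same kernel

x = torch.randn(2, 8, 32, 32)   # [batch, channels, H, W] (assumed layout)
y = down(x)                     # expected 16x16 spatial size for factor=2 (assumption)
z = up(x)                       # expected 64x64 spatial size for factor=2 (assumption)
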
imaginaire.third_party.upfirdn2d.upfirdn2d.downsample2d(x, f, down=2, padding=0, flip_filter=False, gain=1, impl='cuda')[source]

Downsample a batch of 2D images using the given 2D FIR filter.

By default, the result is padded so that its shape is the corresponding fraction of the input shape (e.g. half the size for down=2). User-specified padding is applied on top of that, with negative values indicating cropping. Pixels outside the image are assumed to be zero.

Parameters
  • x – Float32/float64/float16 input tensor of the shape [batch_size, num_channels, in_height, in_width].

  • f – Float32 FIR filter of the shape [filter_height, filter_width] (non-separable), [filter_taps] (separable), or None (identity).

  • down – Integer downsampling factor. Can be a single int or a list/tuple [x, y] (default: 2).

  • padding – Padding with respect to the input. Can be a single number or a list/tuple [x, y] or [x_before, x_after, y_before, y_after] (default: 0).

  • flip_filter – False = convolution, True = correlation (default: False).

  • gain – Overall scaling factor for signal magnitude (default: 1).

  • impl – Implementation to use. Can be ‘ref’ or ‘cuda’ (default: ‘cuda’).

Returns

Tensor of the shape [batch_size, num_channels, out_height, out_width].
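
A sketch of 2x anti-aliased downsampling. impl='ref' is chosen here only to sidestep the compiled CUDA op, and the expected output size follows from down=2 under the default padding behaviour described above.

import torch

from imaginaire.third_party.upfirdn2d.upfirdn2d import downsample2d, setup_filter

x = torch.randn(1, 3, 64, 64)                 # [batch, channels, in_height, in_width]
f = setup_filter([1, 3, 3, 1])                # low-pass FIR filter for anti-aliasing
y = downsample2d(x, f, down=2, impl='ref')    # reference implementation, no CUDA op needed
print(y.shape)                                # expected torch.Size([1, 3, 32, 32])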

imaginaire.third_party.upfirdn2d.upfirdn2d.filter2d(x, f, padding=0, flip_filter=False, gain=1, impl='cuda')[source]

Filter a batch of 2D images using the given 2D FIR filter.

By default, the result is padded so that its shape matches the input. User-specified padding is applied on top of that, with negative values indicating cropping. Pixels outside the image are assumed to be zero.

Parameters
  • x – Float32/float64/float16 input tensor of the shape [batch_size, num_channels, in_height, in_width].

  • f – Float32 FIR filter of the shape [filter_height, filter_width] (non-separable), [filter_taps] (separable), or None (identity).

  • padding – Padding with respect to the output. Can be a single number or a list/tuple [x, y] or [x_before, x_after, y_before, y_after] (default: 0).

  • flip_filter – False = convolution, True = correlation (default: False).

  • gain – Overall scaling factor for signal magnitude (default: 1).

  • impl – Implementation to use. Can be ‘ref’ or ‘cuda’ (default: ‘cuda’).

Returns

Tensor of the shape [batch_size, num_channels, out_height, out_width].
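
A sketch of plain filtering without resampling: the documented default padding keeps the output shape equal to the input shape. The 3x3 kernel below is an arbitrary example, normalized by setup_filter() so that constant (DC) inputs keep their magnitude.

import torch

from imaginaire.third_party.upfirdn2d.upfirdn2d import filter2d, setup_filter

x = torch.randn(1, 1, 32, 32)
f = setup_filter([[1, 2, 1],
                  [2, 4, 2],
                  [1, 2, 1]])        # non-separable 3x3 binomial-style kernel (example)
y = filter2d(x, f, impl='ref')       # default padding: output shape matches the input
assert y.shape == x.shape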

imaginaire.third_party.upfirdn2d.upfirdn2d.setup_filter(f, device=device(type='cpu'), normalize=True, flip_filter=False, gain=1, separable=None)[source]

Convenience function to set up a 2D FIR filter for upfirdn2d().

Parameters
  • f – Torch tensor, numpy array, or python list of the shape [filter_height, filter_width] (non-separable), [filter_taps] (separable), [] (impulse), or None (identity).

  • device – Result device (default: cpu).

  • normalize – Normalize the filter so that it retains the magnitude for constant input signal (DC)? (default: True).

  • flip_filter – Flip the filter? (default: False).

  • gain – Overall scaling factor for signal magnitude (default: 1).

  • separable – Return a separable filter? (default: select automatically).

Returns

Float32 tensor of the shape [filter_height, filter_width] (non-separable) or [filter_taps] (separable).
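
A sketch showing the two possible return forms: with separable=None the function decides automatically whether to keep the 1D tap list separable or expand it to a full 2D kernel, so the printed shape depends on that choice.

import torch

from imaginaire.third_party.upfirdn2d.upfirdn2d import setup_filter

# 1D tap list; with the default separable=None the function picks the return form itself.
f = setup_filter([1, 3, 3, 1], normalize=True, flip_filter=False, gain=1)
print(f.dtype, f.shape)   # float32; either [filter_taps] or [filter_height, filter_width]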

imaginaire.third_party.upfirdn2d.upfirdn2d.upfirdn2d(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1, impl='cuda')[source]

Pad, upsample, filter, and downsample a batch of 2D images.

Performs the following sequence of operations for each channel:

  1. Upsample the image by inserting N-1 zeros after each pixel (up).

  2. Pad the image with the specified number of zeros on each side (padding). Negative padding corresponds to cropping the image.

  3. Convolve the image with the specified 2D FIR filter (f), shrinking it so that the footprint of all output pixels lies within the input image.

  4. Downsample the image by keeping every Nth pixel (down).

This sequence of operations bears close resemblance to scipy.signal.upfirdn(). The fused op is considerably more efficient than performing the same calculation using standard PyTorch ops. It supports gradients of arbitrary order.

Parameters
  • x – Float32/float64/float16 input tensor of the shape [batch_size, num_channels, in_height, in_width].

  • f – Float32 FIR filter of the shape [filter_height, filter_width] (non-separable), [filter_taps] (separable), or None (identity).

  • up – Integer upsampling factor. Can be a single int or a list/tuple [x, y] (default: 1).

  • down – Integer downsampling factor. Can be a single int or a list/tuple [x, y] (default: 1).

  • padding – Padding with respect to the upsampled image. Can be a single number or a list/tuple [x, y] or [x_before, x_after, y_before, y_after] (default: 0).

  • flip_filter – False = convolution, True = correlation (default: False).

  • gain – Overall scaling factor for signal magnitude (default: 1).

  • impl – Implementation to use. Can be ‘ref’ or ‘cuda’ (default: ‘cuda’).

Returns

Tensor of the shape [batch_size, num_channels, out_height, out_width].
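
A sketch of rational-factor resampling with the fused op, mapping directly onto the four steps above: up=2 inserts zeros, f low-pass filters, down=3 keeps every third pixel. The padding value is illustrative; choosing it to hit a specific output size depends on the filter length.

import torch

from imaginaire.third_party.upfirdn2d.upfirdn2d import setup_filter, upfirdn2d

x = torch.randn(1, 3, 48, 48)
f = setup_filter([1, 3, 3, 1])

# Resample by a rational factor of 2/3: zero-insert by 2, filter, keep every 3rd pixel.
y = upfirdn2d(x, f, up=2, down=3, padding=1, impl='ref')
print(y.shape)            # [1, 3, out_height, out_width]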

imaginaire.third_party.upfirdn2d.upfirdn2d.upsample2d(x, f, up=2, padding=0, flip_filter=False, gain=1, impl='cuda')[source]

Upsample a batch of 2D images using the given 2D FIR filter.

By default, the result is padded so that its shape is the corresponding multiple of the input shape (e.g. twice the size for up=2). User-specified padding is applied on top of that, with negative values indicating cropping. Pixels outside the image are assumed to be zero.

Parameters
  • x – Float32/float64/float16 input tensor of the shape [batch_size, num_channels, in_height, in_width].

  • f – Float32 FIR filter of the shape [filter_height, filter_width] (non-separable), [filter_taps] (separable), or None (identity).

  • up – Integer upsampling factor. Can be a single int or a list/tuple [x, y] (default: 2).

  • padding – Padding with respect to the output. Can be a single number or a list/tuple [x, y] or [x_before, x_after, y_before, y_after] (default: 0).

  • flip_filter – False = convolution, True = correlation (default: False).

  • gain – Overall scaling factor for signal magnitude (default: 1).

  • impl – Implementation to use. Can be ‘ref’ or ‘cuda’ (default: ‘cuda’).

Returns

Tensor of the shape [batch_size, num_channels, out_height, out_width].
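
A sketch of 2x upsampling; as with downsample2d(), impl='ref' is used only to avoid the compiled CUDA op, and the expected output size follows from up=2 under the default padding behaviour.

import torch

from imaginaire.third_party.upfirdn2d.upfirdn2d import setup_filter, upsample2d

x = torch.randn(1, 3, 32, 32)
f = setup_filter([1, 3, 3, 1])
y = upsample2d(x, f, up=2, impl='ref')   # reference path; CUDA op not required
print(y.shape)                           # expected torch.Size([1, 3, 64, 64])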

Module contents

class imaginaire.third_party.upfirdn2d.BlurUpsample(kernel=(1, 3, 3, 1), factor=2, padding_mode='zeros')[source]

Bases: torch.nn.modules.module.Module

extra_repr()[source]

Sets the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself rather than this method, since the former takes care of running the registered hooks while the latter silently ignores them.

training = None
class imaginaire.third_party.upfirdn2d.BlurDownsample(kernel=(1, 3, 3, 1), factor=2, padding_mode='zeros')[source]

Bases: torch.nn.modules.module.Module

extra_repr()[source]

Sets the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself rather than this method, since the former takes care of running the registered hooks while the latter silently ignores them.

training = None
class imaginaire.third_party.upfirdn2d.Blur(kernel=(1, 3, 3, 1), pad=0, padding_mode='zeros')[source]

Bases: torch.nn.modules.module.Module

extra_repr()[source]

Sets the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself rather than this method, since the former takes care of running the registered hooks while the latter silently ignores them.

training = None