imaginaire.model_utils.gancraft package

Submodules

imaginaire.model_utils.gancraft.camctl module

class imaginaire.model_utils.gancraft.camctl.EvalCameraController(voxel, maxstep=128, pattern=0, cam_ang=73, smooth_decay_multiplier=1.0)[source]

Bases: object

filtfilt(height_history, decay=0.2)[source]
class imaginaire.model_utils.gancraft.camctl.TourCameraController(voxel, maxstep=128)[source]

Bases: object

imaginaire.model_utils.gancraft.camctl.get_neighbor_height(heightmap, loc0, loc1, minheight, neighbor_size=7)[source]
imaginaire.model_utils.gancraft.camctl.rand_camera_pose_birdseye(voxel, border=128)[source]

Generates a random camera pose in the upper hemisphere, in origin-direction-up format, assuming [Y X Z] coordinates where Y is the negative gravity direction. The camera pose is converted into the voxel coordinate system so that it can be used directly for rendering.
1. Uniformly sample a point on the upper hemisphere of a unit sphere as cam_ori.
2. Set cam_dir to point from cam_ori toward the origin.
3. cam_up always points toward the sky.
4. Move cam_ori to a random place according to the voxel size.
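The first three steps above can be sketched as follows. This is a minimal illustration, not the library's implementation: the function name is hypothetical, and the voxel-dependent translation of step 4 is omitted.

```python
import numpy as np

def rand_pose_birdseye_sketch(rng):
    """Sketch of steps 1-3: origin-direction-up pose in [Y, X, Z] coords, Y up."""
    # 1. Uniformly sample a point on the upper hemisphere (Y >= 0) as cam_ori:
    #    an isotropic Gaussian normalized to unit length is uniform on the sphere.
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    v[0] = abs(v[0])                  # reflect into the upper hemisphere (Y is up)
    cam_ori = v
    # 2. cam_dir points from cam_ori toward the origin.
    cam_dir = -cam_ori
    # 3. cam_up always points toward the sky (+Y).
    cam_up = np.array([1.0, 0.0, 0.0])
    return cam_ori, cam_dir, cam_up
```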

imaginaire.model_utils.gancraft.camctl.rand_camera_pose_firstperson(voxel, border=128)[source]

Generates a random first-person camera pose in the upper hemisphere, in origin-direction-up format.

imaginaire.model_utils.gancraft.camctl.rand_camera_pose_insideout(voxel)[source]
imaginaire.model_utils.gancraft.camctl.rand_camera_pose_thridperson(voxel, border=96)[source]
imaginaire.model_utils.gancraft.camctl.rand_camera_pose_thridperson2(voxel, border=48)[source]
imaginaire.model_utils.gancraft.camctl.rand_camera_pose_thridperson3(voxel, border=64)[source]

Attempts to mitigate the camera-too-close-to-wall problem and the lack of aerial poses.

imaginaire.model_utils.gancraft.camctl.rand_camera_pose_tour(voxel)[source]

imaginaire.model_utils.gancraft.layers module

class imaginaire.model_utils.gancraft.layers.AffineMod(in_features, style_features, mod_bias=True)[source]

Bases: torch.nn.modules.module.Module

Learned affine modulation of activations.

Parameters
  • in_features (int) – Number of input features.

  • style_features (int) – Number of style features.

  • mod_bias (bool) – Whether to modulate bias.

forward(x, z)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training = None
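A minimal sketch of the affine modulation described above, assuming the common scale-and-shift formulation (the class name and initialization details here are hypothetical, not taken from the library):

```python
import torch
import torch.nn as nn

class AffineModSketch(nn.Module):
    """Modulates activations with a per-sample scale (and optional shift)
    predicted from a style code z."""
    def __init__(self, in_features, style_features, mod_bias=True):
        super().__init__()
        self.scale = nn.Linear(style_features, in_features)
        self.shift = nn.Linear(style_features, in_features) if mod_bias else None

    def forward(self, x, z):
        s = 1.0 + self.scale(z)        # multiplicative modulation around identity
        out = x * s
        if self.shift is not None:
            out = out + self.shift(z)  # additive (bias) modulation
        return out
```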
class imaginaire.model_utils.gancraft.layers.ModLinear(in_features, out_features, style_features, bias=True, mod_bias=True, output_mode=False, weight_gain=1, bias_init=0)[source]

Bases: torch.nn.modules.module.Module

Linear layer with affine modulation (based on the StyleGAN2 mod/demod scheme). Equivalent to an affine modulation following a linear layer, but faster when the same modulation parameters are shared across multiple inputs.

Parameters
  • in_features (int) – Number of input features.

  • out_features (int) – Number of output features.

  • style_features (int) – Number of style features.

  • bias (bool) – Whether to apply an additive bias before the activation function.

  • mod_bias (bool) – Whether to modulate bias.

  • output_mode (bool) – If True, modulate the output instead of the input.

  • weight_gain (float) – Initialization gain.

forward(x, z)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training = None
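The StyleGAN2 mod/demod idea referenced above can be sketched as follows: per-sample style scales are folded into the weight matrix, which is then renormalized ("demodulated") to unit norm per output feature. The class name and initialization are hypothetical.

```python
import torch
import torch.nn as nn

class ModLinearSketch(nn.Module):
    """Linear layer whose weights are scaled per-sample by a style code
    (StyleGAN2-style modulation / demodulation)."""
    def __init__(self, in_features, out_features, style_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.style = nn.Linear(style_features, in_features)

    def forward(self, x, z):
        s = self.style(z) + 1.0                          # [N, in] per-sample input scales
        w = self.weight[None] * s[:, None, :]            # modulate: [N, out, in]
        demod = torch.rsqrt((w * w).sum(dim=2) + 1e-8)   # demodulate to unit norm
        w = w * demod[:, :, None]
        return torch.einsum('noi,ni->no', w, x)          # batched matrix-vector product
```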

imaginaire.model_utils.gancraft.loss module

class imaginaire.model_utils.gancraft.loss.GANLoss(target_real_label=1.0, target_fake_label=0.0)[source]

Bases: torch.nn.modules.module.Module

forward(input_x, t_real, weight=None, reduce_dim=True, dis_update=True)[source]

GAN loss computation.

Parameters
  • input_x (tensor or list of tensors) – Output values.

  • t_real (boolean) – Is this output value for real images.

  • reduce_dim (boolean) – Whether we reduce the dimensions first. This makes a difference when we use multi-resolution discriminators.

  • weight (float) – Weight to scale the loss value.

  • dis_update (boolean) – Updating the discriminator or the generator.

Returns

Loss value.

Return type

loss (tensor)

loss(input_x, t_real, weight=None, reduce_dim=True, dis_update=True)[source]

N+1 label GAN loss computation.

Parameters
  • input_x (tensor) – Output values.

  • t_real (boolean) – Is this output value for real images.

  • reduce_dim (boolean) – Whether we reduce the dimensions first. This makes a difference when we use multi-resolution discriminators.

  • weight (float) – Weight to scale the loss value.

  • dis_update (boolean) – Updating the discriminator or the generator.

Returns

Loss value.

Return type

loss (tensor)

training = None
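The argument handling described above (real/fake targets, optional weight, discriminator vs. generator update) can be sketched as below. The least-squares loss form, default labels, and function name are assumptions for illustration; the actual loss used by GANLoss may differ.

```python
import torch
import torch.nn.functional as F

def gan_loss_sketch(input_x, t_real, weight=None, dis_update=True,
                    real_label=1.0, fake_label=0.0):
    """Sketch of a GAN loss: discriminator updates use the true label,
    generator updates always push outputs toward the "real" label."""
    use_real = t_real or not dis_update
    target = torch.full_like(input_x, real_label if use_real else fake_label)
    loss = F.mse_loss(input_x, target)       # least-squares form (an assumption)
    return loss * weight if weight is not None else loss
```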

imaginaire.model_utils.gancraft.mc_lbl_reduction module

class imaginaire.model_utils.gancraft.mc_lbl_reduction.ReducedLabelMapper[source]

Bases: object

gglbl2ggid(gglbl)[source]

imaginaire.model_utils.gancraft.mc_utils module

class imaginaire.model_utils.gancraft.mc_utils.MCLabelTranslator[source]

Bases: object

Resolves mappings across the Minecraft voxel labels, the COCO-Stuff labels, and the GANcraft reduced label set.

coco2reduced(coco)[source]
get_num_reduced_lbls()[source]
gglbl2ggid(gglbl)[source]
mc2coco(mc)[source]
mc2reduced(mc, ign2dirt=False)[source]
mc_color(img)[source]

Obtains the default Minecraft colors for a segmentation map.

Parameters

img (H x W x 1 int32 numpy tensor) – Segmentation map.

static uint32_to_4uint8(x)[source]
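Judging from the name, uint32_to_4uint8 reinterprets each 32-bit value as four bytes, which is how packed RGBA colors are typically unpacked. A sketch under that assumption (the byte order shown is little-endian and is itself an assumption):

```python
import numpy as np

def uint32_to_4uint8_sketch(x):
    """Reinterpret a uint32 array as 4 uint8 channels per element
    (little-endian byte order is an assumption)."""
    return x.astype('<u4').view(np.uint8).reshape(*x.shape, 4)
```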
class imaginaire.model_utils.gancraft.mc_utils.McVoxel(voxel_t, preproc_ver)[source]

Bases: torch.nn.modules.module.Module

Voxel management.

is_sea(loc)[source]

loc: [2]: x, z.

training = None
world2local(v, is_vec=False)[source]
imaginaire.model_utils.gancraft.mc_utils.calc_height_map(voxel_t)[source]

Calculates a height map given a voxel grid [Y, X, Z] as input. The height is defined as the Y index of the surface (non-air) block.

Parameters

voxel_t (Y x X x Z torch.IntTensor, CPU) – Input voxel of three dimensions.

Output:

heightmap (X x Z torch.IntTensor)
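The height-map computation described above can be sketched as a per-column reduction over the Y axis. Assumptions: air is encoded as 0, the surface is the largest occupied Y index, and empty columns report height 0; the function name is hypothetical.

```python
import torch

def calc_height_map_sketch(voxel):
    """Height = largest Y index holding a non-air (nonzero) block, per (X, Z) column.

    voxel: [Y, X, Z] IntTensor; 0 is assumed to mean air.
    Returns an [X, Z] tensor of heights (0 for all-air columns).
    """
    occupied = voxel != 0                                # [Y, X, Z] bool
    y_idx = torch.arange(voxel.shape[0]).view(-1, 1, 1)  # [Y, 1, 1] index ramp
    return (occupied * y_idx).amax(dim=0)                # [X, Z]
```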

imaginaire.model_utils.gancraft.mc_utils.colormap(x, cmap='viridis')[source]
imaginaire.model_utils.gancraft.mc_utils.cumsum_exclusive(tensor, dim)[source]
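cumsum_exclusive carries no docstring here, but the name suggests the standard exclusive cumulative sum used in volume rendering (output[i] is the sum of all elements before index i). A sketch of that semantics, inferred from the name:

```python
import torch

def cumsum_exclusive_sketch(t, dim):
    """Exclusive cumulative sum: output[i] = sum of inputs strictly before i along dim."""
    c = torch.cumsum(t, dim=dim)
    c = torch.roll(c, shifts=1, dims=dim)      # shift right by one along dim
    c.index_fill_(dim, torch.tensor([0]), 0)   # first element has no predecessors
    return c
```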
imaginaire.model_utils.gancraft.mc_utils.gen_corner_voxel(voxel)[source]

Converts a voxel-center array to a voxel-corner array. The size of the produced array grows by 1 in every dimension.

Parameters

voxel (torch.IntTensor, CPU) – Input voxel of three dimensions

imaginaire.model_utils.gancraft.mc_utils.load_voxel_new(voxel_path, shape=[256, 512, 512])[source]
imaginaire.model_utils.gancraft.mc_utils.rand_crop(cam_c, cam_res, target_res)[source]

Produces a new cam_c such that rendering with the new cam_c and target_res is equivalent to rendering with the old parameters and then cropping out target_res.
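The idea can be sketched as a principal-point shift: choosing a random crop window and subtracting its top-left offset from the principal point. Parameter layout ((cy, cx), (H, W)), the sign convention, and the function name are all assumptions here.

```python
import numpy as np

def rand_crop_sketch(cam_c, cam_res, target_res, rng):
    """Shift the principal point so that rendering at target_res matches
    cropping a full-resolution render (sign convention is an assumption)."""
    top = rng.integers(0, cam_res[0] - target_res[0] + 1)
    left = rng.integers(0, cam_res[1] - target_res[1] + 1)
    # Moving the crop window down/right moves the principal point up/left
    # in the cropped image's coordinates.
    return (cam_c[0] - top, cam_c[1] - left)
```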

imaginaire.model_utils.gancraft.mc_utils.sample_depth_batched(depth2, nsamples, deterministic=False, use_box_boundaries=True, sample_depth=4)[source]

Makes a best effort to sample points at the same distances for every ray, except when there are not enough voxels.

Parameters
  • depth2 (N x 2 x 256 x 256 x 4 x 1 tensor) – Entrance / exit depths for each intersected box; can include NaNs. Dimensions: N – batch; 2 – entrance / exit depth per intersected box; 256, 256 – height, width; 4 – number of intersected boxes along the ray; 1 – one extra dim for consistent tensor dims.

  • deterministic (bool) – Whether to use equal-distance sampling instead of random stratified sampling.

  • use_box_boundaries (bool) – Whether to add the entrance / exit points into the sample.

  • sample_depth (float) – Truncate the ray when it travels further than sample_depth inside voxels.
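The stratified vs. deterministic sampling distinction above can be sketched for a single [near, far] interval: the interval is split into equal bins, and each bin contributes either its center (deterministic) or one uniform random point (stratified). This is a simplified illustration; the real function handles multiple boxes, NaNs, and truncation, and the name here is hypothetical.

```python
import torch

def stratified_depths_sketch(near, far, nsamples, deterministic=False):
    """One depth sample per equal-length bin along a ray segment."""
    edges = torch.linspace(0.0, 1.0, nsamples + 1)[:-1]   # left bin edges in [0, 1)
    width = 1.0 / nsamples
    # Bin centers when deterministic, otherwise one uniform sample per bin.
    offset = torch.full((nsamples,), 0.5) if deterministic else torch.rand(nsamples)
    t = edges + offset * width
    return near + t * (far - near)
```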

imaginaire.model_utils.gancraft.mc_utils.segmask_smooth(seg_mask, kernel_size=7)[source]
imaginaire.model_utils.gancraft.mc_utils.trans_vec_homo(m, v, is_vec=False)[source]

Homogeneous 4x4 matrix and regular 3-vector multiplication. Converts v to a homogeneous vector, performs the matrix-vector multiplication, and converts back. Note that this function does not support autograd.

Parameters
  • m (4 x 4 tensor) – a homogeneous matrix

  • v (3 tensor) – a 3-d vector

  • is_vec (bool) – If True, v is a direction; otherwise v is a point.
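The convert-multiply-convert pattern described above can be sketched as follows (a minimal version with a hypothetical name): points get homogeneous coordinate w = 1 so translation applies, while directions get w = 0 so only the rotation/scale part acts.

```python
import torch

def trans_vec_homo_sketch(m, v, is_vec=False):
    """Apply a 4x4 homogeneous matrix m to a 3-vector v."""
    w = 0.0 if is_vec else 1.0
    vh = torch.cat([v, v.new_tensor([w])])   # lift to homogeneous coordinates
    out = m @ vh
    # Points are dehomogenized by w; directions keep their first 3 components.
    return out[:3] if is_vec else out[:3] / out[3]
```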

imaginaire.model_utils.gancraft.mc_utils.volum_rendering_relu(sigma, dists, dim=2)[source]
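Judging from the name, volum_rendering_relu applies a ReLU to raw densities and composites them along the ray. A sketch of the standard NeRF-style weight computation under that assumption (the exact variant used here is not documented, and the function name below is hypothetical):

```python
import torch

def volume_rendering_relu_sketch(sigma, dists, dim=2):
    """Compositing weights from raw densities: alpha from ReLU'd sigma,
    times exclusive transmittance along the ray dimension."""
    alpha = 1.0 - torch.exp(-torch.relu(sigma) * dists)    # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=dim)    # transmittance after each sample
    trans = torch.roll(trans, 1, dims=dim)
    trans.index_fill_(dim, torch.tensor([0]), 1.0)         # exclusive: full transmittance first
    return alpha * trans                                   # weights for compositing
```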

Module contents