Some PyTorch layers needed for MetNet

ConvLSTM / ConvGRU layers

CGRU

In a GRU cell the output and the hidden state are the same tensor, so the last output must equal the last hidden state.

class ConvGRUCell[source]

ConvGRUCell(input_dim, hidden_dim, kernel_size=(3, 3), bias=True, activation=tanh, batchnorm=False) :: Module

Same as nn.Module, but no need for subclasses to call super().__init__

import torch

cgru_cell = ConvGRUCell(16, 32, 3)
cgru_cell(torch.rand(1, 16, 16, 16)).shape
torch.Size([1, 32, 16, 16])

Let's check:
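A minimal sketch, under the assumption that the cell's forward takes an optional previous hidden state as its second argument (defaulting to zeros when omitted): the single tensor the cell returns serves as both the output and the next hidden state, so it can be fed straight back in.

x = torch.rand(1, 16, 16, 16)
h = cgru_cell(x)          # no hidden state passed: assumed to start from zeros
h_next = cgru_cell(x, h)  # assumed signature: forward(input, h_prev=None)
assert h.shape == h_next.shape == torch.Size([1, 32, 16, 16])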

class ConvGRU[source]

ConvGRU(input_dim, hidden_dim, kernel_size, n_layers, batch_first=True, bias=True, activation=tanh, input_p=0.2, hidden_p=0.1, batchnorm=False) :: Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

training (bool): whether this module is in training or evaluation mode.
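For instance, a minimal sketch using the Model class above: calling to() on the parent module converts the parameters of the nested conv1 and conv2 as well.

import torch

model = Model()
model.to(torch.float64)   # the weights of conv1 and conv2 are converted too
model.conv1.weight.dtype  # torch.float64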

cgru = ConvGRU(16, 32, (3, 3), 2)
cgru
ConvGru(in=16, out=32, ks=(3, 3), n_layers=2, input_p=0.2, hidden_p=0.1)
layer_output, last_state_list = cgru(torch.rand(1, 10, 16, 6, 6))
layer_output.shape
torch.Size([1, 10, 32, 6, 6])
last_state_list.shape
torch.Size([2, 1, 32, 6, 6])
layer_output, last_state_list = cgru(torch.rand(1, 10, 16, 6, 6), last_state_list)  # reuse the final states as the initial hidden state
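Finally, the claim from the beginning: the last output of the top layer should equal that layer's final hidden state. A quick sanity check, assuming last_state_list stacks the per-layer final states with the top layer last:

assert torch.equal(layer_output[:, -1], last_state_list[-1])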