

Consider a very simple classifier: an encoding part that uses two layers of 3x3 convs + batchnorm + relu, and a decoding part that uses two linear layers.
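Here is a minimal sketch of what this classifier might look like when everything is written out by hand. The 32 and 64 channel widths follow the bn1/bn2 layers named in the original printout; the class name MyCNNClassifier, the 28x28 input resolution and the linear widths are assumptions for illustration.

```python
import torch.nn as nn
import torch.nn.functional as F

class MyCNNClassifier(nn.Module):
    def __init__(self, in_c, n_classes):
        super().__init__()
        # Encoder: two 3x3 conv + batchnorm + relu blocks, written out by hand
        self.conv1 = nn.Conv2d(in_c, 32, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(32)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(64)
        # Decoder: two linear layers (a 28x28 input is assumed)
        self.fc1 = nn.Linear(64 * 28 * 28, 1024)
        self.fc2 = nn.Linear(1024, n_classes)

    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        x = x.flatten(1)  # flatten everything except the batch dimension
        x = F.relu(self.fc1(x))
        return self.fc2(x)
```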

If you are not new to PyTorch you may have seen this type of coding before, but there are two problems. First, if we want to add a layer, we have to write a lot of code again, both in __init__ and in the forward function. Second, if we have some common block that we want to use in another model, e.g. the 3x3 conv + batchnorm + relu, we have to write it out again. You can also notice that we have to store everything into self. Sequential is a container of Modules that can be stacked together and run at the same time; together with a small conv_block(in_f, out_f, activation='relu', *args, **kwargs) factory for the repeated conv + batchnorm + relu block, we can use it to improve our code.
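A sketch of the Sequential version, under the same size assumptions as before; the conv_block signature comes from the original text, while its body is one reasonable implementation:

```python
import torch.nn as nn

def conv_block(in_f, out_f, activation='relu', *args, **kwargs):
    # Select the activation by name so the block stays configurable
    activations = {'relu': nn.ReLU(), 'lrelu': nn.LeakyReLU()}
    return nn.Sequential(
        nn.Conv2d(in_f, out_f, *args, **kwargs),
        nn.BatchNorm2d(out_f),
        activations[activation],
    )

class MyCNNClassifier(nn.Module):
    def __init__(self, in_c, n_classes):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(in_c, 32, kernel_size=3, padding=1),
            conv_block(32, 64, kernel_size=3, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.Linear(64 * 28 * 28, 1024),
            nn.ReLU(),
            nn.Linear(1024, n_classes),
        )

    def forward(self, x):
        x = self.encoder(x)
        x = x.flatten(1)
        return self.decoder(x)
```

Adding or removing a layer now means editing one list instead of touching both __init__ and forward.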
By dividing our module into submodules it is easier to share the code, debug it and test it. Taking this further, we can split the model into a MyEncoder and a MyDecoder, each assembled from blocks and parametrized by its layer sizes. Be aware that MyEncoder and MyDecoder could also be functions that return an nn.Sequential; I prefer the first pattern (a class) for models and the second (a function) for building blocks.
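The following sketch fills in the pieces around the __init__(self, in_c, enc_sizes, dec_sizes, n_classes) signature from the original text; it reuses conv_block from the previous sketch, and the dec_block helper is an assumed name:

```python
import torch.nn as nn

def dec_block(in_f, out_f):
    # Assumed helper: one linear + relu decoder block
    return nn.Sequential(nn.Linear(in_f, out_f), nn.ReLU())

class MyEncoder(nn.Module):
    def __init__(self, enc_sizes):
        super().__init__()
        self.conv_blocks = nn.Sequential(
            *[conv_block(in_f, out_f, kernel_size=3, padding=1)
              for in_f, out_f in zip(enc_sizes, enc_sizes[1:])])

    def forward(self, x):
        return self.conv_blocks(x)

class MyDecoder(nn.Module):
    def __init__(self, dec_sizes, n_classes):
        super().__init__()
        self.dec_blocks = nn.Sequential(
            *[dec_block(in_f, out_f)
              for in_f, out_f in zip(dec_sizes, dec_sizes[1:])])
        self.last = nn.Linear(dec_sizes[-1], n_classes)

    def forward(self, x):
        return self.last(self.dec_blocks(x))

class MyCNNClassifier(nn.Module):
    def __init__(self, in_c, enc_sizes, dec_sizes, n_classes):
        super().__init__()
        self.encoder = MyEncoder([in_c, *enc_sizes])
        # The first entry of dec_sizes must match the flattened encoder output
        self.decoder = MyDecoder(dec_sizes, n_classes)

    def forward(self, x):
        x = self.encoder(x)
        x = x.flatten(1)
        return self.decoder(x)
```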
ModuleList allows you to store Modules in a plain list. The main difference from Sequential is that ModuleList does not have a forward method, so the inner layers are not connected; wiring them up is left to you. This is useful when you need to iterate through the layers and store or use some information along the way, like in U-Net. Assuming we need the output of each layer in the decoder, we can store each intermediate result while iterating over the list.
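A minimal sketch of this pattern, assuming a decoder built from linear layers; the class name and the concrete sizes are made up for illustration:

```python
import torch
import torch.nn as nn

class MyDecoderWithTrace(nn.Module):
    def __init__(self, dec_sizes):
        super().__init__()
        # ModuleList registers the layers but does not connect them;
        # wiring them together happens in forward
        self.layers = nn.ModuleList(
            [nn.Linear(in_f, out_f)
             for in_f, out_f in zip(dec_sizes, dec_sizes[1:])])
        self.trace = []

    def forward(self, x):
        self.trace = []  # reset on every call
        for layer in self.layers:
            x = layer(x)
            self.trace.append(x)  # store each layer's output
        return x

model = MyDecoderWithTrace([1024, 512, 256])
model(torch.rand(4, 1024))
print([t.shape for t in model.trace])
# [torch.Size([4, 512]), torch.Size([4, 256])]
```

This is the kind of bookkeeping a U-Net needs, where each encoder output is later concatenated with the matching decoder stage.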