dtorchvision.models

Description

This module contains model definitions and pretrained weights for popular architectures.

Here’s an example of how to use a pretrained autoencoder model:

from dtorchvision.datasets import MNISTDataset
import dtorchvision.models
from matplotlib import pyplot as plt
import random

# Load the pretrained MNIST autoencoder (weights are stored under './models' by default).
autoencoder = dtorchvision.models.MNISTAutoEncoder_128_32()

# Download MNIST and keep only the images; labels and the second split are unused here.
dataset = MNISTDataset(download=True)
(x, _), _ = dataset.data

# Run the images through the autoencoder to get their reconstructions.
a = autoencoder(x)

# Pick a random image and display its reconstruction, then the original.
img = random.randint(0, len(a) - 1)
plt.imshow(a[img].reshape(28, 28))
plt.show()
plt.imshow(x[img].reshape(28, 28))
plt.show()

This snippet displays the autoencoder’s reconstruction of a random image from the MNIST dataset, followed by the original image.

Models

class AutoEncoder(dtorch.nn.Module)

An autoencoder model.

__init__(input_size, hidden_size, dp: float = 0.0)
Parameters:
  • input_size (int) – The size of the input layer.

  • hidden_size (int) – The size of the hidden layer.

  • dp (float) – The dropout probability. Defaults to 0.0.

For instance, the following model:

autoencoder = dtorchvision.models.AutoEncoder(784, 128)

This produces the following architecture:

AutoEncoder(
  Sequential(
    (0): Linear(in_features=784, out_features=128, bias=True)
    (1): ReLU()
    (2): Linear(in_features=128, out_features=128, bias=True)
    (3): ReLU()
    (4): Linear(in_features=128, out_features=784, bias=True)
    (5): ReLU()
  )
)
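
For readers more familiar with PyTorch, the printed architecture above could be sketched directly with torch.nn. This is an illustrative rough equivalent only, not dtorchvision’s actual implementation, and the dp dropout parameter is omitted:

import torch.nn as nn

# Rough PyTorch equivalent of AutoEncoder(784, 128) as printed above
# (a sketch for illustration; dtorch's internals may differ).
pytorch_equivalent = nn.Sequential(
    nn.Linear(784, 128),   # encoder: input -> hidden
    nn.ReLU(),
    nn.Linear(128, 128),   # hidden -> hidden
    nn.ReLU(),
    nn.Linear(128, 784),   # decoder: hidden -> input
    nn.ReLU(),
)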

Pretrained Models

class MNISTAutoEncoder_128_32(JPretrainedModel)

An autoencoder model pretrained on the MNIST dataset.

__init__(root: str = './models')
Parameters:
  • root (str) – The root directory in which to store the model. Defaults to ‘./models’.
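
As in the example at the top of this page, instantiating the class gives you a ready-to-use pretrained model; the sketch below simply points root at a different directory (the directory name here is arbitrary):

import dtorchvision.models

# Load the pretrained MNIST autoencoder, keeping the weights in a custom
# directory instead of the default './models'. (The path is arbitrary.)
autoencoder = dtorchvision.models.MNISTAutoEncoder_128_32(root='./pretrained')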