:py:mod:`python.dgbpy.torch_classes`
====================================

.. py:module:: python.dgbpy.torch_classes


Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   python.dgbpy.torch_classes.OnnxModel
   python.dgbpy.torch_classes.Net
   python.dgbpy.torch_classes.Trainer
   python.dgbpy.torch_classes.ResidualBlock
   python.dgbpy.torch_classes.Concatenate
   python.dgbpy.torch_classes.DownBlock
   python.dgbpy.torch_classes.UpBlock
   python.dgbpy.torch_classes.UNet
   python.dgbpy.torch_classes.SeismicTrainDataset
   python.dgbpy.torch_classes.SeismicTestDataset
   python.dgbpy.torch_classes.DatasetApply
   python.dgbpy.torch_classes.DataPredType
   python.dgbpy.torch_classes.OutputType
   python.dgbpy.torch_classes.DimType
   python.dgbpy.torch_classes.TorchUserModel


Functions
~~~~~~~~~

.. autoapisummary::

   python.dgbpy.torch_classes.Tensor2Numpy
   python.dgbpy.torch_classes.Numpy2tensor
   python.dgbpy.torch_classes.create_resnet_block
   python.dgbpy.torch_classes.autocrop
   python.dgbpy.torch_classes.conv_layer
   python.dgbpy.torch_classes.get_conv_layer
   python.dgbpy.torch_classes.conv_transpose_layer
   python.dgbpy.torch_classes.get_up_layer
   python.dgbpy.torch_classes.maxpool_layer
   python.dgbpy.torch_classes.get_maxpool_layer
   python.dgbpy.torch_classes.get_activation
   python.dgbpy.torch_classes.get_normalization


Attributes
~~~~~~~~~~

.. autoapisummary::

   python.dgbpy.torch_classes.mlmodels


.. py:function:: Tensor2Numpy(tensor)


.. py:function:: Numpy2tensor(nparray)


.. py:class:: OnnxModel(filepath: str)

   .. py:method:: __call__(self, inputs)

   .. py:method:: eval(self)


.. py:class:: Net(model_shape, output_classes, dim, nrattribs)

   Bases: :py:obj:`torch.nn.Module`

   .. py:method:: after_cnn(self, x)

   .. py:method:: forward(self, x)
.. py:class:: Trainer(model: torch.nn.Module, device: torch.device, criterion: torch.nn.Module, optimizer: torch.optim.Optimizer, training_DataLoader: torch.utils.data.Dataset, validation_DataLoader: torch.utils.data.Dataset = None, lr_scheduler: torch.optim.lr_scheduler = None, epochs: int = 100, epoch: int = 0, notebook: bool = False, earlystopping: int = 5, imgdp=None)

   .. py:method:: run_trainer(self)

   .. py:method:: _train(self)

   .. py:method:: _validate(self)


.. py:class:: ResidualBlock(input_channels, num_channels, use_1x1_conv=False, strides=1, ndims=3)

   Bases: :py:obj:`torch.nn.Module`

   Residual block within a ResNet CNN model.

   .. py:method:: forward(self, X)

   .. py:method:: shape_computation(self, X)

   .. py:method:: initialize_weights(self)


.. py:function:: create_resnet_block(input_filters, output_filters, num_residuals, ndims, first_block=False)


.. py:function:: autocrop(encoder_layer: torch.Tensor, decoder_layer: torch.Tensor)

   Center-crops the encoder_layer to the size of the decoder_layer, so that
   merging (concatenation) between levels/blocks is possible. This is only
   necessary for input sizes != 2**n with 'same' padding, and always required
   with 'valid' padding.


.. py:function:: conv_layer(dim: int)


.. py:function:: get_conv_layer(in_channels: int, out_channels: int, kernel_size: int = 3, stride: int = 1, padding: int = 1, bias: bool = True, dim: int = 2)


.. py:function:: conv_transpose_layer(dim: int)


.. py:function:: get_up_layer(in_channels: int, out_channels: int, kernel_size: int = 2, stride: int = 2, dim: int = 3, up_mode: str = 'transposed')


.. py:function:: maxpool_layer(dim: int)


.. py:function:: get_maxpool_layer(kernel_size: int = 2, stride: int = 2, padding: int = 0, dim: int = 2)


.. py:function:: get_activation(activation: str)


.. py:function:: get_normalization(normalization: str, num_channels: int, dim: int)


.. py:class:: Concatenate

   Bases: :py:obj:`torch.nn.Module`

   .. py:method:: forward(self, layer_1, layer_2)
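The center-crop behaviour described for ``autocrop`` can be sketched as follows. This is a hypothetical re-implementation written against NumPy arrays for brevity (the real function operates on ``torch.Tensor`` objects); the function name ``center_crop`` and the ``(channels, *spatial)`` shape convention are assumptions for illustration only.

```python
import numpy as np

def center_crop(encoder_layer, decoder_layer):
    # Trim the spatial dims of encoder_layer down to those of decoder_layer
    # so the two feature maps can be concatenated channel-wise.
    if encoder_layer.shape[1:] == decoder_layer.shape[1:]:
        return encoder_layer
    slices = [slice(None)]  # keep all channels
    for enc_dim, dec_dim in zip(encoder_layer.shape[1:], decoder_layer.shape[1:]):
        start = (enc_dim - dec_dim) // 2
        slices.append(slice(start, start + dec_dim))
    return encoder_layer[tuple(slices)]

enc = np.zeros((16, 68, 68))  # encoder map left larger by 'valid' convolutions
dec = np.zeros((16, 64, 64))  # decoder map after upsampling
cropped = center_crop(enc, dec)
print(cropped.shape)  # (16, 64, 64)
```

The same slicing logic works unchanged for 1D, 2D or 3D feature maps, since the loop runs over however many spatial dimensions the inputs have.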
.. py:class:: DownBlock(in_channels: int, out_channels: int, pooling: bool = True, activation: str = 'relu', normalization: str = None, dim: int = 2, conv_mode: str = 'same')

   Bases: :py:obj:`torch.nn.Module`

   A helper Module that performs two convolutions and one MaxPool.
   An activation and a normalization layer follow each convolution.

   .. py:method:: forward(self, x)


.. py:class:: UpBlock(in_channels: int, out_channels: int, activation: str = 'relu', normalization: str = None, dim: int = 3, conv_mode: str = 'same', up_mode: str = 'transposed')

   Bases: :py:obj:`torch.nn.Module`

   A helper Module that performs two convolutions and one up-convolution/upsample.
   An activation and a normalization layer follow each convolution.

   .. py:method:: forward(self, encoder_layer, decoder_layer)

      Forward pass.

      Parameters
      ----------
      encoder_layer : Tensor
          Tensor from the encoder pathway
      decoder_layer : Tensor
          Tensor from the decoder pathway (to be upsampled)


.. py:class:: UNet(in_channels: int = 1, out_channels: int = 2, n_blocks: int = 1, start_filters: int = 32, activation: str = 'relu', normalization: str = 'batch', conv_mode: str = 'same', dim: int = 2, up_mode: str = 'transposed')

   Bases: :py:obj:`torch.nn.Module`

   activation : 'relu', 'leaky', 'elu'

   normalization : 'batch', 'instance', 'group{group_size}'

   conv_mode : 'same', 'valid'

   dim : 2, 3

   up_mode : 'transposed', 'nearest', 'linear', 'bilinear', 'bicubic', 'trilinear'

   .. py:method:: weight_init(module, method, **kwargs)
      :staticmethod:

   .. py:method:: bias_init(module, method, **kwargs)
      :staticmethod:

   .. py:method:: initialize_parameters(self, method_weights=nn.init.xavier_uniform_, method_bias=nn.init.zeros_, kwargs_weights={}, kwargs_bias={})

   .. py:method:: forward(self, x: torch.tensor)

   .. py:method:: __repr__(self)


.. py:class:: SeismicTrainDataset(X, y, info, im_ch, ndims)

   .. py:method:: __len__(self)

   .. py:method:: __getitem__(self, index)


.. py:class:: SeismicTestDataset(X, y, info, im_ch, ndims)

   .. py:method:: __len__(self)
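As a rough illustration of how ``start_filters`` and ``n_blocks`` interact in encoder/decoder architectures such as this ``UNet``: the filter count typically doubles at each encoder level. That doubling convention is an assumption about this particular implementation, and the helper name below is hypothetical.

```python
def filters_per_level(start_filters: int, n_blocks: int) -> list:
    # Assumed doubling convention: start_filters, start_filters*2, ...
    return [start_filters * 2 ** i for i in range(n_blocks)]

# With the documented default start_filters=32 and, say, n_blocks=4:
print(filters_per_level(32, 4))  # [32, 64, 128, 256]
```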
   .. py:method:: __getitem__(self, index)


.. py:class:: DatasetApply(X, isclassification, im_ch, ndims)

   Bases: :py:obj:`torch.utils.data.Dataset`

   .. py:method:: __len__(self)

   .. py:method:: __getitem__(self, index)


.. py:class:: DataPredType

   Bases: :py:obj:`enum.Enum`

   Enumeration of the prediction types a model can provide.

   .. py:attribute:: Continuous
      :annotation: = Continuous Data

   .. py:attribute:: Classification
      :annotation: = Classification Data

   .. py:attribute:: Segmentation
      :annotation: = Segmentation

   .. py:attribute:: Any
      :annotation: = Any


.. py:class:: OutputType

   Bases: :py:obj:`enum.Enum`

   Enumeration of the output shape types a model can produce.

   .. py:attribute:: Pixel
      :annotation: = 1

   .. py:attribute:: Image
      :annotation: = 2

   .. py:attribute:: Any
      :annotation: = 3


.. py:class:: DimType

   Bases: :py:obj:`enum.Enum`

   Enumeration of the input dimensionalities a model can support.

   .. py:attribute:: D1
      :annotation: = 1

   .. py:attribute:: D2
      :annotation: = 2

   .. py:attribute:: D3
      :annotation: = 3

   .. py:attribute:: Any
      :annotation: = 4


.. py:class:: TorchUserModel

   Bases: :py:obj:`abc.ABC`

   Abstract base class for user-defined Torch machine learning models.

   This class provides support for users to add their own machine learning
   models to OpendTect. Users derive their own model classes from this base
   class and implement the _make_model method to define the structure of the
   torch model. The user's model definition should be saved in a file whose
   name has "mlmodel\_" as a prefix and be at the top level of the module
   search path so it can be discovered.
   The "mlmodel\_" class should also define some class variables describing
   the class:

   uiname : str
       The name that will appear in the user interface.
   uidescription : str
       A short description which may be displayed to help the user.
   predtype : DataPredType enum
       The type of prediction; must be a member of the DataPredType enum.
   outtype : OutputType enum
       The output shape type (OutputType.Pixel or OutputType.Image).
   dimtype : DimType enum
       The input dimensions supported by the model; must be a member of the
       DimType enum.

   .. py:attribute:: mlmodels
      :annotation: = []

   .. py:method:: findModels()
      :staticmethod:

      Static method that searches the PYTHONPATH for modules containing
      user-defined torch machine learning models (TorchUserModels).

      The module name must be prefixed by "mlmodel\_". All subclasses of the
      TorchUserModel base class in each found module will be added to the
      mlmodels class variable.

   .. py:method:: findName(modname)
      :staticmethod:

      Static method that searches the found TorchUserModels for a match with
      the uiname class variable.

      Parameters
      ----------
      modname : str
          Name (i.e. uiname) of the TorchUserModel to search for.

      Returns
      -------
      An instance of the class with the first matching name in the mlmodels
      list, or None if no match is found.

   .. py:method:: getModelsByType(pred_type, out_type, dim_type)
      :staticmethod:

      Static method that returns a list of the TorchUserModels filtered by
      the given prediction, output and dimension types.

      Parameters
      ----------
      pred_type : DataPredType enum
          The prediction type of the model to filter by
      out_type : OutputType enum
          The output shape type of the model to filter by
      dim_type : DimType enum
          The dimensions that the model must support

      Returns
      -------
      A list of matching models, or None if no match is found.

   .. py:method:: getNamesByType(pred_type, out_type, dim_type)
      :staticmethod:

   .. py:method:: isPredType(modelnm, pred_type)
      :staticmethod:

   .. py:method:: isOutType(modelnm, out_type)
      :staticmethod:
   .. py:method:: isClassifier(modelnm)
      :staticmethod:

   .. py:method:: isRegressor(modelnm)
      :staticmethod:

   .. py:method:: isImg2Img(modelnm)
      :staticmethod:

   .. py:method:: _make_model(self, model_shape, nroutputs, nrattribs)
      :abstractmethod:

      Abstract method that defines a machine learning model. Must be
      implemented in the user's derived class.

      Parameters
      ----------
      model_shape : tuple
      nroutputs : int
          Number of outputs (number of discrete classes for a classification)
      nrattribs : int

      Returns
      -------
      A compiled torch model.

   .. py:method:: model(self, model_shape, nroutputs, nrattribs)

      Creates/returns a compiled torch model instance.

      Parameters
      ----------
      nroutputs : int
          Number of outputs (number of discrete classes for a classification)

      Returns
      -------
      A PyTorch model architecture.


.. py:data:: mlmodels
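A user-defined model file described above might look like the following sketch. To keep it self-contained and runnable here, a stub ``TorchUserModel`` stands in for the real base class, and the class-variable values are plain strings; in an actual ``mlmodel_*.py`` file you would import ``TorchUserModel``, ``DataPredType``, ``OutputType`` and ``DimType`` from ``dgbpy.torch_classes`` and use the enum members shown in the comments. The class name ``MyUserModel`` and its ``uiname`` are hypothetical.

```python
from abc import ABC, abstractmethod

# Stand-in for dgbpy.torch_classes.TorchUserModel so this sketch runs alone.
class TorchUserModel(ABC):
    @abstractmethod
    def _make_model(self, model_shape, nroutputs, nrattribs):
        pass

class MyUserModel(TorchUserModel):
    # Class variables read by the OpendTect user interface
    uiname = 'My example model'
    uidescription = 'A hypothetical user-defined model'
    predtype = 'Classification'  # real code: DataPredType.Classification
    outtype = 'Pixel'            # real code: OutputType.Pixel
    dimtype = 'Any'              # real code: DimType.Any

    def _make_model(self, model_shape, nroutputs, nrattribs):
        # Build and return a torch.nn.Module here (omitted in this sketch).
        raise NotImplementedError

print(MyUserModel().uiname)  # My example model
```

Saving such a class in a file named with the ``mlmodel_`` prefix on the module search path allows ``findModels()`` to discover it and add it to ``mlmodels``.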