dgbpy.dgbtorch
Module Contents
Classes
Functions
Attributes
- dgbpy.dgbtorch.device
- dgbpy.dgbtorch.hasTorch()
- dgbpy.dgbtorch.platform
- dgbpy.dgbtorch.withtensorboard
- dgbpy.dgbtorch.default_transforms = []
- dgbpy.dgbtorch.defbatchstr = defaultbatchsz
- dgbpy.dgbtorch.torch_infos
- dgbpy.dgbtorch.torch_dict
- dgbpy.dgbtorch.getMLPlatform()
- dgbpy.dgbtorch.cudacores = ['1', '2', '4', '8', '16', '32', '48', '64', '96', '128', '144', '192', '256', '288', '384',...
- dgbpy.dgbtorch.can_use_gpu()
- dgbpy.dgbtorch.set_compute_device(prefercpu)
- dgbpy.dgbtorch.get_torch_infos()
- dgbpy.dgbtorch.getParams(nntype=torch_dict['type'], dodec=torch_dict[dgbkeys.decimkeystr], nbchunk=torch_dict['nbchunk'], learnrate=torch_dict['learnrate'], epochdrop=torch_dict['epochdrop'], epochs=torch_dict['epochs'], patience=torch_dict['patience'], batch=torch_dict['batch'], prefercpu=torch_dict['prefercpu'], validation_split=torch_dict['split'], nbfold=torch_dict['nbfold'], scale=torch_dict['scale'], transform=torch_dict['transform'], withtensorboard=torch_dict['withtensorboard'], tofp16=torch_dict['tofp16'])
- dgbpy.dgbtorch.getDefaultModel(setup, type=torch_dict['type'])
- dgbpy.dgbtorch.getModelsByType(learntype, classification, ndim)
- dgbpy.dgbtorch.getModelsByInfo(infos)
- dgbpy.dgbtorch.get_model_shape(shape, nrattribs, attribfirst=True)
- dgbpy.dgbtorch.getModelDims(model_shape, data_format)
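The two shape helpers above map a sample cube shape plus an attribute count to a model input shape and its spatial dimensionality. A minimal stdlib-only sketch of that kind of mapping (the function bodies are assumptions for illustration, not the actual dgbpy implementation):

```python
def get_model_shape(shape, nrattribs, attribfirst=True):
    """Combine a sample cube shape with the attribute (channel) axis.

    Hypothetical sketch: singleton spatial axes are dropped, so a
    (1, 1, 128) cube is treated as effectively 1D for the model.
    """
    spatial = tuple(d for d in shape if d > 1) or (1,)
    return (nrattribs,) + spatial if attribfirst else spatial + (nrattribs,)

def getModelDims(model_shape, data_format):
    """Number of spatial dimensions, excluding the attribute axis."""
    if data_format == 'channels_first':
        spatial = model_shape[1:]
    else:
        spatial = model_shape[:-1]
    return len([d for d in spatial if d > 1]) or 1
```

For example, a `(1, 1, 128)` cube with 2 attributes and `attribfirst=True` would give a `(2, 128)` model shape with dimensionality 1.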
- dgbpy.dgbtorch.savetypes = ['onnx', 'joblib', 'pickle']
- dgbpy.dgbtorch.defsavetype
- dgbpy.dgbtorch.load(modelfnm)
- dgbpy.dgbtorch.onnx_from_torch(model, infos)
- dgbpy.dgbtorch.save(model, outfnm, infos, save_type=defsavetype)
- dgbpy.dgbtorch.train(model, imgdp, params, cbfn=None, logdir=None, silent=False, metrics=False)
- dgbpy.dgbtorch.transfer(model)
Transfer learning utility function for fine-tuning a Torch model.
This function takes a Torch model and prepares it for transfer learning by selectively setting layers to be trainable. The layers to be made trainable are determined as follows:
- All layers before the first Conv1D, Conv2D, or Conv3D layer (or a Sequential containing such layers) are set to trainable.
- All layers after the last Conv1D, Conv2D, Conv3D, or Dense layer (or a Sequential containing such layers) are set to trainable.
- All layers between the first and last Conv1D, Conv2D, Conv3D, or Dense layer (or a Sequential containing such layers) are set to non-trainable.
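The freezing scheme described above can be illustrated with a small framework-free sketch. The `Layer`/`Conv2D`/`Dense` classes and the `trainable` flag below are stand-ins for torch modules and `requires_grad`; the exact boundary handling ("between" taken as exclusive here) is an assumption, and this is not the actual dgbpy code:

```python
class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = True

# Stand-ins for torch layer types; real code would check nn.Conv1d/2d/3d/Linear.
class Conv2D(Layer): pass
class Dense(Layer): pass

def transfer_freeze(layers):
    """Freeze the middle of the network; keep head and tail trainable."""
    conv_idx = [i for i, l in enumerate(layers) if isinstance(l, Conv2D)]
    convdense_idx = [i for i, l in enumerate(layers)
                     if isinstance(l, (Conv2D, Dense))]
    first, last = conv_idx[0], convdense_idx[-1]
    for i, layer in enumerate(layers):
        # Layers strictly between the first conv and the last conv/dense
        # are frozen; everything before and after stays trainable.
        layer.trainable = not (first < i < last)
    return layers
```

Running this on `[input, conv, conv, dense, output]` leaves only the middle conv frozen, mirroring the head/tail fine-tuning pattern described above.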
- dgbpy.dgbtorch.apply(model, info, samples, scaler, isclassification, withpred, withprobs, withconfidence, doprobabilities)
- dgbpy.dgbtorch.getDataLoader(dataset, batch_size=torch_dict['batch'], drop_last=False)
- class dgbpy.dgbtorch.ChunkedDataLoader(*args, **kwargs)
Bases: torch.utils.data.DataLoader
- set_chunk(self, ichunk)
- set_fold(self, ichunk, ifold)
- set_transform_seed(self)
- get_batchsize(self)
- __iter__(self)
- dgbpy.dgbtorch.getDataLoaders(traindataset, testdataset, batchsize=torch_dict['batch'])
- dgbpy.dgbtorch.getDatasetPars(imgdp, _forvalid)
- dgbpy.dgbtorch.DataGenerator(imgdp, batchsize, scaler=None, transform=list())
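The DataGenerator signature suggests a generator that yields batches of `batchsize` samples, applying an optional scaler and a list of transforms. A hypothetical stdlib-only sketch of that pattern (names and behaviour are assumptions, not the dgbpy implementation):

```python
def data_generator(samples, batchsize, scaler=None, transform=()):
    """Yield successive batches, applying the scaler, then each transform."""
    for start in range(0, len(samples), batchsize):
        batch = samples[start:start + batchsize]
        if scaler is not None:
            batch = [scaler(x) for x in batch]
        for tr in transform:
            batch = [tr(x) for x in batch]
        yield batch
```

The last batch may be short; a real loader would pair this with a `drop_last` flag as in `getDataLoader` above.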