dgbpy.hdf5
Attributes
Classes
- StorageType: Create a collection of name/value pairs.
- Scaler: Create a collection of name/value pairs.
Functions
- shouldUseS3: Check if the model file should be stored or retrieved from S3.
Module Contents
- dgbpy.hdf5.hdf5ext = 'h5'
- dgbpy.hdf5.dictAddIfNew(newset, toadd)
- dgbpy.hdf5.getCubeLetNames(info)
- dgbpy.hdf5.getCubeLetNamesByGroup(info, groupnm, example)
- dgbpy.hdf5.getCubeLetNamesByGroupByItem(info, groupnm, collnm, idx)
- dgbpy.hdf5.getGroupSize(filenm, groupnm)
- dgbpy.hdf5.getNrAttribs(info)
- dgbpy.hdf5.getNrGroupInputs(info)
- dgbpy.hdf5.getNrOutputs(info)
- dgbpy.hdf5.getSeed(info)
- dgbpy.hdf5.get_np_shape(shape, nrpts=None, nrattribs=None)
- dgbpy.hdf5.getTrainingConfig(h5file)
- dgbpy.hdf5.isRegression(info)
- dgbpy.hdf5.isClassification(info)
- dgbpy.hdf5.isSegmentation(info)
- dgbpy.hdf5.isSeisClass(info)
- dgbpy.hdf5.hasUnlabeled(info)
- dgbpy.hdf5.isLogInput(info)
- dgbpy.hdf5.isLogOutput(info)
- dgbpy.hdf5.isImg2Img(info)
- dgbpy.hdf5.isZipModel(info)
- dgbpy.hdf5.isCrossValidation(info)
- dgbpy.hdf5.unscaleOutput(info)
- dgbpy.hdf5.applyGlobalStd(info)
- dgbpy.hdf5.applyLocalStd(info)
- dgbpy.hdf5.applyNormalization(info)
- dgbpy.hdf5.applyMinMaxScaling(info)
- dgbpy.hdf5.applyRangeScaling(info)
- dgbpy.hdf5.applyArrTranspose(info)
- class dgbpy.hdf5.StorageType(*args, **kwds)
Bases:
enum.Enum
Create a collection of name/value pairs.
Example enumeration:
>>> class Color(Enum):
...     RED = 1
...     BLUE = 2
...     GREEN = 3
Access them by:
attribute access:
>>> Color.RED
<Color.RED: 1>
value lookup:
>>> Color(1)
<Color.RED: 1>
name lookup:
>>> Color['RED']
<Color.RED: 1>
Enumerations can be iterated over, and know how many members they have:
>>> len(Color)
3
>>> list(Color)
[<Color.RED: 1>, <Color.BLUE: 2>, <Color.GREEN: 3>]
Methods can be added to enumerations, and members can have their own attributes – see the documentation for details.
- AWS = 'AWS'
- LOCAL = 'LOCAL'
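The two members above can be exercised with the standard Enum access patterns described in the docstring. A minimal self-contained sketch (the class is redefined locally here rather than imported from dgbpy.hdf5):

```python
from enum import Enum

class StorageType(Enum):
    # members copied from the listing above
    AWS = 'AWS'
    LOCAL = 'LOCAL'

# name lookup, value lookup, and iteration work as for any Enum
assert StorageType['AWS'] is StorageType.AWS
assert StorageType('LOCAL') is StorageType.LOCAL
assert len(StorageType) == 2
```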
- class dgbpy.hdf5.Scaler(*args, **kwds)
Bases:
enum.Enum
Create a collection of name/value pairs. (See the enum.Enum example under StorageType above.)
- GlobalScaler = 'Global Standardization'
- StandardScaler = 'Local Standardization'
- Normalization = 'Normalization'
- MinMaxScaler = 'MinMax'
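As with StorageType, the scaling methods can be enumerated with standard Enum iteration. A minimal self-contained sketch (class redefined locally from the member listing above, not imported from dgbpy.hdf5):

```python
from enum import Enum

class Scaler(Enum):
    # members copied from the listing above
    GlobalScaler = 'Global Standardization'
    StandardScaler = 'Local Standardization'
    Normalization = 'Normalization'
    MinMaxScaler = 'MinMax'

# iterate over all scaling methods, e.g. to list the available options
names = [member.value for member in Scaler]
assert names == ['Global Standardization', 'Local Standardization',
                 'Normalization', 'MinMax']
```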
- dgbpy.hdf5.isDefaultScaler(scaler, info, uselearntype=True)
- dgbpy.hdf5.updateScaleInfo(scaler, info)
- dgbpy.hdf5.getScalerStr(info)
- dgbpy.hdf5.doOutputScaling(info)
- dgbpy.hdf5.isModel(info)
- dgbpy.hdf5.isMultiLabelRegression(info)
- dgbpy.hdf5.hasboto3(auth=False)
- dgbpy.hdf5.isS3Uri(uri)
- dgbpy.hdf5.shouldUseS3(modelfnm, params=None, relaxed=True, kwargs=None)
Check if the model file should be stored or retrieved from S3.
Parameters:
- modelfnm: The model file name.
- params: The parameters dictionary.
- relaxed: If True, the function will not raise an exception if the model file is not an S3 URI.
- kwargs: The keyword arguments dictionary.
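The related helper isS3Uri presumably classifies a path by its URI scheme. A hypothetical sketch of that check (the actual dgbpy.hdf5.isS3Uri may apply additional validation):

```python
def isS3Uri(uri):
    """Hypothetical sketch: treat any string starting with the 's3://'
    scheme as an S3 URI. The real dgbpy.hdf5.isS3Uri may differ."""
    return isinstance(uri, str) and uri.startswith('s3://')

# an S3 URI is detected by its scheme; local paths are not
assert isS3Uri('s3://my-bucket/models/model.h5')
assert not isS3Uri('/local/path/model.h5')
```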
- dgbpy.hdf5.rm_tree(pth)
- dgbpy.hdf5.getLogDir(withtensorboard, examplenm, platform, basedir, clearlogs, args)
- dgbpy.hdf5.getOutdType(classinfo, hasunlabels)
- dgbpy.hdf5.getCubeLets_img2img_multitarget(infos, collection, groupnm)
- dgbpy.hdf5.getCubeLets(infos, collection, groupnm)
- dgbpy.hdf5.getDatasets_(infos, datasets, fortrain)
- dgbpy.hdf5.getDatasets(infos, dsetsel=None, train=True, validation=True)
- dgbpy.hdf5.validInfo(info)
- dgbpy.hdf5.getInfo(filenm, quick)
- dgbpy.hdf5.getAttribInfo(info, filenm)
- dgbpy.hdf5.getWellInfo(info, filenm)
- dgbpy.hdf5.getNrClasses(info)
- dgbpy.hdf5.arroneitemsize(dtype)
- dgbpy.hdf5.getTotalSize(info)
- dgbpy.hdf5.modeloutstr = 'Model.Output.'
- dgbpy.hdf5.modelIdxStr(idx)
- dgbpy.hdf5.odsetBoolValue(value)
- dgbpy.hdf5.addInfo(inpfile, plfnm, filenm, infos, clssnm)
- dgbpy.hdf5.getClassIndices(info, filternms=None)
- dgbpy.hdf5.getClassIndicesFromData(info)
- dgbpy.hdf5.getMainOutputs(info)
- dgbpy.hdf5.getOutputs(info)
- dgbpy.hdf5.getOutputNames(filenm, indices)
- dgbpy.hdf5.translateFnm(modfnm, modelfnm)