pyNNsMD.utils package

Submodules

pyNNsMD.utils.activ module

class pyNNsMD.utils.activ.leaky_softplus(*args, **kwargs)[source]

Bases: keras.engine.base_layer.Layer

Leaky soft-plus activation function similar to tf.nn.leaky_relu but smooth.

__init__(alpha=0.05, **kwargs)[source]

Initialize with optionally learnable parameter.

Parameters

alpha (float, optional) – Leak parameter alpha. Default is 0.05.

call(inputs, **kwargs)[source]

Compute leaky_softplus activation from inputs.

get_config()[source]

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Returns

Python dictionary.
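Example (illustrative usage sketch; assumes pyNNsMD and TensorFlow are installed, and the stated formula is an assumption to be checked against the source):

    import tensorflow as tf
    from pyNNsMD.utils.activ import leaky_softplus

    # Use the activation as a layer inside a small model.
    inputs = tf.keras.Input(shape=(16,))
    x = tf.keras.layers.Dense(32)(inputs)
    x = leaky_softplus(alpha=0.05)(x)  # smooth, "leaky" non-linearity
    outputs = tf.keras.layers.Dense(1)(x)
    model = tf.keras.Model(inputs, outputs)

    # Conceptually (assumption), the activation behaves like
    # alpha * x + (1 - alpha) * softplus(x).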

pyNNsMD.utils.activ.shifted_softplus(x)[source]

Soft-plus function from tf.keras shifted downwards.

Parameters

x (tf.tensor) – Activation input.

Returns

Activation.

Return type

tf.tensor
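Example (minimal numerical sketch; the downward shift by ln(2) is the common convention and should be verified against the source):

    import tensorflow as tf

    def shifted_softplus_sketch(x):
        # Common convention: softplus(x) - ln(2), so that f(0) = 0.
        return tf.math.softplus(x) - tf.math.log(2.0)

    x = tf.constant([-1.0, 0.0, 1.0])
    print(shifted_softplus_sketch(x))  # f(0) == 0 by construction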

pyNNsMD.utils.callbacks module

class pyNNsMD.utils.callbacks.EarlyStopping(max_time=inf, epochs=inf, learning_rate_start=0.001, epostep=1, loss_monitor='val_loss', min_delta=1e-05, patience=100, epomin=0, factor_lr=0.5, learning_rate_stop=1e-06, store_weights=False, restore_weights_on_lr_decay=False, use=None)[source]

Bases: keras.callbacks.Callback

This Callback does basic monitoring of the learning process.

Provides learning rate decay and early stopping with custom logic, as opposed to the generic callbacks provided by Keras by default. By André Eberhard, https://github.com/patchmeifyoucan

__init__(max_time=inf, epochs=inf, learning_rate_start=0.001, epostep=1, loss_monitor='val_loss', min_delta=1e-05, patience=100, epomin=0, factor_lr=0.5, learning_rate_stop=1e-06, store_weights=False, restore_weights_on_lr_decay=False, use=None)[source]

Initialize callback instance for early stopping.

Parameters
  • max_time (int) – Maximum training duration in minutes; training stops even if the number of epochs has not been reached yet.

  • epochs (int) – Number of epochs to train; training stops even if max_time has not been reached yet.

  • learning_rate_start (float) – The learning rate for the optimizer.

  • epostep (int) – Interval of epochs at which the monitored loss is checked.

  • loss_monitor (str) – The loss quantity to monitor for early stopping operations.

  • min_delta (float) – Minimum improvement to reach after ‘patience’ epochs of training.

  • patience (int) – Number of epochs to wait before decreasing the learning rate by a factor of ‘factor_lr’.

  • epomin (int) – Minimum number of epochs to run before decreasing the learning rate.

  • factor_lr (float) – Factor by which the learning rate is decreased: new_lr = old_lr * factor_lr.

  • learning_rate_stop (float) – Learning rate is not decreased any further after learning_rate_stop is reached.

  • store_weights (bool) – If True, stores parameters of best run so far when learning rate is decreased.

  • restore_weights_on_lr_decay (bool) – If True, restores parameters of best run so far when learning rate is decreased.

get_config()[source]
on_epoch_end(epoch, logs=None)[source]

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For training epoch, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

on_train_begin(logs=None)[source]

Called at the beginning of training.

Subclasses should override for any actions to run.

Parameters

logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.

on_train_end(logs=None)[source]

Called at the end of training.

Subclasses should override for any actions to run.

Parameters

logs – Dict. Currently the output of the last call to on_epoch_end() is passed to this argument for this method but that may change in the future.
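Example (illustrative usage sketch; model and data names are placeholders, keyword values are not recommendations):

    from pyNNsMD.utils.callbacks import EarlyStopping

    cb = EarlyStopping(
        loss_monitor="val_loss",
        patience=100,
        min_delta=1e-5,
        factor_lr=0.5,
        learning_rate_start=1e-3,
        learning_rate_stop=1e-6,
    )

    # `model`, `x_train`, `y_train`, `x_val`, `y_val` are placeholders.
    # model.fit(x_train, y_train,
    #           validation_data=(x_val, y_val),
    #           epochs=1000,
    #           callbacks=[cb])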

class pyNNsMD.utils.callbacks.LinearLearningRateScheduler(learning_rate_start: float = 0.001, learning_rate_stop: float = 1e-05, epo_min: int = 0, epo: int = 500, verbose: int = 0)[source]

Bases: keras.callbacks.LearningRateScheduler

Callback for linear change of the learning rate.

This class inherits from tf.keras.callbacks.LearningRateScheduler.

__init__(learning_rate_start: float = 0.001, learning_rate_stop: float = 1e-05, epo_min: int = 0, epo: int = 500, verbose: int = 0)[source]

Set the parameters for the learning rate scheduler.

Parameters
  • learning_rate_start (float) – Initial learning rate. Default is 1e-3.

  • learning_rate_stop (float) – End learning rate. Default is 1e-5.

  • epo_min (int) – Minimum number of epochs to keep the learning-rate constant before decrease. Default is 0.

  • epo (int) – Total number of epochs. Default is 500.

  • verbose (int) – Verbosity. Default is 0.

get_config()[source]
schedule_epoch_lr(epoch, lr)[source]
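Example (a sketch of the intended schedule shape, inferred from the parameter names rather than the exact implementation): the learning rate stays at learning_rate_start for epo_min epochs, then decreases linearly to learning_rate_stop at epoch epo.

    def linear_lr_sketch(epoch, lr_start=1e-3, lr_stop=1e-5, epo_min=0, epo=500):
        # Constant for the first epo_min epochs, then linear interpolation.
        if epoch < epo_min:
            return lr_start
        frac = (epoch - epo_min) / max(epo - epo_min, 1)
        return max(lr_start + (lr_stop - lr_start) * frac, lr_stop)

    print([round(linear_lr_sketch(e), 6) for e in (0, 250, 500)])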
class pyNNsMD.utils.callbacks.LinearWarmupExponentialLearningRateScheduler(lr_start: float, decay_gamma: float, epo_warmup: int = 10, lr_min: float = 0.0, verbose: int = 0)[source]

Bases: keras.callbacks.LearningRateScheduler

Callback for linear change of the learning rate.

This class inherits from tf.keras.callbacks.LearningRateScheduler.

__init__(lr_start: float, decay_gamma: float, epo_warmup: int = 10, lr_min: float = 0.0, verbose: int = 0)[source]

Set the parameters for the learning rate scheduler.

Parameters
  • lr_start (float) – Learning rate at the start of the exponential decay.

  • decay_gamma (float) – Gamma parameter in the exponential.

  • epo_warmup (int) – Number of warm-up steps. Default is 10.

  • lr_min (float) – Minimum learning rate allowed during the decay. Default is 0.0.

  • verbose (int) – Verbosity. Default is 0.

get_config()[source]
schedule_epoch_lr(epoch, lr)[source]

Reduce the learning rate.
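Example (a sketch of the schedule implied by the parameter names; an assumption, not the package's exact formula): the learning rate ramps up linearly to lr_start over epo_warmup epochs, then decays exponentially with rate decay_gamma, never falling below lr_min.

    import math

    def warmup_exp_lr_sketch(epoch, lr_start=1e-3, decay_gamma=0.01,
                             epo_warmup=10, lr_min=0.0):
        if epoch < epo_warmup:
            # Linear warm-up towards lr_start.
            return lr_start * (epoch + 1) / epo_warmup
        # Exponential decay after the warm-up phase.
        return max(lr_start * math.exp(-decay_gamma * (epoch - epo_warmup)), lr_min)

    print([warmup_exp_lr_sketch(e) for e in (0, 10, 100)])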

class pyNNsMD.utils.callbacks.StepWiseLearningScheduler(learning_rate_step: Optional[list] = None, epoch_step_reduction: Optional[list] = None, verbose: int = 0, use: Optional[bool] = None)[source]

Bases: keras.callbacks.LearningRateScheduler

Callback for step-wise change of the learning rate.

This class inherits from tf.keras.callbacks.LearningRateScheduler.

__init__(learning_rate_step: Optional[list] = None, epoch_step_reduction: Optional[list] = None, verbose: int = 0, use: Optional[bool] = None)[source]

Set the parameters for the learning rate scheduler.

Parameters
  • learning_rate_step (list, optional) – List of learning rates for each step. The default is [1e-3,1e-4,1e-5].

  • epoch_step_reduction (list, optional) – Number of epochs to hold the learning rate at each step. The default is [500, 1000, 5000].

get_config()[source]
schedule_epoch_lr(epoch, lr)[source]
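Example (illustrative usage sketch; assumes pyNNsMD is installed): each entry of learning_rate_step is held for the corresponding number of epochs in epoch_step_reduction.

    from pyNNsMD.utils.callbacks import StepWiseLearningScheduler

    # Hold 1e-3 for 500 epochs, then 1e-4 for 1000 epochs, then 1e-5 for 5000 epochs.
    cb = StepWiseLearningScheduler(
        learning_rate_step=[1e-3, 1e-4, 1e-5],
        epoch_step_reduction=[500, 1000, 5000],
    )
    # Pass to model.fit(..., callbacks=[cb]) as with any Keras callback.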

pyNNsMD.utils.data module

pyNNsMD.utils.data.load_hyper_file(file_name)[source]

Load hyper-parameters from file. File type can be ‘.yaml’, ‘.json’, ‘.pickle’ or ‘.py’.

Parameters

file_name (str) – Path or name of the file containing hyper-parameters.

Returns

Dictionary of hyper-parameters.

Return type

hyper (dict)
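Example (brief usage sketch; the file name is a placeholder):

    from pyNNsMD.utils.data import load_hyper_file

    # ".yaml", ".pickle" and ".py" files are also accepted.
    hyper = load_hyper_file("hyper.json")
    print(type(hyper))  # dict of hyper-parameters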

pyNNsMD.utils.data.load_json_file(filepath)[source]

Load json file.

pyNNsMD.utils.data.load_pickle_file(filepath)[source]

Load pickle file.

pyNNsMD.utils.data.load_yaml_file(fname)[source]

Load yaml file.

pyNNsMD.utils.data.parse_list_to_xyz_str(mol: list, comment: str = '')[source]

Convert a list of atoms and coordinates into an xyz-string.

Parameters
  • mol (list) – Tuple or list of [[‘C’, ‘H’, …], [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], …]].

  • comment (str) – Comment for the comment line in the xyz-string. Default is “”.

Returns

Information in xyz-string format.

Return type

str
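Example (usage sketch with an illustrative geometry; coordinates are placeholders):

    from pyNNsMD.utils.data import parse_list_to_xyz_str

    # A single water-like geometry as [symbols, coordinates].
    mol = [["O", "H", "H"],
           [[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]]]
    print(parse_list_to_xyz_str(mol, comment="example molecule"))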

pyNNsMD.utils.data.read_xyz_file(file_path, delimiter: Optional[str] = None, line_by_line=False)[source]

Read an xyz-file and parse it into a nested python list. Always returns a list of the geometries in the xyz-file.

Parameters
  • file_path (str) – Full path to xyz-file.

  • delimiter (str) – Delimiter for xyz separation. Default is None (whitespace).

  • line_by_line (bool) – Whether to read XYZ file line by line.

Returns

Nested coordinates from xyz-file.

Return type

list
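Example (usage sketch; the file path is a placeholder):

    from pyNNsMD.utils.data import read_xyz_file

    mols = read_xyz_file("geometries.xyz")
    print(len(mols))   # one entry per geometry in the file
    print(mols[0])     # nested list for the first geometry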

pyNNsMD.utils.data.save_json_file(outlist, filepath)[source]

Save to json file.

pyNNsMD.utils.data.save_pickle_file(outlist, filepath)[source]

Save to pickle file.

pyNNsMD.utils.data.save_yaml_file(outlist, fname)[source]

Save to yaml file.

pyNNsMD.utils.data.write_list_to_xyz_file(filepath: str, mol_list: list)[source]

Write a list of molecules (each a nested list of atoms and coordinates) to an xyz-file. Uses parse_list_to_xyz_str.

Parameters
  • filepath (str) – Full path to file including name.

  • mol_list (list) – List of molecules, each a pair of atoms and coordinates of the form [[[‘C’, ‘H’, …], [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], …]], …].
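Example (usage sketch; the geometry and file path are placeholders):

    from pyNNsMD.utils.data import write_list_to_xyz_file

    mol_list = [[["O", "H", "H"],
                 [[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]]]]
    write_list_to_xyz_file("out.xyz", mol_list)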

pyNNsMD.utils.loss module

class pyNNsMD.utils.loss.NACphaselessLoss(name='phaseless_loss', number_state=2, shape_nac=(1, 1), **kwargs)[source]

Bases: keras.losses.Loss

call(y_true, y_pred)[source]

Invokes the Loss instance.

Parameters
  • y_true – Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1]

  • y_pred – The predicted values. shape = [batch_size, d0, .. dN]

Returns

Loss values with the shape [batch_size, d0, .. dN-1].

get_config()[source]

Return the config dictionary for a Loss instance.
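Example (illustrative compile sketch; the state count and NAC shape are placeholders that must match the model output):

    from pyNNsMD.utils.loss import NACphaselessLoss

    loss = NACphaselessLoss(number_state=2, shape_nac=(12, 3))  # placeholder shape
    # model.compile(optimizer="adam", loss=loss)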

class pyNNsMD.utils.loss.ScaledMeanAbsoluteError(*args, **kwargs)[source]

Bases: keras.metrics.metrics.MeanAbsoluteError

get_config()[source]

Returns the serializable config of the metric.

reset_states()[source]
set_scale(scale)[source]
update_state(y_true, y_pred, sample_weight=None)[source]

Accumulates metric statistics.

For sparse categorical metrics, the shapes of y_true and y_pred are different.

Parameters
  • y_true – Ground truth label values. shape = [batch_size, d0, .. dN-1] or shape = [batch_size, d0, .. dN-1, 1].

  • y_pred – The predicted probability values. shape = [batch_size, d0, .. dN].

  • sample_weight – Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)).

Returns

Update op.
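Example (usage sketch; the interpretation of the scale as the factor used to standardize the targets is an assumption):

    import numpy as np
    from pyNNsMD.utils.loss import ScaledMeanAbsoluteError

    mae = ScaledMeanAbsoluteError(name="mean_absolute_error")
    mae.set_scale(np.array([2.5]))  # placeholder scaling factor
    # model.compile(optimizer="adam", loss="mse", metrics=[mae])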

class pyNNsMD.utils.loss.ZeroEmptyLoss(**kwargs)[source]

Bases: keras.losses.Loss

Empty constant zero loss.

call(y_true, y_pred)[source]
Returns

tf.constant(0)

pyNNsMD.utils.loss.get_lr_metric(optimizer)[source]

Obtain the learning rate from an optimizer.

Parameters

optimizer (tf.keras.optimizers.Optimizer) – Optimizer used for training.

Returns

learning rate.

Return type

float
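Example (usage sketch; whether the returned object is a plain float or a metric function should be checked against the source):

    import tensorflow as tf
    from pyNNsMD.utils.loss import get_lr_metric

    optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
    lr_metric = get_lr_metric(optimizer)
    # model.compile(optimizer=optimizer, loss="mse", metrics=[lr_metric])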

pyNNsMD.utils.loss.merge_hist(hist1, hist2)[source]

Merge two hist-dicts.

Parameters
  • hist1 (dict) – Hist dict from fit.

  • hist2 (dict) – Hist dict from fit.

Returns

hist1 + hist2.

Return type

outhist (dict)
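Example (sketch of the expected behaviour; the assumption is that values of matching keys are concatenated):

    from pyNNsMD.utils.loss import merge_hist

    hist1 = {"loss": [0.5, 0.4], "val_loss": [0.6, 0.5]}
    hist2 = {"loss": [0.3], "val_loss": [0.4]}
    merged = merge_hist(hist1, hist2)
    # Expected (assumption): {"loss": [0.5, 0.4, 0.3], "val_loss": [0.6, 0.5, 0.4]}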

pyNNsMD.utils.loss.r2_metric(y_true, y_pred)[source]

Compute r2 metric.

Parameters
  • y_true (tf.tensor) – True y-values.

  • y_pred (tf.tensor) – Predicted y-values.

Returns

r2 metric.

Return type

tf.tensor
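Example (a sketch of the common R² convention; whether the package uses exactly this formula should be checked against the source):

    import tensorflow as tf

    def r2_sketch(y_true, y_pred):
        # R^2 = 1 - SS_res / SS_tot, with a small epsilon for numerical stability.
        ss_res = tf.reduce_sum(tf.square(y_true - y_pred))
        ss_tot = tf.reduce_sum(tf.square(y_true - tf.reduce_mean(y_true)))
        return 1.0 - ss_res / (ss_tot + tf.keras.backend.epsilon())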

Module contents