pyNNsMD.models package

Submodules

pyNNsMD.models.mlp_e module

Tensorflow keras model definitions for energy and gradient.

There are two definitions: the subclassed EnergyModel and a precomputed model to train energies. The subclassed Model will also predict gradients.

class pyNNsMD.models.mlp_e.EnergyModel(*args, **kwargs)[source]

Bases: keras.engine.training.Model

Subclassed tf.keras.model for energy/gradient which outputs both energy and gradient from coordinates.

It can also

call(data, training=False, **kwargs)[source]

Call the model output, forward pass.

Parameters
  • data (tf.tensor) – Coordinates.

  • training (bool, optional) – Training Mode. Defaults to False.

Returns

List of tf.tensor for predicted [energy,gradient]

Return type

y_pred (list)
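
A brief, hedged usage sketch of this forward pass. EnergyModel's own constructor arguments are not documented in this section, so the keywords below are assumptions borrowed from the other model classes of this package, and the coordinate shape (batch, atoms, 3) is illustrative.

```python
import numpy as np
import tensorflow as tf
from pyNNsMD.models.mlp_e import EnergyModel

# Assumed constructor keywords (not documented here); they mirror the
# signatures of the other model classes in this package.
model = EnergyModel(states=2, atoms=3)

# A batch of Cartesian coordinates with assumed shape (batch, atoms, 3).
coords = tf.constant(np.random.uniform(size=(4, 3, 3)), dtype=tf.float32)

# Forward pass: returns the list [energy, gradient] as documented above.
energy, gradient = model(coords, training=False)
```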

call_to_numpy_output(y)[source]
call_to_tensor_input(x)[source]
fit(**kwargs)[source]

Trains the model for a fixed number of epochs (iterations on a dataset).

Parameters
  • x

    Input data. It could be:

    • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

    • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

    • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

    • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

    • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

    • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

    A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

  • y – Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

  • batch_size – Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

  • epochs – Integer. Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

  • verbose – ‘auto’, 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ defaults to 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (eg, in a production environment).

  • callbacks – List of keras.callbacks.Callback instances. List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

  • validation_split – Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

  • validation_data

    Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

    • A tuple (x_val, y_val) of Numpy arrays or tensors.

    • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

    • A tf.data.Dataset.

    • A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

    validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

  • shuffle – Boolean (whether to shuffle the training data before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

  • class_weight – Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class.

  • sample_weight – Optional Numpy array of weights for the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

  • initial_epoch – Integer. Epoch at which to start training (useful for resuming a previous training run).

  • steps_per_epoch

    Integer or None. Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

    • steps_per_epoch=None is not supported.

  • validation_steps – Only relevant if validation_data is provided and is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

  • validation_batch_size – Integer or None. Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

  • validation_freq – Only relevant if validation data is provided. Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

  • max_queue_size – Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

  • workers – Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

  • use_multiprocessing – Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length-one tuple, effectively treating everything as 'x'. When yielding dicts, they should still adhere to the top-level tuple structure, e.g. ({"x0": x0, "x1": x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.
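
To make the tuple-with-dict convention concrete, here is a small generic Keras sketch (not specific to pyNNsMD) in which a tf.data.Dataset yields ({"x0": ..., "x1": ...}, y) and fit() unpacks features and targets from it:

```python
import numpy as np
import tensorflow as tf

# Two named inputs, matching the keys yielded by the dataset below.
x0 = tf.keras.Input(shape=(3,), name="x0")
x1 = tf.keras.Input(shape=(3,), name="x1")
out = tf.keras.layers.Dense(1)(tf.keras.layers.Concatenate()([x0, x1]))
model = tf.keras.Model(inputs=[x0, x1], outputs=out)
model.compile(optimizer="adam", loss="mse")

# The dataset yields a length-two tuple: a dict of features and the targets.
features = {"x0": np.random.rand(32, 3).astype("float32"),
            "x1": np.random.rand(32, 3).astype("float32")}
targets = np.random.rand(32, 1).astype("float32")
dataset = tf.data.Dataset.from_tensor_slices((features, targets)).batch(8)

model.fit(dataset, epochs=1)
```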

A notable unsupported data type is the namedtuple. The reason is that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple("example_tuple", ["y", "x"])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple("other_tuple", ["x", "y", "z"])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple, along with instructions to remedy the issue.

Returns

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises
  • RuntimeError – If the model was never compiled, or if model.fit is wrapped in tf.function.

  • ValueError – In case of mismatch between the provided input data and what the model expects or when the input data is empty.
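
As a minimal, generic illustration of the return value (a toy Keras model, not a pyNNsMD model): fit() returns a History object whose history attribute records the per-epoch loss and metric values.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

x = np.random.rand(100, 4).astype("float32")
y = np.random.rand(100, 1).astype("float32")

# validation_split reserves the last 20% of the samples for validation.
history = model.fit(x, y, batch_size=16, epochs=3,
                    validation_split=0.2, verbose=0)

# Per-epoch records of training and validation loss/metrics.
print(history.history.keys())  # e.g. ['loss', 'mae', 'val_loss', 'val_mae']
```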

get_config()[source]

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config is an empty dict. Optionally, raise NotImplementedError to allow Keras to attempt a default serialization.

Returns

Python dictionary containing the configuration of this Model.
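
The override pattern recommended above, extending the parent config with the subclass's own constructor arguments, looks roughly like this generic sketch (not the actual pyNNsMD implementation):

```python
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self, hidden_units=64, **kwargs):
        super().__init__(**kwargs)
        self.hidden_units = hidden_units
        self.dense = tf.keras.layers.Dense(hidden_units)

    def call(self, inputs):
        return self.dense(inputs)

    def get_config(self):
        # Start from the parent config and add this model's own arguments.
        config = super().get_config()
        config.update({"hidden_units": self.hidden_units})
        return config
```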

precompute_feature_in_chunks(x, batch_size, training=False)[source]
predict_chunk_feature(tf_x, training=False)[source]
save(filepath, **kwargs)[source]

Saves the model to Tensorflow SavedModel or a single HDF5 file.

Please see tf.keras.models.save_model or the [Serialization and Saving guide](https://keras.io/guides/serialization_and_saving/) for details.

Parameters
  • filepath – String, PathLike, path to SavedModel or H5 file to save the model.

  • overwrite – Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.

  • include_optimizer – If True, save optimizer’s state together.

  • save_format – Either ‘tf’ or ‘h5’, indicating whether to save the model to Tensorflow SavedModel or HDF5. Defaults to ‘tf’ in TF 2.X, and ‘h5’ in TF 1.X.

  • signatures – Signatures to save with the SavedModel. Applicable to the ‘tf’ format only. Please see the signatures argument in tf.saved_model.save for details.

  • options – (only applies to SavedModel format) tf.saved_model.SaveOptions object that specifies options for saving to SavedModel.

  • save_traces – (only applies to SavedModel format) When enabled, the SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
from keras.models import load_model

model.save('my_model.h5')  # creates a HDF5 file 'my_model.h5'
del model  # deletes the existing model

# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
```
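
For the TensorFlow SavedModel format (the TF 2.x default), the same call can target a directory instead of an .h5 file; a minimal generic sketch:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])

# Omitting the .h5 suffix writes a TensorFlow SavedModel directory;
# save_format='tf' makes the choice explicit.
model.save('my_model', save_format='tf')

# Reload the model from the SavedModel directory.
model = tf.keras.models.load_model('my_model')
```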

pyNNsMD.models.mlp_eg module

Tensorflow keras model definitions for energy and gradient.

There are two definitions: the subclassed EnergyGradientModel and a precomputed model to multiply with the feature derivative for training, which overrides the training/predict step.

class pyNNsMD.models.mlp_eg.EnergyGradientModel(*args, **kwargs)[source]

Bases: keras.engine.training.Model

Subclassed tf.keras.model for energy/gradient which outputs both energy and gradient from coordinates.

The model is supposed to be saved and exported for MD code.

__init__(states=1, atoms=2, invd_index=None, angle_index=None, dihed_index=None, nn_size=100, depth=3, activ='selu', use_reg_activ=None, use_reg_weight=None, use_reg_bias=None, use_dropout=False, dropout=0.01, normalization_mode=1, energy_only=False, precomputed_features=False, output_as_dict=False, model_module='mlp_e', **kwargs)[source]

Initialize the model.

Parameters
  • states

  • atoms

  • invd_index

  • angle_index

  • dihed_index

  • nn_size

  • depth

  • activ

  • use_reg_activ

  • use_reg_weight

  • use_reg_bias

  • use_dropout

  • dropout

  • **kwargs
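
A hedged construction sketch based on the signature above. The state and atom counts, the inverse-distance index list, and the network sizes are illustrative assumptions; in practice they come from the package's hyperparameter handling.

```python
import numpy as np
import tensorflow as tf
from pyNNsMD.models.mlp_eg import EnergyGradientModel

# Illustrative hyperparameters for a 3-atom, 2-state system; invd_index is
# assumed here to list atom pairs for inverse-distance features.
model = EnergyGradientModel(
    states=2,
    atoms=3,
    invd_index=[[0, 1], [0, 2], [1, 2]],
    nn_size=100,
    depth=3,
    activ='selu',
)

# Forward pass on a batch of coordinates with assumed shape (batch, atoms, 3);
# call() returns [energy, gradient] as documented below.
coords = tf.constant(np.random.uniform(size=(4, 3, 3)), dtype=tf.float32)
energy, gradient = model(coords, training=False)
```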

call(data, training=False, **kwargs)[source]

Call the model output, forward pass.

Parameters
  • data (tf.tensor) – Coordinates.

  • training (bool, optional) – Training Mode. Defaults to False.

Returns

List of tf.tensor for predicted [energy,gradient]

Return type

y_pred (list)

call_to_numpy_output(y)[source]
call_to_tensor_input(x)[source]
fit(**kwargs)[source]

Trains the model for a fixed number of epochs (iterations on a dataset).

Parameters
  • x

    Input data. It could be:

    • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

    • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

    • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

    • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

    • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

    • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

    A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

  • y – Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

  • batch_size – Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

  • epochs – Integer. Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

  • verbose – ‘auto’, 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ defaults to 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (eg, in a production environment).

  • callbacks – List of keras.callbacks.Callback instances. List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

  • validation_split – Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

  • validation_data

    Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

    • A tuple (x_val, y_val) of Numpy arrays or tensors.

    • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

    • A tf.data.Dataset.

    • A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

    validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

  • shuffle – Boolean (whether to shuffle the training data before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

  • class_weight – Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class.

  • sample_weight – Optional Numpy array of weights for the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

  • initial_epoch – Integer. Epoch at which to start training (useful for resuming a previous training run).

  • steps_per_epoch

    Integer or None. Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

    • steps_per_epoch=None is not supported.

  • validation_steps – Only relevant if validation_data is provided and is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

  • validation_batch_size – Integer or None. Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

  • validation_freq – Only relevant if validation data is provided. Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

  • max_queue_size – Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

  • workers – Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

  • use_multiprocessing – Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length-one tuple, effectively treating everything as 'x'. When yielding dicts, they should still adhere to the top-level tuple structure, e.g. ({"x0": x0, "x1": x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple("example_tuple", ["y", "x"])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple("other_tuple", ["x", "y", "z"])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple, along with instructions to remedy the issue.

Returns

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises
  • RuntimeError – If the model was never compiled, or if model.fit is wrapped in tf.function.

  • ValueError – In case of mismatch between the provided input data and what the model expects or when the input data is empty.
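
A generic illustration of per-sample loss weighting with a toy Keras model (not a pyNNsMD model): sample_weight is a flat array with one weight per training sample.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(64, 4).astype("float32")
y = np.random.rand(64, 1).astype("float32")

# One weight per training sample; here the second half of the data
# contributes twice as strongly to the loss.
weights = np.ones(64, dtype="float32")
weights[32:] = 2.0

model.fit(x, y, sample_weight=weights, batch_size=16, epochs=2, verbose=0)
```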

get_config()[source]

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config is an empty dict. Optionally, raise NotImplementedError to allow Keras to attempt a default serialization.

Returns

Python dictionary containing the configuration of this Model.

precompute_feature_in_chunks(x, batch_size, training=False)[source]
predict_chunk_feature(tf_x, training=False)[source]
save(filepath, **kwargs)[source]

Saves the model to Tensorflow SavedModel or a single HDF5 file.

Please see tf.keras.models.save_model or the [Serialization and Saving guide](https://keras.io/guides/serialization_and_saving/) for details.

Parameters
  • filepath – String, PathLike, path to SavedModel or H5 file to save the model.

  • overwrite – Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.

  • include_optimizer – If True, save optimizer’s state together.

  • save_format – Either ‘tf’ or ‘h5’, indicating whether to save the model to Tensorflow SavedModel or HDF5. Defaults to ‘tf’ in TF 2.X, and ‘h5’ in TF 1.X.

  • signatures – Signatures to save with the SavedModel. Applicable to the ‘tf’ format only. Please see the signatures argument in tf.saved_model.save for details.

  • options – (only applies to SavedModel format) tf.saved_model.SaveOptions object that specifies options for saving to SavedModel.

  • save_traces – (only applies to SavedModel format) When enabled, the SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
from keras.models import load_model

model.save('my_model.h5')  # creates a HDF5 file 'my_model.h5'
del model  # deletes the existing model

# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
```

pyNNsMD.models.mlp_g2 module

class pyNNsMD.models.mlp_g2.GradientModel2(*args, **kwargs)[source]

Bases: keras.engine.training.Model

Subclassed tf.keras.model for NACs which outputs NACs from coordinates.

The model is supposed to be saved and exported.

__init__(atoms, states, invd_index=None, angle_index=None, dihed_index=None, nn_size=100, depth=3, activ='selu', use_reg_activ=None, use_reg_weight=None, use_reg_bias=None, use_dropout=False, dropout=0.01, normalization_mode=1, precomputed_features=False, model_module='mlp_g2', **kwargs)[source]

Initialize a GradientModel2 with hyperparameters.

Parameters
  • hyper (dict) – Hyperparameters.

  • **kwargs (dict) – Additional keras.model parameters.
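
A hedged construction and prediction sketch following the signature above; the keyword values are illustrative assumptions, and call() returns the predicted NACs as documented below.

```python
import numpy as np
import tensorflow as tf
from pyNNsMD.models.mlp_g2 import GradientModel2

# Illustrative settings for a 3-atom system and 2 electronic states;
# invd_index is assumed to list atom pairs for inverse-distance features.
model = GradientModel2(
    atoms=3,
    states=2,
    invd_index=[[0, 1], [0, 2], [1, 2]],
    nn_size=100,
    depth=3,
)

# call() maps a batch of coordinates (assumed shape (batch, atoms, 3))
# to the predicted NAC tensor.
coords = tf.constant(np.random.uniform(size=(4, 3, 3)), dtype=tf.float32)
nacs = model(coords, training=False)
```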

call(data, training=False, **kwargs)[source]

Call the model output, forward pass.

Parameters
  • data (tf.tensor) – Coordinates.

  • training (bool, optional) – Training Mode. Defaults to False.

Returns

predicted NACs.

Return type

y_pred (tf.tensor)

call_to_numpy_output(y)[source]
call_to_tensor_input(x)[source]
fit(**kwargs)[source]

Trains the model for a fixed number of epochs (iterations on a dataset).

Parameters
  • x

    Input data. It could be:

    • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

    • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

    • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

    • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

    • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

    • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

    A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

  • y – Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

  • batch_size – Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

  • epochs – Integer. Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

  • verbose – ‘auto’, 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ defaults to 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (eg, in a production environment).

  • callbacks – List of keras.callbacks.Callback instances. List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

  • validation_split – Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

  • validation_data

    Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

    • A tuple (x_val, y_val) of Numpy arrays or tensors.

    • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

    • A tf.data.Dataset.

    • A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

    validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

  • shuffle – Boolean (whether to shuffle the training data before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

  • class_weight – Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class.

  • sample_weight – Optional Numpy array of weights for the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

  • initial_epoch – Integer. Epoch at which to start training (useful for resuming a previous training run).

  • steps_per_epoch

    Integer or None. Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

    • steps_per_epoch=None is not supported.

  • validation_steps – Only relevant if validation_data is provided and is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

  • validation_batch_size – Integer or None. Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

  • validation_freq – Only relevant if validation data is provided. Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

  • max_queue_size – Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

  • workers – Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

  • use_multiprocessing – Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length-one tuple, effectively treating everything as 'x'. When yielding dicts, they should still adhere to the top-level tuple structure, e.g. ({"x0": x0, "x1": x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple("example_tuple", ["y", "x"])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple("other_tuple", ["x", "y", "z"])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple, along with instructions to remedy the issue.

Returns

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises
  • RuntimeError – If the model was never compiled, or if model.fit is wrapped in tf.function.

  • ValueError – In case of mismatch between the provided input data and what the model expects or when the input data is empty.

get_config()[source]

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config is an empty dict. Optionally, raise NotImplementedError to allow Keras to attempt a default serialization.

Returns

Python dictionary containing the configuration of this Model.

precompute_feature_in_chunks(x, batch_size, training=False)[source]
predict_chunk_feature(tf_x, training=False)[source]
save(filepath, **kwargs)[source]

Saves the model to Tensorflow SavedModel or a single HDF5 file.

Please see tf.keras.models.save_model or the [Serialization and Saving guide](https://keras.io/guides/serialization_and_saving/) for details.

Parameters
  • filepath – String, PathLike, path to SavedModel or H5 file to save the model.

  • overwrite – Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.

  • include_optimizer – If True, save optimizer’s state together.

  • save_format – Either ‘tf’ or ‘h5’, indicating whether to save the model to Tensorflow SavedModel or HDF5. Defaults to ‘tf’ in TF 2.X, and ‘h5’ in TF 1.X.

  • signatures – Signatures to save with the SavedModel. Applicable to the ‘tf’ format only. Please see the signatures argument in tf.saved_model.save for details.

  • options – (only applies to SavedModel format) tf.saved_model.SaveOptions object that specifies options for saving to SavedModel.

  • save_traces – (only applies to SavedModel format) When enabled, the SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
from keras.models import load_model

model.save('my_model.h5')  # creates a HDF5 file 'my_model.h5'
del model  # deletes the existing model

# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
```
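
To illustrate the save_traces option described above with a generic subclassed Keras model (not a pyNNsMD class): disabling traces requires get_config() on the custom class, and the class must be supplied again when loading.

```python
import tensorflow as tf

# A minimal custom model implementing get_config(), the prerequisite
# for saving with save_traces=False.
class TinyModel(tf.keras.Model):
    def __init__(self, units=8, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.dense = tf.keras.layers.Dense(units)

    def call(self, inputs):
        return self.dense(inputs)

    def get_config(self):
        return {"units": self.units}

model = TinyModel()
model(tf.zeros((1, 4)))  # build the model once before saving

# SavedModel without layer traces: smaller and faster to serialize,
# but the custom class is needed again at load time.
model.save("tiny_savedmodel", save_traces=False)
reloaded = tf.keras.models.load_model(
    "tiny_savedmodel", custom_objects={"TinyModel": TinyModel}
)
```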

pyNNsMD.models.mlp_nac module

class pyNNsMD.models.mlp_nac.NACModel(*args, **kwargs)[source]

Bases: keras.engine.training.Model

Subclassed tf.keras.model for NACs which outputs NACs from coordinates.

The model is supposed to be saved and exported.

__init__(states, atoms, invd_index, angle_index=None, dihed_index=None, nn_size=100, depth=3, activ='selu', use_reg_activ=None, use_reg_weight=None, use_reg_bias=None, use_dropout=False, dropout=0.01, normalization_mode=1, precomputed_features=False, model_module='mlp_nac', **kwargs)[source]

Initialize a NACModel with hyperparameters.

Parameters
  • hyper (dict) – Hyperparameters.

  • **kwargs (dict) – Additional keras.model parameters.

call(data, training=False, **kwargs)[source]

Call the model output, forward pass.

Parameters
  • data (tf.tensor) – Coordinates.

  • training (bool, optional) – Training Mode. Defaults to False.

Returns

predicted NACs.

Return type

y_pred (tf.tensor)

call_to_numpy_output(y)[source]
call_to_tensor_input(x)[source]
fit(**kwargs)[source]

Trains the model for a fixed number of epochs (iterations on a dataset).

Parameters
  • x

    Input data. It could be:

    • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

    • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

    • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

    • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

    • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

    • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

    A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

  • y – Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

  • batch_size – Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

  • epochs – Integer. Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

  • verbose – ‘auto’, 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ defaults to 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (eg, in a production environment).

  • callbacks – List of keras.callbacks.Callback instances. List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

  • validation_split – Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

  • validation_data

    Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

    • A tuple (x_val, y_val) of Numpy arrays or tensors.

    • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

    • A tf.data.Dataset.

    • A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

    validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

  • shuffle – Boolean (whether to shuffle the training data before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

  • class_weight – Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class.

  • sample_weight – Optional Numpy array of weights for the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

  • initial_epoch – Integer. Epoch at which to start training (useful for resuming a previous training run).

  • steps_per_epoch

    Integer or None. Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

    • steps_per_epoch=None is not supported.

  • validation_steps – Only relevant if validation_data is provided and is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

  • validation_batch_size – Integer or None. Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

  • validation_freq – Only relevant if validation data is provided. Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

  • max_queue_size – Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

  • workers – Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

  • use_multiprocessing – Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length-one tuple, effectively treating everything as 'x'. When yielding dicts, they should still adhere to the top-level tuple structure, e.g. ({"x0": x0, "x1": x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple("example_tuple", ["y", "x"])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple("other_tuple", ["x", "y", "z"])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple, along with instructions to remedy the issue.

Returns

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises
  • RuntimeError – If the model was never compiled, or if model.fit is wrapped in tf.function.

  • ValueError – In case of mismatch between the provided input data and what the model expects or when the input data is empty.
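
A generic sketch of the callbacks argument with a toy Keras model (not a pyNNsMD model), using early stopping on the validation loss:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(200, 4).astype("float32")
y = np.random.rand(200, 1).astype("float32")

# Stop when the validation loss has not improved for 3 epochs and
# restore the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)

history = model.fit(x, y, validation_split=0.2, epochs=50,
                    callbacks=[early_stop], verbose=0)
```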

get_config()[source]

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config is an empty dict. Optionally, raise NotImplementedError to allow Keras to attempt a default serialization.

Returns

Python dictionary containing the configuration of this Model.
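A minimal sketch of the override pattern described above; the class name MyModel and its units argument are hypothetical:

```python
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self, units=32, **kwargs):
        super(MyModel, self).__init__(**kwargs)
        self.units = units
        self.dense = tf.keras.layers.Dense(units)

    def call(self, inputs):
        return self.dense(inputs)

    def get_config(self):
        # Continue to update the dict from the parent class rather than replacing it.
        config = super(MyModel, self).get_config()
        config.update({"units": self.units})
        return config
```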

precompute_feature_in_chunks(x, batch_size, training=False)[source]
predict_chunk_feature(tf_x, training=False)[source]
save(filepath, **kwargs)[source]

Saves the model to Tensorflow SavedModel or a single HDF5 file.

Please see tf.keras.models.save_model or the [Serialization and Saving guide](https://keras.io/guides/serialization_and_saving/) for details.

Parameters
  • filepath – String, PathLike, path to SavedModel or H5 file to save the model.

  • overwrite – Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.

  • include_optimizer – If True, save optimizer’s state together.

  • save_format – Either ‘tf’ or ‘h5’, indicating whether to save the model to Tensorflow SavedModel or HDF5. Defaults to ‘tf’ in TF 2.X, and ‘h5’ in TF 1.X.

  • signatures – Signatures to save with the SavedModel. Applicable to the ‘tf’ format only. Please see the signatures argument in tf.saved_model.save for details.

  • options – (only applies to SavedModel format) tf.saved_model.SaveOptions object that specifies options for saving to SavedModel.

  • save_traces – (only applies to SavedModel format) When enabled, the SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
from keras.models import load_model

model.save('my_model.h5')  # creates a HDF5 file 'my_model.h5'
del model  # deletes the existing model

# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
```
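A further hedged sketch, assuming the same in-memory model object as above, of saving in the TensorFlow SavedModel format instead of HDF5:

```python
import tensorflow as tf

# 'model' is assumed to be an existing, compiled model as in the example above.
model.save('my_model_dir', save_format='tf')  # writes a SavedModel directory

# Reloading; subclassed models such as EnergyModel may additionally require
# custom_objects or a registered class, depending on how they were defined.
reloaded = tf.keras.models.load_model('my_model_dir')
```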

pyNNsMD.models.mlp_nac2 module

class pyNNsMD.models.mlp_nac2.NACModel2(*args, **kwargs)[source]

Bases: keras.engine.training.Model

Subclassed tf.keras.model for NACs which outputs NACs from coordinates.

The model is supposed to be saved and exported.

__init__(atoms, states, invd_index=None, angle_index=None, dihed_index=None, nn_size=100, depth=3, activ='selu', use_reg_activ=None, use_reg_weight=None, use_reg_bias=None, use_dropout=False, dropout=0.01, normalization_mode=1, precomputed_features=False, model_module='mlp_nac2', **kwargs)[source]

Initialize a NACModel with hyperparameters.

Parameters
  • hyper (dict) – Hyperparameters.

  • **kwargs (dict) – Additional keras.model parameters.

Returns

tf.keras.model.
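A hedged construction sketch using the keyword arguments from the signature above; the numeric values and the interpretation of atoms and states (inferred from their names) are illustrative only:

```python
from pyNNsMD.models.mlp_nac2 import NACModel2

model = NACModel2(
    atoms=12,      # assumed: number of atoms per geometry
    states=2,      # assumed: number of electronic states
    nn_size=100,   # hidden layer width (default from the signature)
    depth=3,       # number of hidden layers (default from the signature)
    activ='selu',  # activation function (default from the signature)
)
```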

call(data, training=False, **kwargs)[source]

Call the model, forward pass.

Parameters
  • data (tf.tensor) – Coordinates.

  • training (bool, optional) – Training Mode. Defaults to False.

Returns

predicted NACs.

Return type

tf.tensor
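A hedged forward-pass sketch; the coordinate tensor shape (batch, atoms, 3) is an assumption based on the argument description above:

```python
import numpy as np
import tensorflow as tf

coords = tf.constant(np.random.rand(4, 12, 3), dtype=tf.float32)  # assumed shape
nacs = model(coords, training=False)  # 'model' is a NACModel2 instance
```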

call_to_numpy_output(y)[source]
call_to_tensor_input(x)[source]
fit(**kwargs)[source]

Trains the model for a fixed number of epochs (iterations on a dataset).

Parameters
  • x

    Input data. It could be:

    • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

    • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

    • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

    • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

    • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

    • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

    A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

  • y – Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

  • batch_size – Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

  • epochs – Integer. Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

  • verbose – ‘auto’, 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ defaults to 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g., in a production environment).

  • callbacks – List of keras.callbacks.Callback instances. List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

  • validation_split – Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

  • validation_data

    Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

    • A tuple (x_val, y_val) of Numpy arrays or tensors.

    • A tuple (x_val, y_val, val_sample_weights) of Numpy arrays.

    • A tf.data.Dataset.

    • A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

    validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

  • shuffle – Boolean (whether to shuffle the training data before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

  • class_weight – Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class.

  • sample_weight – Optional Numpy array of weights for the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

  • initial_epoch – Integer. Epoch at which to start training (useful for resuming a previous training run).

  • steps_per_epoch

    Integer or None. Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

    • steps_per_epoch=None is not supported.

  • validation_steps – Only relevant if validation_data is provided and is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

  • validation_batch_size – Integer or None. Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

  • validation_freq – Only relevant if validation data is provided. Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

  • max_queue_size – Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

  • workers – Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

  • use_multiprocessing – Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length-one tuple, effectively treating everything as 'x'. When yielding dicts, they should still adhere to the top-level tuple structure, e.g. ({"x0": x0, "x1": x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple("example_tuple", ["y", "x"])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple("other_tuple", ["x", "y", "z"])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result, the data processing code will simply raise a ValueError if it encounters a namedtuple (along with instructions to remedy the issue).

Returns

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises
  • RuntimeError –

    1. If the model was never compiled, or

    2. If model.fit is wrapped in tf.function.

  • ValueError – In case of mismatch between the provided input data and what the model expects or when the input data is empty.
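A hedged end-to-end usage sketch of fit on in-memory arrays; the array shapes, loss, and optimizer are illustrative choices, not prescribed by NACModel2:

```python
import numpy as np

x = np.random.rand(32, 12, 3).astype(np.float32)     # coordinates (assumed shape)
y = np.random.rand(32, 1, 12, 3).astype(np.float32)  # NAC targets (assumed shape)

model.compile(optimizer='adam', loss='mse')           # illustrative settings
history = model.fit(x, y, batch_size=8, epochs=5, validation_split=0.2)
print(history.history.keys())  # e.g. dict_keys(['loss', 'val_loss'])
```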

get_config()[source]

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config is an empty dict. Optionally, raise NotImplementedError to allow Keras to attempt a default serialization.

Returns

Python dictionary containing the configuration of this Model.

precompute_feature_in_chunks(x, batch_size, training=False)[source]
predict_chunk_feature(tf_x, training=False)[source]
save(filepath, **kwargs)[source]

Saves the model to Tensorflow SavedModel or a single HDF5 file.

Please see tf.keras.models.save_model or the [Serialization and Saving guide](https://keras.io/guides/serialization_and_saving/) for details.

Parameters
  • filepath – String, PathLike, path to SavedModel or H5 file to save the model.

  • overwrite – Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.

  • include_optimizer – If True, save optimizer’s state together.

  • save_format – Either ‘tf’ or ‘h5’, indicating whether to save the model to Tensorflow SavedModel or HDF5. Defaults to ‘tf’ in TF 2.X, and ‘h5’ in TF 1.X.

  • signatures – Signatures to save with the SavedModel. Applicable to the ‘tf’ format only. Please see the signatures argument in tf.saved_model.save for details.

  • options – (only applies to SavedModel format) tf.saved_model.SaveOptions object that specifies options for saving to SavedModel.

  • save_traces – (only applies to SavedModel format) When enabled, the SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
from keras.models import load_model

model.save('my_model.h5')  # creates a HDF5 file 'my_model.h5'
del model  # deletes the existing model

# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
```
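A hedged variant of the example above that disables function traces in the SavedModel format; as noted for save_traces, this requires custom layers/models to implement get_config():

```python
# 'model' is assumed to be an existing NACModel2 instance.
model.save('my_nac_model_dir', save_format='tf', save_traces=False)
```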

pyNNsMD.models.schnet_e module

Module contents