vindy.networks package

Submodules

vindy.networks.autoencoder_sindy module

class AutoencoderSindy(*args: Any, **kwargs: Any)[source]

Bases: BaseModel

Autoencoder with SINDy dynamics in the latent space.

This model learns a reduced-order representation using an autoencoder and identifies latent dynamics using a provided SINDy layer.

Parameters:
  • sindy_layer (SindyLayer) – Instance of a SINDy-compatible layer that computes latent dynamics and associated losses.

  • reduced_order (int) – Dimensionality of the latent space.

  • x (array-like) – Example input data used to infer shapes and build the model.

  • mu (array-like, optional) – Optional parameter/control inputs associated with the data.

  • scaling ({'individual', ...}, optional) – Method used to scale inputs before encoding.

  • layer_sizes (list of int, optional) – Hidden layer sizes for the encoder/decoder networks.

  • activation (str or callable, optional) – Activation function for encoder/decoder hidden layers.

  • second_order (bool, optional) – If True, the model treats dynamics as second-order.

  • l1 (float, optional) – L1 kernel regularization coefficient for the encoder/decoder weights.

  • l2 (float, optional) – L2 kernel regularization coefficient for the encoder/decoder weights.

  • l_rec (float, optional) – Weight of the reconstruction loss.

  • l_dz (float, optional) – Weight of the latent derivative loss.

  • l_dx (float, optional) – Weight of the state derivative loss.

  • l_int (float, optional) – Weight of the integration consistency loss.

  • dt (float, optional) – Time-step used for finite-difference approximations.

  • dtype (str, optional) – Floating point precision used by Keras backend.

  • **kwargs – Additional keyword arguments forwarded to the base model.

__init__(sindy_layer, reduced_order, x, mu=None, scaling='individual', layer_sizes=None, activation='selu', second_order=True, l1: float = 0, l2: float = 0, l_rec: float = 1, l_dz: float = 1, l_dx: float = 1, l_int: float = 0, dt=0, dtype='float32', **kwargs)[source]

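When derivative data are not supplied, dt supports finite-difference approximations of the time derivatives. A minimal numpy sketch of a central-difference scheme on a hypothetical trajectory (the exact scheme used internally is an assumption):

```python
import numpy as np

# Hypothetical trajectory sampled at a fixed time-step dt.
dt = 0.01
t = np.arange(0.0, 1.0, dt)
x = np.stack([np.sin(t), np.cos(t)], axis=1)   # (n_samples, n_features)

# Central differences (one-sided at the boundaries) approximate dx/dt.
dx_dt = np.gradient(x, dt, axis=0)
```

For the sinusoid above, dx_dt closely matches [cos(t), -sin(t)] away from the boundary points.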

assert_arguments(arguments)[source]

Validate initialization arguments.

Parameters:

arguments (dict) – Mapping of argument names to values (locals() from initializer).

build_decoder(z)[source]

Build a fully connected decoder with reversed layer sizes.

Parameters:

z (tf.Tensor) – Latent representation.

Returns:

Reconstructed output.

Return type:

tf.Tensor

build_encoder(x)[source]

Build a fully connected encoder with layers of specified sizes.

Parameters:

x (tf.Tensor or array-like) – Input to the autoencoder.

Returns:

Tuple containing (x_input, z) where x_input is the input layer and z is the latent representation.

Return type:

tuple of tf.Tensor

build_loss(inputs)[source]

Build and compute the loss for the autoencoder-SINDy model.

Splits input into state, its derivative and the parameters, performs the forward pass, calculates the loss, and updates the weights.

Parameters:

inputs (list of array-like) – List of input arrays containing states, derivatives, and parameters.

Returns:

Dictionary of computed losses.

Return type:

dict

build_model(x, mu)[source]

Assemble the encoder, decoder and SINDy sub-models.

Parameters:
  • x (array-like) – Example input used to determine shapes.

  • mu (array-like, optional) – Parameter inputs for the SINDy layer.

calc_latent_time_derivatives(x, dx_dt, dx_ddt=None, mean_or_sample='mean')[source]

Calculate time derivatives of latent variables given time derivatives of the inputs.

Parameters:
  • x (array-like) – Full state of shape (n_samples, n_features, ...).

  • dx_dt (array-like) – First time derivative of the full state.

  • dx_ddt (array-like, optional) – Second time derivative of the full state, if available.

  • mean_or_sample ({'mean', 'sample'}, optional) – Whether to use the mean or a sample from the encoder distribution.

Returns:

(z, dz_dt[, dz_ddt]) where the last item is returned only if dx_ddt is provided.

Return type:

tuple
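Conceptually, the latent derivatives follow from the chain rule: for an encoder z = f(x), dz/dt = J_f(x) dx/dt. A minimal numpy sketch with a hypothetical linear encoder (the real method differentiates the learned encoder network):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 6))       # hypothetical linear encoder: z = x @ W.T

x = rng.normal(size=(4, 6))       # full state, (n_samples, n_features)
dx_dt = rng.normal(size=(4, 6))   # first time derivative of the full state

z = x @ W.T                       # latent state, (n_samples, reduced_order)
dz_dt = dx_dt @ W.T               # chain rule: the Jacobian of a linear map is W
```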

compile(optimizer=tensorflow.keras.optimizers.Adam, loss=tensorflow.keras.losses.BinaryCrossentropy, sindy_optimizer=None, **kwargs)[source]

Compile the model and optionally configure a separate optimizer for the SINDy part.

Parameters:
  • optimizer (tf.keras.optimizers.Optimizer or compatible, optional) – Optimizer for the autoencoder parameters.

  • loss (tf.keras.losses.Loss or callable, optional) – Loss function for reconstruction.

  • sindy_optimizer (tf.keras.optimizers.Optimizer or compatible, optional) – Optimizer for the SINDy parameters. If None, the main optimizer will be used to build a SINDy optimizer with the same configuration.

create_loss_trackers()[source]

Initialize Keras metric objects for logging losses during training.

Adds trackers depending on which loss components are enabled.

decode(z)[source]

Decode latent variable to full state.

Parameters:

z (array-like of shape (n_samples, reduced_order)) – Latent variable.

Returns:

Reconstructed full state.

Return type:

array-like of shape (n_samples, n_features, n_dof_per_feature)

encode(x, training=False, mean_or_sample='mean')[source]

Encode full state to latent variables.

Parameters:
  • x (array-like) – Full state input of shape (n_samples, n_features, ...).

  • training (bool, optional) – If True, run under training mode.

  • mean_or_sample ({'mean', 'sample'}, optional) – Return either the posterior mean or a sampled latent vector.

Returns:

Latent representation with shape (n_samples, reduced_order).

Return type:

tf.Tensor

get_loss(x, dx_dt, mu, x_int=None, mu_int=None)[source]

Calculate loss for first order system.

Parameters:
  • x (array-like of shape (n_samples, n_features)) – Full state.

  • dx_dt (array-like of shape (n_samples, n_features)) – Time derivative of state.

  • mu (array-like of shape (n_samples, n_param)) – Control input.

  • x_int (array-like of shape (n_samples, n_features, n_integrationsteps), optional) – Full state at {t+1,…,t+n_integrationsteps}.

  • mu_int (array-like of shape (n_samples, n_param, n_integrationsteps), optional) – Control input at {t+1,…,t+n_integrationsteps}.

Returns:

Dictionary of individual losses (rec_loss, dz_loss, dx_loss, int_loss, loss).

Return type:

dict
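A plausible reading of the returned dictionary is that the 'loss' entry is the weighted sum of the individual components, with the l_rec, l_dz, l_dx and l_int weights from the constructor (an assumption based on the parameter descriptions, not the verified implementation):

```python
# Hypothetical component values and the constructor's default weights
# (l_rec=1, l_dz=1, l_dx=1, l_int=0).
components = {"rec_loss": 0.10, "dz_loss": 0.02, "dx_loss": 0.05, "int_loss": 0.30}
weights = {"rec_loss": 1.0, "dz_loss": 1.0, "dx_loss": 1.0, "int_loss": 0.0}

# Weighted sum; l_int=0 disables the integration consistency term.
losses = dict(components)
losses["loss"] = sum(weights[k] * components[k] for k in components)
```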

get_loss_2nd(x, dx_dt, dx_ddt, mu, x_int=None, dx_dt_int=None, mu_int=None)[source]

Calculate loss for second order system.

Parameters:
  • x (array-like of shape (n_samples, n_features)) – Full state.

  • dx_dt (array-like of shape (n_samples, n_features)) – Time derivative of state.

  • dx_ddt (array-like of shape (n_samples, n_features)) – Second time derivative of state.

  • mu (array-like of shape (n_samples, n_param)) – Control input.

  • x_int (array-like of shape (n_samples, n_features, n_integrationsteps), optional) – Full state at {t+1,…,t+n_integrationsteps}.

  • dx_dt_int (array-like of shape (n_samples, n_features, n_integrationsteps), optional) – Time derivative of state at {t+1,…,t+n_integrationsteps}.

  • mu_int (array-like of shape (n_samples, n_param, n_integrationsteps), optional) – Control input at {t+1,…,t+n_integrationsteps}.

Returns:

Dictionary of individual losses (rec_loss, dz_loss, dx_loss, int_loss, loss).

Return type:

dict

get_loss_rec(x)[source]

Calculate reconstruction loss of autoencoder.

Parameters:

x (array-like of shape (n_samples, n_features)) – Full state.

Returns:

Dictionary of losses including ‘rec’, ‘reg’, and ‘loss’.

Return type:

dict

get_trainable_weights()[source]

Return trainable variables for optimizer updates.

Returns:

List of trainable TensorFlow variables for encoder, decoder and SINDy.

Return type:

list

reconstruct(x, _=None)[source]

Reconstruct full state from inputs.

Parameters:
  • x (array-like) – Full state input of shape (n_samples, n_features, ...).

  • _ (optional) – Placeholder for API compatibility.

Returns:

Reconstructed full state with the original shape.

Return type:

array-like

static reconstruction_loss(x, x_pred)[source]

Calculate the reconstruction loss as mean squared error.

Parameters:
  • x (array-like) – Ground-truth inputs.

  • x_pred (array-like) – Reconstructed inputs.

Returns:

Mean squared error between x and x_pred.

Return type:

tf.Tensor
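As documented, this is a plain mean squared error; an equivalent numpy sketch (the actual method computes it with TensorFlow ops on tensors):

```python
import numpy as np

def reconstruction_loss(x, x_pred):
    """Mean squared error between ground truth and reconstruction."""
    return np.mean((np.asarray(x) - np.asarray(x_pred)) ** 2)

x = np.array([[0.0, 1.0], [2.0, 3.0]])
x_pred = np.array([[0.0, 1.0], [2.0, 4.0]])
# one entry off by 1 out of four entries -> MSE = 1/4
```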

vindy.networks.base_model module

class BaseModel(*args: Any, **kwargs: Any)[source]

Bases: Model, ABC

Parameters:
  • args (Any)

  • kwargs (Any)

Return type:

Any

assert_arguments(arguments)[source]

Validate that the arguments passed to the model are valid.

Parameters:

arguments (dict) – All arguments passed to the model.

build_sindy(z, mu)[source]

Build the model for the forward pass of the SINDy layer.

Parameters:
  • z (array-like of shape (n_samples, reduced_order)) – Latent state.

  • mu (array-like of shape (n_samples, n_params), optional) – Parameters.

Returns:

(z_sindy, z_dot) - SINDy input and predicted derivative.

Return type:

tuple of tf.Tensor

concatenate_sindy_input(z, dzdt=None, mu=None)[source]

Concatenate state, derivative, and parameters for SINDy layer input.

Parameters:
  • z (tf.Tensor) – Latent state.

  • dzdt (tf.Tensor, optional) – Time derivative of latent state.

  • mu (tf.Tensor, optional) – Parameters.

Returns:

Concatenated input tensor for SINDy layer.

Return type:

tf.Tensor

define_scaling(x)[source]

Define the scaling factor for given training data.

Parameters:

x (tf.Tensor) – Training data.

evaluate_sindy_layer(z, dz_dt, mu)[source]

Evaluate the SINDy layer.

Parameters:
  • z (tf.Tensor) – Latent variable.

  • dz_dt (tf.Tensor, optional) – Time derivative of the latent variable (only for second order models).

  • mu (tf.Tensor, optional) – Parameters.

Returns:

(sindy_pred, sindy_mean, sindy_log_var) - prediction and optional variational parameters.

Return type:

tuple

fit(x, y=None, validation_data=None, **kwargs)[source]

Wrapper for the fit function to flatten the data if necessary.

Parameters:
  • x (array-like) – Training data.

  • y (array-like, optional) – Target data.

  • validation_data (tuple or array-like, optional) – Validation data.

  • **kwargs – Additional keyword arguments passed to tf.keras.Model.fit.

Returns:

Training history object.

Return type:

History

flatten3d(x)[source]

static flatten_dummy(x)[source]

get_int_loss(inputs)

Integrate the identified dynamical system and compare to true dynamics.

Parameters:

inputs (list) – Input data containing state trajectories and parameters.

Returns:

Integration consistency loss.

Return type:

tf.Tensor

integrate(z0, t, mu=None, method='RK45', sindy_fcn=None)[source]

Integrate the model using scipy.integrate.solve_ivp.

Parameters:
  • z0 (array-like) – Initial state.

  • t (array-like) – Time points to evaluate the solution at.

  • mu (array-like or callable, optional) – Parameters to use in the model.

  • method (str, default='RK45') – Integration method to use.

  • sindy_fcn (callable, optional) – Custom SINDy function.

Returns:

Solution from scipy.integrate.solve_ivp.

Return type:

OdeResult
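Since integrate wraps scipy.integrate.solve_ivp, an equivalent standalone sketch for a hypothetical identified linear latent system z' = A z looks as follows:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical identified latent dynamics: a lightly damped oscillator.
A = np.array([[0.0, 1.0], [-1.0, -0.1]])

def rhs(t, z):
    return A @ z

z0 = np.array([1.0, 0.0])             # initial latent state
t_eval = np.linspace(0.0, 5.0, 50)    # time points to evaluate the solution at
sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), z0, method="RK45", t_eval=t_eval)
# sol.y has shape (reduced_order, len(t_eval))
```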

static load(aesindy, x=None, mu=None, mask=None, fixed_coeffs=None, path: str = None, kwargs_overwrite: dict = {})[source]

Load a model from the given path.

Parameters:
  • aesindy (class) – The model class to instantiate.

  • x (array-like, optional) – Data needed to initialize the model.

  • mu (array-like, optional) – Parameters used to create the model.

  • mask (array-like, optional) – Mask for coefficients.

  • fixed_coeffs (array-like, optional) – Fixed coefficient values.

  • path (str, optional) – Path to the saved model.

  • kwargs_overwrite (dict, default={}) – Additional kwargs to overwrite the config.

Returns:

Loaded model instance.

Return type:

BaseModel

print(z=None, mu=None, precision=3)[source]

rescale(x)[source]

save(path: str = None)[source]

Save the model weights and configuration to a given path.

Parameters:

path (str, optional) – Path to the folder where the model should be saved. If None, a default path with timestamp is created.

scale(x)[source]

sindy_coeffs()[source]

Return the coefficients of the SINDy model.

Returns:

SINDy coefficient matrix.

Return type:

array-like

split_inputs(inputs)[source]

Split the inputs into state, derivative, and parameters.

Parameters:

inputs (list) – Input data containing state and optional derivatives/parameters.

Returns:

(x, dx_dt, dx_ddt, x_int, dx_int, mu, mu_int) with unpacked components.

Return type:

tuple

test_step(inputs)

Perform one test/validation step.

Parameters:

inputs (list) – Input data for the validation step.

Returns:

Dictionary of loss values.

Return type:

dict

train_step(inputs)

Perform one training step.

Parameters:

inputs (list) – Input data for the training step.

Returns:

Dictionary of loss values.

Return type:

dict

unflatten3d(x)[source]

vis_modes(x, n_modes=3)[source]

Visualize the reconstruction of the reduced coefficients.

Parameters:
  • x (array-like) – Input data.

  • n_modes (int, default=3) – Number of modes to visualize.

vindy.networks.identification_network module

class IdentificationNetwork(*args: Any, **kwargs: Any)[source]

Bases: BaseModel

Identification network using a SINDy layer.

Parameters:
  • sindy_layer (SindyLayer) – SINDy-compatible layer used to model system dynamics.

  • x (array-like) – Example input data used to infer shapes.

  • mu (array-like, optional) – Optional control/parameter inputs.

  • scaling (str, optional) – Scaling strategy for inputs.

  • second_order (bool, optional) – Whether the underlying system is second-order.

  • l_dz (float, optional) – Weight for latent derivative loss.

  • l_int (float, optional) – Weight for integration consistency loss.

  • dt (float, optional) – Time-step for finite differences.

  • dtype (str, optional) – Keras float dtype.

  • **kwargs – Forwarded to the base model.

__init__(sindy_layer, x, mu=None, scaling='individual', second_order=True, l_dz: float = 1, l_int: float = 0, dt=0, dtype='float32', **kwargs)[source]


build_loss(inputs)[source]

Compute training loss from inputs and apply optimizer steps.

Parameters:

inputs (list) – List containing state, derivatives, and optional parameter/integration data.

Returns:

Dictionary with individual loss components and total loss under key ‘loss’.

Return type:

dict

build_model(z, mu)[source]

Build the SINDy model mapping latent variables to their derivatives.

Parameters:
  • z (array-like) – Example latent input used to infer shapes.

  • mu (array-like, optional) – Parameter/control inputs for SINDy.

create_loss_trackers()[source]

Initialize loss trackers used during training.

get_loss(z, dz_dt, mu, z_int=None, mu_int=None)[source]

Calculate loss for first order system.

Parameters:
  • z (array-like of shape (n_samples, n_features)) – Full state.

  • dz_dt (array-like of shape (n_samples, n_features)) – Time derivative of state.

  • mu (array-like of shape (n_samples, n_param)) – Control input.

  • z_int (array-like of shape (n_samples, n_features, n_integrationsteps), optional) – Full state at {t+1,…,t+n_integrationsteps}.

  • mu_int (array-like of shape (n_samples, n_param, n_integrationsteps), optional) – Control input at {t+1,…,t+n_integrationsteps}.

Returns:

Dictionary of individual losses including ‘loss’, ‘dz’, ‘int’, ‘reg’.

Return type:

dict

get_loss_2nd(z, dz_dt, dz_ddt, mu, z_int=None, dz_dt_int=None, mu_int=None)[source]

Calculate loss for second order system.

Parameters:
  • z (array-like of shape (n_samples, n_features)) – Full state.

  • dz_dt (array-like of shape (n_samples, n_features)) – Time derivative of state.

  • dz_ddt (array-like of shape (n_samples, n_features)) – Second time derivative of state.

  • mu (array-like of shape (n_samples, n_param)) – Control input.

  • z_int (array-like of shape (n_samples, n_features, n_integrationsteps), optional) – Full state at {t+1,…,t+n_integrationsteps}.

  • dz_dt_int (array-like of shape (n_samples, n_features, n_integrationsteps), optional) – Time derivative of state at {t+1,…,t+n_integrationsteps}.

  • mu_int (array-like of shape (n_samples, n_param, n_integrationsteps), optional) – Control input at {t+1,…,t+n_integrationsteps}.

Returns:

Dictionary of individual losses including ‘loss’, ‘dz’, ‘int’, ‘reg’.

Return type:

dict

get_trainable_weights()[source]

Return trainable variables for the identification-only model.

Returns:

List of trainable TensorFlow variables (SINDy weights).

Return type:

list

vindy.networks.sindy_network module

Backwards compatibility module.

This module provides the old SindyNetwork class name as an alias to the new IdentificationNetwork class. This allows old pickled models and scripts that reference the old class name to continue working.

Deprecated: use vindy.networks.identification_network.IdentificationNetwork instead.

vindy.networks.variational_autoencoder_sindy module

Backwards compatibility module.

This module provides the old VariationalAutoencoderSindy class name as an alias to the new VENI class. This allows old pickled models and scripts that reference the old class name to continue working.

Deprecated: use vindy.networks.veni.VENI instead.

vindy.networks.veni module

class VENI(*args: Any, **kwargs: Any)[source]

Bases: AutoencoderSindy

Variational Encoder Network for system identification.

The VENI model combines a variational autoencoder with a SINDy layer to discover low-dimensional dynamics from high-dimensional observations.

Parameters:
  • beta (float) – Weight of the KL divergence term in the loss function.

  • **kwargs – Additional keyword arguments forwarded to AutoencoderSindy.


build_encoder(x)[source]

Build the variational encoder network.

Parameters:

x (array-like) – Example input array used to infer input shapes.

Returns:

  • x_input (tf.keras.Input) – The encoder input tensor.

  • z (tf.Tensor) – Sampled latent variable from the learned Gaussian.

call(inputs, _=None)[source]

create_loss_trackers()[source]

Create loss trackers used during training.

Extends the base trackers by adding a tracker for the KL loss.

encode(x, training=False, mean_or_sample='mean')[source]

Encode input to latent space and return mean or sample.

Parameters:
  • x (array-like) – Full state observations with shape (n_samples, n_features, ...).

  • training (bool, optional) – If True, run in training mode (unused here).

  • mean_or_sample ({'mean', 'sample'}, optional) – Return the mean of the posterior or a sample from it.

Returns:

Latent representation (mean or sample) of shape (n_samples, reduced_order).

Return type:

tf.Tensor

kl_loss(mean, log_var)[source]

Compute the KL divergence between the learned Gaussian and the unit Gaussian.

Parameters:
  • mean (tf.Tensor) – Mean of the approximate posterior.

  • log_var (tf.Tensor) – Log-variance of the approximate posterior.

Returns:

Scalar KL divergence loss scaled by self.beta.

Return type:

tf.Tensor
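For a diagonal Gaussian posterior N(mean, diag(exp(log_var))) against the unit Gaussian prior, the KL divergence has the standard closed form -0.5 * sum(1 + log_var - mean^2 - exp(log_var)). A numpy sketch assuming the implementation follows this form (the beta scaling is documented above):

```python
import numpy as np

def kl_loss(mean, log_var, beta=1.0):
    """beta-scaled KL( N(mean, exp(log_var)) || N(0, I) ), averaged over samples."""
    kl = -0.5 * np.sum(1.0 + log_var - mean**2 - np.exp(log_var), axis=-1)
    return beta * np.mean(kl)

# The KL of the unit Gaussian against itself is zero.
zero = kl_loss(np.zeros((3, 2)), np.zeros((3, 2)))
```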

static reconstruction_loss(x, x_reconstruction)[source]

Reconstruction loss used for the variational autoencoder.

The implementation follows the log-MSE variant referenced in the VINDy paper.

Parameters:
  • x (array-like) – Original inputs.

  • x_reconstruction (array-like) – Reconstructed inputs from the decoder.

Returns:

Scalar reconstruction loss.

Return type:

tf.Tensor

Module contents