vindy.distributions package
Submodules
vindy.distributions.base_distribution module
- class BaseDistribution(*args: Any, **kwargs: Any)[source]
Bases:
Layer, ABC
Base class for probabilistic distributions used in variational encoders.
Subclasses should implement sampling and log-probability computations.
- Parameters:
args (Any)
kwargs (Any)
- Return type:
Any
- abstractmethod KL_divergence()[source]
Compute the KL divergence between two distributions.
- Returns:
Scalar KL divergence.
- Return type:
tf.Tensor
- abstractmethod call(inputs)[source]
Sample from the distribution.
- Parameters:
inputs (array-like) – Inputs used to parameterize the distribution (for instance mean/logvar).
- Returns:
Samples (and optionally auxiliary statistics).
- Return type:
tuple or tf.Tensor
- plot(mean, scale, ax=None)[source]
Plots the probability density function of the distribution.
- Parameters:
mean (float or array-like) – Mean/loc of the distribution.
scale (float or array-like) – Scale parameter.
ax (matplotlib.axes.Axes, optional) – Axis to draw on. If None, uses current axis.
- abstractmethod prob_density_fcn(x, mean, scale)[source]
Probability density function.
- Parameters:
x (array-like) – Points at which to evaluate the density.
mean (float or array-like) – Distribution mean/loc parameter.
scale (float or array-like) – Scale parameter (std, scale, etc.).
- Returns:
Density values at x.
- Return type:
array-like
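The abstract interface can be sketched in plain Python. This is a hand-written illustration, not the library's code: the real `BaseDistribution` also inherits from `tf.keras.layers.Layer`, which is omitted here, and the `PointMass` subclass is a hypothetical example used only to show how the abstract methods are filled in.

```python
from abc import ABC, abstractmethod


class BaseDistributionSketch(ABC):
    """Plain-Python sketch of the BaseDistribution interface (Keras Layer base omitted)."""

    @abstractmethod
    def KL_divergence(self):
        """Return a scalar KL divergence between two distributions."""

    @abstractmethod
    def call(self, inputs):
        """Sample from the distribution parameterized by `inputs`."""

    @abstractmethod
    def prob_density_fcn(self, x, mean, scale):
        """Evaluate the probability density at `x`."""


class PointMass(BaseDistributionSketch):
    """Degenerate illustrative subclass: sampling always returns the mean."""

    def KL_divergence(self):
        # Stub value; a point mass has no meaningful KL to a continuous prior.
        return 0.0

    def call(self, inputs):
        mean, _scale = inputs
        return mean

    def prob_density_fcn(self, x, mean, scale):
        # All mass at the mean.
        return 1.0 if x == mean else 0.0
```

Instantiating `BaseDistributionSketch` directly raises a `TypeError`, since all three methods are abstract; only concrete subclasses such as `Gaussian` or `Laplace` below can be constructed.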
vindy.distributions.gaussian module
- class Gaussian(*args: Any, **kwargs: Any)[source]
Bases:
BaseDistribution
Layer for a Gaussian distribution that can be used to perform the reparameterization trick.
This layer samples from a Gaussian distribution and computes the KL divergence between two Gaussian distributions. It uses (z_mean, z_log_var) to sample from a normal distribution with mean z_mean and log variance z_log_var; the log-variance parameterization ensures that the variance is positive.
Initialize the Gaussian distribution layer.
- Parameters:
prior_mean (float, optional) – Mean of the prior distribution (default is 0.0).
prior_variance (float, optional) – Variance of the prior distribution (default is 1.0).
**kwargs – Additional arguments passed to tensorflow.keras.layers.Layer.
- KL_divergence(mean, log_var)[source]
Compute the KL divergence between two univariate normal distributions.
Computes the KL divergence between two univariate normal distributions p(x) ~ N(mu1, sigma1) and q(x) ~ N(mu2, sigma2) following:
KL(p,q) = log(sigma2/sigma1) + (sigma1^2 + (mu1-mu2)^2) / (2*sigma2^2) - 1/2
- For a standard Gaussian q(x) = N(0, 1), the KL divergence simplifies to:
KL(p,q) = log(1/sigma1) + (sigma1^2 + mu1^2 - 1) / 2
- which, using the log variance log_var1 = log(sigma1^2), can be rewritten as:
KL(p,q) = -0.5 * (1 + log_var1 - mu1^2 - exp(log_var1))
- Parameters:
mean (tf.Tensor) – Mean of the first normal distribution.
log_var (tf.Tensor) – Log variance of the first normal distribution.
- Returns:
KL divergence value.
- Return type:
tf.Tensor
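A NumPy sketch of this closed-form expression, for the common case of a standard normal prior. The function name is illustrative; the layer itself operates on `tf.Tensor` inputs.

```python
import numpy as np


def gaussian_kl_to_standard_normal(mean, log_var):
    """KL( N(mean, exp(log_var)) || N(0, 1) ), summed over elements.

    Implements KL = -0.5 * (1 + log_var - mean^2 - exp(log_var)).
    """
    mean = np.asarray(mean, dtype=float)
    log_var = np.asarray(log_var, dtype=float)
    return -0.5 * np.sum(1.0 + log_var - mean**2 - np.exp(log_var))
```

The divergence vanishes exactly when the posterior equals the prior (`mean = 0`, `log_var = 0`) and grows as either parameter moves away from it.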
- __init__(prior_mean=0.0, prior_variance=1.0, **kwargs)[source]
Initialize the Gaussian distribution layer.
- Parameters:
prior_mean (float, optional) – Mean of the prior distribution (default is 0.0).
prior_variance (float, optional) – Variance of the prior distribution (default is 1.0).
**kwargs – Additional arguments passed to tensorflow.keras.layers.Layer.
- call(inputs)[source]
Draw a sample from a normal distribution using the reparameterization trick.
Sample y ~ N(z_mean, exp(z_log_var)) from a normal distribution with mean z_mean and log variance z_log_var using the reparameterization trick. Log variance is used to ensure numerical stability.
The sampling scale is the standard deviation measurement_noise_factor, related to the variance by variance = measurement_noise_factor^2.
- Sampling formula:
x = mu + measurement_noise_factor * epsilon, where epsilon ~ N(0, 1)
- Rewritten with the log variance log_var = log(measurement_noise_factor^2):
x = mu + exp(0.5 * log_var) * epsilon
- Parameters:
inputs (tuple of tf.Tensor) – Tuple containing (z_mean, z_log_var) where z_mean is the mean and z_log_var is the log variance.
- Returns:
Sampled values from the normal distribution.
- Return type:
tf.Tensor
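The sampling step can be sketched in NumPy as follows. This mirrors the documented formula only; the layer draws its noise with TensorFlow ops, and the explicit `rng` argument here is an addition for reproducibility.

```python
import numpy as np


def sample_gaussian(z_mean, z_log_var, rng=None):
    """Reparameterization trick: x = mu + exp(0.5 * log_var) * eps, eps ~ N(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    z_mean = np.asarray(z_mean, dtype=float)
    z_log_var = np.asarray(z_log_var, dtype=float)
    # Noise is drawn from a fixed N(0, 1), so gradients can flow
    # through z_mean and z_log_var deterministically.
    eps = rng.standard_normal(z_mean.shape)
    return z_mean + np.exp(0.5 * z_log_var) * eps


rng = np.random.default_rng(0)
samples = sample_gaussian(np.full(10000, 2.0), np.zeros(10000), rng)
# Empirical mean and std should be close to 2.0 and 1.0.
print(samples.mean(), samples.std())
```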
- log_var_to_deviation(log_var)[source]
Convert log variance to standard deviation.
- Converts the log variance log_var = log(measurement_noise_factor^2) to the standard deviation measurement_noise_factor following:
measurement_noise_factor = exp(0.5 * log_var) = (measurement_noise_factor^2)^0.5
- Parameters:
log_var (tf.Tensor) – Log variance.
- Returns:
Standard deviation.
- Return type:
tf.Tensor
- prob_density_fcn(x, mean, variance)[source]
Probability density function of the Gaussian distribution.
- Parameters:
x (array-like) – Points at which to evaluate the density.
mean (float or array-like) – Mean of the distribution.
variance (float or array-like) – Variance of the distribution.
- Returns:
Probability density at x.
- Return type:
array-like
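Written out, the density is the standard normal PDF parameterized by variance rather than standard deviation. A NumPy sketch (function name illustrative):

```python
import numpy as np


def gaussian_pdf(x, mean, variance):
    """Gaussian density: exp(-(x - mean)^2 / (2 * variance)) / sqrt(2 * pi * variance)."""
    x = np.asarray(x, dtype=float)
    return np.exp(-((x - mean) ** 2) / (2.0 * variance)) / np.sqrt(2.0 * np.pi * variance)
```

At `x = mean` with unit variance this gives the familiar peak value `1 / sqrt(2 * pi)`.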
vindy.distributions.laplace module
- class Laplace(*args: Any, **kwargs: Any)[source]
Bases:
BaseDistribution
Laplace distribution layer for the reparameterization trick.
This layer samples from a Laplace distribution using the reparameterization trick and computes KL divergence between two Laplace distributions.
Initialize Laplace distribution layer.
- Parameters:
prior_mean (float, default=0.0) – Mean (location) of the prior distribution.
prior_scale (float, default=1.0) – Scale factor of the prior distribution.
**kwargs – Additional keyword arguments passed to tf.keras.layers.Layer.
- KL_divergence(mean, log_scale)[source]
Compute KL divergence between two univariate Laplace distributions.
For p(x) ~ L(mu1, s1) and q(x) ~ L(mu2, s2), the KL divergence is: KL(p,q) = log(s2/s1) + (s1*exp(-|mu1-mu2|/s1) + |mu1-mu2|)/s2 - 1
See the supplemental material of Meyer, G. P. (2021). An Alternative Probabilistic Interpretation of the Huber Loss. CVPR 2021.
- Parameters:
mean (tf.Tensor) – Mean (location) of the first Laplace distribution.
log_scale (tf.Tensor) – Log scale of the first Laplace distribution.
- Returns:
KL divergence.
- Return type:
tf.Tensor
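A NumPy sketch of the closed-form expression above, for two fully specified Laplace distributions. The function name and signature are illustrative; the layer takes the first distribution's log scale and compares against its stored prior.

```python
import numpy as np


def laplace_kl(mu1, s1, mu2, s2):
    """KL( L(mu1, s1) || L(mu2, s2) ).

    Implements KL = log(s2/s1) + (s1 * exp(-|mu1 - mu2| / s1) + |mu1 - mu2|) / s2 - 1.
    """
    d = abs(mu1 - mu2)
    return np.log(s2 / s1) + (s1 * np.exp(-d / s1) + d) / s2 - 1.0
```

As expected, the divergence is zero when both distributions coincide and strictly positive otherwise, e.g. `laplace_kl(1.0, 1.0, 0.0, 1.0) = exp(-1)`.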
- __init__(prior_mean=0.0, prior_scale=1.0, **kwargs)[source]
Initialize Laplace distribution layer.
- Parameters:
prior_mean (float, default=0.0) – Mean (location) of the prior distribution.
prior_scale (float, default=1.0) – Scale factor of the prior distribution.
**kwargs – Additional keyword arguments passed to tf.keras.layers.Layer.
- call(inputs)[source]
Draw a sample from a Laplace distribution using the reparameterization trick.
Sample y ~ L(loc, exp(log_scale)) using the reparameterization trick: x = loc + exp(log_scale) * epsilon, where epsilon ~ L(0, 1)
- Parameters:
inputs (list of tf.Tensor) – [loc, log_scale] where loc is the location and log_scale is the log scale of the distribution.
- Returns:
Samples from the Laplace distribution.
- Return type:
tf.Tensor
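A NumPy sketch of this sampling step. Here the standard Laplace noise epsilon ~ L(0, 1) is drawn with NumPy's built-in generator; the layer's own noise source is a TensorFlow op, and the explicit `rng` argument is an addition for reproducibility.

```python
import numpy as np


def sample_laplace(loc, log_scale, rng=None):
    """Reparameterization: x = loc + exp(log_scale) * eps, eps ~ Laplace(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    loc = np.asarray(loc, dtype=float)
    log_scale = np.asarray(log_scale, dtype=float)
    eps = rng.laplace(loc=0.0, scale=1.0, size=loc.shape)
    return loc + np.exp(log_scale) * eps


rng = np.random.default_rng(1)
# Laplace(loc, b) has mean loc and variance 2 * b^2; here loc = -1, b = 0.5.
samples = sample_laplace(np.full(20000, -1.0), np.log(0.5) * np.ones(20000), rng)
print(samples.mean(), samples.var())
```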
- prob_density_fcn(x, loc, scale)[source]
Probability density function of the Laplace distribution.
- Parameters:
x (array-like) – Points at which to evaluate the density.
loc (float or array-like) – Location (mean) of the distribution.
scale (float or array-like) – Scale parameter of the distribution.
- Returns:
Probability density at x.
- Return type:
array-like
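Written out, this is the standard Laplace density. A NumPy sketch (function name illustrative):

```python
import numpy as np


def laplace_pdf(x, loc, scale):
    """Laplace density: exp(-|x - loc| / scale) / (2 * scale)."""
    x = np.asarray(x, dtype=float)
    return np.exp(-np.abs(x - loc) / scale) / (2.0 * scale)
```

At `x = loc` with unit scale the density peaks at `1 / (2 * scale) = 0.5`, and it decays exponentially (rather than quadratically, as the Gaussian does) away from the location.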