vindy.distributions package
Submodules
vindy.distributions.base_distribution module
- class BaseDistribution(*args, **kwargs)[source]
Bases: Layer, ABC
Base class for distribution layers implementing a call function to sample from the distribution and a KL divergence function to compute the KL divergence between two distributions.
- abstract KL_divergence()[source]
Compute the KL divergence between two distributions.
- Returns:
KL divergence.
- Return type:
tf.Tensor
- abstract call(inputs)[source]
Sample from the distribution.
- Parameters:
inputs (tuple) – Inputs to the distribution.
- Returns:
Sampled values from the distribution.
- Return type:
tf.Tensor
- plot(mean, scale, ax=None)[source]
Plot the probability density function of the distribution.
- Parameters:
mean (float) – Mean of the distribution.
scale (float) – Scale of the distribution.
ax (matplotlib.axes.Axes, optional) – Matplotlib axes object to plot on. If None, a new axis is created.
- Return type:
None
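A typical use is to visualize the density for a given mean and scale. A minimal sketch, assuming a concrete subclass such as Gaussian can be imported from vindy.distributions and constructed without arguments (both are assumptions):

import matplotlib.pyplot as plt
from vindy.distributions import Gaussian  # assumed import path

dist = Gaussian()  # assumes no required constructor arguments
fig, ax = plt.subplots()
dist.plot(mean=0.0, scale=1.0, ax=ax)  # draw the pdf on the given axes
plt.show()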
- abstract prob_density_fcn(x, mean, scale)[source]
Compute the probability density function.
- Parameters:
x (float) – Input value.
mean (float) – Mean of the distribution.
scale (float) – Scale of the distribution.
- Returns:
Probability density function value.
- Return type:
float
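To illustrate the interface, the following is a minimal sketch of a concrete subclass; everything beyond the three abstract method names (call, KL_divergence, prob_density_fcn) is an assumption about how a subclass might fill them in:

import math
import tensorflow as tf
from vindy.distributions import BaseDistribution  # assumed import path

class ExampleGaussian(BaseDistribution):  # hypothetical example subclass
    def call(self, inputs):
        # Reparameterization trick: z = mean + sigma * epsilon, epsilon ~ N(0, 1)
        mean, log_var = inputs
        epsilon = tf.random.normal(tf.shape(mean))
        return mean + tf.exp(0.5 * log_var) * epsilon

    def KL_divergence(self, mean, log_var):
        # Closed-form KL(N(mean, exp(log_var)) || N(0, 1))
        return -0.5 * (1.0 + log_var - tf.square(mean) - tf.exp(log_var))

    def prob_density_fcn(self, x, mean, scale):
        # Normal density with standard deviation `scale`
        return math.exp(-0.5 * ((x - mean) / scale) ** 2) / (scale * math.sqrt(2.0 * math.pi))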
vindy.distributions.gaussian module
- class Gaussian(*args, **kwargs)[source]
Bases: BaseDistribution
Layer for a Gaussian distribution that can be used to perform the reparameterization trick by sampling from a unit Gaussian and to compute the KL divergence between two Gaussian distributions. Takes (z_mean, z_log_var) and samples from a normal distribution with mean z_mean and log variance z_log_var (the log variance is used to ensure that the variance is positive).
- KL_divergence(mean, log_var)[source]
Compute the KL divergence between two univariate normal distributions p(x) ~ N(mu1, sigma1) and q(x) ~ N(mu2, sigma2) following
KL(p, q) = log(sigma2/sigma1) + (sigma1^2 + (mu1 - mu2)^2) / (2*sigma2^2) - 1/2
In the case of a unit Gaussian q(x) = N(0, 1), the KL divergence simplifies to
KL(p, q) = log(1/sigma1) + (sigma1^2 + mu1^2 - 1) / 2
which can be rewritten using the log variance log_var1 = log(sigma1^2) as
KL(p, q) = -0.5 * (1 + log_var1 - mu1^2 - exp(log_var1))
- Parameters:
mean (float) – Mean of the first normal distribution.
log_var (float) – Log variance of the first normal distribution.
- Returns:
KL divergence.
- Return type:
tf.Tensor
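As a sanity check, the closed-form expressions above agree numerically; a minimal NumPy sketch with arbitrarily chosen values:

import numpy as np

mean, log_var = 0.5, np.log(0.8)  # p ~ N(0.5, 0.8), q ~ N(0, 1)
kl = -0.5 * (1.0 + log_var - mean**2 - np.exp(log_var))
sigma = np.exp(0.5 * log_var)
kl_alt = np.log(1.0 / sigma) + (sigma**2 + mean**2 - 1.0) / 2.0
assert np.isclose(kl, kl_alt)  # both ≈ 0.137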
- call(inputs)[source]
Draw a sample y ~ N(z_mean, exp(z_log_var)) from a normal distribution with mean z_mean and log variance z_log_var using the reparameterization trick (the log variance is used to ensure numerical stability).
- Parameters:
inputs (tuple) – A tuple containing z_mean and z_log_var.
- Returns:
Sampled values from the Gaussian distribution.
- Return type:
tf.Tensor
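Conceptually the sampling step amounts to the following (a sketch in TensorFlow; the layer's internal implementation may differ in detail):

import tensorflow as tf

z_mean = tf.constant([[0.0, 1.0]])
z_log_var = tf.constant([[0.0, -2.0]])
epsilon = tf.random.normal(tf.shape(z_mean))    # epsilon ~ N(0, 1)
z = z_mean + tf.exp(0.5 * z_log_var) * epsilon  # z ~ N(z_mean, exp(z_log_var))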
- log_var_to_deviation(log_var)[source]
Convert the log variance log_var = log(sigma^2) to the standard deviation sigma following sigma = exp(0.5 * log(sigma^2)) = (sigma^2)^0.5.
- Parameters:
log_var (float) – Log variance.
- Returns:
Standard deviation.
- Return type:
tf.Tensor
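Numerically the conversion is a single exponential; an illustrative NumPy sketch:

import numpy as np

log_var = np.log(0.25)         # variance sigma^2 = 0.25
sigma = np.exp(0.5 * log_var)  # sigma = (sigma^2)^0.5 = 0.5
assert np.isclose(sigma, 0.5)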
- prob_density_fcn(x, mean, variance)[source]
Compute the probability density function.
- Parameters:
x (float) – Input value.
mean (float) – Mean of the distribution.
variance (float) – Variance of the distribution.
- Returns:
Probability density function value.
- Return type:
float
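This is the standard normal density parameterized by mean and variance; a minimal NumPy sketch:

import numpy as np

def gaussian_pdf(x, mean, variance):
    # p(x) = exp(-(x - mean)^2 / (2*variance)) / sqrt(2*pi*variance)
    return np.exp(-((x - mean) ** 2) / (2.0 * variance)) / np.sqrt(2.0 * np.pi * variance)

gaussian_pdf(0.0, 0.0, 1.0)  # ≈ 0.3989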
vindy.distributions.laplace module
- class Laplace(*args, **kwargs)[source]
Bases: BaseDistribution
Layer for a Laplace distribution that can be used to perform the reparameterization trick by sampling from a unit Laplace distribution and to compute the KL divergence between two Laplace distributions.
- log_scale_to_deviation(log_scale)
Convert the log scale to standard deviation.
- KL_divergence(mean, log_scale)[source]
Compute the KL divergence between two univariate Laplace distributions p(x) ~ L(mu1, s1) and q(x) ~ L(mu2, s2) following
KL(p, q) = log(s2/s1) + (s1*exp(-|mu1 - mu2|/s1) + |mu1 - mu2|)/s2 - 1
See the supplemental material of Meyer, G. P. (2021). An alternative probabilistic interpretation of the Huber loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 5261-5269). https://openaccess.thecvf.com/content/CVPR2021/supplemental/Meyer_An_Alternative_Probabilistic_CVPR_2021_supplemental.pdf
- Parameters:
mean (float) – Mean (location) of the first Laplace distribution.
log_scale (float) – Log scale of the first Laplace distribution.
- Returns:
KL divergence.
- Return type:
tf.Tensor
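For the special case q(x) = L(0, 1), the formula above reduces to KL(p, q) = log(1/s1) + s1*exp(-|mu1|/s1) + |mu1| - 1; a minimal NumPy sketch with arbitrarily chosen values:

import numpy as np

mean, log_scale = 0.5, np.log(0.8)  # p ~ L(0.5, 0.8), q ~ L(0, 1)
s1 = np.exp(log_scale)
kl = -log_scale + s1 * np.exp(-np.abs(mean) / s1) + np.abs(mean) - 1.0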
- call(inputs)[source]
Draw a sample y ~ L(loc, exp(log_scale)) from a Laplace distribution with location loc and log scale log_scale using the reparameterization trick (the log scale is used to ensure numerical stability): y = loc + scale * epsilon with epsilon ~ L(0, 1).
- Parameters:
inputs (tuple) – A tuple containing loc and log_scale.
- Returns:
Sampled values from the Laplace distribution.
- Return type:
tf.Tensor
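A unit Laplace sample epsilon ~ L(0, 1) can be drawn via inverse transform sampling; the layer's actual sampling mechanism may differ, so this is only a sketch:

import tensorflow as tf

loc = tf.constant([[0.0, 1.0]])
log_scale = tf.constant([[0.0, -1.0]])
u = tf.random.uniform(tf.shape(loc), minval=-0.5, maxval=0.5)
epsilon = -tf.sign(u) * tf.math.log(1.0 - 2.0 * tf.abs(u))  # epsilon ~ L(0, 1)
y = loc + tf.exp(log_scale) * epsilon                       # y ~ L(loc, exp(log_scale))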
- prob_density_fcn(x, loc, scale)[source]
Compute the probability density function of the Laplace distribution.
- Parameters:
x (float) – Input value.
loc (float) – Mean (location) of the distribution.
scale (float) – Scale of the distribution.
- Returns:
Probability density function value.
- Return type:
float
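The Laplace density has the standard closed form p(x) = exp(-|x - loc| / scale) / (2*scale); a minimal NumPy sketch:

import numpy as np

def laplace_pdf(x, loc, scale):
    # p(x) = exp(-|x - loc| / scale) / (2*scale)
    return np.exp(-np.abs(x - loc) / scale) / (2.0 * scale)

laplace_pdf(0.0, 0.0, 1.0)  # = 0.5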