Gaussian Gated Linear Network

G-GLN, which stands for Gaussian Gated Linear Network, is a deep neural network that extends the GLN family to continuous targets: the GLN neuron is reformulated as a gated product of Gaussians, yielding a feed-forward network of data-dependent distributions in which every neuron directly predicts the target distribution.

What is G-GLN?

Gaussian Gated Linear Network, or G-GLN, is a deep neural network that extends the GLN family of neural networks. The GLN neuron is reformulated as a gated product of Gaussians. This formulation exploits the fact that exponential family densities are closed under multiplication, a property that has seen much use in Gaussian Process and related literature.
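The closure property can be checked numerically: multiplying Gaussian PDFs and renormalizing yields another Gaussian whose precision is the sum of the input precisions and whose mean is the precision-weighted average of the input means. A minimal sketch (function names are illustrative, not from the paper):

```python
import math

def gaussian_pdf(x, mu, sigma2):
    """Density of N(mu, sigma2) evaluated at x."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

def product_of_gaussians(factors):
    """Combine (mean, variance) factors: precisions add, and the mean is
    the precision-weighted average of the input means. The unnormalized
    product of the input PDFs is proportional to N(mu_star, sigma2_star)."""
    precision = sum(1.0 / s2 for _, s2 in factors)
    mu_star = sum(mu / s2 for mu, s2 in factors) / precision
    return mu_star, 1.0 / precision

mu_star, s2_star = product_of_gaussians([(0.0, 1.0), (2.0, 0.5)])
# Precisions 1 and 2 combine to 3; the mean is (0*1 + 2*2) / 3 = 4/3.
```

The product itself is an unnormalized density; renormalizing only changes a constant factor, which is why the combined mean and variance fully characterize the result.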

Precisely, a Gaussian Gated Linear Network (G-GLN) is a feed-forward network of data-dependent distributions. Each neuron computes the sufficient statistics of its associated probability density function (PDF) from those emitted by neurons in the preceding layer, using its active weights. The network consists of $L+1$ layers indexed by $i \in \{0, \ldots, L\}$, with $K_i$ neurons in layer $i$. The weight space for a neuron in layer $i$ is denoted $\mathcal{W}_i$; the subscript is needed because the dimension of the weight space depends on $K_{i-1}$. Each neuron/distribution is indexed by its position in the network when laid out on a grid.
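Concretely, a single neuron can be sketched as a weighted product of its input Gaussians: raising each input PDF to the power of its weight and multiplying gives another Gaussian whose natural parameters are linear in the weights. A hedged sketch (function names are illustrative; nonnegative weights are assumed here so the output precision stays positive):

```python
def ggln_neuron(weights, means, variances):
    """One G-GLN neuron as a weighted (geometric) product of Gaussians.

    Each input PDF N(means[j], variances[j]) is raised to the power
    weights[j]; the product is again Gaussian, with precision and
    precision-weighted mean that are linear in the weights. Weights are
    assumed nonnegative so the resulting precision is positive.
    """
    precision = sum(w / s2 for w, s2 in zip(weights, variances))
    mean = sum(w * m / s2 for w, m, s2 in zip(weights, means, variances)) / precision
    return mean, 1.0 / precision

# Two unit-variance base predictions combined with equal weight 0.5:
mean, var = ggln_neuron([0.5, 0.5], [0.0, 2.0], [1.0, 1.0])
# mean = (0.5*0 + 0.5*2) / (0.5 + 0.5) = 1.0; variance = 1 / (0.5 + 0.5) = 1.0
```

This is why each neuron "calculates the sufficient statistics for its associated PDF": the mean and variance above are exactly those statistics, computed from the previous layer's outputs.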

How does G-GLN work?

There are two types of input to neurons in the network. The first is the side information, which can be thought of as the input features, and is used to determine the weights used by each neuron via half-space gating. The second is the input to the neuron, which is the PDFs output by the previous layer, or in the case of layer 0, some provided base models.
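Half-space gating can be sketched as follows: each neuron holds a few fixed hyperplanes, and the side information's position relative to them selects which of the neuron's weight vectors is active. A minimal illustration (the random-sampling details are assumptions; the key point is that the gates are fixed at initialization and never trained):

```python
import random

def make_gates(num_halfspaces, side_dim, seed=0):
    """Sample fixed random gating hyperplanes and offsets (an assumption:
    gates are drawn once at initialization and held fixed)."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(side_dim)]
              for _ in range(num_halfspaces)]
    biases = [rng.gauss(0, 1) for _ in range(num_halfspaces)]
    return planes, biases

def gate_index(z, planes, biases):
    """Map side information z to an integer context: one bit per
    half-space test.  A neuron stores one weight vector per context,
    and this index selects the active one."""
    idx = 0
    for k, (p, b) in enumerate(zip(planes, biases)):
        if sum(pi * zi for pi, zi in zip(p, z)) > b:
            idx |= 1 << k
    return idx

planes, biases = make_gates(num_halfspaces=4, side_dim=3)
context = gate_index([1.0, -0.5, 2.0], planes, biases)  # an int in [0, 16)
```

Because the gates are deterministic functions of the side information, the same input always activates the same weight vector, which is what makes each neuron's learning problem convex.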

To apply a G-GLN in a supervised learning setting, we need to map the sequence of input-label pairs $(x_t, y_t)$ for $t = 1, 2, \ldots$ onto a sequence of (side information, base Gaussian PDFs, label) triplets $(z_t, (f_{0i})_i, y_t)$. The side information $z_t$ is set to the (potentially normalized) input features $x_t$. The Gaussian PDFs for layer 0 will generally include the base Gaussian PDFs necessary to span the target range, and optionally some base prediction PDFs that capture domain-specific knowledge.
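The mapping above can be sketched as a small helper; the particular base means used to span the target range are an illustrative assumption, not values from the paper:

```python
def to_triplet(x, y, base_means=(-2.0, 0.0, 2.0), base_var=1.0):
    """Map an input-label pair (x, y) onto the (side information,
    base Gaussian PDFs, label) triplet consumed by a G-GLN.

    Side information z is the (optionally normalized) input features;
    the layer-0 base PDFs are Gaussians whose means span the expected
    (normalized) target range.  Each PDF is a (mean, variance) pair.
    """
    z = x
    base_pdfs = [(m, base_var) for m in base_means]
    return z, base_pdfs, y

z, base_pdfs, y = to_triplet([0.1, -0.4], 1.5)
```

Domain-specific base predictors, if available, would simply be appended to `base_pdfs` as additional (mean, variance) pairs.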

As in the Bernoulli GLN, every neuron in the G-GLN directly predicts the target distribution; the Gaussian formulation simply swaps the Bernoulli output for a Gaussian one, relying on the closure of exponential family densities under multiplication noted above.

What are the features of G-GLN?

G-GLN is a deep neural network that has the following features:

  • G-GLN is a multi-variate extension to the GLN family of neural networks
  • The GLN neuron is reformulated as a gated product of Gaussians
  • G-GLN is a feed-forward network of data-dependent distributions, where every neuron in the G-GLN directly predicts the target distribution
  • G-GLN exploits the fact that exponential family densities are closed under multiplication, which is a property that has seen much use in Gaussian Process and related literature
  • There are two types of input to neurons in the network. The first is the side information, which can be thought of as the input features, and is used to determine the weights used by each neuron via half-space gating. The second is the input to the neuron, which is the PDFs output by the previous layer.
  • G-GLN can be used in a supervised learning setting by mapping the sequence of input-label pairs onto a triplet of (side information, base Gaussian PDFs, label)

What are the benefits of using G-GLN?

Using G-GLN has the following benefits:

  • G-GLN is a multi-variate extension to the GLN family of neural networks, which makes it powerful in handling complex data
  • G-GLN exploits the closure of exponential family densities under multiplication, a property that has seen much use in Gaussian Process and related literature
  • G-GLN is a feed-forward network of data-dependent distributions, so every neuron directly predicts the target distribution rather than an intermediate representation
  • To apply G-GLN in a supervised learning setting, we only need to map the sequence of input-label pairs onto (side information, base Gaussian PDFs, label) triplets, which simplifies the process

G-GLN is a powerful neural network that extends the GLN family of deep neural networks. It is a multi-variate extension that can handle complex data. Because exponential family densities are closed under multiplication, every neuron in a G-GLN can directly predict the target distribution. The network is a feed-forward arrangement of data-dependent distributions, consisting of $L+1$ layers with $K_i$ neurons in layer $i$. To use G-GLN in a supervised learning setting, we only need to map the sequence of input-label pairs to (side information, base Gaussian PDFs, label) triplets, which simplifies the process. G-GLN is a promising neural network that can be used in various applications.
