Gated Linear Network

A Gated Linear Network (GLN) is a type of neural architecture that works differently from contemporary deep networks. Credit assignment in a GLN is local and distributed: every neuron directly predicts the target, so the network learns without backpropagation and without building up intermediate feature representations.

Structure of GLNs

GLNs are feedforward networks comprising multiple layers of gated geometric mixing neurons. Each neuron in a given layer produces a gated geometric mixture of the predictions output by the previous layer. The final layer consists of a single neuron, whose output is the network's prediction.
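
Concretely, a geometric mixture of Bernoulli predictions can be computed as a sigmoid applied to a weighted sum of logits. The NumPy sketch below shows a single, un-gated mixing step; the function and variable names are illustrative rather than taken from any reference implementation.

```python
import numpy as np

def geometric_mixture(weights: np.ndarray, p_prev: np.ndarray) -> float:
    """Geometric mixing of Bernoulli predictions: GEO_w(p) = sigmoid(w . logit(p))."""
    logits = np.log(p_prev) - np.log(1.0 - p_prev)             # logit of each input probability
    return float(1.0 / (1.0 + np.exp(-np.dot(weights, logits))))

# Example: mix three base predictions with equal weights.
print(geometric_mixture(np.ones(3), np.array([0.6, 0.7, 0.55])))
```

With all weights equal to 1, the mixture simply multiplies the odds of its inputs; the learned weights let a neuron up- or down-weight each incoming prediction.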

Training a GLN

A GLN can be trained in a supervised online learning setting, on triplets (side information, base predictions, target) derived from input-label pairs. Learning is performed with Online Gradient Descent (OGD) applied locally at each neuron: because every neuron's loss is convex in its own weights, no backpropagation is required. Each neuron can nonetheless model nonlinear functions, thanks to data-dependent gating combined with online convex optimization.
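
Because each neuron optimizes its own convex loss, the weight update reduces to a local gradient step. Below is a minimal sketch of one such step, assuming the logistic-loss gradient of geometric mixing; the learning rate and weight-clipping range are illustrative choices, not prescribed values.

```python
import numpy as np

def local_ogd_step(w, p_prev, target, lr=0.01, w_clip=10.0):
    """One local OGD step for a single geometric-mixing neuron.

    The gradient of -log GEO_w(target; p_prev) with respect to the active
    weights is (prediction - target) * logit(p_prev).
    """
    logits = np.log(p_prev / (1.0 - p_prev))
    pred = 1.0 / (1.0 + np.exp(-np.dot(w, logits)))     # the neuron's own prediction
    grad = (pred - target) * logits                      # purely local gradient, no backprop
    w_new = np.clip(w - lr * grad, -w_clip, w_clip)      # keep weights in a bounded hypercube
    return w_new, pred
```

Every neuron in the network performs the same kind of update against the shared target, which is what makes credit assignment local and distributed.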

Types of Input for GLNs

There are two types of inputs for GLNs: the side information and the input to the neuron itself. Side information can be thought of as the input features, while the input to a neuron is typically the vector of predictions output by the previous layer. At layer 0, base predictions can be supplied, which are themselves a function of the input features.

Each neuron also receives a constant bias prediction, which improves empirical performance and is essential for the universality guarantees. As noted above, the weights of each neuron are learned locally with Online Gradient Descent (OGD).
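
Putting these pieces together, a single neuron's forward pass might look like the sketch below: the constant bias prediction is prepended to the previous layer's outputs, the side information selects the active weight vector via halfspace gating (a common gating choice for GLNs), and the result is a geometric mixture. All names, shapes, and constants here are illustrative assumptions.

```python
import numpy as np

def gated_neuron_predict(z, p_prev, weight_table, hyperplanes, thresholds,
                         bias_prediction=0.8):
    """Forward pass of one gated geometric-mixing neuron (illustrative sketch)."""
    p_in = np.concatenate(([bias_prediction], p_prev))    # constant bias prediction + layer inputs
    bits = (hyperplanes @ z > thresholds).astype(int)     # data-dependent halfspace gating
    ctx = int(bits @ (2 ** np.arange(len(bits))))         # bit pattern -> context index
    w = weight_table[ctx]                                  # active weight vector for this context
    logits = np.log(p_in / (1.0 - p_in))
    return 1.0 / (1.0 + np.exp(-np.dot(w, logits)))        # gated geometric mixture

# Usage with illustrative shapes: 3 hyperplanes -> 8 contexts, 4 inputs plus the bias.
rng = np.random.default_rng(0)
z = rng.normal(size=5)                                      # side information (input features)
p_prev = np.array([0.6, 0.4, 0.7, 0.55])                    # previous layer's predictions
hyperplanes, thresholds = rng.normal(size=(3, 5)), np.zeros(3)
weight_table = np.full((8, 5), 1.0 / 5)                     # one weight vector per context
print(gated_neuron_predict(z, p_prev, weight_table, hyperplanes, thresholds))
```

Only the weight vector selected by the context is read and updated on a given example, which is why the loss below is stated in terms of the active weights.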

Loss function in GLNs

The loss function of each neuron is convex in its active weights. Given the side information, each neuron suffers a loss on a per-example basis:

$$\ell_{t}(u) := -\log \left(\operatorname{GEO}_{u}\left(x_{t} ; p_{i-1}\right)\right)$$

where $u := w_{i k c_{i k}(z)}$ is the active weight vector of neuron $(i, k)$, selected by its context function $c_{ik}$ applied to the side information $z$, and $p_{i-1}$ is the vector of predictions output by the previous layer.
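
Writing geometric mixing as a sigmoid of a weighted sum of logits makes the convexity claim concrete. The following derivation is a sketch based on that standard identity:

$$\operatorname{GEO}_{u}\left(x_{t} ; p_{i-1}\right)=\sigma\left(u^{\top} \operatorname{logit}\left(p_{i-1}\right)\right)^{x_{t}}\left(1-\sigma\left(u^{\top} \operatorname{logit}\left(p_{i-1}\right)\right)\right)^{1-x_{t}}$$

so the gradient of the per-example loss with respect to the active weights is

$$\nabla_{u}\, \ell_{t}(u)=\left(\sigma\left(u^{\top} \operatorname{logit}\left(p_{i-1}\right)\right)-x_{t}\right) \operatorname{logit}\left(p_{i-1}\right)$$

This is the familiar logistic-regression gradient, so the loss is convex in $u$, and the local OGD update sketched earlier follows directly.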

GLNs are a distinctive type of neural architecture that offers rapid online learning without the need for backpropagation. Although GLNs forgo the ability to learn feature representations, their local and distributed credit assignment mechanism compensates for this. As machine learning advances, GLNs have the potential to be applied in areas such as image recognition and speech analysis.
