What is the function of ReLU activation?

In this article we will take a look at the Rectified Linear Unit (ReLU), the most widely used activation function, and explain why it has become the default choice for neural networks. The article covers the most relevant topics concerning this function.

Brief Overview of Neural Networks

Like the brain, Artificial Neural Networks have "layers" with specific functions. Each layer consists of a number of neurons that are analogous to the biological neurons in the human body: they activate under particular conditions and produce a corresponding response to a stimulus. These neurons are interconnected across multiple layers, and the connections are powered by activation functions.

Forward propagation passes data from the input to the output, after which the loss function is computed from the output. Back-propagation, driven by an optimizer (most often gradient descent), then updates the weights to minimize the loss. This is repeated over many iterations until the loss settles at a (possibly local) minimum.
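
As a rough sketch of what this looks like in code, here is one forward pass and one gradient-descent weight update for a single ReLU neuron. The variable names, the squared-error loss, and the learning rate are illustrative choices, not taken from any particular library:

import numpy as np

# a minimal sketch: one training step for a single ReLU neuron
np.random.seed(0)
x = np.array([0.5, -1.2, 3.0])   # input features
y_true = 2.0                     # target value
w = np.random.randn(3)           # randomly initialized weights
b = 0.0                          # bias
lr = 0.01                        # learning rate

# forward propagation: weighted sum passed through ReLU
z = w @ x + b
y_pred = max(0.0, z)

# loss (squared error) and its gradient with respect to the prediction
loss = (y_pred - y_true) ** 2
dloss_dy = 2 * (y_pred - y_true)

# back-propagation through ReLU: the gradient is 1 for z > 0, else 0
dy_dz = 1.0 if z > 0 else 0.0
dloss_dw = dloss_dy * dy_dz * x
dloss_db = dloss_dy * dy_dz

# gradient-descent update of the weights and bias
w -= lr * dloss_dw
b -= lr * dloss_db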

What is an activation function?

An activation function is a simple mathematical function that maps any input to an output within a specific range. As the name suggests, it acts like a threshold switch: it turns the neuron on when the function's output crosses a set value.

In other words, activation functions turn neurons on and off. At each layer, a neuron receives the inputs multiplied by randomly initialized weights; the weighted sum is then passed through the activation to produce the layer's output.

The ReLU activation function introduces non-linearity, which lets the network learn complicated patterns in the data, such as those found in photos, text, video, or audio. Without an activation function, our model would behave like a linear regression model with limited learning capacity.
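
To see why, consider the small sketch below (the matrix shapes are arbitrary examples): stacking two linear layers with no activation in between is mathematically equivalent to a single linear layer, while inserting ReLU between them breaks that equivalence.

import numpy as np

# two stacked linear layers with no activation collapse into one linear layer
np.random.seed(1)
W1 = np.random.randn(4, 3)
W2 = np.random.randn(2, 4)
x = np.random.randn(3)

two_layers = W2 @ (W1 @ x)        # no activation in between
one_layer = (W2 @ W1) @ x         # a single equivalent linear layer
print(np.allclose(two_layers, one_layer))   # True

# inserting ReLU between the layers breaks this equivalence,
# allowing the network to model non-linear patterns
relu_out = W2 @ np.maximum(0.0, W1 @ x)
print(np.allclose(relu_out, one_layer))      # generally False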

What is ReLU?

The Rectified Linear Unit (ReLU) activation function returns positive inputs unchanged and returns 0 for any negative input.

It is widely employed in neural networks, especially Convolutional Neural Networks (CNNs) and multilayer perceptrons, and is the most frequently used activation function of all.

It is simpler and more effective than the sigmoid and tanh.

Mathematically, it is expressed as:

f(x) = max(0, x)

Graphically, the function is flat at zero for all negative inputs and rises as a straight line with slope 1 for positive inputs.

Implementing the ReLU function in Python

Using an if-else expression, we can write a basic ReLU function in Python as follows:

def relu(x):
    if x > 0:
        return x
    else:
        return 0

or, more compactly, by using the built-in max() function:

def relu(x):
    return max(0.0, x)

Values greater than zero are returned unchanged, whereas values of zero or less return 0.0.
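
For example, passing a few sample values through the function should give:

def relu(x):            # same definition as above
    return max(0.0, x)

print(relu(1.0))        # 1.0 (positive values pass through unchanged)
print(relu(1000.0))     # 1000.0
print(relu(0.0))        # 0.0
print(relu(-10.0))      # 0.0 (negative values are clipped to zero)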

We'll put our function to the test by plugging in a series of values from -10 to 10, applying our function to each, and plotting the results with pyplot from the matplotlib library.

from matplotlib import pyplot

# rectified linear function
def relu(x):
    return max(0.0, x)

# define a series of inputs from -10 to 10
inputs = [x for x in range(-10, 11)]
# apply the function to each input
outputs = [relu(x) for x in inputs]
# plot the inputs against the outputs
pyplot.plot(inputs, outputs)
pyplot.show()

The graph reveals that all negative values have been reset to zero while positive values have been returned unmodified. The input was a series of ever-increasing numbers, so the positive half of the plot is a straight line with a rising slope.

Why is ReLU not linear?

At first glance, the plot of the ReLU activation function looks like a straight line. However, learning the complex relationships hidden in training data requires a non-linear function.

For positive inputs ReLU behaves linearly, but because it maps every negative input to zero, the function as a whole is non-linear (piecewise linear).

Backpropagation with an optimizer such as SGD (Stochastic Gradient Descent) is simplified because the function behaves like a linear one for positive values, making the gradient easy to compute. This near-linearity also preserves many of the properties that make linear models easy to optimize with gradient-based techniques.

In addition, the ReLU activation function keeps the weighted sum sensitive, which helps prevent neuron saturation (i.e., when the output barely changes as the input changes).

The derivative of ReLU:

The derivative of the ReLU activation function is needed when updating the weights during the backpropagation of error. The slope of ReLU is 1 for positive values of x and 0 for negative values. The function is not differentiable at x = 0, but in practice this is handled by convention and is a harmless assumption.
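
A simple sketch of this derivative in Python might look like the following (treating the derivative at x = 0 as 0 is a convention, not a requirement; some implementations use 1 instead):

# derivative of ReLU as used during backpropagation
def relu_derivative(x):
    # at x == 0 the derivative is undefined; we return 0.0 by convention
    return 1.0 if x > 0 else 0.0

print(relu_derivative(5.0))   # 1.0
print(relu_derivative(-3.0))  # 0.0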

The benefits of ReLU:

Instead of employing sigmoid or tanh in the hidden layers, which can cause the dreaded "vanishing gradient" problem, we turn to ReLU. The vanishing gradient inhibits backpropagation through the network and prevents the lower layers from learning anything useful.

The sigmoid (logistic) function squashes its output into the range between 0 and 1, so it is best suited to the output layer for regression or binary classification problems. Both sigmoid and tanh saturate for large inputs and lose sensitivity to changes in their input.
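
As a rough numerical illustration of the vanishing gradient (the 20-layer depth below is an arbitrary choice): the derivative of the sigmoid never exceeds 0.25, so multiplying it across many layers drives the gradient toward zero, whereas the ReLU slope for positive inputs stays at 1.

import numpy as np

# sigmoid derivative: s(x) * (1 - s(x)), which peaks at 0.25 when x = 0
def sigmoid_derivative(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

layers = 20
print(sigmoid_derivative(0.0) ** layers)  # 0.25**20, about 9.1e-13 (vanishes)
print(1.0 ** layers)                      # ReLU slope for positive inputs stays 1.0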

Among ReLU's many benefits are:

The derivative stays fixed at 1 for positive inputs, which simplifies the computation required to train the model and minimize its error.

It has the property of representational sparsity: it can output a true zero value, so many activations are exactly zero at any given time (illustrated in the sketch after this list).

Activation functions that behave close to linearly are easier to optimize and more natural to work with, so ReLU performs well in supervised settings with large amounts of labeled data.
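
As a rough illustration of the sparsity point above (the random pre-activations are an arbitrary example), roughly half of the outputs of ReLU applied to zero-centered values come out as exact zeros:

import numpy as np

# ReLU outputs exact zeros for negative pre-activations,
# producing a sparse representation
np.random.seed(2)
pre_activations = np.random.randn(1000)         # roughly half are negative
activations = np.maximum(0.0, pre_activations)  # apply ReLU
sparsity = np.mean(activations == 0.0)
print(f"fraction of exactly-zero activations: {sparsity:.2f}")  # about 0.5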

Drawbacks of ReLU:

Exploding gradient: the gradient can accumulate and grow very large, leading to huge swings between successive weight updates. Convergence to the minimum then becomes unstable, and so does the learning process.

Dying ReLU: the problem of "dead neurons" arises when a neuron gets stuck on the negative side and outputs zero for every input. Because the gradient there is also zero, the neuron cannot recover. This typically happens when the bias is strongly negative or the learning rate is too high.
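
A small sketch of how such a dead neuron can arise (the bias value and input distribution below are arbitrary examples): a large negative bias keeps the pre-activation negative for every input, so the ReLU output and its gradient are always zero and the neuron's weights receive no updates.

import numpy as np

# a dead-neuron scenario: the pre-activation never becomes positive
np.random.seed(3)
w = np.random.randn(4)
b = -100.0                          # large negative bias
inputs = np.random.randn(1000, 4)   # a typical batch of inputs

z = inputs @ w + b                  # pre-activations
outputs = np.maximum(0.0, z)        # ReLU outputs
gradients = (z > 0).astype(float)   # ReLU gradient per example

print(outputs.max())     # 0.0 -- the neuron never fires
print(gradients.sum())   # 0.0 -- no gradient flows, so it cannot recover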


