## Question:

In my Artificial Intelligence class, the teacher covered neural networks: they have layers, such as *input*, *hidden*, and *output*, and the neurons that make them up.

However, he mentioned the term *bias*, which seems to me to be a neuron. This term left me even more confused about neural networks, and I would like to have it clarified.

# My question

What is the bias in a neural network?

## Answer:

Simply put, the bias is an input fixed at `1`, associated with a weight `b` on each neuron. Its function is to increase or decrease the net input, which translates the activation function along the input axis.
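To see this translation effect concretely, here is a minimal sketch (the step activation and the specific weight values are my own illustrative assumptions, not from the class): changing the bias weight `b` shifts the point on the `x` axis where the activation "turns on".

```python
# Sketch: the bias shifts the activation function along the input axis.
def step(z):
    """A simple step activation: fires (1) when the net input is >= 0."""
    return 1 if z >= 0 else 0

def neuron(x, w, b):
    # Net input = w*x + b*1, where b is the weight on the constant bias input 1.
    return step(w * x + b)

# Without a bias (b = 0), the step flips at x = 0;
# with b = -2, the same neuron flips at x = 2 instead.
print([neuron(x, 1, 0) for x in [-1, 0, 1, 2, 3]])   # [0, 1, 1, 1, 1]
print([neuron(x, 1, -2) for x in [-1, 0, 1, 2, 3]])  # [0, 0, 0, 1, 1]
```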

Example:

To fit a set of points to a line, we use `y = a*x + b*1`, where `a` and `b` are constants: `x` is an input associated with the weight `a`, and the weight `b` is associated with the constant input `1`.

Now imagine that the network's activation function is linear: the neuron then computes exactly this line, with the bias weight `b` acting as the intercept.
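The linear case can be sketched as follows (a minimal illustration; the function name and the weight values `a = 3`, `b = 5` are my own assumptions):

```python
# Minimal sketch: a single neuron with a linear (identity) activation.
# The bias is just a weight b on a constant input of 1.
def neuron(x, a, b):
    inputs = [x, 1]        # the trailing 1 is the bias input
    weights = [a, b]
    # Net input = a*x + b*1; with a linear activation, the output IS the net input.
    return sum(w * i for w, i in zip(weights, inputs))

# With a = 3 and b = 5 the neuron computes the line y = 3*x + 5:
print(neuron(2, 3, 5))  # 3*2 + 5*1 = 11
```

Here the bias plays exactly the role of the intercept `b` in the line equation: without it, the neuron could only represent lines through the origin.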