**What is a Neural Network?**

Deep learning is often known by the name of neural networks. Let's understand today what exactly a neural network is and what exactly deep learning means.

**Biological Inspiration**

The idea of a neural network comes from the human brain. The human brain is a very complex neural network, where you can think of a neuron as a cell in your brain that makes a particular decision when it is given an input.

As you can see in this diagram, the dendrites receive electrical signals, the nucleus does some computation, and the result is sent on to the next neuron.

Let’s take this idea from the brain and try to write a mathematical form of it.

As you can see in this diagram, we have three inputs (x1, x2, x3), and every input goes to the cell body. The cell body has two parts: the first is the summation function and the second is the activation function.

**For Free, Demo classes Call: 020-71173143**

**Registration Link: Click Here!**

You can also see some weights (w1, w2, w3). Let's understand them like this: imagine you have an exam next week and your teacher says that most of the questions will come from the revision exercise. Naturally, you will give more weightage to the revision questions and less weightage to the other questions.

These weights work the same way: each weight is multiplied by its input, so that more important inputs count for more and less important inputs count for less.

The summation function simply adds up all the inputs multiplied by their respective weights. So the total input to the activation function is,

**Total Input = w1x1 + w2x2 + w3x3**

Now, this total input goes to the activation function. Let me define a function first: a function is something that takes an input, processes it, and gives you an output. So the activation function takes this total input, processes it, and gives an output.

Let’s denote the Activation function as F

so, F(Total Input)=Output

**F(w1x1 + w2x2 + w3x3) = Output**
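The single neuron above can be sketched in a few lines of Python. This is a minimal illustration; the input values, the weights, and the choice of sigmoid as the activation function are all made up for the example.

```python
import math

def sigmoid(z):
    """One possible activation function F."""
    return 1 / (1 + math.exp(-z))

def neuron(inputs, weights, activation):
    """A single neuron: weighted summation followed by an activation function."""
    total_input = sum(w * x for w, x in zip(weights, inputs))  # w1x1 + w2x2 + w3x3
    return activation(total_input)

# Made-up inputs and weights, just to see the neuron fire.
output = neuron([1.0, 2.0, 3.0], [0.5, -0.25, 0.1], sigmoid)
print(round(output, 4))  # total input is 0.3, so output is sigmoid(0.3) ≈ 0.5744
```

Any other activation function could be plugged in the same way; only the `activation` argument changes.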

Here we considered just one neuron; our brains have billions of neurons. If we connect many neurons together in one system, that is called a Neural Network.

As you can see in the image, all the inputs go to all the neurons in one layer, and then the outputs of those neurons become the inputs of the neurons in the next layer.

In 1986, Geoffrey Hinton (often called the father of modern deep learning), together with David Rumelhart and Ronald Williams, popularized the Backpropagation algorithm, which was a game changer in deep learning history.

Backpropagation is a multi-epoch training procedure in which we leverage the chain rule (with memoization of intermediate results) to compute gradients and update the weights.

I will write a separate blog on Backpropagation.
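As a tiny preview, here is one gradient-descent step for a single sigmoid neuron. This sketch assumes a squared-error loss, and all the numbers (inputs, weights, target, learning rate) are made up for illustration.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

x = [1.0, 2.0]    # made-up inputs
w = [0.5, -0.5]   # made-up current weights
y = 1.0           # made-up target output
lr = 0.1          # made-up learning rate

# Forward pass: prediction of the neuron.
y_hat = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

# Backward pass via the chain rule, for loss L = (y_hat - y)^2:
# dL/dw_i = dL/dy_hat * dy_hat/dz * dz/dw_i
grad = [2 * (y_hat - y) * y_hat * (1 - y_hat) * xi for xi in x]

# Gradient-descent update: step against the gradient.
w = [wi - lr * g for wi, g in zip(w, grad)]
```

Repeating this forward/backward cycle over the whole dataset, epoch after epoch, is essentially what training a neural network means.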

You can also understand the neural network of the brain through this diagram. The diagram has 3 segments; let's understand them one by one.


The 1st segment is the brain of a newborn baby. You can see there are very few connections between the neurons: it is a very simple structure with no knowledge and no experience, so hardly any connections have formed. Basically, where there is learning there is experience, and when there is an experience, neurons get connected in the brain.

The 3rd segment is the brain of a 2-year-old child, where you can see there are a lot of connections between neurons. There has been a lot of learning and a lot of experience, so dense connections have formed in the brain. For example, if a person learns to walk, a connection forms; if a person learns a new skill, a connection forms. Also, you can see some connections are very thick, which means that the experience is very strong. Mathematically, the weight of a thicker connection will be higher.

**We quantify the thickness of a connection as its weight.**

There is no diagram or segment here showing the brain of a 90-year-old person, but you can imagine that there would again be very few connections: as age increases, the mind weakens and starts losing connections.

**Learning = Connections between neurons.**

**Logistic Regression Algorithm as a Single Neuron Model**

We know that in logistic regression, we want to find a hyperplane that maximizes the sum of signed distances.

Then we apply the sigmoid function to tame the effect of outliers, and we write it like this:

yi^ = sigmoid(w1x1 + w2x2 + w3x3 + w4x4), where yi^ is the prediction, x1, x2, x3, x4 are the inputs, and w1, w2, w3, w4 are the respective weights.

yi^ can be considered as the output of a logistic regression model.

Now let’s compare it with the single neuron model where,

**F(w1x1 + w2x2 + w3x3 + w4x4) = Output**

In logistic regression we have,

**sigmoid(w1x1 + w2x2 + w3x3 + w4x4) = Output**

So by comparison, we can clearly say that F is the sigmoid function here: logistic regression is nothing but a single neuron model whose activation function is sigmoid.
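The comparison can be made concrete with a short sketch. The `single_neuron` helper and all the input/weight values below are hypothetical, chosen just to show that the neuron's output with a sigmoid activation is exactly a logistic regression prediction.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def single_neuron(inputs, weights, activation):
    """F(w1x1 + w2x2 + ... ) = Output."""
    return activation(sum(w * x for w, x in zip(weights, inputs)))

# Made-up inputs and weights for four features.
x = [2.0, 1.0, 0.5, 3.0]
w = [0.1, -0.2, 0.4, 0.05]

# With F = sigmoid, this is precisely the logistic regression prediction yi^.
y_hat = single_neuron(x, w, sigmoid)
print(y_hat)  # a probability strictly between 0 and 1
```

The output is always between 0 and 1, which is why logistic regression outputs can be read as probabilities.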

**Linear Regression Algorithm as a Single Neuron Model**

Now, we all know how linear regression works: it simply says that we want to find a hyperplane that minimizes the sum of squared errors.

**yi^ = w1x1 + w2x2 + w3x3 + w4x4**

where yi^ is the output or the prediction,

x1,x2,x3,x4 are the inputs,

w1,w2,w3,w4, are the weights

Now let's compare it with the single neuron model,

**F(w1x1 + w2x2 + w3x3 + w4x4) = Output**

here in Linear regression,

**w1x1 + w2x2 + w3x3 + w4x4 = Output**

So here F is the identity function, where output = input.

So we can say linear regression is also a single neuron model: it takes an input, passes it through an activation function (the identity), and gives you an output.
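As a sketch, the same kind of single neuron with an identity activation reproduces the linear regression prediction. The helper function and the input/weight values are made up for illustration.

```python
def identity(z):
    """F(z) = z: the identity activation used by linear regression."""
    return z

def single_neuron(inputs, weights, activation):
    """F(w1x1 + w2x2 + ... ) = Output."""
    return activation(sum(w * x for w, x in zip(weights, inputs)))

# Made-up inputs and weights for four features.
x = [2.0, 1.0, 0.5, 3.0]
w = [0.1, -0.2, 0.4, 0.05]

# With F = identity, the output is just the weighted sum: 0.2 - 0.2 + 0.2 + 0.15 = 0.35
y_hat = single_neuron(x, w, identity)
print(y_hat)  # 0.35
```

Only the activation function differs between this and the logistic regression case; the summation part of the neuron is identical.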


**Perceptron**

The Perceptron is also a single neuron model; the only thing that changes here is the activation function. The activation function in the Perceptron is

**F = 1 if w.xi > 0**

**and**

**F = 0 if w.xi ≤ 0**

Whenever I write w.x, it means the dot product between the weight vector and the input vector.

**For Logistic Regression, the activation function was → Sigmoid Function**

**For Linear Regression, the activation function was → Identity Function**

**For the Perceptron, the activation function is**

**F = 1 if w.xi > 0**

**and**

**F = 0 if w.xi ≤ 0**
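A Perceptron's step activation is easy to sketch in code. The example inputs and weights below are made up to show one firing and one non-firing case.

```python
def perceptron(inputs, weights):
    """Single neuron with a step activation: 1 if w.x > 0, else 0."""
    z = sum(w * x for w, x in zip(weights, inputs))  # dot product w.x
    return 1 if z > 0 else 0

# Made-up examples: one positive and one non-positive dot product.
print(perceptron([1.0, -2.0], [0.5, 0.1]))  # w.x = 0.3  > 0, so output 1
print(perceptron([1.0, -2.0], [0.1, 0.5]))  # w.x = -0.9 ≤ 0, so output 0
```

Because the step function is not differentiable, Perceptrons are trained with their own update rule rather than with gradient-based methods like backpropagation.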

**Multi-Layer Perceptron (MLP)**

An MLP is an extension of the single Perceptron model, where we had only one neuron; in an MLP we have multiple neurons connected in different layers.

**Linear model → Simple (we can use single neuron models).**

**Non-linear model → Complex function (MLP).**

But why should we care about MLPs?

**Very strong biological inspiration** – If a single neuron is so powerful, a network of neurons would be very powerful. Humans, cows, and rats all have many interconnected neurons, and we know such networks can make decisions wisely.

**Mathematical argument** – An MLP is like a function composition. All of us studied function composition in high school: (f∘g)(x) = f(g(x)).

For example, if f(x) = sin(x) and g(x) = x², then f(g(x)) = sin(x²) and g(f(x)) = (sin x)².
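The composition example can be checked in a few lines of Python, and it also shows that the order of composition matters, just as the order of layers matters in an MLP:

```python
import math

def f(x):
    return math.sin(x)

def g(x):
    return x ** 2

def f_of_g(x):
    return f(g(x))  # sin(x²)

def g_of_f(x):
    return g(f(x))  # (sin x)²

x = 2.0
print(f_of_g(x))  # sin(4)      ≈ -0.7568
print(g_of_f(x))  # (sin 2)**2  ≈  0.8268
```

Each layer of an MLP is one function in such a chain, which is why stacking layers builds up very complex functions from simple pieces.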

Having a multi-layer structure results in a very powerful model, but such models can easily overfit: they memorize the data rather than learning the underlying pattern, so we get very good training accuracy but very bad testing accuracy.

To reduce the problem of overfitting, we can use regularization, either L1 or L2. Just remember that L1 tends to drive the weights of unimportant features to exactly zero, effectively removing them.
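The idea behind L1 and L2 regularization is simply to add a penalty on the weights to the loss being minimized. A minimal sketch, with a made-up data loss, made-up weights, and a made-up regularization strength `lam`:

```python
def l1_penalty(weights, lam):
    """L1: penalize the sum of absolute weights (encourages exact zeros)."""
    return lam * sum(abs(w) for w in weights)

def l2_penalty(weights, lam):
    """L2: penalize the sum of squared weights (shrinks weights smoothly)."""
    return lam * sum(w * w for w in weights)

w = [3.0, -0.001, 2.0]  # hypothetical weights; the tiny one is "unimportant"
data_loss = 0.5         # hypothetical loss from the data term

total_l1 = data_loss + l1_penalty(w, 0.1)
total_l2 = data_loss + l2_penalty(w, 0.1)
print(total_l1, total_l2)
```

During training, the optimizer minimizes this combined loss, so large (or, for L1, unnecessary) weights are discouraged and the model generalizes better.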

I hope this blog helped you understand the intuition behind neural networks. You can also read my previous blogs on machine learning to get depth and breadth in this domain. **SevenMentor** offers one of the best **Data Science Training courses in Pune**.

Thank you so much for reading…

**Author: Nishesh Gogia**

*Call the Trainer and Book your free demo Class Call now!!!*

| SevenMentor Pvt Ltd.

**© Copyright 2021 | Sevenmentor Pvt Ltd.**