Okay, so neural networks are classification algorithms modeled after the human brain, designed for classification and clustering. So, the following is a single perceptron in a neural network; it is like a neuron in our brain. So, for our neural network's forward propagation, we will take s1 times the weight for s1, plus s2 times the weight for s2, plus the bias times the weight for the bias, and then we have the output of z. Then we put this z into the activation function and we will have our output here. So, s1 comes in here and s2 comes in here. Then s1 can be 1, so it is 1 times the weight for s1; s2 can be 2, so it is 2 times the weight for s2; then we have a bias, and the bias can be 0 or 1, so it can be 1 times the weight for the bias.
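As a minimal sketch in Python, the weighted sum just described looks like the following; the input values are the examples from the lecture, but the weight values here are placeholder assumptions for illustration only.

# Weighted sum for a single perceptron: z = s1*w1 + s2*w2 + bias*w_bias
s1, s2, bias = 1.0, 2.0, 1.0      # example inputs from the lecture (bias input fixed at 1)
w1, w2, w_bias = 0.5, -0.2, 0.1   # placeholder weights, assumed for illustration

z = s1 * w1 + s2 * w2 + bias * w_bias
print(z)  # this z is then fed into the activation function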
So, this is the forward propagation. The forward propagation formula is actually z = w1 times s1, plus w2 times s2, plus ... plus wm times sm, plus the weight for the bias times the bias. So, this is the equation for forward propagation. After we get z, we will need to put it into the sigmoid activation function, which is f(z) = 1 / (1 + e^(-z)). So, this is the sigmoid activation. So, after we calculate this z here, we put it into the sigmoid activation function and we will get our output here. So, this is the forward propagation. For backpropagation, we will calculate the error. So, the error equals the actual output y minus the predicted output; it will be this y minus the output here. Then the new weight will be the weight plus the differentiation of the weight times the learning constant.
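Here is a small runnable sketch of the sigmoid activation and the error calculation just described; the z and y values are assumed for illustration, and y stands for the actual (target) output.

import math

def sigmoid(z):
    # Sigmoid activation: f(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

z = 0.2                 # example weighted sum from forward propagation (assumed value)
output = sigmoid(z)     # predicted output of the perceptron
y = 1.0                 # actual output (assumed target value)
error = y - output      # error = actual output - predicted output
print(output, error)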
So, the differentiation of the weight is equal to the error times the input x. So, the new weight will be the weight plus the error times the input times the learning constant; the learning constant can be any value. So, the new weight is the weight plus the error, which is the actual output y minus our predicted output, times the input x, times the learning constant. So, this is the backpropagation. So, after calculating the output, we will calculate the new weights, and we will then replace the old weights with the new weights. So, after calculating the output and the new weights, we have gone through one iteration of the neural network training; you can specify the iterations to be, say, 1000 in neural network training. So, calculating the output is forward propagation, and calculating the new weights is backpropagation.
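Putting this update rule into code, a minimal sketch for one weight might look like this; the weight, input, and error values are assumptions, and the learning constant is chosen arbitrarily, since as noted above it can be any value.

# Weight update rule: new_weight = weight + error * input * learning_constant
weight = 0.5              # current weight (assumed)
x = 1.0                   # the input that this weight multiplies
error = 0.3               # error from the forward pass (assumed)
learning_constant = 0.1   # any value can be chosen

new_weight = weight + error * x * learning_constant
print(new_weight)         # the old weight is then replaced by this new weight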
So, for a neural network, when we do the training, it will be forward propagation: we calculate the output. Then we go back and calculate the new weights, and we replace the old weights with the new weights. Then we will continue to calculate the output again, then we will continue to calculate the new weights, and we can repeat these iterations, say, 100 times or 10,000 times, depending on how many iterations we want the neural network to train for. So, after we train the neural network, if we want to predict any new data, we can just put in the s values and the neural network will be able to predict the output here. So, this is how the neural network works.
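A complete, runnable sketch of this training loop for a single perceptron is given below. It follows the forward propagation / backpropagation cycle described above; the tiny data set, starting weights, learning constant, and 1000 iterations are all assumptions for illustration.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny assumed training set: inputs (s1, s2) and actual outputs y (an OR-like pattern)
data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 1.0)]

w1, w2, w_bias = 0.0, 0.0, 0.0   # starting weights (assumed)
learning_constant = 0.5          # any value can be chosen
bias = 1.0

for iteration in range(1000):    # e.g. 1000 iterations, as mentioned above
    for (s1, s2), y in data:
        # Forward propagation: weighted sum, then sigmoid activation
        z = s1 * w1 + s2 * w2 + bias * w_bias
        output = sigmoid(z)
        # Backpropagation: error, then new weights replace the old weights
        error = y - output
        w1 = w1 + error * s1 * learning_constant
        w2 = w2 + error * s2 * learning_constant
        w_bias = w_bias + error * bias * learning_constant

# Predicting new data: just run forward propagation with the trained weights
s1, s2 = 1.0, 0.0
print(sigmoid(s1 * w1 + s2 * w2 + bias * w_bias))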
So, for multi-layer perceptrons, we have many perceptrons: perceptron one, perceptron two, perceptron three, perceptron four, perceptron five, perceptron six, perceptron seven, perceptron eight, and so on. So, in this neural network here, this is our one perceptron. So, for a multi-layer perceptron neural network, we have many perceptrons. A simple neural network will be something like this, but for deep learning we have more layers of perceptrons, or more hidden layers. So, this is the difference between a simple neural network and a deep learning neural network.
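To illustrate the difference, here is a sketch of forward propagation through a multi-layer perceptron with one hidden layer of eight perceptrons; the layer sizes and the randomly initialized weights are assumptions for illustration, and a deep network would simply stack more hidden layers like this one.

import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
n_inputs, n_hidden = 2, 8   # e.g. eight perceptrons in the hidden layer (assumed sizes)

# Placeholder weights: one list of input weights plus a bias weight per perceptron
hidden_w = [[random.uniform(-1, 1) for _ in range(n_inputs + 1)]
            for _ in range(n_hidden)]
output_w = [random.uniform(-1, 1) for _ in range(n_hidden + 1)]

def forward(inputs):
    # Each hidden perceptron does the same weighted-sum-plus-sigmoid as before
    hidden = [sigmoid(sum(w * x for w, x in zip(ws[:-1], inputs)) + ws[-1])
              for ws in hidden_w]
    # The output perceptron takes the hidden outputs as its inputs
    z = sum(w * h for w, h in zip(output_w[:-1], hidden)) + output_w[-1]
    return sigmoid(z)

print(forward([1.0, 2.0]))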