Why we need regularization
As a deep neural network becomes more and more complex, the over-fitting problem tends to appear, so we need some techniques to overcome it. One of the solutions is regularization. There are several regularization methods; the most common one, L2 regularization, is discussed in this essay.
How to do regularization
Regularization sounds noble and mysterious, but it is just an extra term added to the original cost function. So let's first review the cost function without regularization:

$$J(W, b) = \frac{1}{m}\sum_{i=1}^{m} \mathcal{L}(\hat{y}^{(i)}, y^{(i)})$$
Then, let's look at the cost function with (L2) regularization:

$$J(W, b) = \frac{1}{m}\sum_{i=1}^{m} \mathcal{L}(\hat{y}^{(i)}, y^{(i)}) + \frac{\lambda}{2m}\sum_{l=1}^{L} \lVert W^{[l]} \rVert_F^2$$
Inside this big equation, $\lambda$ is called the regularization parameter; it is a hyper-parameter, and different values of $\lambda$ will generate different models.
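As a concrete illustration, here is a minimal NumPy sketch of how the L2 penalty could be added to an existing cost; the function name, the base cost value, and the two weight matrices are hypothetical and chosen only for this example.

```python
import numpy as np

def l2_regularized_cost(base_cost, weights, lambd, m):
    """Add the L2 penalty (lambda / 2m) * sum of squared weights to a base cost.

    base_cost: the unregularized cost, i.e. (1/m) * sum of per-example losses
    weights:   list of weight matrices W[l], one per layer
    lambd:     the regularization parameter lambda
    m:         number of training examples
    """
    l2_penalty = (lambd / (2 * m)) * sum(np.sum(np.square(W)) for W in weights)
    return base_cost + l2_penalty

# Hypothetical usage with two small weight matrices
W1 = np.random.randn(4, 3)
W2 = np.random.randn(1, 4)
cost = l2_regularized_cost(base_cost=0.35, weights=[W1, W2], lambd=0.7, m=100)
print(cost)
```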
The effect on the gradient descent method
In deep learning, the gradient descent method is usually used to find the optimal parameter matrices $W$. Let's first review the gradient descent update for $W$:

$$W^{[l]} := W^{[l]} - \alpha \, dW^{[l]}, \qquad dW^{[l]} = \frac{\partial J}{\partial W^{[l]}}$$
If we take the derivative of the new version of the cost function, the new partial derivative is:

$$dW^{[l]} = (\text{from backpropagation}) + \frac{\lambda}{m} W^{[l]}$$
Now, substituting this new derivative into the update rule, we get:

$$W^{[l]} := W^{[l]} - \alpha\left[(\text{from backpropagation}) + \frac{\lambda}{m} W^{[l]}\right] = \left(1 - \frac{\alpha\lambda}{m}\right) W^{[l]} - \alpha\,(\text{from backpropagation})$$
From this result, we can see that the factor $\left(1 - \frac{\alpha\lambda}{m}\right)$ is less than 1, so the final value of $W$ will be smaller than before (without regularization). If the value of $\lambda$ becomes larger, the final value of $W$ becomes smaller.
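The corresponding update step can be sketched as follows; `dW_from_backprop` stands for the gradient computed by ordinary backpropagation, and all names and values are illustrative rather than taken from any particular library.

```python
import numpy as np

def update_weights(W, dW_from_backprop, alpha, lambd, m):
    """One gradient descent step on W with the L2 term included in the gradient."""
    dW = dW_from_backprop + (lambd / m) * W   # regularized gradient
    # Equivalent to (1 - alpha * lambd / m) * W - alpha * dW_from_backprop
    return W - alpha * dW

# Tiny illustrative example
W = np.array([[0.5, -1.2], [0.8, 0.3]])
dW_bp = np.array([[0.1, -0.2], [0.05, 0.0]])
W_new = update_weights(W, dW_bp, alpha=0.1, lambd=0.7, m=100)
print(W_new)
```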
Why regularization can reduce over-fitting
In order to answer this question intuitively, we start with a fundamental observation: the models we train fall into only three cases: "High bias" (under-fitting), "Just right", and "High variance" (over-fitting).
Our target is "Just right", and regularization is used to reduce the third case: "High variance".
According to the derivation in the last section, the larger $\lambda$ gets, the smaller the final $W$ becomes. If $\lambda$ becomes large enough, the values in $W$ approach zero. In that case the whole network behaves like a very simple model, close to logistic regression, because the majority of the network's weights are nearly 0. So we can look for an intermediate value of $\lambda$ that gives us the "Just right" case.
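To see the shrinking effect numerically, the short sketch below prints the decay factor $(1 - \alpha\lambda/m)$ and the norm of a weight matrix after a number of regularized updates for several values of $\lambda$; the values of $\alpha$, $m$, the step count, and the matrix are arbitrary and serve only as an illustration.

```python
import numpy as np

alpha, m, steps = 0.1, 100, 200
W0 = np.random.randn(4, 4)

for lambd in [0.0, 1.0, 10.0, 50.0]:
    W = W0.copy()
    for _ in range(steps):
        # Apply only the weight-decay part of the update (backprop gradient set to zero)
        W = (1 - alpha * lambd / m) * W
    print(f"lambda={lambd:5.1f}  decay factor={1 - alpha * lambd / m:.3f}  ||W||={np.linalg.norm(W):.4f}")
```

Running it shows that with $\lambda = 0$ the norm of $W$ is untouched, while larger values of $\lambda$ drive the weights closer and closer to zero, which is exactly the simplification effect described above.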