Hello everyone and welcome back. Let's continue with our training function. There is just one small adjustment since the previous video: the saver_delta parameter. Add it as an argument to the function and you're ready to continue with the implementation. This saver_delta parameter is a small number; by default it is 0.15. It is used to determine whether we should save the model or not. Basically, this parameter will prevent us from saving overfitted models.
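To make this concrete, here is a rough sketch of how the top of the training function could look at this point. Apart from saver_delta and its 0.15 default, every name, default value, and TF 1.x call below is an assumption based on what is said in the video, not the exact code from the series:

```python
import tensorflow as tf  # TF 1.x style API (sessions, savers), as used in this series

# Rough outline only -- all names except saver_delta are assumptions.
def train(model, x_train, y_train, x_test, y_test, saver_dir,
          epochs=10, batch_size=64, dropout_rate=0.2, saver_delta=0.15):
    # saver_delta: if the gap between train and test accuracy exceeds this value,
    # the model is treated as overfitted and the checkpoint is not saved.
    session = tf.Session()
    session.run(tf.global_variables_initializer())
    saver = tf.train.Saver()
    best_test_accuracy = 0.0
    # ... training loop from the previous video goes here,
    # ... followed by the test loop and saving logic described below.
    session.close()
```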
Now, why is this parameter so useful? When the difference between the accuracy on the training set and the accuracy on the test set exceeds this number, the model won't be saved anymore. This is a small trick that can prevent you from putting overfitted models into production. Now we need to calculate the test accuracy, so define an empty list called, you guessed it, test_accuracy. The way we are going to calculate the accuracy over our test set is very similar to the one we used for the training set. Let's copy the whole training-set for loop that we defined in the previous video and paste it below.
The first thing we need to change, in the range part of the for loop, is len(x_train) to len(x_test); change that in the x_batch part as well. Instead of using y_train, we're going to use y_test. For the testing process we only need predictions from the model, so the feed_dict doesn't need the targets entry anymore; delete that. And now, because we are not training the model, we need to set the dropout rate to zero.
This will allow us to use all the weights normally, without dropping any of them. Now delete the underscore and the last variable, because we won't be working with the optimizer and the loss function anymore. Also rename preds to preds_test. In the session.run call, delete model.opt and model.loss for the same reasons. We will need to remove the brackets as well, because we are only fetching one variable, namely model.predictions. Let's append the output of the sparse accuracy function to the test_accuracy list.
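Putting those edits together, the test loop could look roughly like this. The attribute names on the model (model.inputs, model.dropout_rate, model.predictions) and the batching details are assumptions, and sparse_accuracy is a minimal stand-in for the accuracy helper used in the series:

```python
import numpy as np

def sparse_accuracy(y_true, preds):
    # Minimal stand-in: fraction of samples where the arg-max class matches the integer label.
    return np.mean(np.argmax(preds, axis=1) == np.asarray(y_true))

# Test loop for the current epoch (inside the training function).
test_accuracy = []
for i in range(len(x_test) // batch_size):
    x_batch = x_test[i * batch_size:(i + 1) * batch_size]
    y_batch = y_test[i * batch_size:(i + 1) * batch_size]

    # No targets in the feed_dict and dropout set to 0 -- we are only predicting here.
    preds_test = session.run(model.predictions,
                             feed_dict={model.inputs: x_batch,
                                        model.dropout_rate: 0.0})
    test_accuracy.append(sparse_accuracy(y_batch, preds_test))
```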
Our accuracy function takes the true targets as the first argument, in our case y_batch, and the predictions as the second argument. At the end of the testing part, let's print the average test accuracy with a simple print statement. Now, the only thing left to handle is saving the model. The first thing we want to check is whether the training accuracy is indeed higher than the test accuracy. This makes sure that we are not saving underfitted models. Just write: if np.mean(train_accuracy) is greater than np.mean(test_accuracy). The next check is whether the absolute difference between the average training accuracy and the average test accuracy is smaller than our delta.
To check this, simply type: if np.abs(np.mean(train_accuracy) - np.mean(test_accuracy)) is less than or equal to saver_delta. As we said at the very beginning of the video, we're doing this to avoid saving overfitted models. The very last thing to check is whether the current test accuracy is better than the best accuracy we have recorded so far. This is a simple check: just see if np.mean(test_accuracy) is greater than or equal to best_test_accuracy. If all of that is true, set best_test_accuracy to the current average test accuracy and, finally, save the model itself. To save the model, write saver.save, then provide the current session as the first argument, and for the second argument provide the path to the model.
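Here is a sketch of the whole saving block as just described, assuming train_accuracy and test_accuracy are the per-epoch lists and that saver, saver_dir, epoch, and best_test_accuracy already exist in the function; the exact checkpoint filename is covered right after this:

```python
# Report the average test accuracy for this epoch.
print('Average test accuracy: {:.4f}'.format(np.mean(test_accuracy)))

# Only save when the model is neither underfitted nor overfitted
# and improves on the best result recorded so far.
if np.mean(train_accuracy) > np.mean(test_accuracy):
    if np.abs(np.mean(train_accuracy) - np.mean(test_accuracy)) <= saver_delta:
        if np.mean(test_accuracy) >= best_test_accuracy:
            best_test_accuracy = np.mean(test_accuracy)
            saver.save(session, '{}/model_epoch_{}.ckpt'.format(saver_dir, epoch))
```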
So for our path, we're going to write '{}/model_epoch_{}.ckpt' (ckpt is short for checkpoint), call .format on it, and pass in saver_dir, which we are going to define in our arguments, and the current epoch. At the very end of the function, close the session by typing session.close(), and with that we are done with the training function. This was a big one, good job for making it this far. If you have any questions or comments, please post them in the comment section. Otherwise, see you in the next video.