Hello everyone. In this video we are going to write our sparse accuracy function, which is also our last helper function. This function is used to check how accurate our model is, or in other words, how often it predicts the correct class. Basically, it will take the true labels and the probability distributions from our model, and based on those two inputs, our goal is to check whether the highest probability is assigned to the true class, or label.
That was a mouthful, so let's explain it in plain English. For each true label, we will have an array whose length is equal to the number of classes in our data set. In our case, it is 10. Each element corresponds to the likelihood of a particular class. So when we feed one image to our model, it produces 10 predictions.
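To make this concrete, here is a hypothetical example of what one such prediction array could look like for a single image (the probability values are made up purely for illustration):

```python
import numpy as np

# Hypothetical model output for one image:
# one probability per class, 10 classes in total.
predictions = np.array([0.01, 0.02, 0.05, 0.02, 0.60,
                        0.10, 0.05, 0.05, 0.05, 0.05])

print(len(predictions))      # 10 elements, one per class
print(predictions.argmax())  # index of the most likely class: 4
```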
And obviously we want the highest number to be the one over the correct class. So our function will compare whether the index of the maximum element is equal to the true label. The first thing we will check is whether there are the same number of elements in the true labels and predicted labels. We can do that by simply comparing the lengths with an assert. If you haven't used the assert statement before, here's a crash course: it takes some Boolean expression and checks whether it's true.
If the answer is negative, it will stop our code and won't execute any further. And that's it. Now let's define a variable that will store the number of correct predictions and set it to zero. The next thing we'll do is iterate through our inputs and compare them. In this example, I will use the range of the length of the true labels, but it doesn't really matter, since we made sure that the predicted labels and true labels have the same number of elements, so you can use either.
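As a quick sketch, the length check and the counter described above might look like this (the variable names and the random sample data are just placeholders for illustration):

```python
import numpy as np

true_labels = np.array([3, 1, 4])
# One row of class probabilities per sample (3 samples, 10 classes).
predicted_labels = np.random.rand(3, 10)

# Stop immediately if the two inputs don't describe
# the same number of samples.
assert len(true_labels) == len(predicted_labels), "lengths do not match"

# Counter for correct predictions, starting at zero.
correct_predictions = 0
```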
To compare the elements, we need to get the index of the highest probability. We can do that by simply using np.argmax, which takes an array and returns the index of its maximum element. Now that we have the index, we will compare it with the true label. If the two are equal, we'll just add one to our correct predictions. Finally, we return the number of correct predictions divided by the number of elements. This will give us a number between zero and one, which indicates our accuracy.
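Putting all the steps together, the whole helper might look something like this (the name sparse_accuracy and the exact signature are assumptions; the small usage example at the bottom is made up for illustration):

```python
import numpy as np

def sparse_accuracy(true_labels, predicted_labels):
    # Both inputs must describe the same number of samples.
    assert len(true_labels) == len(predicted_labels)

    correct_predictions = 0
    for i in range(len(true_labels)):
        # The index of the highest probability is the predicted class.
        if np.argmax(predicted_labels[i]) == true_labels[i]:
            correct_predictions += 1

    # Fraction between 0 and 1: how often the model was right.
    return correct_predictions / len(true_labels)

# Tiny usage example with two samples and three classes:
labels = np.array([2, 0])
probs = np.array([[0.1, 0.2, 0.7],   # argmax is 2 -> correct
                  [0.3, 0.6, 0.1]])  # argmax is 1 -> wrong
print(sparse_accuracy(labels, probs))  # 0.5
```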
To sum up what happened here: we compared the index of the maximum element with the true label, increased the number of correct predictions by one whenever the two were equal, and then divided that count by the number of elements to get the accuracy. If you have any questions so far, please post them in the comment section. Otherwise, I'll see you in the next tutorial.