Hello everyone. Before we start with the code for this video, there is a little change we have to make in the cosine distance function from the previous video. The change is to add square brackets with a zero between them after the query vector. We are doing this because of the dimensionality of the neural network output: it will access the first element of the output. Now let's get back to Hamming distance. If you haven't heard of this distance before, that's totally okay. It is one of the more commonly used distance metrics in information theory, and unlike cosine distance, this one works on binary vectors only.
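To make the indexing change concrete, here is a minimal sketch of what such a cosine distance function could look like. The function name, the signature, and the assumption that the model output has shape (1, d) are mine, not from the video; the key line is `query_vector[0]`, which unwraps the single embedding from the batched output.

```python
import numpy as np

def cosine_distance(query_vector, vectors):
    # Hypothetical shapes: the neural network returns a batch of shape (1, d),
    # so [0] selects the single d-dimensional embedding inside it.
    query = np.asarray(query_vector)[0]
    distances = []
    for v in vectors:
        v = np.asarray(v)
        # Cosine similarity, then 1 - similarity to turn it into a distance.
        sim = np.dot(query, v) / (np.linalg.norm(query) * np.linalg.norm(v))
        distances.append(1.0 - sim)
    return distances
```

Without the `[0]`, the dot product would be taken against a 2-D array of shape (1, d) and the norms would not line up as intended.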
Hamming distance is primarily used to compare a sent signal with a received signal, identifying any changes that may have occurred in the sent signal along the way. If the Hamming distance between two vectors is zero, the vectors are identical. And as you may guess from the picture here, it works by counting the errors between two vectors. In this example right here we have 1, 2, 3 errors, so the Hamming distance of these two vectors is three. The implementation of the Hamming distance function is the same as the one we had for the cosine distance.
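The counting idea from the picture can be sketched in a few lines. This is my own illustrative helper, not the video's code; it simply counts the positions where two binary vectors disagree.

```python
import numpy as np

def hamming_distance(u, v):
    # Count the positions where the two binary vectors differ.
    # Identical vectors give 0; three mismatches give 3, as in the example.
    u, v = np.asarray(u), np.asarray(v)
    return int(np.count_nonzero(u != v))
```

Note that some libraries (e.g. scipy.spatial.distance.hamming) return the *fraction* of differing positions rather than the raw count, so check which convention your code expects.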
So let's copy it and paste it into the Hamming distance function. The only change we have to make is to swap cosine for Hamming. And as you can see, it takes the u and v vectors as well, so we don't have to change anything there. And that's it for this video. If you have any questions or comments so far, please post them in the comment section. Otherwise, I'll see you in the next tutorial.
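Putting the two pieces together, the Hamming version could mirror the cosine function like this. As before, the name, signature, and the (1, d) query shape are my assumptions; the structure is the copied cosine function with the distance computation swapped out.

```python
import numpy as np

def hamming_distances(query_vector, vectors):
    # Same shape convention as the cosine function: query_vector[0]
    # unwraps the single embedding from the (1, d) model output.
    query = np.asarray(query_vector)[0]
    # For each stored binary vector, count the differing positions.
    return [int(np.count_nonzero(query != np.asarray(v))) for v in vectors]
```

A vector identical to the query comes back with distance 0, and each mismatched position adds 1, matching the error-counting picture from earlier.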