Hello everyone. In this video we are going to define our second major function for the model, the convolution block. Most of the parameters for this function have to do with the concept of the convolution layer. Since you are expected to have prior knowledge and experience with convolutions in neural networks, we're going straight to the implementation of this function. Our convolution block will have three parts. The first is, as the name suggests, the convolution layer.
Then we're going to perform max pooling, and wrap up with batch normalization. For the purposes of implementation, we are going to use TensorFlow's high-level API called layers. This will take care of a lot of things for us, so we only need to worry about putting the pieces together. The first thing that we have to add is the convolution layer itself. We do this by typing conv_features equals layer equals the conv2d function from TensorFlow. We'll go over this syntax in more detail at the end of this video.
To access the convolution layer in TensorFlow, write tf.layers.conv2d. Now let's provide the arguments to this function. The first thing that we have to provide to our convolution layer is the input. That is the first argument that we have in our conv_block function. The next argument that the function takes is filters, and for us, that is the number of filters. The next three arguments are the same ones we have defined in our conv_block function.
So we have kernel size, strides, and padding. Lastly, we have the argument for activation. This is set to None by default in conv2d, so we're going to change that to the one that we have defined. Now let's add a check for whether a conv block will be using max pooling or not. Again from the layers API, use the function max_pooling2d and provide layer, which is the output from the first step in our conv block, the convolution layer. Because we want to decrease the size of the layer by two, set the pool size and strides to two by two. And lastly, the argument to put here is padding, set to same. The last thing to do in our block is to perform batch normalization.
If this block does support batch normalization, just type layer equals tf.layers.batch_normalization, provide the layer, and return the layer and conv_features.
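Putting the pieces together, here is a minimal sketch of what the finished function might look like, assuming the TensorFlow 1.x tf.layers API; parameter names such as number_of_filters, max_pool, and batch_norm are illustrative, not taken from the video verbatim:

```python
import tensorflow as tf

# A minimal sketch of the conv block, assuming the TensorFlow 1.x
# tf.layers API; the parameter names are illustrative.
def conv_block(inputs, number_of_filters, kernel_size, strides=(1, 1),
               padding='same', activation=tf.nn.relu,
               max_pool=True, batch_norm=True):
    # 1. Convolution layer; conv_features keeps the pure conv output.
    conv_features = layer = tf.layers.conv2d(
        inputs,
        filters=number_of_filters,
        kernel_size=kernel_size,
        strides=strides,
        padding=padding,
        activation=activation)
    # 2. Optional max pooling with a 2x2 window and 2x2 strides,
    #    which halves the spatial size of the layer.
    if max_pool:
        layer = tf.layers.max_pooling2d(layer, pool_size=(2, 2),
                                        strides=(2, 2), padding='same')
    # 3. Optional batch normalization on the pooled output.
    if batch_norm:
        layer = tf.layers.batch_normalization(layer)
    return layer, conv_features
```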
As promised, let's go over the syntax that we have used for our conv block. For the last part, you probably already know: we are using it to check for max pooling and batch normalization. So this is just a normal layer. But what about these conv features? Those are weird, right? Because we are trying to create image search, we need specific representations of images. Those representations are coming from the layers in our network themselves. These representations should be the pure conv features, before applying either max pooling or batch normalization. That's why we have defined the second variable there: to access the pure convolutional features for our image representations. And that's it. We are done, so execute the cell.
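For instance, calling the block could look something like this; the placeholder shape and filter counts are just assumptions for illustration:

```python
# Illustrative usage; the input shape and filter counts are assumptions.
images = tf.placeholder(tf.float32, shape=(None, 224, 224, 3))

# Stack two conv blocks; keep the pure conv features from the second one.
layer, _ = conv_block(images, number_of_filters=32, kernel_size=(3, 3))
layer, conv_features = conv_block(layer, number_of_filters=64,
                                  kernel_size=(3, 3))

# conv_features serves as the image representation for search, while
# layer feeds the rest of the network.
```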
And if you have any questions or comments, leave them in the comment section. Otherwise, I'll see you in the next tutorial.