Hello everyone, in the previous video we defined our conv block function. To continue, in this video we are going to create a dense block function. Side note: dense is the same as a fully connected layer. The dense block has three parts as well: obviously the fully connected part, the dropout part, and lastly batch normalization. If you like to organize your code in a different way, for example as three different layer functions in your model, by all means do that.
I've decided on this approach because it makes for cleaner code and better organization. Okay, now for our implementation. The first part is to define our fully connected layer, as we did for our convolutional one. We're going to define the features separately from our layer, for the same reason: so we can represent images with vectors. Type dense_features = layer = tf.layers.dense(...). The first argument that our fully connected layer takes is inputs, then units, or neurons if you want to call them that, and the next argument is the activation function.
This one is None by default, so just set it to the one that we have defined. For the next part of our dense block, we have to check whether the dropout rate is not None. As you can see in the function definition, the dropout rate is None by default. We've done this to reduce the number of arguments. So if the dropout is anything but None, for example 0.5, 0.6, or maybe a tf.placeholder, we are going to apply the dropout to our dense layer.
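To make this first part concrete, here is a minimal sketch. The input placeholder shape and the unit count are made up for illustration; use whatever your own conv blocks produce:

```python
import tensorflow as tf  # TensorFlow 1.x style tf.layers API

# Hypothetical flattened conv features, for illustration only.
inputs = tf.placeholder(tf.float32, [None, 7 * 7 * 64])

# Fully connected part: keep a separate handle to the features so we can
# later use them as a vector representation of our images.
dense_features = layer = tf.layers.dense(inputs, units=128, activation=tf.nn.relu)
```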
To apply the dropout to our layer, just type layer = tf.layers.dropout(...). This function takes two arguments: the first one is our layer, and the second one is the rate, the fraction of units we are actually dropping. Now let's check if batch_norm is True. If so, we apply batch normalization the same way that we did in the conv block. Finally, return layer and dense_features, and execute the cell.
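Putting the three parts together, a sketch of the whole dense block might look like the following. The argument names (units, activation, dropout_rate, batch_norm) are my assumptions, since the exact function definition is not shown on screen:

```python
import tensorflow as tf  # TensorFlow 1.x style tf.layers API

def dense_block(inputs, units, activation=tf.nn.relu,
                dropout_rate=None, batch_norm=False):
    # Fully connected part: keep the raw dense features separately
    # from the running "layer" variable.
    dense_features = layer = tf.layers.dense(inputs, units, activation=activation)

    # Dropout part: only applied when a rate (0.5, 0.6, a placeholder, ...)
    # is actually passed in.
    if dropout_rate is not None:
        layer = tf.layers.dropout(layer, rate=dropout_rate)

    # Batch normalization part, applied the same way as in the conv block.
    if batch_norm:
        layer = tf.layers.batch_normalization(layer)

    return layer, dense_features
```

A call like dense_block(features, 128, dropout_rate=0.5, batch_norm=True) would then give back both the transformed layer and the raw dense features.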
And that's it. If you have any questions or comments so far, post them in the comment section. Otherwise, I'll see you in the next video.