Hello everyone. In this video we are going to complete the last helper function for our model. This function defines the loss function and the optimizer for the model. It takes three arguments: logits, which are the pre-activation outputs from the last layer of the model; then we have our targets, which, as you probably know, are just the true labels; and lastly, we have our learning rate.
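Here is a minimal sketch of the signature, assuming TensorFlow 1.x; the name model_loss is just a placeholder, not necessarily the name used in the course:

```python
import tensorflow as tf

def model_loss(logits, targets, learning_rate):
    """Define the loss and the optimizer for the model.

    logits: pre-activation outputs from the model's last layer.
    targets: true labels as integer class indices.
    learning_rate: learning rate passed to the optimizer.
    """
    # The loss and optimizer are filled in below.
```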
Let's define our loss function first. The loss will be the mean of the sparse softmax cross entropy with logits. It takes two arguments: logits, our pre-activation outputs from the model, and targets. At this point you may be confused and ask: what is the "sparse" word in the name, and why is it different from the classical softmax cross entropy? The sparse version means that we don't have to convert our targets to one-hot encoded versions; the rest works exactly the same. So, as we said, it takes two arguments: labels, or targets in our case, and logits.
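As a concrete sketch (again assuming TensorFlow 1.x), the loss line could look like this; note that with the sparse variant the targets stay as plain integer class indices rather than one-hot vectors:

```python
# Mean sparse softmax cross entropy over the batch: targets are
# integer class indices, so no one-hot conversion is needed.
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=targets,   # true labels as integer class ids
        logits=logits))   # raw, pre-activation model outputs
```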
Now we need to define our optimizer. For the model we're going to use the Adam optimizer, which performs the best for CNNs. We provide the learning rate to it and call the minimize function to minimize our defined loss. Before we finish this function, we have to return the loss and the optimizer. And now we are all done and set to build our model, which we are going to do in the next video. For now, if you have any questions or comments, put them in the comment section. Otherwise, see you in the next tutorial.
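Putting it all together, here is a minimal sketch of the whole helper under the same assumptions (TensorFlow 1.x; model_loss is my placeholder name):

```python
import tensorflow as tf

def model_loss(logits, targets, learning_rate):
    # Mean sparse softmax cross entropy over the batch.
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=targets, logits=logits))

    # Adam optimizer with the given learning rate,
    # set up to minimize the loss defined above.
    optimizer = tf.train.AdamOptimizer(
        learning_rate=learning_rate).minimize(loss)

    # Return both so the training loop can run them in a session.
    return loss, optimizer
```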