Hello there!
Today, we are diving into a specific metric used to evaluate model performance: the AUC score. But before we get into the specifics, have you ever wondered why such seemingly unintuitive scores are sometimes necessary to assess the performance of our models?
Whether our model handles a single class or multiple classes, the underlying objective remains the same: maximizing correct predictions while minimizing incorrect ones. To explore this basic objective, let's first look at the obligatory confusion matrix, which encompasses True Positives (TP), False Positives (FP), True Negatives (TN), and False Negatives (FN).
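To make these four counts concrete, here is a minimal sketch of how they can be computed with scikit-learn; the labels and predictions are illustrative toy values, not output from a real model.

```python
from sklearn.metrics import confusion_matrix

# Ground-truth labels and a model's binary predictions (1 = positive class).
# These values are purely illustrative.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# For binary labels, scikit-learn lays the matrix out as [[TN, FP], [FN, TP]],
# so flattening it yields the four counts in that order.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp}, FP={fp}, TN={tn}, FN={fn}")
# TP=3, FP=1, TN=3, FN=1
```

With these four counts in hand, every metric we discuss later, including the AUC score, can be traced back to them.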