Performance Analysis of Classification Models

Once a machine learning model is built, the next step is to evaluate it against various performance criteria. Because a classification model outputs a discrete value, the following metrics are commonly used for classification performance analysis:

- Confusion matrix
- Accuracy
- Precision
- Recall (sensitivity)
- Specificity
- ROC curve and the area under it (ROC AUC)
- F-score (F1 score)

ROC AUC is useful when we are not concerned with which class of the dataset is the positive one, in contrast to the F1 score, where the choice of positive class matters. The F1 score is particularly useful when the positive class is relatively small. Performance metrics should be chosen based on the problem domain and the project's goals and objectives.

Confusion matrix

A confusion matrix is a table used to describe the performance of a classification algorithm (or "classifier") on a set of test data for which the true values are known.
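As a minimal sketch of how these metrics relate to the confusion matrix, the following plain-Python example (function and variable names are chosen here for illustration; binary labels are assumed, with 1 as the positive class) derives accuracy, precision, recall, specificity, and F1 from the four confusion-matrix counts:

```python
def confusion_counts(y_true, y_pred):
    """Count TP, FP, FN, TN for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def classification_metrics(y_true, y_pred):
    """Compute the standard metrics from the confusion-matrix counts."""
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0        # sensitivity
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    # F1 is the harmonic mean of precision and recall
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}

# Example: 8 test samples with known true labels
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(classification_metrics(y_true, y_pred))
```

In practice a library such as scikit-learn provides these metrics directly, but writing them out once makes the dependence on TP, FP, FN, and TN explicit.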