Table: Evaluation metrics. (From: Application of machine learning in identifying risk factors for low APGAR scores)
Metric | Formula | Definition |
---|---|---|
Accuracy | \(\frac{TP+TN}{TP+TN+FP+FN}\) | It is the ratio of correctly predicted instances (both true positives and true negatives) to the total number of instances in the dataset, measuring the overall correctness of the model |
Precision | \(\frac{TP}{TP+FP}\) | It is the ratio of true positive predictions to the total number of positive predictions made by the model, indicating how accurate the model is when it predicts a positive class |
Recall | \(\frac{TP}{TP+FN}\) | It is the ratio of true positive predictions to the total actual positives in the dataset, measuring the model's ability to identify all relevant instances of the positive class |
Jaccard index | \(\frac{TP}{TP+FN+FP}\) | It measures the similarity between two sets and is calculated as the size of the intersection divided by the size of the union of the predicted and actual positive labels |
F1-score | \(\frac{2\cdot TP}{2\cdot TP+FP+FN}\) | It is the harmonic mean of precision and recall, which penalizes whichever of the two values is lower and so rewards models that balance both |
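
As a quick illustration of how these definitions relate to raw confusion-matrix counts, the following minimal Python sketch computes each metric in the table. It is not code from the study; the function name and the example counts are hypothetical, chosen only to show the formulas in action.

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the table's metrics from confusion-matrix counts.

    tp, tn, fp, fn: true positives, true negatives,
    false positives, false negatives.
    (No zero-division guards: assumes each denominator is nonzero.)
    """
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,        # (TP + TN) / (TP + TN + FP + FN)
        "precision": tp / (tp + fp),          # TP / (TP + FP)
        "recall": tp / (tp + fn),             # TP / (TP + FN)
        "jaccard": tp / (tp + fn + fp),       # TP / (TP + FN + FP)
        "f1": 2 * tp / (2 * tp + fp + fn),    # 2·TP / (2·TP + FP + FN)
    }

# Hypothetical counts for illustration only
print(classification_metrics(tp=80, tn=90, fp=10, fn=20))
```

Note that accuracy uses all four counts, whereas the Jaccard index and F1-score ignore true negatives entirely, which is why they are often preferred when the classes are imbalanced, as is typical for low-APGAR outcomes.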