Error, loss functions, and why they are needed
Case 3. Log loss and classifier confidence
In classification tasks, we often care not only about what class the model predicts, but also about how confident it is in that prediction. Log loss (cross-entropy loss) is a natural way to penalize overconfident mistakes and reward well-calibrated probabilities.
Two models may give the same final class labels after thresholding at 0.5, but one of them can still be much better calibrated. Log loss makes this difference visible in a single number.
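For a binary task with true labels y_i (0 or 1) and predicted probabilities p_i, log loss is defined as -(1/N) * sum( y_i * ln(p_i) + (1 - y_i) * ln(1 - p_i) ). The logLoss() helper used below comes from code.php, whose implementation is not shown here; a minimal sketch of such a function might look like this (the epsilon clipping constant is an assumption, added to avoid ln(0)):

```php
<?php
// Sketch of a binary log loss; the real implementation in code.php may differ.
// Probabilities are clipped to (eps, 1 - eps) to avoid log(0).
function logLossSketch(array $yTrue, array $probs): float
{
    $eps = 1e-15;
    $sum = 0.0;
    $n = count($yTrue);
    for ($i = 0; $i < $n; $i++) {
        $p = min(max($probs[$i], $eps), 1 - $eps); // clip to the open interval (0, 1)
        $sum += $yTrue[$i] * log($p) + (1 - $yTrue[$i]) * log(1 - $p);
    }
    return -$sum / $n; // average negative log-likelihood
}
```

With the data from the example below, logLossSketch([1, 0, 1, 0, 1], [0.95, 0.10, 0.90, 0.20, 0.85]) averages the five per-sample terms to about 0.1295.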
Example of use
<?php
require_once __DIR__ . '/code.php';
// Ground-truth labels for a binary classification task (1 = positive class).
$yTrue = [1, 0, 1, 0, 1];
// Scenario A: classifier is fairly confident and mostly correct.
$probsA = [0.95, 0.10, 0.90, 0.20, 0.85];
// Scenario B: classifier gives the same class predictions at the 0.5 threshold,
// but its probabilities are much less calibrated (barely confident on correct answers).
$probsB = [0.55, 0.05, 0.60, 0.10, 0.51];
$logLossA = logLoss($yTrue, $probsA);
$logLossB = logLoss($yTrue, $probsB);
echo 'Log loss (model A, more confident and accurate): ' . round($logLossA, 4) . PHP_EOL;
echo 'Log loss (model B, less confident): ' . round($logLossB, 4) . PHP_EOL;
[Charts: per-sample contribution to log loss (model A); log loss as a function of predicted probability for y = 1]
Even when the predicted class labels coincide, a model that assigns probabilities closer to the true outcomes will achieve a lower log loss. Overconfident wrong predictions are punished especially strongly.
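To see how sharply that punishment grows, it helps to look at the per-sample contribution -ln(p) for a positive example (y = 1) at a few probabilities; this small sketch just evaluates the formula directly:

```php
<?php
// Per-sample log loss -ln(p) for a true label y = 1 at three probabilities:
// well calibrated (0.95), barely confident (0.55), confidently wrong (0.05).
foreach ([0.95, 0.55, 0.05] as $p) {
    echo 'p = ' . $p . ' -> per-sample loss = ' . round(-log($p), 4) . PHP_EOL;
}
```

A confidently wrong prediction (p = 0.05 for a positive example) contributes about 3.0, nearly sixty times the roughly 0.05 contributed by a well-calibrated p = 0.95.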
Result:
Memory: 0.007 MB
Time running: < 0.001 sec.
Log loss (model A, more confident and accurate): 0.1295
Log loss (model B, less confident): 0.3877