When predicted probabilities are wrong and equal exactly 0 or 1, the log loss becomes infinite. To prevent this, each probability p is clipped to max(eps, min(1 - eps, p)), e.g. with eps=1e-15 as in sklearn. What eps do you use?
Hey @1aguschin, we use 1e-16
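A minimal sketch of the clipping discussed above, using NumPy (the function name `log_loss` and the example inputs are illustrative, not from the thread):

```python
import numpy as np

def log_loss(y_true, p, eps=1e-15):
    # Clip probabilities into [eps, 1 - eps] so log(0) never occurs;
    # equivalent to max(eps, min(1 - eps, p)) applied elementwise.
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

# A confidently wrong prediction (p=0 for a positive label) now yields
# a large but finite loss instead of infinity:
print(log_loss(np.array([1, 0]), np.array([0.0, 0.5])))
```

Without the clip, the first term would evaluate log(0) and the loss would be infinite; with eps=1e-15 the per-sample penalty is capped near -log(1e-15) ≈ 34.5.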