deepblink.losses module

Functions to calculate training loss on batches of images.

While these functions are comparable to the ones found in the deepblink.metrics module, they rely on Keras' backend and therefore operate on tensors, not on raw NumPy arrays.

deepblink.losses.binary_crossentropy(y_true, y_pred)[source]

Keras’ binary crossentropy loss.

deepblink.losses.categorical_crossentropy(y_true, y_pred)[source]

Keras’ categorical crossentropy loss.

deepblink.losses.combined_bce_rmse(y_true, y_pred)[source]

Loss that combines binary cross entropy for probability and rmse for coordinates.

The optimal values for binary crossentropy (bce) and rmse are both 0.

deepblink.losses.combined_dice_rmse(y_true, y_pred)[source]

Loss that combines dice for probability and rmse for coordinates.

The optimal values for dice and rmse are both 0.

deepblink.losses.combined_f1_rmse(y_true, y_pred)[source]

Difference between F1 score and root mean square error (rmse).

The optimal values for F1 score and rmse are 1 and 0 respectively. Therefore, the combined optimal value is 1.
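
Since the combined value is stated to be the difference between F1 score and rmse, the optimum of 1 follows directly. A minimal NumPy sketch of that arithmetic (the actual loss operates on Keras tensors; the helper functions here are illustrative, not deepblink's implementation):

```python
import numpy as np

def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def rmse(true_xy: np.ndarray, pred_xy: np.ndarray) -> float:
    """Root mean square error between coordinate arrays."""
    return float(np.sqrt(np.mean((true_xy - pred_xy) ** 2)))

# Toy case: perfect detection (precision = recall = 1) and perfect
# localization (identical coordinates) yield the optimal combined value.
coords = np.array([[1.0, 2.0]])
combined = f1(1.0, 1.0) - rmse(coords, coords)
print(combined)  # 1.0 at the optimum
```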

deepblink.losses.dice_loss(y_true, y_pred)[source]

Dice score loss corresponding to deepblink.losses.dice_score.

deepblink.losses.dice_score(y_true, y_pred, smooth: int = 1)[source]

Computes the dice coefficient on a batch of tensors.

\[\textrm{Dice} = \frac{2 \lvert X \cap Y\rvert}{\lvert X\rvert + \lvert Y\rvert}\]

ref: https://arxiv.org/pdf/1606.04797v1.pdf

Parameters:
  • y_true – Ground truth masks.
  • y_pred – Predicted masks.
  • smooth – Epsilon value to avoid division by zero.
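
For intuition, a minimal NumPy sketch of the same computation on flat binary masks (the actual implementation works on batches of Keras tensors):

```python
import numpy as np

def dice_score(y_true: np.ndarray, y_pred: np.ndarray, smooth: int = 1) -> float:
    """Dice coefficient on binary masks; smooth avoids division by zero."""
    intersection = np.sum(y_true * y_pred)
    return (2 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

mask = np.array([0, 1, 1, 0])
print(dice_score(mask, mask))  # identical masks score 1.0
```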
deepblink.losses.f1_loss(y_true, y_pred)[source]

F1 score loss corresponding to deepblink.losses.f1_score.

deepblink.losses.f1_score(y_true, y_pred)[source]

F1 score metric.

\[F1 = \frac{2 * \textrm{precision} * \textrm{recall}}{\textrm{precision} + \textrm{recall}}\]

The equally weighted average of precision and recall. The best value is 1 and the worst value is 0.

deepblink.losses.precision_score(y_true, y_pred)[source]

Precision score metric.

Defined as tp / (tp + fp), where tp is the number of true positives and fp the number of false positives. Can be interpreted as the ability not to label negative samples as positive, i.e. how many selected items are relevant. The best value is 1 and the worst value is 0.

deepblink.losses.recall_score(y_true, y_pred)[source]

Recall score metric.

Defined as tp / (tp + fn), where tp is the number of true positives and fn the number of false negatives. Can be interpreted as the ability to find all positive samples, i.e. how many relevant samples were selected. The best value is 1 and the worst value is 0.
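
Both definitions can be checked on a toy pair of binary masks. This NumPy sketch counts tp, fp, and fn directly (the actual metrics operate on Keras tensors):

```python
import numpy as np

def precision_recall(y_true: np.ndarray, y_pred: np.ndarray) -> tuple:
    """Precision and recall on binary masks, from tp/fp/fn counts."""
    tp = np.sum((y_true == 1) & (y_pred == 1))  # correctly predicted positives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # spurious predictions
    fn = np.sum((y_true == 1) & (y_pred == 0))  # missed positives
    return tp / (tp + fp), tp / (tp + fn)

y_true = np.array([1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0])
p, r = precision_recall(y_true, y_pred)
print(p, r)  # 0.5 0.5 — one of two predictions correct, one of two positives found
```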

deepblink.losses.rmse(y_true, y_pred)[source]

Calculate root mean square error (rmse) between true and predicted coordinates.
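
A minimal NumPy sketch of a per-component rmse between coordinate arrays (the deepblink version operates on Keras tensors and its exact reduction over coordinate axes may differ):

```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Square the coordinate differences, average, take the root."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

true_xy = np.array([[0.0, 0.0], [1.0, 1.0]])
pred_xy = np.array([[0.0, 0.0], [1.0, 0.0]])
print(rmse(true_xy, pred_xy))  # 0.5 — one coordinate off by 1 out of 4 components
```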