No task is without uncertainty, and in a medical imaging setting, where model predictions can inform clinical decisions, quantifying that uncertainty appropriately is all the more important.
Uncertainty modelling acknowledges that irreducible errors arise whenever a problem is tackled with limited models (as all models are) and limited training data (as all training data are), and seeks the best way to measure and account for those errors.
There are two main types of uncertainty: epistemic, which relates to the model and its parameters, and aleatoric, which is inherent to the task data. While seemingly distinct, it is not uncommon for the two to overlap in practice.
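The epistemic/aleatoric split above can be made concrete with a common ensemble-based decomposition (a minimal numpy sketch with made-up numbers, not the method of any particular paper): if each of several models predicts a mean and a noise variance, the spread of the means across models estimates epistemic uncertainty, while the average predicted noise variance estimates aleatoric uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble of M models, each predicting a mean and a noise
# variance for N test points (the variance head models aleatoric noise).
M, N = 8, 5
means = rng.normal(loc=0.5, scale=0.1, size=(M, N))   # per-model predicted means
variances = rng.uniform(0.01, 0.05, size=(M, N))      # per-model predicted variances

# Decomposition via the law of total variance:
# epistemic = disagreement between models, aleatoric = average predicted noise.
epistemic = means.var(axis=0)
aleatoric = variances.mean(axis=0)
total = epistemic + aleatoric
```

Note that gathering more training data would shrink the disagreement term (epistemic), but not the inherent noise term (aleatoric), which matches the intuition that only the former is reducible.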
Uncertainty modelling can provide boundaries for segmentations, upper and lower bounds for quantification, and, crucially, increased model performance. A model that is cognizant of its own and the data's "shortcomings" can learn to account for them, tempering its outputs and producing more sensible, if uncertain, predictions.
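One simple way such bounds arise (a hedged sketch with synthetic probabilities, not a specific published pipeline): given a per-pixel foreground probability map, thresholding at a high and a low confidence level yields conservative and permissive masks, whose pixel counts bound the volume reported by the usual 0.5 threshold.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-pixel foreground probabilities from a segmentation model
# (e.g. averaged over several stochastic forward passes), for a small image.
probs = rng.uniform(0.0, 1.0, size=(64, 64))

# Point-estimate mask plus conservative / permissive masks.
mask = probs >= 0.5    # usual segmentation
inner = probs >= 0.9   # pixels we are confident are foreground
outer = probs >= 0.1   # pixels that could plausibly be foreground

# Volume (pixel-count) bounds for downstream quantification.
vol_low, vol, vol_high = inner.sum(), mask.sum(), outer.sum()
```

Because the thresholds are nested, `vol_low <= vol <= vol_high` always holds, giving a quantification interval rather than a single point estimate.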