This thesis investigates the use of deep learning to classify wound infections from photographic images, using colony-forming unit (CFU) counts as a quantitative labeling standard. Leveraging the visual information in wound photographs and the clinical relevance of bacterial burden, the study implements a multi-task U-Net architecture that performs both image reconstruction and binary classification with a shared encoder. Three experimental conditions were explored: Experiment 1 used the original images with positive class weighting; Experiment 2 added data augmentation to increase visual diversity; and Experiment 3 combined augmentation with 5-fold cross-validation to improve validation reliability. Experiment 1 (non-augmented) achieved 91.7% accuracy at a classification threshold of 0.8, correctly identifying 4 of 5 infected cases; Experiment 2 achieved 87.5% accuracy at a moderate threshold of 0.5 but became more conservative at higher thresholds; and Experiment 3 reached 79.6% accuracy at a threshold of 0.3, detecting all 11 infected cases despite signs of overfitting. These results highlight the model's ability to minimize false negatives, particularly in the non-augmented setting, but also reflect limitations arising from the small dataset, class imbalance, and reliance on a small validation set. These factors suggest the results should be interpreted cautiously and motivate further study with larger datasets, stronger regularization, and more varied clinical scenarios.
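For readers unfamiliar with the shared-encoder setup described above, the sketch below illustrates one way such a multi-task U-Net can be organized in PyTorch: a single encoder feeds both a decoder that reconstructs the input image and a classification head that predicts an infection logit, trained with a joint loss that includes positive class weighting. This is a minimal illustrative sketch, not the thesis implementation; the layer widths, input size, head design, and pos_weight value are assumptions.

```python
# Minimal sketch of a shared-encoder multi-task U-Net.
# Assumptions: layer widths, input resolution, and loss weights are illustrative.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the standard U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MultiTaskUNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        # Decoder branch for image reconstruction
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.recon_head = nn.Conv2d(32, 3, 1)
        # Classification head on the shared encoder bottleneck
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1)
        )

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))                 # shared bottleneck features
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        recon = torch.sigmoid(self.recon_head(d1))    # reconstructed image
        logit = self.cls_head(e3)                     # infection logit
        return recon, logit

# Joint loss: reconstruction MSE plus weighted binary cross-entropy, where
# pos_weight up-weights the minority (infected) class (value is illustrative).
model = MultiTaskUNet()
x = torch.rand(2, 3, 128, 128)
y = torch.tensor([[1.0], [0.0]])
recon, logit = model(x)
bce = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([4.0]))
loss = nn.MSELoss()(recon, x) + bce(logit, y)
prob = torch.sigmoid(logit)   # compared against a decision threshold, e.g. 0.5 or 0.8
```

Raising the decision threshold applied to `prob` makes the classifier more conservative about predicting infection, which is the trade-off explored across the three experiments.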
Event Host: Sebastian Osorio, M.S. Candidate, Scientific Computing & Applied Mathematics
Advisor: Marcella Gomez