New Publication in Applied Food Research [19.11.25]
The publication "Poultry perfection — Comparison of computer vision models to detect and classify poultry products in a production setting" by Daniel Einsiedel (Department of Food Informatics, University of Hohenheim), with co-authors Marco Vita and Dana Jox (both Department of Food Informatics, University of Hohenheim), Bertus Dunnewind and Johan Meulendijks (both Marel Further Processing B.V.), and Christian Krupitzer (Department of Food Informatics, University of Hohenheim), was published in Elsevier's Applied Food Research (Impact Factor 6.2).
This study explores the use of computer vision, specifically object detection, for quality control in ready-to-eat meat products. We focused on a single process step, labeling products as “good” or “imperfect”. An “imperfect” product is one that deviates from the norm in shape, size, or color (e.g., a hole, missing edges, or dark particles). Imperfect does not mean the product is inedible or a risk to food safety, but it does affect overall product quality. Several object detectors from the YOLO family, including the recent YOLO12, were compared using the mAP50-95 metric. Most models achieved mAP scores above 0.9, with YOLO12 reaching a peak score of 0.9359. The precision and recall curves indicated that the models learned the “imperfect product” class better, most likely due to its higher representation. This underscores the importance of a balanced dataset, which is challenging to achieve in real-world settings. The confusion matrix revealed false positives, suggesting that a larger dataset or hyperparameter tuning could help. However, increasing the dataset volume is usually the more difficult path, since data acquisition and especially labeling are by far the most time-consuming steps of the whole process.
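To illustrate what such an evaluation can look like in practice, the following is a minimal sketch using the ultralytics package, which implements the YOLO model family and reports mAP50-95 during validation. The checkpoint name, dataset file, and training settings are placeholders for illustration and are not the configuration used in the study.

```python
from ultralytics import YOLO

# Load a pretrained YOLO12 nano checkpoint (illustrative choice; the study's
# exact model variant and weights are not specified here).
model = YOLO("yolo12n.pt")

# Train on a hypothetical two-class dataset ("good" vs. "imperfect");
# "poultry.yaml" and the settings below are placeholders.
model.train(data="poultry.yaml", epochs=100, imgsz=640)

# Validate on the held-out split and read the COCO-style detection metrics.
metrics = model.val()
print(f"mAP50-95: {metrics.box.map:.4f}")    # mean AP over IoU 0.50-0.95
print(f"mAP50:    {metrics.box.map50:.4f}")  # AP at IoU threshold 0.50
```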
Overall, current models can be applied to quality control tasks with some margin of error. Our experiments show that high-quality, consistently labeled datasets are potentially more important than the choice of model for achieving good results. In this case, hyperparameter tuning of the YOLO12 model did not outperform the default configuration. Future work could involve training models on a multi-class dataset combined with hyperparameter optimization. A multi-class dataset could contain more specific classes than just “good” and “imperfect,” enabling trained models to predict specific quality deviations.
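For readers curious how such a tuning experiment could be set up in spirit, the ultralytics package ships a built-in hyperparameter tuner that can be invoked as sketched below. The dataset path, search budget, and per-candidate training length are assumptions for illustration, not the settings reported in the paper.

```python
from ultralytics import YOLO

# Evolutionary hyperparameter search over training settings such as learning
# rate and augmentation strength, using short training runs per candidate.
model = YOLO("yolo12n.pt")
model.tune(
    data="poultry.yaml",  # hypothetical dataset definition (two or more classes)
    epochs=30,            # illustrative: epochs per tuning candidate
    iterations=100,       # illustrative: number of candidates to evaluate
    plots=False,
    save=False,
    val=False,
)
```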
The publication is available at https://doi.org/10.1016/j.afres.2025.101528.

