Robust classification approach for segmentation of blood defects in cod fillets based on deep convolutional neural networks and support vector machines and calculation of gripper vectors for robotic processing
Peer reviewed, Journal article
Original version: Computers and Electronics in Agriculture. 2017, 139, 138-152. DOI: 10.1016/j.compag.2017.05.021
Despite advances in computer vision and segmentation techniques, the segmentation of food defects such as blood spots, which exhibit a high degree of randomness and biological variation in size and degree of coloration, has proven to be extremely challenging and has not been successfully resolved. Therefore, in this paper, we propose an approach for robust automated pixel-wise classification for the segmentation of blood spots, focusing specifically on challenging texture-uniform cod fish fillets. A multimodal vision system, described in this paper, produces perfectly aligned RGB and depth (D) images for the localization of segmented blood spots in 3D. Classification models for defective fillets were developed based on (1) Convolutional Neural Networks (CNN) and (2) Support Vector Machines (SVM). A colour-based, pixel-wise SVM model was developed for accurate segmentation and localisation of blood spots, achieving 96% overall accuracy when tested on whole-fillet images. In classifying normal versus defective fillets, a GPU (Graphical Processing Unit)-accelerated CNN model achieved 100% accuracy, versus 99% for the SVM-based model. We present a novel data augmentation approach that desensitizes the CNN to shape features and makes it focus more on colour. We show how pixel-wise classification is used for accurate localization of blood spots in 3D space and for the calculation of the resulting 3D gripper vectors, as input to robotic processing.
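The colour-based, pixel-wise SVM segmentation described in the abstract can be sketched roughly as follows. This is a minimal illustration using scikit-learn on synthetic colour data: the raw-RGB feature encoding, the RBF kernel, and the toy training pixels are assumptions for demonstration, not the authors' exact setup.

```python
# Hypothetical sketch of pixel-wise colour-based SVM segmentation:
# each pixel is classified independently from its colour vector,
# and the per-pixel predictions form a binary blood-spot mask.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic training pixels (assumed colours): dark-red "blood spot"
# pixels vs pale "normal fillet" pixels, as RGB triplets.
blood = rng.normal(loc=[120, 30, 30], scale=10, size=(200, 3))
fillet = rng.normal(loc=[220, 210, 200], scale=10, size=(200, 3))
X = np.vstack([blood, fillet])
y = np.array([1] * 200 + [0] * 200)  # 1 = blood spot, 0 = normal tissue

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# A toy 8x8 "fillet image" with a small dark-red patch at rows/cols 2-3.
image = rng.normal(loc=[220, 210, 200], scale=10, size=(8, 8, 3))
image[2:4, 2:4] = rng.normal(loc=[120, 30, 30], scale=10, size=(2, 2, 3))

# Pixel-wise classification: flatten to (N, 3), predict, reshape to a mask.
mask = clf.predict(image.reshape(-1, 3)).reshape(8, 8)
```

In the paper's pipeline, such a mask would then be combined with the aligned depth image to localize each segmented blood spot in 3D; the toy colours here merely illustrate the per-pixel decision.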