Abstract: Segmentation is integral to computer-aided surgery systems. Given the privacy concerns associated with medical data, collecting large amounts of annotated training data is challenging. Unsupervised techniques such as contrastive learning have shown powerful capabilities for learning image-level representations from unlabelled data. This study leverages classification labels to enhance the accuracy of a segmentation model trained on limited annotated data. The method uses a multi-scale projection head to extract image features at various scales, and improves the partitioning of positive sample pairs so that contrastive learning on the features at each scale better represents the differences between positive and negative samples. Furthermore, the model is trained jointly with both segmentation and classification labels, which enables it to extract features more effectively for each segmentation target class and further accelerates convergence. The method was validated on the publicly available CholecSeg8k dataset for comprehensive abdominal-cavity surgical segmentation. Compared to selected existing methods, the proposed approach significantly improves segmentation performance even with a small labelled subset (1–10%) of the dataset, achieving a superior intersection over union (IoU) score.
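The abstract's key idea, using classification labels to partition positive and negative pairs for contrastive learning, can be illustrated with a minimal label-supervised contrastive loss. This is a generic sketch in the style of supervised contrastive learning, not the paper's exact multi-scale formulation; the function name, temperature value, and toy features below are assumptions for illustration only.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Sketch of a label-supervised contrastive loss: samples that share
    a classification label are treated as positives for each anchor."""
    # L2-normalise features so dot products are cosine similarities
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature                       # pairwise similarity logits
    n = len(labels)
    logits_mask = np.ones((n, n)) - np.eye(n)         # exclude self-similarity
    # positives: same class label, excluding the anchor itself
    pos_mask = (labels[:, None] == labels[None, :]).astype(float) * logits_mask
    # log-softmax over all other samples in the batch
    exp_sim = np.exp(sim) * logits_mask
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    # mean log-likelihood of the positives for each anchor
    pos_count = pos_mask.sum(axis=1)
    mean_log_prob_pos = (pos_mask * log_prob).sum(axis=1) / np.maximum(pos_count, 1)
    # only anchors with at least one positive contribute to the loss
    valid = pos_count > 0
    return -mean_log_prob_pos[valid].mean()

# Toy example: two classes whose features cluster in embedding space.
labels = np.array([0, 0, 1, 1])
feats = np.array([[1.0, 0.05], [0.95, 0.1], [0.05, 1.0], [0.1, 0.9]])
loss_aligned = supervised_contrastive_loss(feats, labels)
loss_shuffled = supervised_contrastive_loss(feats, np.array([0, 1, 0, 1]))
```

With well-separated clusters, the loss is low when positives share a class label and high when the labels are mismatched, which is the signal the paper exploits to shape the multi-scale feature space.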