Abstract: Self-supervised learning is a method that learns data representations from unlabeled data. It is efficient because it can exploit large-scale unlabeled data, and through continued research it has reached performance comparable to supervised learning. Contrastive learning, a type of self-supervised learning algorithm, uses similarity between samples to perform instance-level learning in an embedding space. However, it suffers from false negatives: samples of the same class that are incorrectly treated as negatives while the representation is being trained. False negatives discard useful information and degrade model performance. This study uses cosine similarity and temperature jointly to identify false negatives and mitigate their impact, improving the performance of the contrastive learning model. The proposed method achieved a performance improvement of up to 2.7% over the existing algorithm on the CIFAR-100 dataset, and improvements were also observed on other datasets such as CIFAR-10 and ImageNet.
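To make the setting concrete, the sketch below shows a standard NT-Xent-style contrastive loss (cosine similarity scaled by a temperature) augmented with a simple false-negative mask. The masking rule, the threshold `fn_threshold`, and all function names are illustrative assumptions for exposition, not the paper's actual method:

```python
import numpy as np

def nt_xent_with_fn_mask(z, temperature=0.5, fn_threshold=0.9):
    """NT-Xent contrastive loss with a hypothetical false-negative mask.

    z: array of shape (2N, d) holding two augmented views of N samples,
       where rows i and i+N form the positive pair.
    The fn_threshold rule below is an illustrative assumption: negatives
    whose raw cosine similarity exceeds it are treated as likely
    false negatives and dropped from the denominator.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    raw_cos = z @ z.T                                  # pairwise cosine similarity
    sim = raw_cos / temperature                        # temperature scaling
    n2 = z.shape[0]
    n = n2 // 2
    # Index of each row's positive partner (i <-> i+N).
    pos = np.concatenate([np.arange(n, n2), np.arange(0, n)])
    mask_self = np.eye(n2, dtype=bool)                 # exclude self-similarity
    # Candidate false negatives: very similar non-self, non-positive pairs.
    fn = (raw_cos > fn_threshold) & ~mask_self
    fn[np.arange(n2), pos] = False                     # never mask the true positive
    logits = np.where(mask_self | fn, -np.inf, sim)    # exp(-inf) = 0 in the sum
    log_denom = np.log(np.exp(logits).sum(axis=1))
    loss = -(sim[np.arange(n2), pos] - log_denom)      # -log softmax of the positive
    return loss.mean()
```

Because the positive pair always remains in the denominator, the loss is non-negative; masking high-similarity negatives only removes terms that would otherwise penalize pulling same-class samples together.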