Abstract: One of the key applications of Natural Language Processing (NLP) is the automatic extraction of topics from large volumes of text. The Latent Dirichlet Allocation (LDA) technique is commonly used to extract topics, based on word frequency, from pre-processed documents. A major issue with LDA is that the quality of the extracted topics is poor when a document does not coherently discuss a single topic. Gibbs sampling, by contrast, operates on a word-by-word basis, changing the topic assignment of one word at a time, and can therefore be applied to documents that span multiple topics. Hence, this paper proposes a hybrid semantic-similarity-based approach to topic modelling that combines LDA with Gibbs sampling to exploit the strengths of automatic text extraction and improve the coherence score. An unstructured dataset obtained from a public repository was used to validate the performance of the proposed model. The evaluation shows that the proposed LDA-Gibbs achieved a coherence score of 0.52650, compared with a coherence score of 0.46504 for standard LDA. The proposed multi-level model thus provides better-quality extracted topics.
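The word-by-word topic reassignment described above is the core of collapsed Gibbs sampling for LDA: each token's topic is removed from the count tables, a new topic is drawn from the full conditional distribution, and the counts are updated. The sketch below is a minimal, self-contained illustration on toy data; the function name, hyperparameters, and documents are illustrative assumptions, not the paper's actual implementation or dataset.

```python
import random

def gibbs_lda(docs, K, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Toy collapsed Gibbs sampler for LDA (illustrative sketch, not the paper's code).

    Resamples the topic of one word at a time from its full conditional,
    which is what lets the sampler handle documents mixing several topics.
    """
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    widx = {w: i for i, w in enumerate(vocab)}
    # Random initial topic assignment for every token.
    z = [[rng.randrange(K) for _ in d] for d in docs]
    ndk = [[0] * K for _ in docs]        # document-topic counts
    nkw = [[0] * V for _ in range(K)]    # topic-word counts
    nk = [0] * K                         # tokens per topic
    for di, d in enumerate(docs):
        for wi, w in enumerate(d):
            k = z[di][wi]
            ndk[di][k] += 1
            nkw[k][widx[w]] += 1
            nk[k] += 1
    for _ in range(iters):
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                k, v = z[di][wi], widx[w]
                # Remove this token's current assignment from the counts.
                ndk[di][k] -= 1
                nkw[k][v] -= 1
                nk[k] -= 1
                # Full conditional p(z = t | everything else), up to a constant.
                weights = [
                    (ndk[di][t] + alpha) * (nkw[t][v] + beta) / (nk[t] + V * beta)
                    for t in range(K)
                ]
                k = rng.choices(range(K), weights=weights)[0]
                z[di][wi] = k
                ndk[di][k] += 1
                nkw[k][v] += 1
                nk[k] += 1
    # Return the top three words per topic as a rough topic summary.
    return [
        [vocab[i] for i in sorted(range(V), key=lambda i: -nkw[t][i])[:3]]
        for t in range(K)
    ]

# Hypothetical toy corpus: two documents about fruit, two about football.
docs = [
    ["apple", "banana", "fruit", "apple"],
    ["goal", "match", "soccer", "goal"],
    ["fruit", "banana", "apple"],
    ["soccer", "match", "goal"],
]
print(gibbs_lda(docs, K=2))
```

With a fixed seed the run is deterministic; on a corpus this small the sampler typically separates the fruit and football vocabularies into the two topics. A real pipeline would add pre-processing, burn-in, and a coherence measure (such as the C_v score reported in the abstract) on the extracted topics.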