
A novel hybrid deep learning IChOA-CNN-LSTM model for modality-enriched and multilingual emotion recognition in social media.

  • Additional information
    • Source:
      Publisher: Nature Publishing Group; Country of Publication: England; NLM ID: 101563288; Publication Model: Electronic; Cited Medium: Internet; ISSN: 2045-2322 (Electronic); Linking ISSN: 20452322; NLM ISO Abbreviation: Sci Rep; Subsets: MEDLINE
    • Publication data:
      Original Publication: London : Nature Publishing Group, copyright 2011-
    • Subject:
    • Abstract:
      In the rapidly evolving field of artificial intelligence, the importance of multimodal sentiment analysis has never been more evident, especially amid the ongoing COVID-19 pandemic. Our research addresses the critical need to understand public sentiment across various dimensions of this crisis by integrating data from multiple modalities, such as text, images, audio, and videos sourced from platforms like Twitter. Conventional methods, which primarily focus on text analysis, often fall short in capturing the nuanced intricacies of emotional states, necessitating a more comprehensive approach. To tackle this challenge, our proposed framework introduces a novel hybrid model, IChOA-CNN-LSTM, which leverages Convolutional Neural Networks (CNNs) for precise image feature extraction, Long Short-Term Memory (LSTM) networks for sequential data analysis, and an Improved Chimp Optimization Algorithm for effective feature fusion. Remarkably, our model achieves an impressive accuracy rate of 97.8%, outperforming existing approaches in the field. Additionally, by integrating the GeoCoV19 dataset, we facilitate a comprehensive analysis that spans linguistic and geographical boundaries, enriching our understanding of global pandemic discourse and providing critical insights for informed decision-making in public health crises. Through this holistic approach and innovative techniques, our research significantly advances multimodal sentiment analysis, offering a robust framework for deciphering the complex interplay of emotions during unprecedented global challenges like the COVID-19 pandemic.
      (© 2024. The Author(s).)
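      The fusion pipeline the abstract describes (CNN-derived image features and LSTM-derived sequence features combined with weights chosen by an optimizer) can be sketched in outline. This is a minimal illustration, not the authors' code: the CNN and LSTM feature extractors are replaced by toy vectors, and the Improved Chimp Optimization Algorithm (IChOA) is stood in for by a simple random-search placeholder with the same role of selecting fusion weights.

```python
import random

def fuse(features_per_modality, weights):
    """Weighted late fusion: combine per-modality feature vectors."""
    dim = len(features_per_modality[0])
    fused = [0.0] * dim
    for feats, w in zip(features_per_modality, weights):
        for i, x in enumerate(feats):
            fused[i] += w * x
    return fused

def search_fusion_weights(features, score_fn, iters=200, seed=0):
    """Placeholder for the IChOA step: pick fusion weights that
    maximize a downstream score (e.g. validation accuracy)."""
    rng = random.Random(seed)
    n = len(features)
    best_w, best_s = None, float("-inf")
    for _ in range(iters):
        w = [rng.random() for _ in range(n)]
        total = sum(w)
        w = [x / total for x in w]  # normalise to a convex combination
        s = score_fn(fuse(features, w))
        if s > best_s:
            best_w, best_s = w, s
    return best_w

# Toy stand-ins for extracted features: CNN (image) and LSTM (text) outputs.
img_feats = [0.9, 0.1]
txt_feats = [0.2, 0.8]
# Toy score favouring the first fused component (stands in for task accuracy).
score = lambda v: v[0]
w = search_fusion_weights([img_feats, txt_feats], score)
fused = fuse([img_feats, txt_feats], w)
```

      In the actual model the score function would be the classifier's performance on held-out data, and IChOA would explore the weight space far more efficiently than random search; the sketch only shows the structure of optimizer-driven feature fusion.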
    • Contributed Indexing:
      Keywords: COVID-19; Multilingual sentiment analysis; Multimodal sentiment analysis; Proposed IChOA-CNN-LSTM; Twitter
    • Subject:
      Date Created: 20240927 Date Completed: 20240927 Latest Revision: 20241002
    • Identifier:
      PMC11436932
    • Identifier:
      10.1038/s41598-024-73452-2
    • Identifier:
      39333289