
Um método para classificação de opinião em vídeo combinando expressões faciais e gestos (A method for opinion classification in video combining facial expressions and gestures)

  • Additional Information
    • Contributors:
      Santos, Eulanda Miranda dos; Carvalho, José Reginaldo Hughes; Pio, José Luiz de Souza; Silva Junior, Waldir Sabino da
    • Publication Information:
      Universidade Federal do Amazonas, 2017.
    • Subject:
      2017
    • Abstract:
      A large number of people share their opinions through videos, generating a huge volume of data. This phenomenon has led companies to take a strong interest in extracting from videos the degree of sentiment involved in people's opinions, and it has also become a new trend in the field of sentiment analysis, with important challenges involved. Most research addressing this problem proposes solutions based on combining data from three different sources: video, audio, and text. These solutions are therefore complex and language-dependent, and they still achieve low performance. In this context, this work focuses on answering the following question: is it possible to develop an opinion classification method that uses only video as a data source and still achieves accuracy equal to or better than that of current methods that use more than one data source? In response to this question, this work presents a multimodal opinion classification method that combines facial-expression and body-gesture information extracted from online videos. The proposed method uses a feature-coding process to improve the data representation and thereby the classification task, predicting the opinion expressed by the user with high precision and independently of the language spoken in the videos. To test the proposed method, experiments were performed with three public datasets and three baselines. The results show that the proposed method is on average 16% better than the baselines in terms of accuracy and precision, even though it uses only video data while the baselines employ information from video, audio, and text. To verify whether the proposed method is portable and language-independent, it was trained on instances of a dataset whose opinions are expressed exclusively in English and tested on a dataset whose videos are exclusively in Spanish. The 82% accuracy achieved in this test indicates that the proposed method may be considered language-independent. (An illustrative sketch of such a pipeline follows this record.)
    • File Description:
      application/pdf
    • Rights:
      OPEN
    • Accession Number:
      edsair.od......3056..23cafda2b18fb7cc1825e7729e79983c
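
The abstract describes the pipeline only at a high level: per-frame facial-expression and body-gesture features are extracted from the video, passed through a feature-coding step to build a better representation, and then fed to a supervised opinion classifier. The sketch below is a minimal illustration of that general shape under stated assumptions, not the thesis' actual method: the mean/std pooling used as the "coding" step, the linear SVM, the feature dimensions, and all data are placeholder assumptions introduced here.

```python
# Minimal sketch of a video-only multimodal opinion classifier:
# per-frame facial-expression + body-gesture features -> video-level
# encoding (placeholder pooling) -> early fusion -> supervised classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def encode_video(face_frames: np.ndarray, gesture_frames: np.ndarray) -> np.ndarray:
    """Pool per-frame descriptors into one fixed-length vector per video.

    face_frames:    (n_frames, d_face) facial-expression descriptors
    gesture_frames: (n_frames, d_gesture) body-gesture descriptors
    Mean/std pooling stands in for the feature-coding step mentioned
    in the abstract; the real coding scheme is not specified here.
    """
    pooled = [face_frames.mean(axis=0), face_frames.std(axis=0),
              gesture_frames.mean(axis=0), gesture_frames.std(axis=0)]
    return np.concatenate(pooled)  # early fusion of both modalities

# Synthetic stand-in data: 200 videos of variable length, 2 opinion classes.
X, y = [], []
for label in (0, 1):
    for _ in range(100):
        n = rng.integers(30, 90)                     # frames in this video
        face = rng.normal(loc=label, size=(n, 17))   # e.g. AU intensities
        gest = rng.normal(loc=-label, size=(n, 10))  # e.g. pose statistics
        X.append(encode_video(face, gest))
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

The same structure would support the cross-lingual test described in the abstract: because only visual features enter the classifier, training videos in one language and test videos in another can be encoded and scored identically.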