Social Media Toxicity Classification Using Deep Learning: Real-World Application UK Brexit / Fan Hong, Du Wu, A. Dahou [et al.]

Set level: Electronics
Alternative authors (persons): Fan Hong; Du Wu; Dahou, A. (Abdelghani); Ewees, A. A. (Ahmed); Yousri, D. (Dalia); Mokhamed Elsaed, A. M. (Akhmed Mokhamed), Specialist in the field of informatics and computer technology, Professor of Tomsk Polytechnic University, 1987- ; Elsheikh, A. H. (Ammar); Abualigah, L. (Lait); Al-qaness, Mohammed A. A.
Corporate author (secondary): National Research Tomsk Polytechnic University, Engineering School of Information Technology and Robotics, Division of Information Technology
Language: English
Subjects: works of TPU scientists | electronic resource | toxic | social media | brexit | Twitter | BERT | sentiment analysis | social media | social networks | deep learning

Title screen

[References: 73 titles]

Social media has become an essential facet of modern society, wherein people share their opinions on a wide variety of topics. Social media is quickly becoming indispensable for a majority of people, and many cases of social media addiction have been documented. Social media platforms such as Twitter have demonstrated over the years the value they provide, such as connecting people from all over the world with different backgrounds. However, they have also shown harmful side effects that can have serious consequences. One such harmful side effect of social media is the immense toxicity that can be found in various discussions. The word toxic has become synonymous with online hate speech, internet trolling, and sometimes outrage culture. In this study, we build an efficient model to detect and classify toxicity in social media from user-generated content using Bidirectional Encoder Representations from Transformers (BERT). The pre-trained BERT model and three of its variants have been fine-tuned on a well-known labeled toxic comment dataset, the public Kaggle Toxic Comment Classification Challenge dataset. Moreover, we test the proposed models on two datasets collected from Twitter during two different periods to detect toxicity in user-generated content (tweets) gathered via hashtags related to the UK Brexit. The results showed that the proposed model can efficiently classify and analyze toxic tweets.
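The abstract describes fine-tuning pre-trained BERT on the Kaggle Toxic Comment Classification Challenge data and then scoring Brexit-related tweets. As an illustration only, the sketch below shows one common way to set up such a multi-label fine-tuning run with the Hugging Face transformers library; the model checkpoint, file path, hyperparameters, and column names are assumptions and are not taken from the paper.

```python
# Minimal sketch (not the authors' released code): fine-tuning a pre-trained
# BERT model for multi-label toxicity classification with Hugging Face
# transformers. The file path and column names follow the Kaggle Toxic Comment
# Classification Challenge layout and are assumptions.
import pandas as pd
import torch
from torch.utils.data import Dataset
from transformers import (BertForSequenceClassification, BertTokenizerFast,
                          Trainer, TrainingArguments)

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

class ToxicCommentDataset(Dataset):
    """Tokenizes comments and pairs them with their six toxicity labels."""
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.encodings = tokenizer(texts, truncation=True,
                                   padding="max_length", max_length=max_len)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        # Float targets are required for the multi-label (BCE) loss.
        item["labels"] = torch.tensor(self.labels[idx], dtype=torch.float)
        return item

df = pd.read_csv("train.csv")  # Kaggle training split (assumed local path)
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
train_ds = ToxicCommentDataset(df["comment_text"].tolist(),
                               df[LABELS].values.tolist(), tokenizer)

# One sigmoid output per toxicity category (multi-label head).
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS),
    problem_type="multi_label_classification")

args = TrainingArguments(output_dir="bert-toxic", num_train_epochs=2,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_ds).train()

# Scoring a tweet: a probability above 0.5 flags that category as present.
model.eval()
tweet = "Example Brexit-related tweet text"
inputs = tokenizer(tweet, return_tensors="pt", truncation=True).to(model.device)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]
print({label: round(float(p), 3) for label, p in zip(LABELS, probs)})
```

The same scoring step would be applied unchanged to the other fine-tuned BERT variants mentioned in the abstract by swapping the checkpoint name.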
