Contextual Analysis of Immoral Social Media Posts Using Self-attention-based Transformer Model
DOI: https://doi.org/10.59461/ijdiic.v3i4.146
Keywords: Immoral content, Contextual Analysis, Social media, NLP, Transformer model
Abstract
Detecting immoral posts on social media is a serious issue in this digital era. The task requires advanced natural language processing (NLP) methods to address the complex semantics and context of user-generated text. Incorporating advanced deep learning (DL) techniques improves a model's ability to handle challenges such as slang, sarcasm, and ambiguous expressions. This work proposes a deep contextual analysis framework that uses a self-attention-based transformer model to detect immoral content on social networks efficiently. By harnessing the strength of self-attention mechanisms, the model captures complex contextual associations and semantic nuances, enabling reliable differentiation between moral and immoral content. The framework is evaluated on two benchmark datasets, SARC and HatEval. The model achieves an F1-score of 98.10% on SARC and 97.34% on HatEval, outperforming state-of-the-art approaches. These results highlight the effectiveness of self-attention-based DL models in delivering efficient, scalable, and ethical solutions for monitoring and moderating harmful content on social media networks.
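To illustrate the general idea of a self-attention-based classifier for moral/immoral post detection, the sketch below shows a minimal transformer encoder over token embeddings with a pooled classification head. This is not the authors' implementation; the vocabulary size, embedding dimension, layer counts, and the `ImmoralPostClassifier` name are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's exact model):
# a self-attention transformer encoder for binary moral/immoral classification.
import torch
import torch.nn as nn

class ImmoralPostClassifier(nn.Module):
    def __init__(self, vocab_size=30000, d_model=128, nhead=4,
                 num_layers=2, max_len=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        # Learned positional embeddings (illustrative choice)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           dim_feedforward=256,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, token_ids, pad_mask=None):
        # token_ids: (batch, seq_len); pad_mask: True at padded positions
        x = self.embed(token_ids) + self.pos[:, :token_ids.size(1), :]
        h = self.encoder(x, src_key_padding_mask=pad_mask)
        pooled = h.mean(dim=1)           # mean-pool contextual token representations
        return self.classifier(pooled)   # logits over {moral, immoral}

# Toy usage with random token ids
model = ImmoralPostClassifier()
tokens = torch.randint(1, 30000, (4, 64))   # batch of 4 posts, 64 tokens each
logits = model(tokens)
print(logits.shape)                          # torch.Size([4, 2])
```

In practice, the pooled self-attention representations would be trained on labeled datasets such as SARC and HatEval, with tokenization and class balancing handled in a preprocessing pipeline.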
License
Copyright (c) 2024 Bibi Saqia, Khairullah Khan, Atta Ur Rahman, Wahab Khan
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.