EMONET: A PARALLEL CNN ARCHITECTURE ACHIEVING STATE-OF-THE-ART ACCURACY IN TEXT-BASED EMOTION RECOGNITION

Authors

  • Usman Masood, Arfan Jaffar, Fawad Nasim

DOI:

https://doi.org/10.63878/cjssr.v3i4.1620

Abstract

Text-based emotion recognition is a key problem in natural language processing, with wide-ranging applications in human-computer interaction, mental health diagnostics, and social media analysis. This paper presents EmoNet, a novel convolutional neural network architecture designed specifically for classifying emotions in text. Evaluated on the benchmark Emotion dataset, our model achieves a state-of-the-art accuracy of 91.05%. The proposed architecture employs multiple parallel convolutional filters with varying kernel sizes (3, 4, and 5) to capture n-gram features at different granularities, together with selective dropout regularization and carefully tuned hyperparameters that ensure stable training on Intel CPU resources. Through a series of experiments, we demonstrate the effectiveness of our method at the subtle task of distinguishing six basic emotions: sadness, joy, love, anger, fear, and surprise. Notably, our model achieves outstanding results on sadness (95.01%), joy (94.53%), and fear (92.86%), but encounters difficulty with surprise (51.52%). The study contributes a computationally efficient architecture that maintains high performance without requiring GPU acceleration, making emotion recognition more accessible for practical deployment. Our work sets a new benchmark in text-based emotion classification and offers insights into the architectural considerations required for robust emotion recognition in computational systems.
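The parallel-filter design described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: only the kernel sizes (3, 4, 5) come from the abstract, while the filter count, embedding dimension, and sequence length below are hypothetical placeholders. Each branch applies a 1D convolution over the token embeddings, a ReLU activation, and global max-pooling; the pooled outputs are concatenated into a single feature vector that a classifier head would consume.

```python
import numpy as np

def conv1d_valid(x, kernel):
    """1D 'valid' convolution over a sequence of embeddings.

    x:      (seq_len, embed_dim) token-embedding matrix
    kernel: (k, embed_dim, n_filters) weights for one branch
    Returns (seq_len - k + 1, n_filters) ReLU-activated feature maps.
    """
    k, d, f = kernel.shape
    out_len = x.shape[0] - k + 1
    out = np.zeros((out_len, f))
    for i in range(out_len):
        window = x[i:i + k]  # (k, embed_dim) slice of consecutive tokens
        # Contract the window against every filter at once.
        out[i] = np.tensordot(window, kernel, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)  # ReLU

def parallel_cnn_features(x, kernels):
    """Run each parallel branch, global-max-pool it, and concatenate."""
    pooled = [conv1d_valid(x, k).max(axis=0) for k in kernels]
    return np.concatenate(pooled)

# Hypothetical dimensions for illustration only.
rng = np.random.default_rng(0)
seq_len, embed_dim, n_filters = 32, 50, 64
x = rng.normal(size=(seq_len, embed_dim))
kernels = [rng.normal(size=(k, embed_dim, n_filters)) for k in (3, 4, 5)]

features = parallel_cnn_features(x, kernels)
print(features.shape)  # (192,): 3 branches x 64 filters each
```

In a full model, `features` would feed a dropout layer and a dense softmax over the six emotion classes; a framework such as Keras or PyTorch would replace these hand-rolled loops with trainable `Conv1D` layers.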

Published

2025-12-09

How to Cite

EMONET: A PARALLEL CNN ARCHITECTURE ACHIEVING STATE-OF-THE-ART ACCURACY IN TEXT-BASED EMOTION RECOGNITION. (2025). Contemporary Journal of Social Science Review, 3(4), 1321-1332. https://doi.org/10.63878/cjssr.v3i4.1620