Emotion Recognition Using Speech and Facial Expression

Prof. V.G. Bharane
Vishal Dhotre
Rohit Kale
Omkar Kokate
Rohan Narwade

Abstract

This paper presents a multimodal emotion recognition system designed for next-generation personal assistant technologies, integrating both speech signals and facial expressions to enable affective computing capabilities. The proposed framework addresses limitations of conventional unimodal approaches by combining auditory and visual cues to create a more robust and accurate emotion detection system.


The system processes speech signals by extracting comprehensive acoustic features including prosodic characteristics, spectral properties, and Mel-Frequency Cepstral Coefficients (MFCCs), while simultaneously analyzing facial images through deep learning architectures to capture spatial patterns and micro-expressions. These multimodal features are fused at the feature level to create a comprehensive emotional representation, enabling the recognition of key emotional states including happiness, sadness, anger, surprise, fear, and neutrality.
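As an illustration of the feature extraction and feature-level fusion described above, the sketch below computes MFCC and prosodic statistics from an utterance and concatenates them with a facial feature vector. The use of librosa, the fixed sample rate, and the specific feature statistics are assumptions made for illustration and are not details reported in the paper.

# Illustrative sketch (not the authors' exact pipeline): MFCC-based speech
# features plus a prosodic pitch cue, fused with a facial feature vector by
# concatenation at the feature level.
import numpy as np
import librosa

def extract_speech_features(wav_path, n_mfcc=13):
    """Load an utterance and return a fixed-length acoustic feature vector."""
    signal, sr = librosa.load(wav_path, sr=16000)
    # MFCCs summarize the spectral envelope; mean/std over time gives a fixed length.
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    # Prosodic cue: fundamental-frequency estimate (pYIN), ignoring unvoiced frames.
    f0, _, _ = librosa.pyin(signal, fmin=65, fmax=400, sr=sr)
    voiced = f0[~np.isnan(f0)]
    if voiced.size == 0:
        voiced = np.zeros(1)
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),       # spectral statistics
        [voiced.mean(), voiced.std()],             # prosodic statistics
    ])

def fuse_features(speech_vec, face_vec):
    """Feature-level fusion: concatenate the two modality vectors."""
    return np.concatenate([speech_vec, face_vec])

The facial feature vector would typically come from the deep network mentioned above (for example, the output of a convolutional backbone applied to the face image); here it is simply assumed to be available as a NumPy array.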


Implemented using Python with deep learning frameworks, the system demonstrates significant improvement in recognition accuracy compared to single-modality approaches, particularly in challenging real-world conditions where one modality may be compromised. The architecture supports real-time processing and maintains flexibility for integration with broader AI assistant ecosystems.
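For concreteness, a minimal sketch of such a Python implementation is given below, using TensorFlow/Keras as one possible deep learning framework. The layer sizes, dropout rate, and fused feature dimension are illustrative assumptions, not the architecture reported in this work.

# Minimal sketch of a fused-feature emotion classifier in Keras; all
# hyperparameters below are assumptions for illustration.
import tensorflow as tf

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "fear", "neutral"]

def build_fusion_classifier(input_dim):
    """Dense network mapping a fused speech+face feature vector to six emotions."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.3),               # regularization
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(len(EMOTIONS), activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example usage with an illustrative fused feature dimension.
model = build_fusion_classifier(input_dim=540)
model.summary()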


This research contributes to the field of human-computer interaction by providing an effective, multimodal solution for emotion recognition that enhances contextual understanding in personal assistant systems, paving the way for more natural, empathetic, and responsive human-machine interactions.

Article Details

How to Cite
Bharane, P. V., Dhotre, V., Kale, R., Kokate, O., & Narwade, R. (2025). Emotion Recognition Using Speech and Facial Expression. International Journal on Advanced Computer Engineering and Communication Technology, 14(1), 727–730. https://doi.org/10.65521/ijacect.v14i1.813
Section
Articles
