Emotion-Based Music Recommendation System
Abstract
The growing demand for personalized music recommendations has highlighted the importance of integrating emotional intelligence into music recommendation systems. Traditional systems rely primarily on content-based or collaborative filtering approaches, which fail to consider users' emotional states. This research aims to design and develop an emotion-based music recommendation system that matches music tracks to the user's current emotional state. The system uses natural language processing and machine learning techniques to analyze user input, such as text, voice, or facial expressions, to detect emotions. It then suggests music that aligns with the detected mood, providing a more tailored listening experience. A hybrid approach was used, employing convolutional neural networks for emotion detection from facial expressions and recurrent neural networks for processing textual or speech input. The emotion detection dataset was drawn from diverse, publicly available sources. The recommendation engine combined collaborative filtering with user emotion data to improve the accuracy of suggestions. Results show that the emotion-based system outperforms traditional algorithms in user satisfaction, with users reporting a stronger emotional connection to the recommended tracks. The system's ability to understand and adapt to emotional states enhances the music discovery experience. The proposed emotion-based system offers a personalized, emotionally intelligent alternative to traditional methods, with applications in music streaming, mental health support, and entertainment.
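To make the hybrid recommendation idea concrete, the sketch below shows one plausible way to blend a collaborative-filtering relevance score with an emotion-affinity score derived from a detector's output. This is a minimal illustration, not the authors' implementation: the emotion label set, the weighting parameter `alpha`, and the function and variable names are all illustrative assumptions.

```python
import numpy as np  # only used implicitly for clarity; plain Python suffices here

# Hypothetical emotion label set; the abstract does not enumerate the labels used.
EMOTIONS = ["happy", "sad", "angry", "calm"]

def blend_scores(cf_scores, track_emotions, user_emotion, alpha=0.6):
    """Blend collaborative-filtering scores with emotion affinity (illustrative).

    cf_scores      : dict track_id -> CF relevance score in [0, 1]
    track_emotions : dict track_id -> dict emotion -> probability for that track
    user_emotion   : dict emotion -> probability from the emotion detector
    alpha          : weight on the CF component (assumed value, not from the paper)
    """
    blended = {}
    for track, cf in cf_scores.items():
        # Emotion affinity: dot product of the track's and the user's emotion distributions.
        affinity = sum(
            track_emotions[track].get(e, 0.0) * user_emotion.get(e, 0.0)
            for e in EMOTIONS
        )
        blended[track] = alpha * cf + (1 - alpha) * affinity
    # Return track ids ranked by the blended score, highest first.
    return sorted(blended, key=blended.get, reverse=True)

# Example usage with made-up numbers: a "happy" user is steered toward the upbeat track.
cf = {"t1": 0.9, "t2": 0.7}
tracks = {"t1": {"happy": 0.1, "sad": 0.8}, "t2": {"happy": 0.9, "sad": 0.05}}
user = {"happy": 0.85, "sad": 0.1}
print(blend_scores(cf, tracks, user))  # -> ['t2', 't1']
```

In this sketch the emotion signal re-ranks items that collaborative filtering alone would order differently, which mirrors the abstract's claim that combining the two sources improves the relevance of suggestions; the actual fusion strategy used by the authors may differ.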