Abstract: Music has long been recognized as a powerful tool for influencing emotional states and enhancing psychological well-being. With the advent of artificial intelligence and computer vision, it is now possible to tailor music experiences dynamically based on a user's current mood. This paper presents a novel music recommendation system that leverages facial emotion recognition to make emotion-specific music suggestions. The system utilizes the CK+48 dataset, which comprises grayscale facial images classified into seven emotional states: anger, contempt, disgust, fear, happiness, sadness, and surprise. Two deep learning approaches were integrated: a Convolutional Neural Network (CNN) optimized for real-time webcam input and a ResNet-based transfer learning model for image uploads. The CNN model achieved an accuracy of 99.49%, whereas the ResNet model achieved 97.46%. Built with a Flask backend and a responsive web frontend, the system enables seamless emotion detection and music playback. The proposed solution offers a more empathetic and context-aware alternative to conventional music players by aligning the musical output with the user's emotions in real time.
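To make the real-time branch concrete, the following is a minimal sketch of a small CNN for 48x48 grayscale inputs with seven emotion classes, written in Keras/TensorFlow. The layer sizes, dropout rate, and optimizer shown here are illustrative assumptions, not the exact architecture or training configuration reported in the paper.

```python
# Minimal sketch of the real-time CNN branch, assuming a Keras/TensorFlow stack
# and 48x48 grayscale inputs from the CK+48 dataset (7 emotion classes).
# Layer sizes and hyperparameters are illustrative, not the authors' exact setup.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # anger, contempt, disgust, fear, happiness, sadness, surprise

def build_emotion_cnn(input_shape=(48, 48, 1), num_classes=NUM_CLASSES):
    """Small CNN for frame-by-frame facial emotion classification."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

if __name__ == "__main__":
    build_emotion_cnn().summary()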

Keywords: Facial Emotion Recognition, Music Recommendation System, Deep Learning, Convolutional Neural Network (CNN), Transfer Learning, Affective Computing.
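As a rough illustration of how the Flask backend could connect the classifier to music playback, the sketch below assumes a hypothetical /recommend endpoint, a saved model file named emotion_cnn.h5, and an example emotion-to-playlist mapping; none of these names or mappings are taken from the paper itself.

```python
# Hedged sketch of a Flask endpoint that maps a predicted emotion to a playlist.
# The endpoint name, model path, and playlist table are illustrative assumptions.
import numpy as np
from flask import Flask, request, jsonify
from tensorflow.keras.models import load_model
from PIL import Image

app = Flask(__name__)
model = load_model("emotion_cnn.h5")  # assumed path to the trained CNN

EMOTIONS = ["anger", "contempt", "disgust", "fear",
            "happiness", "sadness", "surprise"]
PLAYLISTS = {  # example emotion-to-music mapping, not the paper's
    "anger": "calming_playlist",
    "contempt": "neutral_playlist",
    "disgust": "uplifting_playlist",
    "fear": "soothing_playlist",
    "happiness": "upbeat_playlist",
    "sadness": "comforting_playlist",
    "surprise": "energetic_playlist",
}

@app.route("/recommend", methods=["POST"])
def recommend():
    # Expect an uploaded face image; convert to 48x48 grayscale as in CK+48.
    img = Image.open(request.files["image"].stream).convert("L").resize((48, 48))
    x = np.asarray(img, dtype="float32").reshape(1, 48, 48, 1) / 255.0
    probs = model.predict(x)[0]
    emotion = EMOTIONS[int(np.argmax(probs))]
    return jsonify({"emotion": emotion, "playlist": PLAYLISTS[emotion]})

if __name__ == "__main__":
    app.run(debug=True)
```

A client would POST a face image to /recommend and receive the detected emotion together with a playlist identifier that the web frontend can then play back.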



How to Cite:

[1] Chaithra P and Dr. Leena Giri G, "Intelligent Music Recommendation System Based on Facial Emotion Recognition," International Advanced Research Journal in Science, Engineering and Technology (IARJSET), 2025. DOI: 10.17148/IARJSET.2025.12804
