Abstract: In recent years, emotion-based music recommendation systems have gained considerable attention for their potential to enhance user experience by personalizing music selection according to the user's emotional state. This project develops an Emotion Echoes system using deep learning and computer vision techniques. The system employs a convolutional neural network (CNN) to detect seven emotion categories from facial expressions captured through a webcam. By analyzing sequences of video frames in real time, the system interprets the user's emotional state and recommends music that aligns with the detected emotion. To capture the user's emotional dynamics rather than a single momentary expression, Emotion Echoes continuously analyzes sequences of 50 consecutive frames. The system's architecture comprises three stages: capturing real-time video frames, preprocessing them for feature extraction, and feeding them into the trained CNN model for emotion classification.
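The three-stage pipeline described above (frame capture, preprocessing, CNN classification over 50-frame windows) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `mock_predict` is a stand-in for the trained CNN, the brightness-based rule inside it is purely hypothetical, and the playlist names are invented for demonstration. In practice the frames would come from a webcam (e.g. via OpenCV) and the classifier would be the trained model.

```python
import numpy as np
from collections import Counter

# Seven emotion categories, as stated in the abstract
EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

# Hypothetical emotion-to-playlist mapping for illustration only
PLAYLISTS = {"happy": "Upbeat Pop", "sad": "Mellow Acoustic", "neutral": "Lo-fi Focus"}

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Stage 2: convert an HxWx3 uint8 frame to normalized grayscale in [0, 1]."""
    return frame.mean(axis=2) / 255.0

def mock_predict(gray: np.ndarray) -> str:
    """Stand-in for the trained CNN (stage 3).
    The brightness threshold below is a placeholder, not a real classifier."""
    return "happy" if gray.mean() > 0.5 else "sad"

def recommend(frames: list) -> tuple:
    """Aggregate per-frame predictions over a 50-frame window by majority vote,
    then map the dominant emotion to a playlist."""
    labels = [mock_predict(preprocess(f)) for f in frames]
    dominant, _ = Counter(labels).most_common(1)[0]
    return dominant, PLAYLISTS.get(dominant)

# Simulated input: 50 bright frames standing in for webcam capture (stage 1)
frames = [np.full((48, 48, 3), 200, dtype=np.uint8) for _ in range(50)]
emotion, playlist = recommend(frames)
print(emotion, playlist)  # -> happy Upbeat Pop
```

Aggregating over 50 frames by majority vote smooths out transient misclassifications from single frames, which is one plausible reading of the "emotional dynamics" goal stated above.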

Keywords:
● Emotion-Based Music Recommendation
● Deep Learning
● Computer Vision
● Convolutional Neural Network (CNN)
● Real-Time Processing
● Facial Emotion Recognition
● Personalized Music Experience


DOI: 10.17148/IARJSET.2024.11738