Abstract: The rise of online shopping has transformed the fashion industry, yet customers often struggle to visualize how clothes will fit and look on their bodies. This project addresses that limitation by developing an AI-based Virtual Clothing Try-On System that lets users digitally try on garments from a single image of the person and a single image of the garment. The system leverages deep learning models and computer vision techniques to deliver a photo-realistic virtual dressing experience. Its core architecture is built around the Adaptive Content Generating and Preserving Network (ACGPN), which simulates the appearance of clothing on a user's body while preserving their pose, body structure, and facial features. Preprocessing tasks, namely human pose estimation and body-part segmentation, are performed with the OpenPose and SCHP (Self-Correction for Human Parsing) models, respectively; these components extract the body orientation and region mapping needed to align garments correctly. Developed in Python, the system is trained and evaluated on the VITON dataset. It takes a person image and a clothing image as input, processes them through a pipeline of AI models, and generates a realistic try-on output. This technology can be integrated into e-commerce platforms to reduce return rates, improve customer satisfaction, and offer a more interactive shopping experience. The project demonstrates the potential of AI in reshaping virtual fashion retail.

Keywords: Artificial intelligence, deep learning, ACGPN, pose estimation, human parsing
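
The following is a minimal Python sketch of the three-stage pipeline summarized in the abstract (pose estimation, human parsing, then ACGPN synthesis). The function names estimate_pose, parse_body, and acgpn_tryon are hypothetical placeholders, not real library APIs: OpenPose, SCHP, and ACGPN each ship their own inference scripts. The 256x192 working resolution follows the VITON dataset convention.

    from PIL import Image
    import numpy as np

    def estimate_pose(person: Image.Image) -> np.ndarray:
        """Placeholder for OpenPose inference: returns body keypoints
        (e.g. an 18x3 array of x, y, confidence per joint)."""
        raise NotImplementedError("run OpenPose on the person image")

    def parse_body(person: Image.Image) -> np.ndarray:
        """Placeholder for SCHP inference: returns a per-pixel label map
        of body parts and existing clothing regions."""
        raise NotImplementedError("run SCHP on the person image")

    def acgpn_tryon(person: Image.Image, cloth: Image.Image,
                    pose: np.ndarray, parse: np.ndarray) -> Image.Image:
        """Placeholder for the ACGPN generator: warps the target garment
        to the pose/parse layout and synthesizes the try-on image while
        preserving pose, body structure, and the face."""
        raise NotImplementedError("run ACGPN with the preprocessed inputs")

    def virtual_tryon(person_path: str, cloth_path: str) -> Image.Image:
        # PIL's resize takes (width, height); VITON images are 256x192 (HxW).
        person = Image.open(person_path).convert("RGB").resize((192, 256))
        cloth = Image.open(cloth_path).convert("RGB").resize((192, 256))

        pose = estimate_pose(person)   # body orientation (keypoints)
        parse = parse_body(person)     # region mapping (segmentation)

        return acgpn_tryon(person, cloth, pose, parse)

In an e-commerce integration, virtual_tryon would be called once per product the shopper selects, with the person image captured or uploaded a single time.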


DOI: 10.17148/IARJSET.2025.125263
