Abstract: The rapid growth of multimedia content on digital platforms has created a need for efficient content moderation systems to prevent the spread of harmful material. This project presents an Automated Harmful Content Control and Blocking System that analyzes media before it is uploaded. Users upload images and videos through a web interface, and the content is processed by a Flask-based backend. For image analysis, OpenCV performs preprocessing steps such as grayscale conversion and pixel intensity evaluation, and a threshold-based method then determines whether the uploaded image contains potentially harmful content. If harmful content is detected, the system blocks the upload and notifies the user; otherwise, the file is stored successfully. The system also supports video uploads and provides a deletion feature for managing uploaded files. This approach enables real-time moderation, reduces dependency on manual monitoring, and enhances platform safety. Although the current implementation uses basic image processing techniques, it can be extended with advanced machine learning models for improved accuracy in future work.
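The abstract's detection pipeline (grayscale conversion, pixel intensity evaluation, threshold decision) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper names OpenCV (`cv2.cvtColor` with `cv2.COLOR_BGR2GRAY`) for the conversion, which applies the standard ITU-R BT.601 luminance weights reproduced below with NumPy; the specific decision rule (mean grayscale intensity compared against a cutoff) and the threshold value `60.0` are assumptions for illustration, since the paper does not publish them.

```python
import numpy as np

# Assumed cutoff for illustration; the paper does not publish its threshold.
HARMFUL_INTENSITY_THRESHOLD = 60.0

def to_grayscale(image: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to grayscale using the ITU-R BT.601
    weights, the same formula OpenCV's cv2.cvtColor applies."""
    weights = np.array([0.299, 0.587, 0.114])
    return image.astype(np.float64) @ weights

def is_harmful(image: np.ndarray) -> bool:
    """Threshold-based decision (assumed rule): flag an image whose mean
    grayscale pixel intensity falls below the cutoff."""
    return float(to_grayscale(image).mean()) < HARMFUL_INTENSITY_THRESHOLD

# Example: a uniformly bright image passes, a uniformly dark one is blocked.
bright = np.full((4, 4, 3), 200, dtype=np.uint8)
dark = np.full((4, 4, 3), 10, dtype=np.uint8)
```

In the described system, a Flask upload route would call a check like `is_harmful` on the decoded image and either reject the request with a notification or save the file.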
Keywords: Harmful Content Detection, Image Processing, OpenCV, Flask Content Moderation, Social Media Safety
DOI: 10.17148/IARJSET.2026.13511
[1] D. Tejaswi, K. Anusha, G. Vanaja, K. Deepa Sri Bhramaramba, "Automated Harmful Content Control and Blocking System for Social Media," International Advanced Research Journal in Science, Engineering and Technology (IARJSET), DOI: 10.17148/IARJSET.2026.13511.