Abstract: When we want information or answers to questions about our college or campus, we search the web, but we do not always find what we are looking for. Likewise, when we discuss or share opinions on social platforms, such activities sometimes attract threats or harassment that discourage people from expressing themselves freely. Many social platforms therefore try to detect harassment or threats in conversations so that such conversations can be stopped before they cause further damage. Toxicity detection is one such methodology for identifying conversations that can be classified as toxic in nature. To classify such comments more efficiently, we can use machine learning algorithms to determine the toxicity of comments. This work aims to develop a platform that provides such a solution, which we have named "CampusQueries". In the proposed system, authenticated users can ask questions as well as answer those of others, and a machine learning model is used for toxic comment classification. A large number of labelled toxic comments were used to train a Bidirectional Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) model for this purpose, so that anyone can express their views without fear.
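To make the architecture named in the abstract concrete, the following is a minimal, illustrative sketch of how a bidirectional LSTM classifies a comment: the token sequence is read once forwards and once backwards, the two final hidden states are concatenated, and a sigmoid output layer scores each toxicity label independently (multi-label classification). All sizes, weights, and the six-label output are toy assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input, forget, output gates and candidate cell state."""
    H = h.shape[0]
    z = W @ x + U @ h + b                       # all four gates stacked: (4H,)
    i = 1 / (1 + np.exp(-z[:H]))                # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))             # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))           # output gate
    g = np.tanh(z[3*H:])                        # candidate cell state
    c = f * c + i * g                           # update cell state
    h = o * np.tanh(c)                          # new hidden state
    return h, c

def bilstm_encode(seq, params_fw, params_bw, hidden):
    """Read the sequence forwards and backwards; concatenate final states."""
    h_f = c_f = np.zeros(hidden)
    for x in seq:                               # forward pass
        h_f, c_f = lstm_step(x, h_f, c_f, *params_fw)
    h_b = c_b = np.zeros(hidden)
    for x in reversed(seq):                     # backward pass
        h_b, c_b = lstm_step(x, h_b, c_b, *params_bw)
    return np.concatenate([h_f, h_b])           # (2 * hidden,)

# Toy dimensions: 8-dim embeddings, 4 hidden units, 6 toxicity labels
# (e.g. toxic, severe toxic, obscene, threat, insult, identity hate).
rng = np.random.default_rng(0)
EMB, HID, CLASSES = 8, 4, 6
make_params = lambda: (rng.normal(0, 0.1, (4 * HID, EMB)),
                       rng.normal(0, 0.1, (4 * HID, HID)),
                       np.zeros(4 * HID))
seq = [rng.normal(size=EMB) for _ in range(5)]  # 5 embedded tokens
feat = bilstm_encode(seq, make_params(), make_params(), HID)
W_out = rng.normal(0, 0.1, (CLASSES, 2 * HID))
probs = 1 / (1 + np.exp(-(W_out @ feat)))       # sigmoid: multi-label scores
print(probs.shape)                              # (6,)
```

In practice such a model is trained with binary cross-entropy over the label vector, and the random embeddings above would be replaced by pretrained vectors such as GloVe, which the keywords suggest the system uses.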
Keywords: Bidirectional Long Short-Term Memory (LSTM), Recurrent Neural Network (RNN), Machine Learning, OTP, Authentication, Toxic comment classifier, GloVe, CNN.
DOI: 10.17148/IARJSET.2021.85102