ABSTRACT: Numerous daily activities require us to prove our identity by presenting ID documents containing face images, such as passports and driver's licenses, to human operators. This process, however, is laborious, slow, and unreliable. An automated system that matches ID document photos to live face images (selfies) in real time with high accuracy is therefore needed. To this end, we propose DocFace+. We first show that gradient-based optimization methods converge slowly, owing to the way classifier weight parameters are updated, when many classes have only a few samples, a defining characteristic of existing ID-selfie datasets. To address this problem, we propose a method called dynamic weight imprinting (DWI), which updates the classifier weights directly, enabling faster convergence and more generalizable representations. A pair of sibling networks with partially shared parameters is then trained to learn a unified face representation with domain-specific parameters. Cross-validation on an ID-selfie dataset shows that DocFace+ significantly raises the true acceptance rate (TAR) to 95.95 ± 0.54%, whereas InsightFace, a publicly available general face matcher, achieves a TAR of only 88.78 ± 1.30% at a false acceptance rate (FAR) of 0.01% on the same problem.
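To make the dynamic weight imprinting idea concrete, the sketch below illustrates one plausible reading of it: instead of letting gradient descent slowly move each class's classifier weight vector, the weight is set directly to the normalized mean embedding of that class's samples in the current batch. This is a minimal sketch under our own assumptions, not the authors' released implementation; all function and variable names are hypothetical.

```python
import torch
import torch.nn.functional as F

def dynamic_weight_imprinting(weights, embeddings, labels):
    """Imprint classifier weights from features (illustrative sketch).

    weights:    (num_classes, dim) classifier weight matrix
    embeddings: (batch, dim) L2-normalized face embeddings
    labels:     (batch,) integer class labels
    """
    with torch.no_grad():
        for c in labels.unique():
            mask = labels == c
            # Replace the class weight with the unit-length mean
            # embedding of that class's samples in this batch.
            weights[c] = F.normalize(embeddings[mask].mean(dim=0), dim=0)
    return weights

if __name__ == "__main__":
    torch.manual_seed(0)
    W = F.normalize(torch.randn(10, 128), dim=1)   # 10 classes, 128-D features
    emb = F.normalize(torch.randn(4, 128), dim=1)  # batch of 4 embeddings
    y = torch.tensor([2, 2, 5, 7])
    W = dynamic_weight_imprinting(W, emb, y)
```

Because each weight is set from features rather than nudged by gradients, an underrepresented class obtains a meaningful prototype after appearing in a single batch, which is the intuition behind the faster convergence claimed above.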
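The abstract also describes a pair of sibling networks with partially shared parameters. It does not specify which layers are shared, so the sketch below shows one plausible arrangement, assuming domain-specific convolutional branches (one per image domain) feeding a shared embedding head; all module names and shapes are hypothetical.

```python
import torch.nn as nn
import torch.nn.functional as F

class SiblingNetworks(nn.Module):
    """Two domain-specific branches (ID photos vs. selfies) feeding a
    shared head, so the pair learns a unified face representation while
    keeping some parameters domain-specific (illustrative sketch)."""

    def __init__(self, dim=512):
        super().__init__()
        self.id_branch = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
        self.selfie_branch = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
        self.shared_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim)
        )

    def forward(self, id_img, selfie_img):
        # Each domain passes through its own branch, then the shared head;
        # embeddings are L2-normalized for cosine-similarity matching.
        f_id = F.normalize(self.shared_head(self.id_branch(id_img)), dim=1)
        f_selfie = F.normalize(self.shared_head(self.selfie_branch(selfie_img)), dim=1)
        return f_id, f_selfie
```

At verification time, an ID-selfie pair would be scored by the cosine similarity of the two returned embeddings.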

Keywords: Images, Face verification, Digital certificate.


DOI: 10.17148/IARJSET.2022.96142
