Face Liveness Detection under Processed Image Attacks
Face recognition is a mature and reliable technology for identifying people. Due
to high-definition cameras and supporting devices, it is considered the fastest and
the least intrusive biometric recognition modality. Nevertheless, effective spoofing
attempts on face recognition systems were found to be possible. As a result, various anti-spoofing algorithms were developed to counteract these attacks. They are
commonly referred to in the literature as liveness detection tests. In this research we highlight the effectiveness of some simple, direct spoofing attacks, and test one of the current robust liveness detection algorithms, namely the logistic-regression-based face liveness detection from a single image proposed by Tan et al. in 2010, against malicious attacks using processed imposter images. In particular, we study experimentally the effect of common image processing operations, such as sharpening and smoothing, as well as corruption with salt-and-pepper noise, on the face liveness detection algorithm, and we find that it is especially vulnerable to spoofing attempts using processed imposter images. We design and present a new facial database, the Durham Face Database, which is, to the best of our knowledge, the first to contain client, imposter, and processed imposter images. Finally, we evaluate our claim on the effectiveness of processed imposter image attacks using transfer learning on convolutional neural networks. We verify that such attacks are more difficult to detect even when using high-end, expensive machine learning techniques.
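The processed-imposter operations named above (sharpening, smoothing, and salt-and-pepper corruption) can be sketched in plain NumPy. The kernel choices and noise fraction below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 'same' 2D convolution with edge-replicated padding (small kernels)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def smooth(img):
    """3x3 box blur -- one possible smoothing operation."""
    return convolve2d(img, np.full((3, 3), 1.0 / 9.0))

def sharpen(img):
    """Basic Laplacian-style sharpening kernel (sums to 1)."""
    k = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)
    return np.clip(convolve2d(img, k), 0, 255)

def salt_and_pepper(img, amount=0.05, rng=None):
    """Corrupt a fraction `amount` of pixels with 0 (pepper) or 255 (salt)."""
    rng = np.random.default_rng(rng)
    noisy = img.astype(float).copy()
    mask = rng.random(img.shape) < amount
    noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))
    return noisy
```

An attacker would apply one of these operations to a recaptured imposter photograph before presenting it to the sensor; the study's point is that such cheap transformations already shift the image statistics the detector relies on.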
Multisource and Multitemporal Data Fusion in Remote Sensing
The sharp and recent increase in the availability of data captured by
different sensors combined with their considerably heterogeneous natures poses
a serious challenge for the effective and efficient processing of remotely
sensed data. Such an increase in remote sensing and ancillary datasets,
however, opens up the possibility of utilizing multimodal datasets in a joint
manner to further improve the performance of the processing approaches with
respect to the application at hand. Multisource data fusion has, therefore,
received enormous attention from researchers worldwide for a wide variety of
applications. Moreover, thanks to the revisit capability of several spaceborne
sensors, the integration of the temporal information with the spatial and/or
spectral/backscattering information of the remotely sensed data is possible and
helps to move from a representation of 2D/3D data to 4D data structures, where
the time variable adds new information as well as challenges for the
information extraction algorithms. There are a huge number of research works
dedicated to multisource and multitemporal data fusion, but the methods for the
fusion of different modalities have expanded in different paths according to
each research community. This paper brings together the advances of multisource
and multitemporal data fusion approaches with respect to different research
communities and provides a thorough and discipline-specific starting point for
researchers at different levels (i.e., students, researchers, and senior
researchers) willing to conduct novel investigations on this challenging topic
by supplying sufficient detail and references.
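As a minimal illustration of the 4D (time, band, height, width) data structures the survey describes, the following NumPy sketch stacks co-registered multitemporal acquisitions and derives two baseline fusion products. The array layout and the composites are illustrative assumptions, not any specific method from the survey:

```python
import numpy as np

def stack_multitemporal(acquisitions):
    """Stack co-registered multispectral acquisitions, each of shape
    (band, height, width), into a 4D cube (time, band, height, width)."""
    return np.stack(acquisitions, axis=0)

def temporal_mean_composite(cube):
    """Per-pixel mean over time -- a simple multitemporal fusion product."""
    return cube.mean(axis=0)

def temporal_difference(cube, t0, t1):
    """Per-pixel change between two acquisition dates (change detection baseline)."""
    return cube[t1] - cube[t0]
```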
Detection of Dental Apical Lesions Using CNNs on Periapical Radiograph
Apical lesions, the general term for chronic infectious diseases, are very common dental diseases in modern life, and are caused by various factors. The current prevailing endodontic treatment makes use of X-ray photography taken from patients, where the lesion area is marked manually, a process that is time consuming. Additionally, in some images the significant details might not be recognizable due to different shooting angles or doses. To make the diagnosis process shorter and more efficient, repetitive tasks should be performed automatically, allowing dentists to focus more on technical and medical diagnosis, such as treatment, tooth cleaning, or medical communication. To realize automatic diagnosis, this article proposes and establishes a lesion area analysis model based on convolutional neural networks (CNNs). To establish a standardized database for clinical application, Institutional Review Board (IRB) approval with application number 202002030B0 was obtained, and the database was established by dentists who provided the practical clinical data. In this study, the image data are preprocessed with a Gaussian high-pass filter. Then, iterative thresholding is applied to slice the X-ray image into several individual tooth sample images. The collection of individual tooth images that comprises the image database is used as input to the CNN transfer learning model for training. Seventy percent (70%) of the image database is used for training and validating the model, while the remaining 30% is used for testing and estimating the accuracy of the model. The practical diagnosis accuracy of the proposed CNN model is 92.5%. The proposed model successfully facilitated the automatic diagnosis of apical lesions.
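The two preprocessing steps described above, Gaussian high-pass filtering and iterative thresholding, can be sketched as follows. The Ridler-Calvard midpoint update and the separable-blur implementation are assumptions, since the article does not specify its exact variants:

```python
import numpy as np

def gaussian_highpass(img, sigma=2.0, radius=4):
    """High-pass response as original minus a separable Gaussian blur."""
    ax = np.arange(-radius, radius + 1)
    k = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k /= k.sum()
    x = img.astype(float)
    # Separable blur: 1D convolution along rows, then along columns.
    blur = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, x)
    blur = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blur)
    return x - blur

def iterative_threshold(img, tol=0.5):
    """Ridler-Calvard iterative threshold selection: repeatedly set the
    threshold to the midpoint of the two class means until it stabilises."""
    t = float(img.mean())
    while True:
        lo, hi = img[img <= t], img[img > t]
        new_t = 0.5 * (lo.mean() + hi.mean())
        if abs(new_t - t) < tol:
            return new_t
        t = new_t
```

The resulting threshold separates bright tooth structure from darker background, after which connected regions can be cropped into the individual tooth samples the article feeds to the CNN.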
Enhancing Facial Emotion Recognition Using Image Processing with CNN
Facial expression recognition (FER) has been a challenging task in computer vision for decades. With recent advancements in deep learning, convolutional neural networks (CNNs) have shown promising results in this field. However, the accuracy of FER using CNNs heavily relies on the quality of the input images and the size of the dataset. Moreover, even in pictures of the same person with the same expression, brightness, backdrop, and pose might change. These variations are emphasized when comparing pictures of individuals with varying ethnic backgrounds and facial features, which makes classification challenging for deep-learning models. In this paper, we provide a simple yet efficient way of recognizing facial expressions that combines a CNN with certain image pre-processing techniques. We conducted our experiments on a combination of the MUG, JAFFE, and CK+ datasets. To improve the performance of the CNN, we experimented with various image pre-processing techniques such as face detection and cropping, image sharpening using Unsharp Mask, and normalization techniques like Global Contrast Normalization, Histogram Equalization, and Adaptive Histogram Equalization. Furthermore, we also examined data augmentation techniques such as image translations and adding noise to images to enhance the performance of the deep learning model. Our custom CNN-based FER model achieved a maximum average accuracy of 93.3% (6 classes) and 91% (7 classes) after cross-validation. Our experimental results show that our proposed method can effectively enhance the accuracy of facial expression recognition.
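Two of the normalization steps listed above, Global Contrast Normalization and Histogram Equalization, might look like this in NumPy for 8-bit grayscale inputs (a sketch under standard definitions, not the authors' implementation):

```python
import numpy as np

def global_contrast_normalization(img, eps=1e-8):
    """Zero-mean, unit-variance normalization over the whole image."""
    x = img.astype(float)
    return (x - x.mean()) / (x.std() + eps)

def histogram_equalization(img):
    """Classic histogram equalization for 8-bit grayscale images:
    remap intensities through the normalized cumulative histogram."""
    img = img.astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(
        np.round((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255), 0, 255
    ).astype(np.uint8)
    return lut[img]
```

Equalization stretches a low-contrast face crop to the full 0-255 range, which is one way such pre-processing reduces the brightness variation the abstract describes.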
Integrating IoT and Novel Approaches to Enhance Electromagnetic Image Quality using Modern Anisotropic Diffusion and Speckle Noise Reduction Techniques
Electromagnetic imaging is becoming more important in many sectors, and this requires high-quality pictures for reliable analysis. This study makes use of the complementary relationship between IoT and current image processing methods to improve the quality of electromagnetic images. The research presents a new framework for connecting Internet of Things sensors to imaging equipment, allowing for instantaneous input and adjustment. At the same time, the suggested system makes use of sophisticated anisotropic diffusion algorithms to bring out key details and suppress noise in electromagnetic pictures. In addition, a cutting-edge technique for reducing speckle noise is used to combat this persistent issue in electromagnetic imaging. The effectiveness of the suggested system was determined via a comparison to standard imaging techniques. There was a noticeable improvement in visual sharpness, contrast, and overall clarity without any loss of information, as shown by the results. Incorporating IoT sensors also facilitated faster calibration and real-time modifications, which opened up new possibilities for use in contexts with a high degree of variation. In fields where electromagnetic imaging plays a crucial role, such as medicine, remote sensing, and aerospace, the ramifications of this study are far-reaching. Our research demonstrates how the Internet of Things (IoT) and cutting-edge image processing have the potential to dramatically improve the functionality and versatility of electromagnetic imaging systems.
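The abstract does not specify which anisotropic diffusion variant is used; a common baseline is Perona-Malik diffusion, sketched below. The periodic borders via np.roll and the parameter values are simplifying assumptions:

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=20.0, lam=0.2):
    """Perona-Malik anisotropic diffusion: smooth homogeneous regions while
    preserving edges, since conduction g(|grad u|) drops at large gradients.
    Uses periodic borders via np.roll as a simplification; lam <= 0.25 for
    stability of the explicit 4-neighbour scheme."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Exponential conduction coefficient g(d) = exp(-(d / kappa)^2).
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

On speckle-like multiplicative noise, such diffusion is typically applied after a log transform, which is one conventional way to link it to the speckle-reduction step the abstract mentions.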
Image Enhancement in Foggy Images using Dark Channel Prior and Guided Filter
Haze is very apparent in images shot during periods of bad weather (fog). The image's clarity and readability are both diminished as a result. In this work, we suggest a method for improving the quality of a hazy image and for identifying any objects hidden inside it. To address this, we use the image enhancement techniques of Dark Channel Prior and Guided Filter. A saliency map is then used to segment the improved image and identify passing vehicles. Lastly, we describe our method for estimating the real-world distance between a camera-equipped vehicle and an object (another vehicle). Our proposed solution can warn the driver based on this distance to help prevent an accident. Our suggested technique improves images and detects vehicles with nearly 100% accuracy.
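A minimal NumPy sketch of the Dark Channel Prior dehazing step follows. It omits the Guided Filter transmission refinement the work uses, and the patch size, omega, and t0 values are conventional defaults rather than the authors' settings:

```python
import numpy as np

def dark_channel(img, patch=7):
    """Per-pixel minimum over RGB, then a local minimum filter over a patch."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_atmosphere(img, dark, frac=0.001):
    """Atmospheric light A: mean colour of the brightest dark-channel pixels."""
    n = max(1, int(dark.size * frac))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def dehaze(img, omega=0.95, t0=0.1, patch=7):
    """Recover scene radiance J from the haze model I = J*t + A*(1 - t)."""
    dark = dark_channel(img, patch)
    A = estimate_atmosphere(img, dark)
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.maximum(t, t0)  # floor the transmission to avoid amplifying noise
    return (img - A) / t[..., None] + A
```

In the full pipeline described above, the refined transmission map would come from the Guided Filter before the final radiance recovery.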