10 research outputs found

    New approach to calculating the fundamental matrix

    Estimating the fundamental matrix (F) determines the epipolar geometry and establishes the geometric relation between two images of the same scene or between successive video frames. The literature offers many robust estimation techniques, such as RANSAC (random sample consensus), least median of squares (LMedS), and M-estimators. This article compares several detectors (Harris, FAST, SIFT, and SURF) in terms of the number of detected points, the number of correct matches, and the speed of computing F. Our method first extracts descriptors with SURF, chosen over the other detectors for its robustness; it then sets a uniqueness threshold to retain the best points, normalizes these points and ranks them according to a weighting function over the different image regions, and finally estimates F with an eight-point M-estimator, from which the average error and the computation speed of F are measured. Experiments on real images with different viewpoint changes (for example rotation, lighting, and moving objects) show good results in terms of the computation speed of the fundamental matrix and an acceptable average error, suggesting that the technique is usable in real-time applications
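    The eight-point estimation of F at the core of such a pipeline can be sketched as follows. This is a minimal NumPy illustration of the classic normalized eight-point algorithm (Hartley normalization plus the rank-2 constraint), not the authors' implementation; the SURF matching and M-estimator weighting stages are omitted.

```python
import numpy as np

def normalize(pts):
    # Hartley normalization: move the centroid to the origin and scale so
    # the mean distance from the origin is sqrt(2).
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ h.T).T, T

def eight_point(p1, p2):
    # p1, p2: (N, 2) matched points, N >= 8.
    x1, T1 = normalize(p1)
    x2, T2 = normalize(p2)
    # Each correspondence contributes one row of the linear constraint
    # x2^T F x1 = 0, with F flattened row-major.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2: a fundamental matrix is singular.
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0
    F = U @ np.diag(S) @ Vt
    # Undo the normalization and fix the overall scale.
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```

With noiseless correspondences the recovered F satisfies the epipolar constraint to numerical precision; a robust loop (RANSAC or an M-estimator, as in the article) would wrap this solver to handle mismatches.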

    New approach to the identification of the easy expression recognition system by robust techniques (SIFT, PCA-SIFT, ASIFT and SURF)

    Facial recognition is a central problem in computer vision that has attracted considerable interest in recent years because of its use across many application domains and in image analysis. It relies on the extraction of facial descriptors, a crucial step in any recognition pipeline. In this article, we compare robust methods (SIFT, PCA-SIFT, ASIFT and SURF) for extracting relevant facial information under different posture variations (open and closed mouth, with and without glasses, open and closed eyes). The simulation results show that the SURF detector outperforms the others in descriptor similarity and computation time. Our method normalizes the descriptor vectors and combines them with the RANSAC algorithm to discard outliers before computing the Hessian matrix, with the aim of reducing computation time. To validate the approach, we tested four facial image databases containing several modifications. The results show that our method is more efficient than the other detectors in recognition speed and in finding similar points between two images of the same face, one from the reference database and the other from a database containing various modifications. The method can run on a mobile platform to analyze simple image content, for example for driver-fatigue detection or human-machine and human-robot interaction, since its descriptors offer the properties needed for good accuracy and real-time response
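    The descriptor-matching step that such comparisons rely on can be sketched with descriptor normalization plus a nearest-neighbour search under a uniqueness (Lowe ratio) test. This is a generic illustration assuming 64-dimensional SURF-like descriptor arrays, not the article's code; RANSAC filtering would then run on the matched keypoint coordinates.

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.75):
    # d1, d2: (N, 64) descriptor arrays from two images.
    # L2-normalize each descriptor so comparisons use Euclidean distance
    # on unit vectors, as is usual for SIFT/SURF descriptors.
    d1 = d1 / np.linalg.norm(d1, axis=1, keepdims=True)
    d2 = d2 / np.linalg.norm(d2, axis=1, keepdims=True)
    matches = []
    for i, d in enumerate(d1):
        dist = np.linalg.norm(d2 - d, axis=1)
        j, k = np.argsort(dist)[:2]
        # Uniqueness test: accept only if the best match is clearly
        # better than the runner-up.
        if dist[j] < ratio * dist[k]:
            matches.append((i, j))
    return matches
```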

    A combined method based on CNN architecture for variation-resistant facial recognition

    Identifying an individual from a facial image is a computer-vision technique used in various fields such as security, digital biometrics, smartphones, and banking. It can prove difficult, however, because of the complexity of facial structure and the presence of variations that affect the results. To overcome this difficulty, this paper proposes a combined approach that improves the accuracy and robustness of facial recognition in the presence of such variations. Two datasets (ORL and UMIST) are used to train our model. A pre-processing phase first applies histogram equalization to adjust the gray levels across each image, improving quality and enhancing feature detection. Next, the least informative features are removed from the images using principal component analysis (PCA). Finally, the pre-processed images are fed to a convolutional neural network (CNN) architecture consisting of multiple convolution layers and fully connected layers. Our simulation results show high performance, with accuracy reaching 99.50% on the ORL dataset and 100% on the UMIST dataset
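    The PCA reduction step described above can be sketched in a few lines of NumPy: center the data, keep the top-k principal directions, and project. This is a generic illustration (k and the data layout are assumptions), not the paper's implementation.

```python
import numpy as np

def pca_fit_transform(X, k):
    # X: (n_samples, n_features), e.g. flattened face images.
    # Center the data, take the top-k right singular vectors as the
    # principal axes, and project onto them (largest-variance directions).
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k]                 # (k, n_features) principal axes
    return Xc @ W.T, W, mu

def pca_reconstruct(Z, W, mu):
    # Map reduced coordinates back to the original feature space.
    return Z @ W + mu
```

When the data truly lie in a k-dimensional subspace, the reconstruction is exact; in practice k is chosen to keep most of the variance while discarding the least informative directions.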

    Method of optimization of the fundamental matrix by technique speeded up robust features application of different stress images

    The purpose of determining the fundamental matrix (F) is to define the epipolar geometry and to relate two 2D images of the same scene, or frames of a video sequence, in order to recover the 3D scene. The problem addressed in this work is the estimation of the localization error and the processing time. We start by comparing the following feature-extraction techniques: Harris, features from accelerated segment test (FAST), scale-invariant feature transform (SIFT), and speeded-up robust features (SURF), with respect to the number of detected points and correct matches under different image changes. We then merge the best candidates selected by an objective function, which groups the descriptors by image region, in order to compute F, and apply the normalized eight-point algorithm, which also automatically eliminates outliers, to find the optimal F. Our optimization approach is tested on real images with different scene variations. The simulation results show good accuracy: the computation time of F does not exceed 900 ms and the projection error is at most 1 pixel, regardless of the modification
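    The per-pixel error reported above can be measured with the first-order (Sampson) approximation of the epipolar distance, a standard residual for fundamental-matrix estimates. The sketch below is a generic NumPy illustration, not the authors' code.

```python
import numpy as np

def sampson_error(F, p1, p2):
    # First-order geometric error of the epipolar constraint x2^T F x1 = 0.
    # p1, p2: (N, 2) matched points; returns one squared error per match.
    x1 = np.column_stack([p1, np.ones(len(p1))])
    x2 = np.column_stack([p2, np.ones(len(p2))])
    Fx1 = x1 @ F.T            # epipolar lines in image 2
    Ftx2 = x2 @ F             # epipolar lines in image 1
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den
```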

    N-Beats as an EHG signal forecasting method for labour prediction in full term pregnancy

    Early prediction of the onset of labour is critical for avoiding the risk of death due to delayed delivery. Low-income countries often struggle to deliver timely care to pregnant women because of a lack of infrastructure and healthcare facilities, leading to pregnancy complications and, in some cases, death. Several artificial-intelligence methods have therefore been proposed that detect contractions from electrohysterogram (EHG) signals; forecasting contractions from real-time EHG signals, however, remains a challenging task. This study proposes a novel model based on neural basis expansion analysis for interpretable time series (N-BEATS), which predicts labour from EHG forecasting and contraction classification over a given time horizon. The publicly available TPEHG database from PhysioBank was used to train and test the model, with signals from full-term pregnant women recorded after 26 weeks of gestation. From these signals, the 30 classification parameters most commonly used in the literature were calculated, and principal component analysis (PCA) was used to select the 15 most representative parameters (all domains combined). The results show that N-BEATS forecasting can predict EHG signals after only a few training iterations, with the duration of the forecast determined by the length of the recordings. We then deployed XGBoost for classification, which achieved 99% accuracy, outperforming state-of-the-art approaches when using 15 or more classification features
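    The doubly residual structure that characterizes N-BEATS can be sketched as follows. This toy uses untrained random weights and an arbitrary hidden size and block count, purely to illustrate how each block subtracts its backcast from the residual input and adds its forecast to the running total; it is not the study's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def nbeats_block(x, lookback, horizon, hidden=16):
    # One generic N-BEATS block: a small fully connected stack produces a
    # parameter vector theta, split into a backcast (explains the input)
    # and a forecast (prediction over the horizon). Weights are random
    # here; in a real model they are learned.
    W1 = rng.standard_normal((lookback, hidden)) * 0.1
    W2 = rng.standard_normal((hidden, lookback + horizon)) * 0.1
    h = np.maximum(x @ W1, 0)          # ReLU
    theta = h @ W2
    return theta[:lookback], theta[lookback:]

def nbeats_forecast(x, horizon, n_blocks=3):
    # Doubly residual stacking across blocks.
    residual = x.copy()
    forecast = np.zeros(horizon)
    for _ in range(n_blocks):
        backcast, block_fc = nbeats_block(residual, len(x), horizon)
        residual = residual - backcast
        forecast = forecast + block_fc
    return forecast
```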

    Face recognition method combining SVM machine learning and scale invariant feature transform

    Facial recognition identifies an individual from an image of their face. It has attracted the attention of many researchers in computer vision in recent years due to its wide scope of application in several areas (health, security, robotics, biometrics...). This technology, much in demand in today's market, extracts features from an input image using techniques such as SIFT, SURF, or LBP and compares them with features from another image to confirm or reject an individual's identity. In this paper, we perform a comparative study of a machine-learning approach using several classification methods, applied to two face databases divided into two groups: a Train set used for the training stage of our model and a Test set used in the test phase. The results of this comparison show that the SIFT technique combined with an SVM classifier outperforms the other classifiers in identification accuracy
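    The SVM classification stage can be illustrated with a minimal linear SVM trained by subgradient descent on the regularized hinge loss. This is a toy stand-in for a library SVM applied to SIFT-derived feature vectors (the feature extraction itself is not shown), not the paper's setup.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, eta=0.01, epochs=300):
    # Subgradient descent on the hinge loss; labels y in {-1, +1}.
    rng = np.random.default_rng(1)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (X[i] @ w + b) < 1:      # margin violated
                w += eta * (y[i] * X[i] - lam * w)
                b += eta * y[i]
            else:                               # only shrink (regularize)
                w -= eta * lam * w
    return w, b

def predict(X, w, b):
    # Sign of the decision function gives the class.
    return np.where(X @ w + b >= 0, 1, -1)
```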

    Detection of COVID-19 from chest radiology using histogram equalization combined with a CNN convolutional network

    The world was shaken by the arrival of the coronavirus (COVID-19), which swept through every country and caused enormous human and economic damage. Global activity was halted in order to stop the pandemic, yet new waves of contamination continue to appear among the population, despite the several vaccines made available to the countries of the world, owing to the emergence of new variants. All variants of this virus share a common symptom: an infection of the respiratory tract. In this paper, a new deep-learning method for detecting the presence of the virus in patients is implemented, based on a convolutional neural network (CNN) architecture applied to the COVID-QU chest X-ray imaging database. All images were pre-processed to unify their dimensions, and histogram equalization was applied for an equitable distribution of intensity across each image. After the pre-processing phase, the data were split into two groups: a Train set used in the training phase of the model and a Test set used for its validation. Finally, a lightweight CNN architecture was trained, and the model was evaluated using two metrics: the confusion matrix, which yields accuracy, specificity, precision, sensitivity, and F1-score, and the receiver operating characteristic (ROC) curve. The results of our simulations show an improvement after using the histogram equalization technique, reaching 96.5% accuracy, 98.60% specificity, and 98.66% precision
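    The histogram equalization step used in the pre-processing above can be sketched with the classic cumulative-histogram mapping. This is a minimal NumPy illustration assuming a non-constant 2-D uint8 grayscale image, not the paper's exact pipeline.

```python
import numpy as np

def hist_equalize(img):
    # img: 2-D uint8 grayscale array (assumed non-constant).
    # Map each gray level through the normalized cumulative histogram so
    # the output intensities spread over the full 0..255 range.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # CDF at the darkest level present
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]
```

After this mapping the darkest gray level present maps to 0 and the brightest to 255, which stretches low-contrast radiographs before they are fed to the network.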

    Discriminative Approach Lung Diseases and COVID-19 from Chest X-Ray Images Using Convolutional Neural Networks: A Promising Approach for Accurate Diagnosis

    Medical image processing is one of the best-known applications of computer science. It can be used to detect the presence of several diseases, such as skin cancer and brain tumors, and since the arrival of the coronavirus (COVID-19) it has been used to ease the heavy burden placed on health institutions and personnel, given the high rate of spread of this virus in the population. One problem encountered in diagnosing people suspected of having contracted COVID-19 is the difficulty of distinguishing the symptoms of this virus from those of other diseases, such as influenza, as they are similar. This paper proposes a new approach to distinguishing between lung diseases and COVID-19 by analyzing chest X-ray images using a convolutional neural network (CNN) architecture. To achieve this, the dataset was pre-processed using histogram equalization and split into two subsets, Train and Test, the first used in the training phase and the second in the model-validation phase. A CNN architecture composed of several convolution layers and fully connected layers was then deployed to train our model. Finally, the model was evaluated using two different metrics: the confusion matrix and the receiver operating characteristic. The simulation results are satisfactory, with an accuracy rate of 96.27%
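    The confusion-matrix metrics used to evaluate classifiers like those above follow directly from the four confusion-matrix counts. Below is a minimal sketch for the binary case, with hypothetical 0/1 labels, not tied to any particular model.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    # Confusion-matrix entries for a binary classifier (1 = positive).
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy = (tp + tn) / len(y_true)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)       # also called recall
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return dict(accuracy=accuracy, specificity=specificity,
                precision=precision, sensitivity=sensitivity, f1=f1)
```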