325 research outputs found

    Multispectral Imaging For Face Recognition Over Varying Illumination

    This dissertation addresses the advantage of using multispectral narrow-band images over conventional broad-band images for improved face recognition under varying illumination. To verify the effectiveness of multispectral images for improving face recognition performance, three sequential procedures are carried out: multispectral face image acquisition, image fusion of the multispectral bands, and spectral band selection to remove information redundancy. Several efficient image fusion algorithms are proposed and evaluated on spectral narrow-band face images in comparison to conventional images. Physics-based weighted fusion and illumination adjustment fusion make good use of the spectral information in the multispectral imaging process. The results demonstrate that fused narrow-band images outperform conventional broad-band images under varying illumination. In the case where multispectral images are acquired over severe changes in daylight, the fused images outperform conventional broad-band images by up to 78%. The success of fusing multispectral images lies in the fact that multispectral images can separate the illumination information from the reflectance of objects, which is impossible for conventional broad-band images. To reduce the information redundancy among multispectral images and simplify the imaging system, distance-based band selection is proposed, in which a quantitative evaluation metric is defined to evaluate and differentiate the performance of multispectral narrow-band images. This method proves exceptionally robust to parameter changes. Furthermore, complexity-guided distance-based band selection is proposed, using a model selection criterion for automatic selection. The selected bands outperform the conventional images by up to 15%.
The significant performance improvements from distance-based band selection and complexity-guided distance-based band selection show that specific facial information carried in certain narrow-band spectral images can enhance face recognition performance compared to broad-band images. In addition, both algorithms prove to be independent of the recognition engine. Significant performance improvement is achieved by the proposed image fusion and band selection algorithms under varying illumination, including outdoor daylight conditions. The proposed imaging system and image processing algorithms open a new avenue for automatic face recognition systems with better recognition performance than conventional peer systems under varying illumination.
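The abstract does not give the fusion formulas, but the core idea of a physics-based weighted fusion can be sketched as a per-band weighted average of the narrow-band images. The function below is a hypothetical illustration under that assumption, not the dissertation's actual algorithm; the weights and image shapes are invented.

```python
import numpy as np

def weighted_fusion(bands, weights):
    """Fuse a stack of narrow-band images (n_bands, H, W) into a single
    image via a weighted average; weights are normalised to sum to 1."""
    bands = np.asarray(bands, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    # Contract the band axis against the weight vector.
    return np.tensordot(weights, bands, axes=1)
```

In practice the weights would be derived from the physics of the illumination and the spectral response of each band rather than chosen by hand.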

    Advances in Hyperspectral Image Classification: Earth monitoring with statistical learning methods

    Hyperspectral images show statistical properties similar to those of natural grayscale or color photographic images. However, the classification of hyperspectral images is more challenging because of the very high dimensionality of the pixels and the small number of labeled examples typically available for learning. These peculiarities lead to particular signal processing problems, mainly characterized by indetermination and complex manifolds. The framework of statistical learning has gained popularity in the last decade. New methods have been presented to account for the spatial homogeneity of images, to include user interaction via active learning, to take advantage of the manifold structure with semisupervised learning, to extract and encode invariances, and to adapt classifiers and image representations to unseen yet similar scenes. This tutorial reviews the main advances in hyperspectral remote sensing image classification through illustrative examples. Comment: IEEE Signal Processing Magazine, 201
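One of the semisupervised approaches the tutorial surveys can be illustrated with scikit-learn's LabelSpreading, which propagates a handful of labels across the data manifold. The two-class synthetic "pixels", band count, and kernel parameters below are invented for the example and do not come from the paper.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
n = 200
# Two well-separated synthetic spectral classes in a 5-band space.
X = np.vstack([rng.normal(0.0, 0.3, (n, 5)), rng.normal(2.0, 0.3, (n, 5))])
y_true = np.repeat([0, 1], n)

# Keep only ~10% of the labels; -1 marks unlabeled pixels.
y = np.full(2 * n, -1)
labeled = rng.random(2 * n) < 0.1
labeled[0] = labeled[n] = True  # make sure each class has one label
y[labeled] = y_true[labeled]

model = LabelSpreading(kernel="rbf", gamma=1.0).fit(X, y)
accuracy = float((model.transduction_ == y_true).mean())
```

`transduction_` holds the labels inferred for every sample, labeled or not, which is the typical use of graph-based semisupervised learning on a hyperspectral scene.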

    Quantifying soybean phenotypes using UAV imagery and machine learning, deep learning methods

    Crop breeding programs aim to introduce new cultivars with improved traits to help solve the food crisis. Food production needs to grow at roughly twice the current rate to feed the increasing number of people by 2050. Soybean is one of the major grains in the world, and the US alone contributes around 35 percent of world soybean production. To increase soybean production, breeders still rely on a conventional breeding strategy, which is mainly a 'trial and error' process. These constraints limit the expected progress of crop breeding programs. The goal was to quantify the soybean phenotypes of plant lodging and pubescence color using UAV-based imagery and advanced machine learning. Plant lodging and pubescence color are two of the most important phenotypes for soybean breeding programs. Both are conventionally evaluated visually by breeders, which is time-consuming and subject to human error. This study investigated the potential of unmanned aerial vehicle (UAV)-based imagery with machine learning for assessing lodging conditions, and with deep learning for assessing pubescence color, of soybean breeding lines. A UAV imaging system equipped with an RGB (red-green-blue) camera was used to collect imagery of 1,266 four-row plots in a soybean breeding field at the reproductive stage. Lodging and pubescence scores were visually assessed by experienced breeders. Lodging scores were grouped into four classes, i.e., non-lodging, moderate lodging, high lodging, and severe lodging, while pubescence color scores were grouped into three classes, i.e., gray, tawny, and segregating. UAV images were stitched to build orthomosaics, and soybean plots were segmented using a grid method. Twelve image features were extracted from the collected images to assess the lodging score of each breeding line.
Four models, i.e., extreme gradient boosting (XGBoost), random forest (RF), K-nearest neighbor (KNN), and artificial neural network (ANN), were evaluated for classifying soybean lodging. Five data pre-processing methods were used to treat the imbalanced dataset and improve classification accuracy. Results indicate that the pre-processing method SMOTE-ENN consistently performs well for all four classifiers (XGBoost, RF, KNN, and ANN), achieving the highest overall accuracy (OA), lowest misclassification, and higher F1-score and Kappa coefficient. This suggests that Synthetic Minority Over-sampling with Edited Nearest Neighbor (SMOTE-ENN) is an excellent pre-processing method for classification tasks on unbalanced datasets. Furthermore, an overall accuracy of 96 percent was obtained using the SMOTE-ENN dataset and the ANN classifier. To classify soybean pubescence color, seven pre-trained deep learning models, i.e., DenseNet121, DenseNet169, DenseNet201, ResNet50, InceptionResNet-V2, Inception-V3, and EfficientNet, were used, and images of each plot were fed into the models. The data was augmented using two rotational and two scaling factors to increase the dataset size. Among the seven pre-trained models, the ResNet50 and DenseNet121 classifiers showed the highest overall accuracy of 88 percent, along with higher precision, recall, and F1-score for all three pubescence color classes. In conclusion, the developed UAV-based high-throughput phenotyping system can gather image features to estimate and classify crucial soybean phenotypes, which will help breeders capture phenotypic variation in breeding trials. RGB imagery-based classification could also be a cost-effective choice for breeders and associated researchers in identifying superior genotypes in plant breeding programs.
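The resampling-then-classify pipeline described above can be sketched with a naive random oversampler standing in for SMOTE-ENN (which additionally synthesises minority samples and cleans noisy ones) and a random forest standing in for the four classifiers. The class sizes and 12 synthetic "image features" below are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def random_oversample(X, y, rng):
    """Resample every minority class up to the majority count -- a naive
    stand-in for SMOTE-ENN, used here only to illustrate the pipeline."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_max, replace=True)
        for c in classes
    ])
    return X[idx], y[idx]

rng = np.random.default_rng(1)
# Synthetic imbalanced "lodging" data: 4 classes, 12 image features.
sizes = [300, 60, 30, 10]
X = np.vstack([rng.normal(float(c), 1.0, (s, 12)) for c, s in enumerate(sizes)])
y = np.concatenate([np.full(s, c) for c, s in enumerate(sizes)])

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
Xb, yb = random_oversample(Xtr, ytr, rng)  # balance the training set only
clf = RandomForestClassifier(random_state=0).fit(Xb, yb)
accuracy = clf.score(Xte, yte)
```

Note that resampling is applied only to the training split, so the test accuracy still reflects the original class imbalance.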

    New vision technology for multidimensional quality monitoring of food processes


    Measuring and modelling parameters from hyperspectral sensors for site-specific crop protection

    This thesis sought to optimise systems for plant protection in precision agriculture by developing a field method for estimating crop status parameters from hyperspectral sensors, together with an empirical model for estimating the required herbicide dose in different parts of the field. The hyperspectral reflectance measurements in the open field took the form of instantaneous spectral recordings using an existing method called feature vector based analysis (FVBA), which was applied to disease severity. A new method called iterative normalisation based analysis (INBA) was developed and evaluated on disease severity and plant biomass. The methods revealed two different spectral signatures in both the disease severity and the plant density data. By concentrating the analysis on a 12% random subset of the hyperspectral field data, the unknown part of the data could be estimated with a coefficient of determination of 94-97%. The empirical model for site-specific weed control combined a model for weed competition with a dose-response model. Comparisons of site-specific and conventional uniform spraying using model simulations showed that site-specific spraying with the uniformly recommended dose resulted in a 64% herbicide saving; comparison with a uniform dose giving equal weed control effect resulted in a 36% saving. The methods developed in this thesis can be used to improve systems for site-specific plant protection in precision agriculture and to evaluate site-specific plant protection systems against uniform spraying. Overall, this could benefit both farm finances and the environment.
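The thesis's dose-response component is not specified in the abstract; a standard choice in weed science is a log-logistic dose-response curve, which can also be inverted to obtain a site-specific dose for a target control level. The functions below are a sketch under that assumption, with hypothetical parameter names (`ed50` is the dose giving 50% control, `slope` the curve steepness).

```python
import numpy as np

def control_effect(dose, ed50, slope):
    """Log-logistic dose-response: fraction of weeds controlled at a dose."""
    dose = np.maximum(np.asarray(dose, dtype=float), 1e-12)
    return 1.0 / (1.0 + (ed50 / dose) ** slope)

def required_dose(target, ed50, slope):
    """Invert the curve: dose needed to reach a target control fraction."""
    return ed50 * (target / (1.0 - target)) ** (1.0 / slope)
```

A site-specific sprayer would evaluate `required_dose` per field zone, using a weed-competition model to set the target control level in each zone.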

    Hyperspectral Imaging from Ground Based Mobile Platforms and Applications in Precision Agriculture

    This thesis focuses on the use of line-scanning hyperspectral sensors on mobile ground-based platforms and their application to agriculture. First, this work deals with the geometric and radiometric calibration and correction of acquired hyperspectral data. When operating at low altitudes, changing lighting conditions are common and inevitable, complicating the retrieval of a surface's reflectance, which is solely a function of its physical structure and chemical composition. Therefore, this thesis contributes an evaluation of an approach that compensates for changes in illumination and obtains reflectance with less labour than traditional empirical methods. Convenient field protocols are produced that only require a representative set of illumination and reflectance spectral samples. In addition, a method for determining a line-scanning camera's rigid six degree of freedom (DOF) offset and uncertainty with respect to a navigation system is developed, enabling accurate georegistration and sensor fusion. The thesis then applies the data captured from the platform to two agricultural applications. The first is a self-supervised weed detection framework that allows training of a per-pixel classifier using hyperspectral data without manual labelling. The experiments support the effectiveness of the framework, rivalling classifiers trained on hand-labelled training data. The thesis then demonstrates the mapping of mango maturity using hyperspectral data on an orchard-wide scale using efficient image scanning techniques, a world-first result. A novel classification, regression and mapping pipeline is proposed to generate per-tree mango maturity averages. The results confirm that maturity prediction in mango orchards is possible in natural daylight using a hyperspectral camera, despite the complex micro-illumination climates under the canopy.
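The abstract does not detail the reflectance-retrieval protocol; a common baseline it improves upon is a flat-field style correction that converts at-sensor radiance to reflectance using white-panel and dark-current reference spectra. The sketch below illustrates that baseline as an assumption, not the thesis's actual method.

```python
import numpy as np

def to_reflectance(radiance, white_ref, dark_ref):
    """Flat-field correction: scale measured radiance between a
    dark-current spectrum and a white reference panel spectrum."""
    radiance = np.asarray(radiance, dtype=float)
    dark_ref = np.asarray(dark_ref, dtype=float)
    denom = np.maximum(np.asarray(white_ref, dtype=float) - dark_ref, 1e-9)
    return (radiance - dark_ref) / denom
```

The labour cost of this baseline comes from re-measuring the white panel whenever the illumination changes, which is exactly what the thesis's representative-sample protocol seeks to reduce.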

    Review: computer vision applied to the inspection and quality control of fruits and vegetables

    This is a review of the existing literature on the inspection of fruits and vegetables using computer vision, analyzing the techniques most commonly used to estimate various quality-related properties. Typical applications of such systems include classification, quality estimation according to internal and external characteristics, supervision of fruit processes during storage, and the evaluation of experimental treatments. In general, computer vision systems not only replace manual inspection but can also improve on it. In conclusion, computer vision systems are powerful tools for the automatic inspection of fruits and vegetables, and the development of such systems adapted to the food industry is fundamental to achieving competitive advantages.

    Enhancing the usability of Satellite Earth Observations through Data Driven Models. An application to Sea Water Quality

    Earth observation from satellites has the potential to provide comprehensive, rapid and inexpensive information about land and water bodies. Marine monitoring could gain in effectiveness if integrated with approaches able to collect data over wide geographic areas, such as satellite observation. Integrated with in situ measurements, satellite observations make it possible to extend the point information of sampling campaigns to a synoptic view, increasing the spatial and temporal coverage and thus the representativeness of the natural diversity of the monitored water bodies, their inter-annual variability and water quality trends, and providing information to support EU Member States' action plans. Turbidity is one of the optically active water quality parameters that can be derived from satellite data, and is one of the environmental indicators considered by EU directive monitoring programmes. Turbidity is a visual property of water, related to the amount of light scattered by particles in the water, and it can act as a simple and convenient indirect measure of the concentration of suspended solids and other particulate material. A review of the state of the art shows that most traditional methods to estimate turbidity from optical satellite images are based on semi-empirical models relying on a few spectral bands. The choice of the most suitable bands is often site- and season-specific, as it is related to the type and concentration of suspended particles. When investigating wide areas or long time series that include different optical water types, machine learning algorithms appear promising because of their flexibility, responding to the need for a model that can adapt to varying water conditions with smooth transitions, and because of their ability to exploit the wealth of spectral information. Moreover, machine learning models have been shown to be less affected by atmospheric and other background factors.
Atmospheric correction for water-leaving reflectance, in fact, remains one of the major challenges in aquatic remote sensing. The use of machine learning for remotely sensed water quality estimation has spread in recent years thanks to advances in algorithm development, computing power, and the availability of higher spatial resolution data. Among existing algorithms, the appropriate model complexity is driven by the nature and number of available data. The present study explores the use of Sentinel-2 MultiSpectral Instrument (MSI) Level-1C Top of Atmosphere spectral radiance to derive water turbidity through a Polynomial Kernel Regularized Least Squares regression. This algorithm is characterized by a simple model structure, good generalization and a globally optimal solution, and is especially suitable for non-linear, high-dimensional problems. The study area is located in the North Tyrrhenian Sea (Italy), covering a coastline of about 100 km characterized by a varied shoreline, embracing environments worthy of protection and valuable biodiversity, but also relevant ports and three main rivers contributing flow and sediment discharge. The coastal environment in this area has been monitored since 2001 under the 2000/60/EC Water Framework Directive, and in 2008 the EU Marine Strategy Framework Directive 2008/56/EC further strengthened the investigation of the area. A dataset combining turbidity measurements, expressed in nephelometric turbidity units (NTU), with the values of the 13 spectral bands at the pixel corresponding to each sample location was used to calibrate and validate the model. The developed turbidity model shows good agreement between the satellite-derived surface turbidity and the measured values, confirming that ML techniques can reach good accuracy in turbidity estimation from satellite Top of Atmosphere reflectance.
A comparison between turbidity estimates obtained from the model and turbidity data from the Copernicus CMEMS dataset 'Mediterranean Sea, Bio-Geo-Chemical, L3, daily observation', used as a benchmark, produced consistent results. A band importance analysis revealed the contribution of the different spectral bands and the main role of the red-edge range. Finally, turbidity maps from satellite imagery were produced for the study area, showing the ability of the model to catch extreme events and, overall, providing an important tool for improving our understanding of the complex factors that influence water quality in our oceans.
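Kernel regularized least squares with a polynomial kernel is available in scikit-learn as `KernelRidge`, so the study's regression setup can be sketched as below. The 13 "band" values and the synthetic turbidity function are invented stand-ins for the real matched satellite/in situ dataset; the kernel degree and regularization strength are illustrative choices, not the study's tuned values.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
# Synthetic stand-in for 13 MSI band values per sample; "turbidity" is
# an invented smooth nonlinear function of two bands plus noise.
X = rng.uniform(0.0, 1.0, (300, 13))
y = 5.0 * X[:, 3] ** 2 + 2.0 * X[:, 7] + rng.normal(0.0, 0.05, 300)

# Polynomial-kernel regularized least squares, fit on the first 200
# samples and scored (R^2) on the held-out 100.
model = KernelRidge(kernel="polynomial", degree=2, alpha=1e-3)
model.fit(X[:200], y[:200])
r2 = model.score(X[200:], y[200:])
```

The convex least-squares objective is what gives the method its globally optimal solution, while the kernel supplies the non-linearity.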

    Advanced photon counting techniques for long-range depth imaging

    The Time-Correlated Single-Photon Counting (TCSPC) technique has emerged as a candidate approach for Light Detection and Ranging (LiDAR) and active depth imaging applications. The work in this Thesis concentrates on the development and investigation of functional TCSPC-based long-range scanning time-of-flight (TOF) depth imaging systems. Although these systems have several different configurations and functions, all can facilitate depth profiling of remote targets at low light levels and with good surface-to-surface depth resolution. Firstly, a Superconducting Nanowire Single-Photon Detector (SNSPD) and an InGaAs/InP Single-Photon Avalanche Diode (SPAD) module were employed to develop kilometre-range TOF depth imaging systems at wavelengths of ~1550 nm. Secondly, a TOF depth imaging system at a wavelength of 817 nm incorporating a Complementary Metal-Oxide-Semiconductor (CMOS) 32×32 Si-SPAD detector array was developed. This system was used with structured illumination to examine the potential for covert, eye-safe and high-speed depth imaging. To improve the light coupling efficiency onto the detectors, the arrayed CMOS Si-SPAD detector chips were integrated with microlens arrays using flip-chip bonding technology, improving the fill factor by up to a factor of 15. Thirdly, a multispectral TCSPC-based full-waveform LiDAR system was developed using a tunable broadband pulsed supercontinuum laser source providing simultaneous multispectral illumination at wavelengths of 531, 570, 670 and ~780 nm. The multispectral reflectance data acquired from a tree were used to determine physiological parameters as a function of the tree depth profile, relating to biomass and foliage photosynthetic efficiency. Fourthly, depth images were estimated using spatial correlation techniques to reduce the aggregate number of photons required for depth reconstruction with low error.
A depth imaging system was characterised and re-configured to reduce the effects of scintillation due to atmospheric turbulence. In addition, depth images were analysed in terms of spatial and depth resolution.
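The basic TOF principle underlying all of these systems can be illustrated in a few lines: a TCSPC histogram records photon arrival times, the peak bin gives the round-trip time, and halving the equivalent optical path gives the range. The 16 ps bin width and peak position below are invented example values.

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum, m/s

def depth_from_histogram(counts, bin_width_s):
    """Estimate target range from a TCSPC timing histogram: take the
    peak bin's round-trip time and halve the equivalent optical path."""
    t_round_trip = np.argmax(counts) * bin_width_s
    return C * t_round_trip / 2.0

# A single photon return peaking at bin 1000 of a 16 ps resolution histogram:
hist = np.zeros(2048)
hist[1000] = 50
depth = depth_from_histogram(hist, 16e-12)
```

Real systems refine this peak estimate, e.g. with the cross-correlation against the instrumental response that the Thesis's spatial correlation techniques build on, to reach sub-bin depth resolution from far fewer photons.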