51 research outputs found

    Counterfeit Detection with Multispectral Imaging

    Get PDF
    Multispectral imaging is becoming practical for a wide range of applications because it provides highly specific information through non-destructive analysis. Multispectral imaging cameras measure light reflectance in different spectral bands across visible and non-visible wavelengths, and the relative reflectance in each band can be used to infer properties of the imaged object. This thesis decomposes and analyzes counterfeit detection applications of multispectral imaging. Relations between light reflectance and object features are addressed, and the analysis process is broken down to show how this information provides further insight into the object, information that can benefit multiple fields. The thesis discusses the multispectral imaging research process for element solution concentrations as well as counterfeit detection applications. BaySpec’s OCI-M Ultra Compact Multispectral Imager, which captures light reflectance at wavelengths of 400 – 1000 nm, is used for data collection. Further research opportunities, including the development of self-automated unmanned aerial vehicles for precision agriculture and the extension of counterfeit detection applications, are also explored.
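
    A minimal sketch of how band-wise reflectance could be compared against a genuine reference spectrum to flag counterfeit regions, here using the spectral angle mapper; the array names and the 0.15 rad threshold are assumptions, not the thesis workflow.
```python
# Hypothetical sketch: compare per-pixel reflectance spectra (400-1000 nm
# bands) against a genuine reference spectrum via the spectral angle mapper.
import numpy as np

def spectral_angle(cube: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """cube: (H, W, B) reflectance cube; reference: (B,) genuine spectrum.
    Returns the per-pixel spectral angle in radians (small = similar)."""
    dot = np.tensordot(cube, reference, axes=([2], [0]))
    norms = np.linalg.norm(cube, axis=2) * np.linalg.norm(reference)
    cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
    return np.arccos(cos)

# Pixels whose spectra deviate strongly from the reference are suspect:
# cube = load_oci_m_cube("sample.hdr")                  # hypothetical loader
# suspect_mask = spectral_angle(cube, ref_spectrum) > 0.15
```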

    Accidental Light Probes

    Full text link
    Recovering lighting in a scene from a single image is a fundamental problem in computer vision. While a mirror ball light probe can capture omnidirectional lighting, light probes are generally unavailable in everyday images. In this work, we study recovering lighting from accidental light probes (ALPs) -- common, shiny objects like Coke cans, which often accidentally appear in daily scenes. We propose a physically-based approach to model ALPs and estimate lighting from their appearances in single images. The main idea is to model the appearance of ALPs with physically principled shading and to invert this process via differentiable rendering to recover the incident illumination. We demonstrate that we can put an ALP into a scene to allow high-fidelity lighting estimation. Our model can also recover lighting for existing images that happen to contain an ALP. Comment: CVPR 2023. Project website: https://kovenyu.com/ALP
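
    A toy illustration of the general inverse-rendering idea only: optimize an environment-lighting tensor so that a simple differentiable (diffuse-only) shading of the probe matches its observed pixels. This is not the authors' renderer; the shading model, tensor names, and hyperparameters are assumptions.
```python
import torch

def shade(normals, env_dirs, env_radiance, albedo=0.8):
    # normals: (N, 3) unit surface normals on the probe
    # env_dirs: (D, 3) unit directions of environment samples
    # env_radiance: (D, 3) learnable RGB radiance per direction
    cos = torch.clamp(normals @ env_dirs.T, min=0.0)          # (N, D)
    return albedo * cos @ env_radiance / env_dirs.shape[0]    # (N, 3)

def recover_lighting(normals, observed_rgb, n_dirs=128, steps=500):
    env_dirs = torch.nn.functional.normalize(torch.randn(n_dirs, 3), dim=1)
    env_radiance = torch.zeros(n_dirs, 3, requires_grad=True)
    opt = torch.optim.Adam([env_radiance], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((shade(normals, env_dirs, env_radiance) - observed_rgb) ** 2)
        loss.backward()
        opt.step()
    return env_radiance.detach()   # estimated directional lighting
```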

    Evaluation of different segmentation-based approaches for skin disorders from dermoscopic images

    Full text link
    Bachelor's final project in Biomedical Engineering, Faculty of Medicine and Health Sciences, Universitat de Barcelona, academic year 2022-2023. Tutors/Directors: Roser Sala Llonch, Christian Mata Miquel, Josep Munuera. Skin cancer is the most common cancer in the world, and its incidence has been increasing over the past decades. Even with the most advanced technologies, current image acquisition systems do not permit reliable identification of skin lesions by visual examination because of the challenging structure of the malignancy. This motivates the implementation of automatic skin lesion segmentation methods to assist physicians' diagnosis when determining the lesion's region and to serve as a preliminary step for classification of the skin lesion. Accurate and precise segmentation is crucial for rigorous screening and monitoring of the disease's progression. To address this concern, the present project carries out a state-of-the-art review of the most prominent conventional segmentation models for skin lesions, along with a market analysis. With the rise of automatic segmentation tools, a large number of algorithms are currently in use, but many have drawbacks when applied to dermatological disorders because of the high level of artefacts in the acquired images. In light of the above, three segmentation techniques were selected for this work: the level set method, an algorithm combining GrabCut and k-means, and an intensity-based automatic algorithm developed by the Hospital Sant Joan de Déu de Barcelona research group. In addition, their performance is validated with a view to further implementation in clinical training. The proposals, together with the obtained outcomes, were accomplished using a publicly available skin lesion image database.
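
    A hedged sketch of one of the compared approaches: seed an OpenCV GrabCut refinement with a coarse k-means clustering of the dermoscopic image. The cluster count, the darker-cluster heuristic, and the iteration counts are assumptions, not the settings used in the project.
```python
import cv2
import numpy as np

def kmeans_grabcut(img_bgr: np.ndarray, k: int = 2, iters: int = 5) -> np.ndarray:
    """img_bgr: uint8 BGR dermoscopic image. Returns a 0/255 lesion mask."""
    pixels = img_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    # Assume the darker cluster corresponds to the lesion (typical in dermoscopy).
    lesion_cluster = int(np.argmin(centers.sum(axis=1)))
    coarse = labels.reshape(img_bgr.shape[:2]) == lesion_cluster

    # Refine the coarse mask with GrabCut initialized from the clustering.
    mask = np.full(img_bgr.shape[:2], cv2.GC_PR_BGD, np.uint8)
    mask[coarse] = cv2.GC_PR_FGD
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(img_bgr, mask, None, bgd, fgd, iters, cv2.GC_INIT_WITH_MASK)
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
```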

    Dual Branch Neural Network for Sea Fog Detection in Geostationary Ocean Color Imager

    Full text link
    Sea fog significantly threatens the safety of maritime activities. This paper develops a sea fog dataset (SFDD) and a dual-branch sea fog detection network (DB-SFNet). We investigate all observed sea fog events in the Yellow Sea and the Bohai Sea (118.1°E-128.1°E, 29.5°N-43.8°N) from 2010 to 2020 and collect the sea fog images for each event from the Geostationary Ocean Color Imager (GOCI) to compose the SFDD dataset. The location of the sea fog in each image in SFDD is accurately marked. The proposed dataset is characterized by a long time span, a large number of samples, and accurate labelling, which can substantially improve the robustness of various sea fog detection models. Furthermore, this paper proposes a dual-branch sea fog detection network to achieve accurate and holistic sea fog detection. The proposed DB-SFNet is composed of a knowledge extraction module and a dual-branch optional encoding-decoding module. The two modules jointly extract discriminative features from both the visual and statistical domains. Experiments show promising sea fog detection results, with an F1-score of 0.77 and a critical success index of 0.63. Compared with existing advanced deep learning networks, DB-SFNet is superior in detection performance and stability, particularly in mixed cloud and fog areas.
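
    For reference, a minimal sketch of the two evaluation metrics quoted above (F1-score and critical success index) for binary sea-fog masks; this is standard metric arithmetic, not the authors' evaluation code.
```python
import numpy as np

def f1_and_csi(pred: np.ndarray, truth: np.ndarray):
    """pred, truth: binary sea-fog masks of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    csi = tp / (tp + fp + fn) if tp else 0.0   # a.k.a. threat score
    return f1, csi
```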

    Efficient multitemporal change detection techniques for hyperspectral images on GPU

    Get PDF
    Hyperspectral images contain hundreds of reflectance values for each pixel. Detecting regions of change in multiple hyperspectral images of the same scene taken at different times is of widespread interest for a large number of applications; in remote sensing, a very common application is land-cover analysis. The high dimensionality of hyperspectral images makes the development of computationally efficient processing schemes critical. This thesis focuses on the development of object-level change detection approaches, based on supervised direct multidate classification, for hyperspectral datasets. The proposed approaches improve the accuracy of current state-of-the-art algorithms, and their projection onto Graphics Processing Units (GPUs) allows their execution in real-time scenarios.
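
    A hedged sketch of the direct multidate classification idea at pixel level: stack the two dates' spectra and train a supervised classifier on "from-to" labels. The classifier choice (random forest via scikit-learn) and the array shapes are assumptions; the thesis targets object-level, GPU-accelerated implementations rather than this CPU example.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def multidate_change_map(cube_t1, cube_t2, train_mask, train_labels):
    """cube_t1, cube_t2: (H, W, B) co-registered hyperspectral images;
    train_mask: boolean (H, W) training pixels; train_labels: their classes."""
    stacked = np.concatenate([cube_t1, cube_t2], axis=2)       # (H, W, 2B)
    X = stacked.reshape(-1, stacked.shape[2])
    clf = RandomForestClassifier(n_estimators=200, n_jobs=-1)
    clf.fit(X[train_mask.ravel()], train_labels)
    return clf.predict(X).reshape(cube_t1.shape[:2])            # change/class map
```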

    Statistical modelling and processing of confocal microscopy images: application to dermatology

    Get PDF
    In this thesis, we develop statistical models and methods for processing confocal microscopy images of the skin, with the goal of detecting a skin condition called lentigo. A first contribution is a parametric statistical model for representing texture in the wavelet domain; specifically, a generalized Gaussian distribution whose scale parameter is shown to be characteristic of the underlying tissues. Modelling the data in the image domain is another topic addressed in this thesis, for which a generalized gamma distribution is proposed. Our second contribution is then an efficient estimator of the parameters of this distribution based on natural gradient descent. Finally, a multiplicative-noise observation model is established to explain the generalized gamma distribution of the data. Parametric Bayesian inference methods are then developed with this model to enable the classification of healthy images and images presenting lentigo. The developed algorithms are applied to real images obtained from a dermatological clinical study.
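
    For concreteness, the standard parameterizations of the two distributions named above, the generalized Gaussian (for wavelet coefficients, with scale parameter α) and Stacy's generalized gamma (for image-domain intensities); the notation may differ from the one used in the thesis.
```latex
f_{\mathrm{GGD}}(x;\alpha,\beta) \;=\; \frac{\beta}{2\alpha\,\Gamma(1/\beta)}
  \exp\!\left(-\left(\frac{|x|}{\alpha}\right)^{\beta}\right), \qquad
f_{\mathrm{G\Gamma D}}(x;a,d,p) \;=\; \frac{p}{a^{d}\,\Gamma(d/p)}\, x^{d-1}
  \exp\!\left(-\left(\frac{x}{a}\right)^{p}\right), \quad x>0 .
```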

    Quantifying corn emergence using UAV imagery and machine learning

    Get PDF
    Corn (Zea mays L.) is one of the most important crops in the United States for animal feed, ethanol production, and human consumption. To maximize final corn yield, one of the critical factors is to improve the uniformity of corn emergence both temporally (emergence date) and spatially (plant spacing). Conventionally, emergence uniformity is assessed through visual observation by farmers at selected small plots meant to represent the whole field, but this is limited by the time and labor required. With the advance of unmanned aerial vehicle (UAV)-based imaging technology and advanced image processing techniques powered by machine learning (ML) and deep learning (DL), a more automatic, non-subjective, precise, and accurate field-scale assessment of emergence uniformity becomes possible. Previous studies demonstrated successful assessment of crop emergence uniformity using UAV imagery, specifically in fields with a simple soil background. No research had investigated the feasibility of UAV imagery for corn emergence assessment in conservation-agriculture fields that are covered with cover crops or residues to improve soil health and sustainability. The overall goal of this research was to develop a fast and accurate method for assessing corn emergence using UAV imagery and ML and DL techniques. The pertinent information is essential for early and in-season decision making in corn production as well as for agronomy research. The research comprised three main studies: Study 1, quantifying corn emergence date using UAV imagery and an ML model; Study 2, estimating corn stand count in different cropping systems (CS) using UAV images and DL; and Study 3, estimating and mapping corn emergence under different planting depths. Two case studies extended Study 3 to field-scale applications by relating the emergence uniformity derived from the developed method to planting-depth treatments and by estimating final yield. For all studies, the primary imagery data were collected using a consumer-grade UAV equipped with a red-green-blue (RGB) camera at a flight height of approximately 10 m above ground level. The imagery had a ground sampling distance (GSD) of 0.55 - 3.00 mm pixel⁻¹, which was sufficient to detect small seedlings. In addition, a UAV multispectral camera was used to capture corn plants at early growth stages (V4, V6, and V7) in the case studies to extract plant reflectance (vegetation indices, VIs) as indicators of plant growth variation. Random forest (RF) ML models were used to classify corn emergence date based on days after emergence (DAE) at the time of assessment and to estimate yield. The DL models U-Net and ResNet18 were used to segment corn seedlings from UAV images and to estimate emergence parameters, including plant density, average DAE (DAEmean), and plant spacing standard deviation (PSstd). Results from Study 1 indicated that individual corn plant quantification using UAV imagery and an RF ML model achieved moderate classification accuracies of 0.20 - 0.49, which increased to 0.55 - 0.88 when DAE classification was expanded to a 3-day window. In Study 2, the precision of image segmentation by the U-Net model was ≥ 0.81 for all CS, resulting in high accuracy in estimating plant density (R² ≥ 0.92; RMSE ≤ 0.48 plants m⁻¹).
Then, the ResNet18 model in Study 3 was able to estimate emergence parameters with high accuracies (0.97, 0.95, and 0.73 for plant density, DAEmean, and PSstd, respectively). The case studies showed that crop emergence maps and evaluation under field conditions indicated the expected trend of decreasing plant density and DAEmean with increasing planting depth, and the opposite trend for PSstd. However, mixed trends were found for the emergence parameters among planting depths at different replications and across the N-S direction of the fields. For yield estimation, emergence data alone showed no relation with final yield (R² = 0.01, RMSE = 720 kg ha⁻¹). The combination of VIs from all growth stages estimated yield only with an R² of 0.34 and an RMSE of 560 kg ha⁻¹. In summary, this research demonstrated the success of UAV imagery and ML/DL techniques in assessing and mapping corn emergence in fields practicing all or some components of conservation agriculture. The findings give more insight for future agronomic and breeding studies by providing field-scale crop emergence evaluations as affected by treatments and management, and by relating emergence assessment to final yield. In addition, these emergence evaluations may be useful for commercial companies needing to relate new precision-planting technologies to crop performance. For commercial crop production, more comprehensive emergence maps (in terms of temporal and spatial uniformity) will help in making better replanting or early management decisions. Further enhancement of the methods, such as validation studies in other locations and years and the development of interactive frameworks, will establish a more automatic, robust, precise, accurate, and 'ready-to-use' approach for estimating and mapping crop emergence uniformity.
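
    A hedged sketch of how the three emergence parameters named above could be computed from per-plant detections along a row; the variable names, units, and row-based layout are assumptions rather than the dissertation's implementation.
```python
import numpy as np

def emergence_parameters(plant_x_m: np.ndarray, dae: np.ndarray, row_len_m: float):
    """plant_x_m: along-row positions (m) of detected seedlings;
    dae: days after emergence per seedling; row_len_m: surveyed row length (m)."""
    order = np.argsort(plant_x_m)
    spacing = np.diff(plant_x_m[order])                  # gaps between neighbours (m)
    return {
        "plant_density": len(plant_x_m) / row_len_m,     # plants m^-1
        "DAEmean": float(np.mean(dae)),                  # average days after emergence
        "PSstd": float(np.std(spacing)),                 # plant spacing std (m)
    }
```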

    Visible and near-infrared image analysis: contributions to the non-destructive evaluation of marbling in beef

    Get PDF
    Marbling (intramuscular fat) in beef is one of the most important criteria for quality evaluation, notably juiciness, in meat grading systems. A chemical process, which is destructive, is the only officially used means of evaluating the proportion of marbling in meat; it is complex and provides no information about the distribution of marbling within the meat. This thesis concerns the development of an original method for the non-destructive evaluation of the volumetric proportion of marbling in beef. This new method, which could be integrated into a machine vision system, is a first attempt at this kind of application; to the best of our knowledge, no similar method has been developed. Four contributions are identified from this doctoral work: the proposed acquisition technique, two image segmentation methods, and a non-destructive method for estimating the volumetric proportion of marbling. The proposed technique yields two types of images: a visible image showing the surface of the meat, and a near-infrared image that is the orthogonal projection of the (3D) meat sample onto a (2D) shadow image. Given the complexity of analysing these images, we developed an effective segmentation method for identifying the brightest (or darkest) homogeneous regions in a greyscale image. This method, which is relatively general, is based on a mathematical model for evaluating region homogeneity, itself introduced in this thesis. Generalizing this method to marbling segmentation produced satisfactory results with respect to the expected objectives. Since the volumetric shape of marbling is random and depends on how the fat is deposited between muscle fibres, which is unpredictable, we combined the segmentation results from the two image types to estimate the marbling volume. Integrating all of the preceding approaches allowed us to develop a new non-destructive method for estimating the volumetric proportion of marbling. The results obtained with the proposed (non-destructive) method were compared with those of a chemical (destructive) method taken as the ground truth (gold standard). The experimental results confirm the expected properties of the proposed method and illustrate the quality of the results obtained.
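
    A toy, hypothetical illustration of the final combination step only: weight the marbling mask from the visible image by the sample's projected thickness from the near-infrared shadow image to obtain a volumetric proportion. It does not reproduce the homogeneity-based segmentation model developed in the thesis, and the input names are assumptions.
```python
import numpy as np

def marbling_volume_fraction(marbling_mask: np.ndarray, thickness_map: np.ndarray) -> float:
    """marbling_mask: boolean (H, W) marbling pixels on the cut surface;
    thickness_map: (H, W) per-pixel sample thickness from the NIR projection."""
    meat_volume = thickness_map.sum()
    marbling_volume = (thickness_map * marbling_mask).sum()
    return float(marbling_volume / max(meat_volume, 1e-12))
```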