11 research outputs found

    Low-rank plus sparse decomposition for exoplanet detection in direct-imaging ADI sequences. The LLSG algorithm

    Get PDF
    Context. Data processing constitutes a critical component of high-contrast exoplanet imaging. Its role is almost as important as the choice of a coronagraph or a wavefront control system, and it is intertwined with the chosen observing strategy. Among the data processing techniques for angular differential imaging (ADI), the most recent is the family of principal component analysis (PCA) based algorithms. PCA is a widely used statistical tool developed during the first half of the past century; in this context it serves as a subspace projection technique for constructing a reference point spread function (PSF) that can be subtracted from the science data to boost the detectability of potential companions present in the data. Unfortunately, when building this reference PSF from the science data itself, PCA comes with certain limitations, such as the sensitivity of the lower-dimensional orthogonal subspace to non-Gaussian noise. Aims. Inspired by recent advances in machine learning algorithms such as robust PCA, we aim to propose a localized subspace projection technique that surpasses current PCA-based post-processing algorithms in terms of the detectability of companions at near real-time speed, a quality that will be useful for future direct imaging surveys. Methods. We used randomized low-rank approximation methods recently proposed in the machine learning literature, coupled with entry-wise thresholding, to decompose an ADI image sequence locally into low-rank, sparse, and Gaussian noise components (LLSG). This local three-term decomposition separates the starlight and the associated speckle noise from the planetary signal, which mostly remains in the sparse term. We tested the performance of our new algorithm on a long ADI sequence obtained on β Pictoris with VLT/NACO. Results. Compared to a standard PCA approach, LLSG decomposition reaches a higher signal-to-noise ratio and has an overall better performance in the receiver operating characteristic space. This three-term decomposition brings a detectability boost compared to the full-frame standard PCA approach, especially in the small inner working angle region, where complex speckle noise prevents PCA from discerning true companions from noise.
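
    As an illustration of the kind of local three-term split described above, the following minimal Python sketch decomposes one patch of an ADI cube, reshaped to a (frames × pixels) matrix, into low-rank, sparse, and noise terms via a randomized SVD and entry-wise thresholding. The function name and the rank and threshold parameters are illustrative assumptions, not the published LLSG implementation.

```python
# A minimal sketch of a local low-rank + sparse + noise split in the spirit of
# LLSG, assuming an ADI patch reshaped to a (n_frames, n_pixels) matrix.
# Function name, rank, and threshold values are assumptions for illustration.
import numpy as np
from sklearn.utils.extmath import randomized_svd

def llsg_patch(patch, rank=5, thresh=3.0):
    """Decompose a (frames x pixels) patch into low-rank L, sparse S, noise G."""
    U, s, Vt = randomized_svd(patch, n_components=rank)   # randomized low-rank approximation
    L = (U * s) @ Vt                                       # starlight / speckle subspace
    residual = patch - L
    sigma = np.median(np.abs(residual)) / 0.6745           # robust noise scale (MAD estimate)
    S = np.where(np.abs(residual) > thresh * sigma,        # entry-wise thresholding keeps strong
                 residual, 0.0)                            # outliers (candidate companion signal)
    G = residual - S                                       # approximately Gaussian remainder
    return L, S, G
```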

    Semantic Background Subtraction

    Full text link
    We introduce the notion of semantic background subtraction, a novel framework for motion detection in video sequences. The key innovation is to leverage object-level semantics to address a variety of challenging scenarios for background subtraction. Our framework combines the information of a semantic segmentation algorithm, expressed as a probability for each pixel, with the output of any background subtraction algorithm to reduce false positive detections produced by illumination changes, dynamic backgrounds, strong shadows, and ghosts. In addition, it maintains a fully semantic background model to improve the detection of camouflaged foreground objects. Experiments conducted on the CDNet dataset show that we significantly improve almost all background subtraction algorithms of the CDNet leaderboard, reducing the mean overall error rate of all 34 algorithms (resp. of the best 5 algorithms) by roughly 50% (resp. 20%). A C++ implementation of the framework is available at http://www.telecom.ulg.ac.be/semantic
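
    The sketch below illustrates, in simplified form, how per-pixel semantic probabilities might be combined with the mask of an arbitrary background subtraction algorithm, as the abstract describes. The rule structure, threshold names (tau_bg, tau_fg), and values are assumptions for illustration, not the authors' exact decision rules.

```python
# Illustrative combination of semantic probabilities with a BGS mask:
# strong semantic evidence overrides the background-subtraction output.
import numpy as np

def semantic_bgs(bgs_mask, sem_prob, sem_bg_model, tau_bg=0.1, tau_fg=0.4):
    """bgs_mask: boolean output of any background subtraction algorithm.
    sem_prob: per-pixel probability of belonging to a foreground-relevant class.
    sem_bg_model: per-pixel semantic probability stored for the background model."""
    out = bgs_mask.copy()
    out[sem_prob <= tau_bg] = False                    # weak semantic evidence: suppress false
                                                       # positives (shadows, ghosts, dynamic bg)
    out[(sem_prob - sem_bg_model) >= tau_fg] = True    # strong increase over the semantic
                                                       # background: recover camouflaged objects
    return out
```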

    The leaf angle distribution of natural plant populations: assessing the canopy with a novel software tool

    Get PDF
    Background Three-dimensional canopies form complex architectures with temporally and spatially changing leaf orientations. Variations in canopy structure are linked to canopy function, and they occur within the scope of genetic variability as well as in reaction to environmental factors such as light, water and nutrient supply, and stress. A key measure to characterize these structural properties is the leaf angle distribution, which in turn requires knowledge of the three-dimensional single-leaf surface. Despite a large number of 3-d sensors and methods, only a few systems are applicable for fast and routine measurements in plants and natural canopies. A suitable approach is stereo imaging, which combines depth and color information and allows for easy segmentation of green leaf material and the extraction of plant traits such as the leaf angle distribution. Results We developed a software package that provides tools for the quantification of leaf surface properties within natural canopies via 3-d reconstruction from stereo images. Our approach includes a semi-automatic selection process for single leaves and different modes of surface characterization via polygon smoothing or surface model fitting. Based on the resulting surface meshes, leaf angle statistics are computed at the whole-leaf level or from local derivations. We include a case study to demonstrate the functionality of our software: 48 images of small sugar beet populations (4 varieties) were analyzed on the basis of their leaf angle distribution in order to investigate seasonal, genotypic, and fertilization effects. We show that leaf angle distributions change during the course of the season, with all varieties showing a comparable development. Additionally, different varieties had different leaf angle orientations that could be separated by principal component analysis. In contrast, nitrogen treatment had no effect on leaf angles. Conclusions We show that a stereo imaging setup together with the appropriate image processing tools is capable of retrieving the geometric leaf surface properties of plants and canopies. Our software package provides whole-leaf statistics but also a local estimation of leaf angles, which may have great potential to better understand and quantify structural canopy traits for guided breeding and optimized crop management.
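
    As a small illustration of deriving local leaf inclination angles from a reconstructed surface mesh, the following Python sketch computes per-triangle angles relative to the horizontal from vertex and face arrays. It is an assumption-based example of the general idea, not the package's actual API.

```python
# Local leaf inclination angles from a triangulated leaf surface (illustrative).
import numpy as np

def leaf_angles(vertices, faces):
    """vertices: (N, 3) array of 3-d points; faces: (M, 3) vertex indices.
    Returns the inclination angle (degrees) of each triangle w.r.t. the horizontal."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    normals = np.cross(v1 - v0, v2 - v0)                          # per-triangle surface normals
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    cos_zenith = np.abs(normals[:, 2])                            # angle between normal and vertical
    return np.degrees(np.arccos(np.clip(cos_zenith, -1.0, 1.0)))  # 0 deg = horizontal leaf patch

# A leaf angle distribution is then e.g. np.histogram(leaf_angles(V, F), bins=18, range=(0, 90)).
```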

Programming Convolutional Neural Networks for acceleration on FPGAs

    Get PDF
    Computer Vision, the discipline of extracting information from digital images, is currently one of the most active areas of computer science. Thanks to recent advances, the field has reached a level of maturity that allows it to be applied in many domains, from industry to applications closer to everyday life. In particular, the state of the art in object detection has become increasingly solid thanks to the development of Convolutional Neural Networks (CNNs): systems based on a mathematical model that is gradually refined through the system's own experience in performing the task, acquired via machine learning techniques. CNNs are thus able to recognize and classify the content of images, assigning them a semantic meaning. However, these systems require substantial computational power and a large amount of memory, so they are mostly executed on powerful architectures such as GPUs. Nevertheless, one of the most important current challenges is real-time image classification by running convolutional neural networks on architectures with limited energy budgets and computational capabilities, namely embedded systems. This thesis therefore proposes a reconfigurable CNN implementation written in the C language. The result is a simple and modular system that, with several ad-hoc optimizations, can be considered a good candidate for porting to reconfigurable FPGA embedded architectures.
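
    For orientation, the sketch below shows the direct-convolution loop nest that such a layer-by-layer CNN implementation is built around (written here in Python for brevity rather than C). The function name, tensor shapes, and the ReLU at the end are illustrative assumptions, not the thesis code.

```python
# Illustrative direct 2-D convolution layer; the nested loops are the part that a
# simple, modular C implementation exposes for pipelining/parallelization on FPGA.
import numpy as np

def conv2d_direct(x, w, b, stride=1):
    """x: (C_in, H, W) input; w: (C_out, C_in, K, K) weights; b: (C_out,) biases."""
    c_in, h, wid = x.shape
    c_out, _, k, _ = w.shape
    h_out, w_out = (h - k) // stride + 1, (wid - k) // stride + 1
    y = np.zeros((c_out, h_out, w_out), dtype=x.dtype)
    for co in range(c_out):
        for i in range(h_out):
            for j in range(w_out):
                patch = x[:, i*stride:i*stride+k, j*stride:j*stride+k]
                y[co, i, j] = np.sum(patch * w[co]) + b[co]   # multiply-accumulate over the window
    return np.maximum(y, 0.0)                                  # ReLU activation
```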

    Prediction of user action in moving-target selection tasks

    Get PDF
    Selection of moving targets is a common task in human-computer interaction (HCI), and more specifically in virtual reality (VR). In spite of the increased number of applications involving moving-target selection, HCI and VR studies have largely focused on static-target selection. Compared to its static-target counterpart, however, moving-target selection poses special challenges, including the need to continuously and simultaneously track the target and plan the reach for it, which may be difficult depending on the user's reactiveness and the target's movement. Action prediction has proven to be the most comprehensive enhancement for addressing moving-target selection challenges. Current predictive techniques, however, rely heavily on continuous tracking of user actions, without considering the possibility that target-reaching actions may have a dominant pre-programmed component, a view known as the pre-programmed control theory. Based on this theory, this research explores the possibility of predicting moving-target selection prior to action execution. Specifically, three levels of action prediction are investigated: action performance, prospective action difficulty, and intention. The proposed performance models predict the movement time (MT) required to reach a moving target in 2-D and 3-D space, and are useful for comparing users and interfaces objectively. The prospective difficulty (PD) models predict the subjective effort required to reach a moving target without actually executing the action, and can therefore be measured when performance cannot. Finally, the intention models predict the target that the user plans to select, and can therefore be used to facilitate the selection of the intended target. Intention prediction models are developed using decision trees and scoring functions, and evaluated in two VR studies: the first investigates undirected selection (i.e., tasks in which users are free to select any object among multiple others), and the second directed selection (i.e., the more common experimental task in which users are instructed to select a specific object). PD models for 1-D and 2-D moving-target selection tasks are developed based on Fitts' Law and evaluated in an online experiment. Finally, MT models with the same structural form as the aforementioned PD models are evaluated in a 3-D moving-target selection experiment deployed in VR. Aside from intention prediction in directed selection, all of the explored models yield relatively high accuracies: up to ~78% when predicting intended targets in undirected tasks, R^2 = .97 when predicting PD, and R^2 = .93 when predicting MT.
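
    For context, the sketch below shows the baseline Fitts'-Law-style regression that MT and PD models of this kind build on, MT = a + b * ID with ID = log2(D/W + 1). The moving-target extensions described in the abstract (e.g. terms accounting for target motion) are not reproduced here; variable names are assumptions for illustration.

```python
# Baseline Fitts'-Law fit: movement time against the Shannon index of difficulty.
import numpy as np

def fit_fitts(distances, widths, times):
    """Least-squares fit of MT = a + b * log2(D/W + 1) over recorded trials."""
    ID = np.log2(np.asarray(distances) / np.asarray(widths) + 1.0)   # index of difficulty per trial
    A = np.column_stack([np.ones_like(ID), ID])                      # design matrix [1, ID]
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(times), rcond=None)
    return a, b                                                      # intercept and slope

# Predicted time for a new trial with distance D and width W: a + b * np.log2(D / W + 1)
```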

    Wheat Yield Assessment Using In-Field Organ-Scale Phenotyping and Deep Learning Methods

    Full text link

    Dynamics of wheat organs by close-range multimodal machine vision

    Full text link

8th International Congress on Archaeology, Computer Graphics, Cultural Heritage and Innovation

    Full text link
    The motto of the congress is: 'Advanced 3D documentation, modelling and reconstruction of heritage objects, monuments and sites.' We invite researchers, professors, archaeologists, architects, engineers, art historians... who work on cultural heritage from the perspectives of archaeology, computer graphics, and geomatics to share knowledge and experience in the field of Virtual Archaeology. The participation of renowned researchers and companies will be highly appreciated. An attractive and interesting programme has been prepared for participants and visitors. Lerma García, JL. (2016). 8th International Congress on Archaeology, Computer Graphics, Cultural Heritage and Innovation. Editorial Universitat Politècnica de València. http://hdl.handle.net/10251/73708

    3D Stereo MEDIA 2015

    Full text link
    3D Stereo MEDIA 2015 was held in Liège, Belgium, on 14-15 December 2015, and comprised the following sub-events: a 3D Workshop in Cannes (Festival de Cannes, American Pavilion, Cannes, France, 15 May 2015), the Scientific Conference (2015 International Conference on 3D Imaging, IC3D 2015), the 3D Content Financing Market, the International 3D Festival, the Professional Conference, the 3D Academy, and the Awards Evening.