
    Phenotyping on microscopic scale using DIC microscopy

    Image analysis of Arabidopsis (Arabidopsis thaliana) plants is an important method for studying plant growth. Most work on automated analysis focuses on full-rosette analysis, often in a high-throughput monitoring system. In this talk we propose a new workflow that analyses plant growth on a microscopic scale. This approach yields more detail than the common growth measurements: the number of cells, the average cell size, etc. The proposed workflow uses differential interference contrast (DIC) microscopy to visualise cells. DIC microscopy is preferred over fluorescence techniques because it is very fast (image analysis is already possible after one day) and it produces clear contrast in the samples. Although these images are easy for a human operator to interpret, they pose several challenges for automated computer vision methods. We circumvent most of these challenges by combining multiple images acquired with different microscopy settings, which allows us to automatically segment and analyse the cells in the images. The proposed workflow enables a new form of automated phenotyping on a microscopic scale.
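    The abstract does not detail how the acquisitions are combined. A minimal sketch of one possible fusion-and-segmentation pipeline, using Python with scikit-image, is shown below; the fusion rule (per-pixel maximum gradient), the Otsu threshold and the minimum cell size are illustrative assumptions, not the authors' actual workflow.

```python
import numpy as np
from skimage import filters, measure, morphology, segmentation

def segment_cells(images):
    """Fuse DIC images acquired with different microscope settings,
    then segment and measure the cells. Illustrative sketch only."""
    # Fuse acquisitions: the per-pixel maximum gradient magnitude tends
    # to highlight cell walls regardless of the DIC shear direction.
    fused = np.stack([filters.sobel(img) for img in images]).max(axis=0)

    # Threshold the fused edge map to obtain candidate cell-wall pixels.
    walls = fused > filters.threshold_otsu(fused)
    walls = morphology.binary_closing(walls, morphology.disk(2))

    # Cell interiors are the connected regions enclosed by the walls.
    labels = measure.label(~walls)
    labels = segmentation.clear_border(labels)   # drop the outer background
    labels = morphology.remove_small_objects(labels, min_size=50)

    # Per-cell measurements, as mentioned in the abstract.
    sizes = [r.area for r in measure.regionprops(labels)]
    n_cells = len(sizes)
    mean_size = float(np.mean(sizes)) if sizes else 0.0
    return labels, n_cells, mean_size
```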

    Detection of visitors in elderly care using a low-resolution visual sensor network

    Loneliness is a common condition associated with aging and comes with severe health consequences, including decline in physical and mental health, increased mortality and poor living conditions. Detecting and assisting lonely persons is therefore important, especially in the home environment. Current studies analyse Activities of Daily Living (ADL), usually focusing on persons living alone, e.g. to detect health deterioration. However, this type of analysis relies on the assumption that a single person is being observed, and without assessing socialization the ADL data become less reliable for health-state assessment and intervention. In this paper, we propose a network of cheap, low-resolution visual sensors for the detection of visitors. The visitor analysis starts with visual feature extraction, based on foreground/background detection and morphological operations, to track the motion patterns in each visual sensor. We then combine the features of the visual sensors in a Hidden Markov Model (HMM) for the actual detection. Finally, a rule-based classifier computes the number and the duration of visits. We evaluate our framework on a real-life dataset spanning ten months. The results show promising visit detection performance when compared to ground truth.
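    The abstract leaves the HMM features and the visit rules unspecified. The sketch below illustrates the detection and counting stages, assuming per-sensor motion-energy features, the hmmlearn library for the HMM, and a simple minimum-duration rule; all three are assumptions rather than the paper's actual choices.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed library for the HMM stage

def detect_visits(motion_features, fps=1.0, min_visit_s=60.0):
    """motion_features: (T, n_sensors) motion energy per visual sensor.
    Returns the number of visits and their durations in seconds."""
    # Two hidden states are assumed: resident alone vs. visitor present.
    model = GaussianHMM(n_components=2, covariance_type="diag",
                        n_iter=50, random_state=0)
    model.fit(motion_features)
    states = model.predict(motion_features)

    # Treat the state with the larger overall mean motion as "visitor".
    visitor_state = int(np.argmax(model.means_.sum(axis=1)))
    present = states == visitor_state

    # Rule-based stage: contiguous runs longer than a minimum duration
    # count as visits (the actual rules are not given in the abstract).
    visits, durations, run = 0, [], 0
    for p in np.append(present, False):
        if p:
            run += 1
        else:
            if run and run / fps >= min_visit_s:
                visits += 1
                durations.append(run / fps)
            run = 0
    return visits, durations
```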

    3D reconstruction of maize plants in the phenoVision system

    In order to efficiently study the impact of environmental changes, or the differences between various genotypes, large numbers of plants need to be measured. At the VIB, a system named PhenoVision was built to automatically image plants during their growth. This system is used to evaluate the impact of drought on different maize genotypes. To this end, we require 3D reconstructions of the maize plants, which we obtain through voxel carving.
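    Voxel carving keeps exactly those voxels that project inside the plant silhouette in every camera view. A minimal numpy sketch follows, assuming calibrated 3x4 projection matrices and binary plant masks per camera; the grid layout and variable names are illustrative.

```python
import numpy as np

def voxel_carve(silhouettes, projections, grid):
    """Carve a voxel grid from binary silhouettes.

    silhouettes: list of HxW boolean plant masks, one per camera.
    projections: list of 3x4 camera projection matrices (assumed known
                 from a prior calibration of the imaging setup).
    grid:        (N, 3) array of voxel centres in world coordinates.
    Returns a boolean occupancy array of length N. Sketch only."""
    homogeneous = np.hstack([grid, np.ones((len(grid), 1))])  # (N, 4)
    occupied = np.ones(len(grid), dtype=bool)

    for sil, P in zip(silhouettes, projections):
        # Project every voxel centre into this camera's image plane.
        uvw = homogeneous @ P.T
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)

        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        # A voxel survives only if every camera sees it inside the plant
        # silhouette; anything projecting to background is carved away.
        hit = np.zeros(len(grid), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]]
        occupied &= hit

    return occupied
```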

    GPU-based maize plant analysis: accelerating CNN segmentation and voxel carving

    PHENOVISION is a high-throughput plant phenotyping system for crop plants in greenhouse conditions. A conveyor belt transports plants between automated irrigation stations and imaging cabins. The aim is to phenotype maize varieties grown under different conditions. To this end, we model the plants in 3D and automate the measurement of the plants.
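    The abstract names GPU acceleration of both the CNN segmentation and the voxel carving. As a rough illustration of the latter, the per-voxel projection test from the sketch above maps naturally onto batched tensor operations; the PyTorch version below is an assumed stand-in, as the actual implementation is likely a dedicated CUDA kernel.

```python
import torch

def voxel_carve_gpu(silhouettes, projections, grid, device="cuda"):
    """GPU variant of voxel carving: the projection test for all voxels
    is one batched matrix product per camera. Illustrative sketch only.

    silhouettes: list of (H, W) bool tensors; projections: list of (3, 4)
    float tensors; grid: (N, 3) float tensor of voxel centres."""
    pts = torch.cat([grid, torch.ones(len(grid), 1)], dim=1).to(device)
    occupied = torch.ones(len(grid), dtype=torch.bool, device=device)

    for sil, P in zip(silhouettes, projections):
        sil = sil.to(device)
        uvw = pts @ P.to(device).T           # (N, 3) image-plane points
        u = (uvw[:, 0] / uvw[:, 2]).long()
        v = (uvw[:, 1] / uvw[:, 2]).long()
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = torch.zeros_like(occupied)
        hit[inside] = sil[v[inside], u[inside]]
        occupied &= hit                      # carve voxels seen as background

    return occupied
```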

    Machine learning for maize plant segmentation

    High-throughput plant phenotyping platforms produce immense volumes of image data. Here, a binary segmentation of maize colour images is required for 3D reconstruction of the plant structure and for the measurement of growth traits. To this end, we employ a convolutional neural network (CNN), which performs this segmentation successfully.
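    The abstract does not name the network architecture. As a hedged illustration of a CNN for binary plant/background segmentation, here is a minimal encoder-decoder in PyTorch; the layer sizes and the loss are assumptions, not the platform's actual model.

```python
import torch
import torch.nn as nn

class PlantSegNet(nn.Module):
    """Minimal encoder-decoder CNN producing one logit per pixel for
    binary plant/background segmentation of RGB images."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),  # back to input size
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training would use per-pixel binary cross-entropy against manual masks.
model = PlantSegNet()
loss_fn = nn.BCEWithLogitsLoss()
logits = model(torch.randn(1, 3, 64, 64))   # (1, 1, 64, 64) logits
```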

    Out-of-home activity analysis using a low-resolution visual sensor

    Loneliness and social isolation are probably the most prevalent psychosocial problems related to aging. One critical component in assessing social isolation in an unobtrusive manner is to measure out-of-home activity levels, as social isolation often goes along with decreased physical activity, decreased motoric functioning, and a decline in activities of daily living, all of which may lead to a reduction in the amount of time spent out of home. In this work, we propose to use a single visual sensor for detecting out-of-home activity. The visual sensor has a very low spatial resolution (900 pixels), which is a key feature to ensure a cheap technology and to maintain the user's privacy. Firstly, the visual sensor is installed in a top-view setup at the door entrance. Secondly, a correlation-based foreground detection method is used to extract the foreground. Thirdly, an Extra Trees Classifier (ETC) is trained to classify the directionality of the person (in/out) based on the motion of the foreground pixels. Due to the variability of out-of-home activity, the relative frequency of the directionality (in/out) is measured over a window of 3 seconds to determine the final result. We installed our system in 9 different service flats in the UK, Belgium and France, where the same ETC model is used. We evaluate our method on video sequences captured in real-life environments from the different setups, where the persons' out-of-home routines are recorded. The results show that our approach to detecting out-of-home activity achieves an accuracy of 91.30%.
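    The classification and windowing stages can be sketched as follows with scikit-learn. The feature layout, frame rate and placeholder training data are assumptions for illustration; the paper does not list the exact motion descriptors.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

# Placeholder training data: one row of foreground-motion features per
# frame (the descriptors are assumed); labels are 0 = "in", 1 = "out".
rng = np.random.default_rng(0)
train_features = rng.normal(size=(200, 8))
train_labels = rng.integers(0, 2, size=200)

clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
clf.fit(train_features, train_labels)

def classify_event(frame_features, fps=10):
    """Aggregate per-frame predictions over a 3-second window, as in the
    abstract, and return the majority direction."""
    votes = clf.predict(frame_features[: 3 * fps])
    return "out" if votes.mean() > 0.5 else "in"
```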

    Parameter-unaware autocalibration for occupancy mapping

    People localization and occupancy mapping are common and important tasks for multi-camera systems. In this paper, we present a novel approach to overcome the hurdle of manually calibrating the extrinsics of the multi-camera system. Our approach is completely parameter-unaware, meaning that the user does not need to know the focal length, position or viewing angle in advance, nor will these values be calibrated as such. The only requirements on the multi-camera setup are that the views overlap substantially and that the cameras are mounted at approximately the same height, requirements that are satisfied in most typical multi-camera configurations. The proposed method uses the observed height of an object or person moving through the space to estimate the distance to that object or person. Using this distance to backproject the lowest point of each detected object, we obtain a rotated and anisotropically scaled view of the ground plane for each camera. An algorithm is presented to estimate the anisotropic scaling parameters and rotation for each camera, after which ground-plane positions can be computed up to an isotropic scale factor. Lens distortion is not taken into account. The method is tested in simulation, yielding average accuracies within 5 cm, and in a real multi-camera environment, with an accuracy within 15 cm.
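    The geometric core, that the apparent height of an object of fixed real height is inversely proportional to its distance under a pinhole model, can be sketched as follows. The detection format and the least-squares alignment are illustrative assumptions; the paper's own estimation algorithm for the anisotropic scales and rotation is not reproduced here.

```python
import numpy as np

def ground_positions(detections):
    """detections: (u_foot, v_foot, pixel_height) triples from one camera.
    Since apparent height scales as 1/distance, 1/pixel_height recovers
    the distance up to an unknown scale; backprojecting the foot point
    with it yields a rotated, anisotropically scaled ground-plane view."""
    pts = []
    for u, v, h in detections:
        d = 1.0 / h                 # distance, up to scale
        pts.append((u * d, v * d))  # scaled ground-plane coordinates
    return np.asarray(pts)

def align_cameras(pts_a, pts_b):
    """Least-squares 2x2 linear map taking camera A's ground-plane view
    onto camera B's, from co-observed tracks; a stand-in for the paper's
    scale-and-rotation estimation, leaving one isotropic scale free."""
    M, *_ = np.linalg.lstsq(pts_a, pts_b, rcond=None)
    return M
```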

    Multi-camera complexity assessment system for assembly line work stations

    In the last couple of years, the market has demanded an increasing number of product variants, which leads to an inevitable rise in the complexity of manufacturing systems. A model to quantify the complexity of a workstation has been developed, but part of the analysis is still done manually. To that end, this paper presents the results of an industrial proof-of-concept in which the possibility of automating the complexity analysis using multi-camera video images was tested.

    Self-learning voxel-based multi-camera occlusion maps for 3D reconstruction

    The quality of a shape-from-silhouette 3D reconstruction technique strongly depends on the completeness of the silhouettes from each of the cameras. Static occlusion, due to e.g. furniture, makes reconstruction difficult, as we assume no prior knowledge concerning the shape and size of occluding objects in the scene. In this paper we present a self-learning algorithm that builds an occlusion map for each camera from a voxel perspective. This information is then used to determine which cameras need to be evaluated when reconstructing the 3D model at each voxel in the scene. We show promising results in a seven-camera setup, where the object is reconstructed significantly better than with state-of-the-art methods, despite the occluding object in the centre of the room.
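    The self-learning idea, that a voxel most cameras agree is occupied but one camera persistently reports as background hints at a static occluder in that camera's view, can be sketched in numpy as below. The consensus rule and the 0.8 miss-rate threshold are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def learn_occlusion_maps(observations):
    """observations: (T, C, N) bool array; [t, c, n] is True if camera c
    saw voxel n as foreground at time t. Returns a (C, N) bool occlusion
    map per camera. Minimal sketch of the self-learning stage."""
    T, C, N = observations.shape
    consensus = observations.sum(axis=1) >= (C - 1)       # (T, N)
    miss_rate = np.zeros((C, N))
    for c in range(C):
        # How often camera c misses a voxel the other cameras agree on.
        disagrees = consensus & ~observations[:, c, :]
        miss_rate[c] = disagrees.sum(axis=0) / np.maximum(
            consensus.sum(axis=0), 1)
    return miss_rate > 0.8

def reconstruct(observations_t, occlusion):
    """Per voxel, evaluate only cameras whose view is not occluded there,
    then intersect as in ordinary shape-from-silhouette."""
    valid = ~occlusion                                    # (C, N)
    votes = (observations_t & valid).sum(axis=0)
    needed = valid.sum(axis=0)
    return votes == np.maximum(needed, 1)
```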