
    Performances Evaluation of a Low-Cost Platform for High-Resolution Plant Phenotyping

    This study aims to test the performance of a low-cost, automatic phenotyping platform consisting of a commercial Red-Green-Blue (RGB) camera scanning objects on rotating plates, with the main plant phenotypic traits reconstructed via the Structure from Motion (SfM) approach. The precision of this platform was tested on three-dimensional (3D) models generated from images of potted maize, tomato and olive tree, acquired at different angular steps (4°, 8° and 12°) and image qualities (4.88, 6.52 and 9.77 µm/pixel). Plant and organ heights, angles and areas were extracted from the 3D models generated for each combination of these factors. The coefficient of determination (R2), relative Root Mean Square Error (rRMSE) and Akaike Information Criterion (AIC) were used as goodness-of-fit indexes to compare the simulated to the observed data. The results indicated that while the best performance in reproducing plant traits was obtained using 90 images at 4.88 µm/pixel (R2 = 0.81, rRMSE = 9.49% and AIC = 35.78), this corresponded to an unviable processing time (from 2.46 h to 28.25 h for herbaceous plants and olive trees, respectively). Conversely, 30 images at 4.88 µm/pixel resulted in a good compromise between a reliable reconstruction of the considered traits (R2 = 0.72, rRMSE = 11.92% and AIC = 42.59) and processing time (from 0.50 h to 2.05 h for herbaceous plants and olive trees, respectively). In any case, the results pointed out that the best input combination may vary with the trait under analysis, which can be more or less demanding in terms of input images and time according to the complexity of its shape (R2 = 0.83, rRMSE = 10.15% and AIC = 38.78). These findings highlight the reliability of the developed low-cost platform for plant phenotyping, further indicating the best combination of factors to speed up the acquisition and processing workflow while minimizing the bias between observed and simulated data.
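
    The three goodness-of-fit indexes above are standard and easy to reproduce. A minimal sketch follows, assuming an RSS-based formulation of AIC (n·ln(RSS/n) + 2k); the function name and the example trait values are illustrative, not taken from the study.

```python
# Minimal sketch (not the study's code) of the goodness-of-fit indexes used to
# compare simulated to observed plant traits: R2, relative RMSE (% of the
# observed mean) and an RSS-based AIC (assumed formulation).
import numpy as np

def goodness_of_fit(observed, simulated, n_params=1):
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    n = obs.size
    rss = np.sum((obs - sim) ** 2)
    r2 = 1.0 - rss / np.sum((obs - obs.mean()) ** 2)   # coefficient of determination
    rrmse = 100.0 * np.sqrt(rss / n) / obs.mean()      # relative RMSE in percent
    aic = n * np.log(rss / n) + 2 * n_params           # RSS-based AIC
    return r2, rrmse, aic

# Illustrative example: measured vs. simulated plant heights (cm)
r2, rrmse, aic = goodness_of_fit([52.0, 61.0, 48.0, 70.0],
                                 [50.5, 63.0, 47.0, 68.5])
```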

    Automatic extraction of the size of myocardial infarction in an experimental murine model

    Master's thesis. Biomedical Engineering. Universidade do Porto, Faculdade de Engenharia. 201

    Human object annotation for surveillance video forensics

    A system that can automatically annotate surveillance video in a manner useful for locating a person from a given description of clothing is presented. Each human is annotated based on two appearance features: the primary colors of the clothes and the presence of text/logos on the clothes. The annotation occurs after a robust foreground extraction stage employing a modified Gaussian mixture model-based approach. The proposed pipeline includes a preprocessing stage in which the color appearance of an image is improved using a color constancy algorithm. To annotate the color information of human clothes, we use the color histogram in HSV space and find its local maxima to extract the dominant colors of different parts of a segmented human object. To detect text/logos on clothes, we begin with the extraction of connected components of enhanced horizontal, vertical, and diagonal edges in the frames. These candidate regions are classified as text or non-text on the basis of their local energy-based shape histogram features. Further, to detect humans, a novel technique is proposed that uses contourlet transform-based local binary pattern (CLBP) features. In the proposed method, we extract the uniform, direction-invariant LBP feature descriptor for contourlet-transformed high-pass subimages from the vertical and diagonal directional bands. In the final stage, the extracted CLBP descriptors are classified by a trained support vector machine. Experimental results illustrate the superiority of our method on large-scale surveillance video data.
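
    The dominant-color step lends itself to a short illustration. The sketch below, which is not the authors' code, builds a hue histogram in HSV space for a segmented clothing region and keeps local maxima as dominant colors; the bin count, peak threshold and function name are assumptions.

```python
# Hedged sketch of dominant-color extraction: hue histogram in HSV space for a
# segmented clothing region, keeping local maxima as dominant colors.
import cv2
import numpy as np

def dominant_hues(bgr_region, mask=None, bins=36, min_frac=0.05):
    hsv = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2HSV)
    # OpenCV stores hue in [0, 180); histogram over the hue channel only.
    # mask, if given, is an optional uint8 foreground mask of the same size.
    hist = cv2.calcHist([hsv], [0], mask, [bins], [0, 180]).ravel()
    hist /= hist.sum() + 1e-9
    peaks = []
    for i in range(bins):
        left, right = hist[(i - 1) % bins], hist[(i + 1) % bins]
        if hist[i] >= max(left, right) and hist[i] >= min_frac:
            peaks.append((i + 0.5) * 180.0 / bins)     # bin centre, OpenCV hue units
    return peaks

# Usage (hypothetical inputs): hues = dominant_hues(person_crop, mask=torso_mask)
```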

    Understanding Leaves in Natural Images - A Model-Based Approach for Tree Species Identification

    With the aim of developing a mobile application, accessible to anyone and intended for educational purposes, we present a method for tree species identification that relies on dedicated algorithms and explicit botany-inspired descriptors. Focusing on the analysis of leaves, we developed a working process to help recognize species, starting from a picture of a leaf in a complex natural background. A two-step active contour segmentation algorithm based on a polygonal leaf model processes the image to retrieve the contour of the leaf. The features we use afterwards are high-level geometrical descriptors that make a semantic interpretation possible, and they prove to achieve better performance than more generic statistical shape descriptors alone. We present the results, both in terms of segmentation and classification, considering a database of 50 European broad-leaved tree species; an implementation of the system is available in the iPhone application Folia.
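
    As a rough illustration of the contour-retrieval idea (not the paper's two-step, polygonal-model-guided algorithm), a generic active-contour refinement of an initial rough polygon can be sketched with scikit-image; all parameter values and the helper name are assumptions.

```python
# Illustrative only: a generic active-contour (snake) refinement of a rough
# polygonal leaf outline, standing in for the paper's dedicated algorithm.
import numpy as np
from skimage import io, color, filters
from skimage.segmentation import active_contour

def refine_leaf_contour(image_path, init_polygon, points=200):
    gray = color.rgb2gray(io.imread(image_path))
    smoothed = filters.gaussian(gray, sigma=3)          # damp background clutter
    # Resample the rough polygon (row, col vertices) into a dense closed snake
    poly = np.asarray(init_polygon, dtype=float)
    t = np.linspace(0, 1, points)
    s = np.linspace(0, 1, len(poly), endpoint=False)
    snake_init = np.column_stack([np.interp(t, s, poly[:, 0], period=1),
                                  np.interp(t, s, poly[:, 1], period=1)])
    # Evolve the contour toward the leaf boundary
    return active_contour(smoothed, snake_init, alpha=0.015, beta=10, gamma=0.001)
```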

    Making microscopy count: quantitative light microscopy of dynamic processes in living plants

    Cell theory has officially reached 350 years of age, as the first use of the word ‘cell’ in a biological context can be traced to a description of plant material by Robert Hooke in his historic publication “Micrographia: or some Physiological Descriptions of Minute Bodies”. The 2015 Royal Microscopical Society Botanical Microscopy meeting was a celebration of the streams of investigation initiated by Hooke to understand, at the sub-cellular scale, how plant cell function and form arise. Much of the work presented, and the Honorary Fellowships awarded, reflected the advanced application of bioimaging informatics to extract quantitative data from micrographs that reveal the dynamic molecular processes driving cell growth and physiology. The field has progressed from collecting many pixels in multiple modes to associating these measurements with objects or features that are biologically meaningful. The additional complexity involves object identification, which draws on a different type of expertise from computer science and statistics that is often impenetrable to biologists. There are many useful tools and approaches being developed, but we now need more inter-disciplinary exchange to use them effectively. In this review we show how this quiet revolution has provided tools available to any personal computer user. We also discuss the oft-neglected issue of quantifying algorithm robustness and the exciting possibilities offered by integrating physiological information generated by biosensors with object detection and tracking.

    A Multi-Sensor Phenotyping System: Applications on Wheat Height Estimation and Soybean Trait Early Prediction

    Phenotyping is an essential aspect of plant breeding research, since it is the foundation of the plant selection process. Traditional plant phenotyping methods, such as measuring and recording plant traits manually, can be inefficient, laborious and prone to error. With the help of modern sensing technologies, high-throughput field phenotyping has recently become popular due to its ability to sense various crop traits non-destructively and with high efficiency. A multi-sensor phenotyping system equipped with red-green-blue (RGB) cameras, radiometers, ultrasonic sensors, spectrometers, a global positioning system (GPS) receiver, a pyranometer, a temperature and relative humidity probe and a light detection and ranging (LiDAR) sensor was first constructed, and a LabVIEW program was developed for sensor control and data acquisition. Two studies were conducted, focusing on system performance examination and data exploration, respectively. The first study compared wheat height measurements from the ultrasonic sensor and the LiDAR. Canopy heights of 100 wheat plots were estimated five times over the season by the ground phenotyping system, and the results were compared to manual measurements. Overall, LiDAR provided the better estimates, with a root mean square error (RMSE) of 0.05 m and an R2 of 0.97. The ultrasonic sensor did not perform well in the way it was applied here. In conclusion, LiDAR was recommended as a reliable method for wheat height evaluation. The second study explored the possibility of predicting soybean traits early from the color and texture features of canopy images. A total of 6,383 RGB images were captured at the V4/V5 growth stage over 5,667 soybean plots growing at four locations. One hundred and forty color features and 315 gray-level co-occurrence matrix (GLCM)-based texture features were derived from each image. Two additional variables were introduced to account for the location and timing differences between images. Cubist and Random Forests were used for regression and classification modelling, respectively. Yield (RMSE = 9.82, R2 = 0.68), maturity (RMSE = 3.70, R2 = 0.76) and seed size (RMSE = 1.63, R2 = 0.53) were identified as potential soybean traits that might be predictable early in the season. Advisor: Yufeng G
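
    As an illustration of the texture-feature pipeline described above, the sketch below derives a handful of GLCM properties with scikit-image (assuming the >= 0.19 API) and feeds them to a random-forest regressor; it is not the study's exact 315-feature set, and the regressor stands in for the Cubist model used there.

```python
# Hedged sketch of GLCM-based texture features fed to a random-forest regressor.
# Property list, distances and angles are illustrative assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestRegressor

def glcm_features(gray_uint8, distances=(1, 3), angles=(0, np.pi / 4, np.pi / 2)):
    glcm = graycomatrix(gray_uint8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# X: one feature vector per plot image; y: a plot-level trait such as yield
# model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
```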

    Pattern Formation and Organization of Epithelial Tissues

    Developmental biology is the study of how elaborate patterns, shapes, and functions emerge as an organism grows and develops its body plan. From the physics point of view, this is very much a self-organization process. The genetic blueprint contained in the DNA does not explicitly encode the shapes and patterns an animal ought to make as it develops from an embryo. Instead, the DNA encodes various proteins which, among other roles, specify how different cells function and interact with each other. Epithelial tissues, from which many organs are sculpted, serve as experimentally and analytically tractable systems for studying patterning mechanisms in animal development. Despite extensive studies in the past decade, the mechanisms that shape epithelial tissues into functioning organs remain incompletely understood. This thesis summarizes various studies we have done on epithelial organization and patterning, both in abstract theory and in close contact with experiments. A novel mechanism to establish cellular left-right asymmetry based on planar polarity instabilities is discussed. Tissue chirality is often assumed to originate from the handedness of biological molecules. Here we propose an alternative in which it results from spontaneous symmetry breaking of planar polarity mechanisms. We show that planar cell polarity (PCP), a class of well-studied mechanisms that allows epithelia to spontaneously break rotational symmetry, is also generically capable of spontaneously breaking reflection symmetry. Our results provide a clear interpretation of many mutant phenotypes, especially those that result in incomplete inversion. To bridge theory and experiments, we develop quantitative methods to analyze fluorescence microscopy images. Included in this thesis are algorithms to selectively project intensities from a surface in z-stack images, an analysis of cells forming short chain fragments, an analysis of thick fluorescent bands using a steerable ridge detector, and an analysis of cell recoil in laser ablation experiments. These techniques, though developed in the context of the zebrafish retinal mosaic, are general and can be adapted to other systems. Finally, we explore correlated noise in the morphogenesis of the fly pupal notum. Here we report an unexpected correlation of noise in cell movements between the left and right halves of the developing notum, suggesting that feedback or other mechanisms might be present to counteract stochastic noise and maintain left-right symmetry. PhD thesis, Physics, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/138800/1/hjeremy_1.pd
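
    The selective surface projection mentioned above can be illustrated with a minimal sketch: for each (x, y) position, pick the z slice where a smoothed copy of the stack is brightest and project intensities only from a thin band around that surface. Everything here (function name, band width, smoothing) is an assumption, not the thesis implementation.

```python
# Minimal, assumed sketch of a surface-selective projection for a z-stack.
import numpy as np
from scipy.ndimage import gaussian_filter

def surface_projection(stack, band=1, sigma=2.0):
    """stack: 3D array ordered (z, y, x); returns a 2D surface-selective projection."""
    smoothed = gaussian_filter(stack.astype(float), sigma=(0, sigma, sigma))
    z_surface = np.argmax(smoothed, axis=0)              # brightest z per pixel
    z_idx = np.arange(stack.shape[0])[:, None, None]
    in_band = np.abs(z_idx - z_surface[None]) <= band    # thin band around the surface
    return np.where(in_band, stack, 0).max(axis=0)
```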

    Tree Species Traits Determine the Success of LiDAR-based Crown Mapping in a Mixed Temperate Forest

    Automated individual tree crown delineation (ITCD) via remote sensing platforms offers a path forward to obtain wall-to-wall, detailed tree inventory information over large areas. While LiDAR-based ITCD methods have proven successful in conifer-dominated forests, it remains unclear how well these methods can be applied broadly in deciduous broadleaf (hardwood) dominated forests. In this study, I applied five common automated LiDAR-based ITCD methods across fifteen plots ranging from conifer- to hardwood-dominated at the Harvard Forest in Petersham, MA, USA, and assessed accuracy against manually delineated crowns. I then identified basic tree- and plot-level factors influencing the success of delineation techniques. My results showed that automated crown delineation shows promise in closed-canopy mixed-species forests. There was relatively little difference between crown delineation methods (51–59% aggregated plot accuracy), and despite parameter tuning, none of the methods produced high accuracy across all plots (27–90% range in plot-level accuracy). I found that all methods delineated conifer species (mean 64%) better than hardwood species (mean 42%), and that the accuracy of each method varied similarly across plots and was significantly related to plot-level conifer fraction. Further, while tree-level factors related to tree size (DBH, height and crown area) all strongly influenced the success of crown delineations, the influence of plot-level factors varied. Species evenness (relative species abundance) was the most important plot-level variable controlling crown delineation success, and as species evenness decreased, the probability of successful delineation increased. Evenness was likely important due to 1) its negative relationship with conifer fraction and 2) a relationship between evenness and increased canopy space-filling efficiency. Overall, my work suggests that the ability to delineate crowns is not strongly driven by methodological differences, but instead by differences in functional group (conifer vs. hardwood), tree size and diversity, and by how crowns are displayed in relation to each other. While LiDAR-based ITCD methods are well suited to conifer-dominated plots with distinct canopy structure, they remain less reliable in hardwood-dominated plots. I suggest that future work focus on integrating phenology and spectral characteristics with existing LiDAR approaches to better delineate hardwood-dominated stands.
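
    One common family of LiDAR-based ITCD methods evaluated in work like this combines local-maxima treetop detection on a canopy height model (CHM) with marker-controlled watershed segmentation. The sketch below illustrates that generic approach; parameter values and the function name are assumptions, not the five specific methods compared in the thesis.

```python
# Generic sketch of CHM-based crown delineation: treetop detection followed by
# marker-controlled watershed. Thresholds and distances are assumed defaults.
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def delineate_crowns(chm, min_height=2.0, min_distance=5):
    """chm: 2D canopy height model in metres; returns an integer crown label map."""
    canopy = (chm > min_height).astype(int)               # mask out ground and shrubs
    tops = peak_local_max(chm, min_distance=min_distance, labels=canopy)
    markers = np.zeros(chm.shape, dtype=int)
    markers[tuple(tops.T)] = np.arange(1, len(tops) + 1)  # one marker per treetop
    # Watershed on inverted heights grows each crown outward from its treetop
    return watershed(-chm, markers, mask=canopy.astype(bool))
```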