
    Holistic and component plant phenotyping using temporal image sequence

    Background: Image-based plant phenotyping facilitates the noninvasive extraction of traits by analyzing a large number of plants in a relatively short period of time. It has the potential to compute advanced phenotypes by considering the whole plant as a single object (holistic phenotypes) or as individual components, i.e., leaves and the stem (component phenotypes), to investigate the biophysical characteristics of the plants. The emergence timing, the total number of leaves present at any point in time, and the growth of individual leaves during the vegetative stage of the maize life cycle are significant phenotypic expressions that contribute to assessing plant vigor. However, an automated image-based solution to this novel problem has yet to be explored.
    Results: A set of new holistic and component phenotypes is introduced in this paper. To compute the component phenotypes, it is essential to detect the individual leaves and the stem. Thus, the paper introduces a novel graph-based method to reliably detect the leaves and the stem of maize plants by analyzing sequences of 2-dimensional visible-light images captured from the side. The total number of leaves is counted and the length of each leaf is measured for all images in the sequence to monitor leaf growth. To evaluate the performance of the proposed algorithm, we introduce the University of Nebraska–Lincoln Component Plant Phenotyping Dataset (UNL-CPPD) and provide ground truth to facilitate new algorithm development and uniform comparison. The temporal variation of the component phenotypes regulated by genotype and environment (i.e., greenhouse) is experimentally demonstrated for the maize plants in UNL-CPPD. Statistical models are applied to analyze the impact of the greenhouse environment and to demonstrate the genetic regulation of the temporal variation of the holistic phenotypes on the public dataset Panicoid Phenomap-1.
    Conclusion: The central contribution of the paper is a novel computer-vision-based algorithm for the automated detection of individual leaves and the stem to compute new component phenotypes, along with the public release of a benchmark dataset, UNL-CPPD. Detailed experimental analyses demonstrate the temporal variation of the holistic and component phenotypes in maize regulated by environment and genetic variation, with a discussion of their significance in the context of plant science.
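The graph-based separation of stem and leaves can be illustrated with a minimal sketch: treat the plant skeleton as an unweighted graph, take the base-to-tip path as the stem, and measure each remaining branch as a leaf. This is an illustrative simplification, not the paper's algorithm; the toy skeleton, the node numbering, and the BFS routine are all assumptions made here.

```python
# Hedged sketch: graph-based separation of stem and leaves from a plant
# skeleton. The paper's method differs; this only illustrates the idea.
from collections import deque

def shortest_path(graph, start, goal):
    """BFS shortest path in an unweighted skeleton graph."""
    prev = {start: None}
    q = deque([start])
    while q:
        node = q.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nb in graph[node]:
            if nb not in prev:
                prev[nb] = node
                q.append(nb)
    return None

def stem_and_leaves(graph, base, tip):
    """Treat the base->tip path as the stem; every skeleton endpoint not on
    the stem starts a leaf whose length is its path distance to the stem."""
    stem = set(shortest_path(graph, base, tip))
    endpoints = [n for n in graph if len(graph[n]) == 1 and n not in stem]
    leaves = []
    for ep in endpoints:
        # walk from the leaf tip back toward the base until we hit the stem
        path = shortest_path(graph, ep, base)
        length = next(i for i, n in enumerate(path) if n in stem)
        leaves.append((ep, length))
    return stem, leaves

# toy skeleton: a vertical stem 0-1-2-3-4 with two leaf branches
g = {
    0: [1], 1: [0, 2], 2: [1, 3, 5], 3: [2, 4, 7], 4: [3],
    5: [2, 6], 6: [5],              # leaf of length 2 off node 2
    7: [3, 8], 8: [7, 9], 9: [8],   # leaf of length 3 off node 3
}
stem, leaves = stem_and_leaves(g, base=0, tip=4)
print(len(leaves))  # → 2
```

Counting the endpoints per frame across the image sequence would give the leaf-count trajectory, and the per-leaf path lengths a crude growth curve.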

    A Portable System for Screening of Cervical Cancer

    Cervical cancer is one of the most common cancers that affect women, with the highest incidence and mortality rates occurring in low- and middle-income countries. Early detection is crucial for successful treatment, but the need for expensive equipment, trained colposcopists, and clinical infrastructure has made it difficult to eradicate this disease. Accurately determining the size and location of a precancerous lesion requires specialized and costly equipment, making it difficult to track the progression of the disease or the efficacy of treatment. Imaging and machine learning techniques have been attempted by several researchers to overcome these limitations, but the subjective nature of diagnosis and other challenges persist. Therefore, there is a need for a system that automatically segments lesions on the cervix and quantifies their size relative to the cervical region of interest. Challenges to the automated detection of cervical cancer include:
    • Low quality of the acquisition devices, which impairs image resolution; lighting conditions that cast shadows and hinder the detection of the cervical region of interest (ROI); image distortion due to glare or specular reflections (SR) from the light source; and artifacts such as the speculum and surrounding tissue. The limitations involved in selecting or designing a device to acquire cervical images (cervigrams) have been investigated.
    • The acquisition of cervical images requires access to sensitive patient information, which raises concerns about patient privacy and data security. Ensuring that patient data are protected and used only for diagnostic purposes is critical to building patient trust and to the widespread adoption of automated screening technologies. A pilot study was designed to capture cervigrams from women who present early signs of cervical cancer. Relevant data would be collected to further understand the progression of this disease while maintaining the privacy and confidentiality of the study participants.
    • The early detection of cervical cancer requires analyzing complex data, including images, pathology reports, and medical records. Automating this analysis requires machine learning algorithms or image processing techniques capable of interpreting such information. Image processing methods based on traditional and machine learning techniques were leveraged to identify the cervical ROI and remove light reflections from the cervical epithelium. Lesions present on the cervix were detected, and their size, invariant to the orientation of the camera and its distance from the cervix, was calculated.
    • Finally, variability and subjectivity are involved in acquiring and analyzing cervigrams. A graphical user interface was developed to facilitate data collection and analysis throughout the pilot study and future clinical trials.
    Results indicate that the proposed methods can segment images of the cervix, reduce the effect of glare from light sources, remove specular reflections and other artifacts, and successfully detect and quantify lesions. These approaches are demonstrated throughout this dissertation to show that a low-cost bioinformatics-based tool for early detection of cervical cancer can be achieved for screening patients in a clinical setting. While the algorithms were validated using sample images from public databases, it is crucial to conduct small-scale clinical trials to further validate these methods. Furthermore, the use of more advanced image processing techniques or machine learning algorithms to improve the accuracy and speed of lesion detection is under review.
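One common recipe for the specular-reflection step is to flag very bright, low-saturation pixels and inpaint them from their surroundings. The sketch below illustrates that recipe only; the thresholds, the mean-fill inpainting, and the toy image are assumptions made here, not the dissertation's actual pipeline.

```python
# Hedged sketch of a common specular-reflection (glare) removal recipe:
# flag bright, nearly colorless pixels, then inpaint them. The thresholds
# (0.9 / 0.15) and the crude mean-fill are illustrative assumptions.
import numpy as np

def specular_mask(rgb, v_thresh=0.9, s_thresh=0.15):
    """rgb: float array in [0, 1], shape (H, W, 3). Returns boolean mask."""
    v = rgb.max(axis=2)                      # HSV value channel
    s = np.where(v > 0, (v - rgb.min(axis=2)) / np.maximum(v, 1e-6), 0)
    return (v > v_thresh) & (s < s_thresh)   # bright and low saturation

def inpaint_mean(rgb, mask):
    """Replace masked pixels with the mean of the unmasked pixels — a crude
    stand-in for proper diffusion- or patch-based inpainting."""
    out = rgb.copy()
    out[mask] = rgb[~mask].reshape(-1, 3).mean(axis=0)
    return out

# toy image: uniform pink "tissue" with a single white glare pixel
img = np.full((4, 4, 3), [0.8, 0.4, 0.5])
img[1, 2] = [1.0, 1.0, 1.0]                  # specular highlight
mask = specular_mask(img)
clean = inpaint_mean(img, mask)
print(mask.sum())  # → 1
```

In practice the mean-fill would be replaced by neighborhood-aware inpainting so the filled region blends with the local epithelium texture.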

    Video Event Recognition and Anomaly Detection by Combining Gaussian Process and Hierarchical Dirichlet Process Models

    In this paper, we present an unsupervised learning framework for analyzing activities and interactions in surveillance videos. In our framework, three levels of video events are connected by a Hierarchical Dirichlet Process (HDP) model: low-level visual features, simple atomic activities, and multi-agent interactions. Atomic activities are represented as distributions over low-level features, while complicated interactions are represented as distributions over atomic activities. This learning process is unsupervised: given a training video sequence, low-level visual features are extracted based on optical flow and clustered into different atomic activities, and video clips are clustered into different interactions. The HDP model automatically decides the number of clusters, i.e., the categories of atomic activities and interactions. Based on the learned atomic activities and interactions, a training dataset is generated to train a Gaussian Process (GP) classifier. The trained GP models are then applied to newly captured video to classify interactions and detect abnormal events in real time. Furthermore, the temporal dependencies between video events learned by HDP-Hidden Markov Models (HDP-HMM) are effectively integrated into the GP classifier to enhance classification accuracy on newly captured videos. Our framework couples the benefits of a generative model (HDP) with those of a discriminative model (GP). We provide detailed experiments showing that our framework achieves favorable real-time performance in video event classification in a crowded traffic scene.
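The low-level representation such frameworks build on is typically a bag of "visual words": each optical-flow vector is quantized by image location and motion direction, and a clip becomes a word histogram that the HDP model (not shown) clusters into atomic activities. The sketch below illustrates only that quantization step; the grid size, number of direction bins, and frame dimensions are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of the optical-flow-to-visual-word quantization that bag-of-
# words video models commonly use. Grid/direction counts are assumptions.
import math
from collections import Counter

def flow_to_word(x, y, dx, dy, frame_w=320, frame_h=240, grid=8, n_dirs=4):
    """Map one flow vector at pixel (x, y) to a discrete word id."""
    cell = (y * grid // frame_h) * grid + (x * grid // frame_w)
    angle = math.atan2(dy, dx) % (2 * math.pi)       # direction in [0, 2*pi)
    direction = int(angle / (2 * math.pi) * n_dirs) % n_dirs
    return cell * n_dirs + direction

def clip_histogram(flow_vectors, **kw):
    """Bag-of-words histogram for one video clip."""
    return Counter(flow_to_word(*v, **kw) for v in flow_vectors)

# two rightward motions in the same grid cell map to the same word
h = clip_histogram([(10, 10, 5.0, 0.0), (12, 11, 4.0, 0.5)])
print(len(h))  # → 1
```

These per-clip histograms are what the HDP clusters into atomic activities and interactions, after which the learned labels supervise the GP classifier.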

    Structured Indoor Modeling

    In this dissertation, we propose data-driven approaches to reconstructing 3D models of indoor scenes that are represented in a structured way (e.g., a wall is represented by a planar surface and two rooms are connected via the wall). The structured representation is more application-ready than dense representations (e.g., a point cloud), but it poses additional challenges for reconstruction, since extracting structures requires high-level understanding of geometry. To address this challenging problem, we exploit two common structural regularities of indoor scenes: 1) most indoor structures consist of planar surfaces (planarity), and 2) structural surfaces (e.g., walls and floor) can be represented by a 2D floorplan as a top-down view projection (orthogonality). Building on breakthroughs in data-capturing techniques, we develop automated systems that tackle two structured modeling problems, namely piece-wise planar reconstruction and floorplan reconstruction, by learning shape priors (i.e., planarity and orthogonality) from data. With structured representations and production-level quality, the reconstructed models have an immediate impact on many industrial applications.
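The basic primitive behind piece-wise planar reconstruction is fitting planes to noisy 3D points, classically done with RANSAC. The sketch below shows that primitive on a synthetic point cloud; it is a minimal baseline for illustration, not the dissertation's learned approach, and the iteration count, tolerance, and toy data are assumptions made here.

```python
# Hedged sketch: dominant-plane fitting with RANSAC, the classical baseline
# that piece-wise planar reconstruction builds on. Parameters are illustrative.
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.01, rng=None):
    """points: (N, 3) array. Returns (normal, d, inlier_mask) for n.p + d = 0."""
    rng = rng or np.random.default_rng(0)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

# toy cloud: 100 points on the floor z = 0 plus 20 off-plane outliers
rng = np.random.default_rng(1)
floor = np.c_[rng.uniform(0, 5, (100, 2)), np.zeros(100)]
noise = rng.uniform(0, 5, (20, 3)) + [0, 0, 1]   # lifted off the floor
n, d, inliers = ransac_plane(np.vstack([floor, noise]))
print(inliers[:100].all())  # floor points recovered as inliers
```

Running such a fit repeatedly (removing inliers each round) yields a piece-wise planar decomposition; the dissertation's contribution is learning which planes and floorplan structures to extract rather than greedily fitting them.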