    COMPOSE: Compacted object sample extraction, a framework for semi-supervised learning in nonstationary environments

    An increasing number of real-world applications involve streaming data drawn from drifting, nonstationary distributions. These applications demand algorithms that can learn and adapt to such changes, known as concept drift. Properly characterizing such data with existing approaches typically requires a substantial number of labeled instances, which may be difficult, expensive, or even impractical to obtain. This thesis introduces compacted object sample extraction (COMPOSE), a computational geometry-based framework for learning from nonstationary streaming data in which labels are unavailable (or presented only sporadically) after initialization. The feasibility and performance of the algorithm are evaluated on several synthetic and real-world data sets representing various scenarios of initially labeled streaming environments. On carefully designed synthetic data sets, we also compare the performance of COMPOSE against the optimal Bayes classifier, as well as the arbitrary subpopulation tracker algorithm, which addresses a similar setting referred to as extreme verification latency. Furthermore, using the real-world National Oceanic and Atmospheric Administration weather data set, we demonstrate that COMPOSE is competitive even with a well-established, fully supervised nonstationary learning algorithm that receives labeled data in every batch.
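The core COMPOSE loop can be sketched as follows. This is a minimal illustration, not the authors' implementation: the geometric compaction step (which COMPOSE performs with alpha shapes) is replaced here by simply keeping the samples nearest each class centroid, and a nearest-centroid rule stands in for the base classifier.

```python
import numpy as np

def compose_step(core_X, core_y, stream_X, keep_frac=0.5):
    """One COMPOSE-style iteration: label the unlabeled batch with a
    nearest-centroid classifier, then 'compact' each class by keeping
    only the samples nearest its centroid to seed the next step."""
    classes = np.unique(core_y)
    centroids = np.array([core_X[core_y == c].mean(axis=0) for c in classes])
    # semi-supervised labeling of the incoming (unlabeled) batch
    d = np.linalg.norm(stream_X[:, None, :] - centroids[None, :, :], axis=2)
    pred = classes[d.argmin(axis=1)]
    # compaction stand-in: per class, retain the points nearest the centroid
    new_X, new_y = [], []
    for i, c in enumerate(classes):
        pts = stream_X[pred == c]
        if len(pts) == 0:
            continue
        dist = np.linalg.norm(pts - centroids[i], axis=1)
        keep = dist.argsort()[: max(1, int(keep_frac * len(pts)))]
        new_X.append(pts[keep])
        new_y.append(np.full(len(keep), c))
    return np.vstack(new_X), np.concatenate(new_y), pred

# drifting two-class stream: both class means shift each time step
rng = np.random.default_rng(0)
core_X = np.vstack([rng.normal([0, 0], 0.3, (50, 2)),
                    rng.normal([3, 0], 0.3, (50, 2))])
core_y = np.array([0] * 50 + [1] * 50)       # labels exist only at t=0
for t in range(1, 5):
    shift = np.array([0.3 * t, 0.0])         # gradual concept drift
    Xt = np.vstack([rng.normal([0, 0] + shift, 0.3, (50, 2)),
                    rng.normal([3, 0] + shift, 0.3, (50, 2))])
    yt = np.array([0] * 50 + [1] * 50)       # ground truth, used only for scoring
    core_X, core_y, pred = compose_step(core_X, core_y, Xt)
    acc = (pred == yt).mean()
```

Despite never seeing a label after initialization, the compacted core tracks the drifting distributions, so the per-batch accuracy stays high.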

    Tree Digitisation from Point Clouds with Unreal Engine

    Trees are fundamental parts of urban areas and green urbanism. Although much effort is being put into the digitisation of urban areas, trees present great complexity and are usually replaced by predefined models. On the one hand, trees are composed of trunk, branches, and leaves, each with a completely different structure and geometry. On the other hand, these parts vary closely with each species. Therefore, to obtain a realistic digital urban environment in 3D models such as CityGML or the Metaverse, the trees must correspond faithfully to reality. The aim of this work is to propose a method to digitise trees from Mobile Laser Scanning and Terrestrial Laser Scanning data. The process takes advantage of the differentiation between trunks and leaves, segmenting them by point cloud geometric features. Unreal Engine is then used to digitise each part. Trunk and branches are geometrically preserved. For dense-canopy trees, predefined leaves matching the species are imported and the alpha shape of the crown is filled. For non-dense-canopy trees, the canopy is imported and modified to fit the branches. The method was tested on four real case studies. The results show realistic trees, with correct trunk and foliage segmentation, but highly dependent on the leaf/canopy repositories. Unreal Engine proved a very complete and useful tool for the digitisation of trees, generating realistic textures and lighting options.
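The alpha-shape step can be illustrated in 2D. The sketch below is a stand-in for the paper's pipeline, not its code: it computes a planar alpha shape as the set of Delaunay triangles whose circumradius is below a threshold (the classical characterization); filling those triangles with leaf instances would correspond to filling the crown.

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_triangles(pts, alpha_radius):
    """2D alpha shape: keep Delaunay triangles whose circumradius is
    below alpha_radius, so concave regions (like a crown silhouette)
    are preserved instead of being convex-hulled over."""
    tri = Delaunay(pts)
    keep = []
    for simplex in tri.simplices:
        p, q, r = pts[simplex]
        a = np.linalg.norm(q - r)
        b = np.linalg.norm(p - r)
        c = np.linalg.norm(p - q)
        u, v = q - p, r - p
        area = abs(u[0] * v[1] - u[1] * v[0]) / 2.0   # triangle area
        # circumradius R = abc / (4 * area); skip degenerate triangles
        if area > 1e-12 and a * b * c / (4.0 * area) < alpha_radius:
            keep.append(simplex)
    return np.array(keep)

# ring-shaped "crown cross-section": the alpha shape keeps the annulus
# and discards the large triangles spanning the central hole
rng = np.random.default_rng(3)
ang = rng.uniform(0, 2 * np.pi, 400)
rad = rng.uniform(0.8, 1.0, 400)
pts = np.column_stack([rad * np.cos(ang), rad * np.sin(ang)])
tris = alpha_shape_triangles(pts, alpha_radius=0.2)
```

The kept triangles all lie on the annulus itself; triangles bridging the hole have circumradius near the hole radius and are rejected.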

    Computer Vision Problems in 3D Plant Phenotyping

    In recent years, there has been significant progress in Computer Vision based plant phenotyping (the quantitative analysis of biological properties of plants). Traditional methods of plant phenotyping are destructive, manual, and error prone. Owing to their non-invasive, non-contact nature and increased accuracy, imaging techniques are becoming state-of-the-art in plant phenotyping. Among the many parameters of plant phenotyping, growth analysis is particularly important for biological inference, and automating it can accelerate throughput in crop production. This thesis contributes to the automation of plant growth analysis. First, we present a novel system for automated, non-invasive, non-contact plant growth measurement. We exploit recent advancements in robotic technologies and near-infrared laser scanners to build a 3D imaging system, and use state-of-the-art Computer Vision algorithms to fully automate growth measurement. We set up a gantry robot system with 7 degrees of freedom, suspended from the roof of a growth chamber. The payload is a range scanner that measures dense depth maps (raw 3D coordinate points in mm) on the surface of an object (the plant). The scanner can be moved around the plant to scan from different viewpoints by programming the robot with a specific trajectory. The sequence of overlapping scans can be aligned to obtain a full 3D structure of the plant in raw point cloud format, which can be triangulated to obtain a smooth surface (a triangular mesh) enclosing the original plant. We show the capability of the system to capture the well-known diurnal pattern of plant growth, computed from the surface area and volume of the plant meshes, for a number of plant species. Second, we propose a technique to detect branch junctions in plant point cloud data.
We demonstrate that, using these junctions as feature points, correspondence estimation can be formulated as a subgraph matching problem, achieving better matching results than the state of the art. This idea also removes the requirement of a priori knowledge of the rotational angles between adjacent scanning viewpoints imposed by the original registration algorithm for complex plant data; previously, this angle information had to be approximately known. Third, we present an algorithm to classify partially occluded leaves by their contours. In general, partial contour matching is an NP-hard problem. We propose a suboptimal matching solution and show that our method outperforms the state of the art on three public leaf datasets. We anticipate using this algorithm to track growing segmented leaves in our plant range data, even when a leaf becomes partially occluded by other plant matter over time. Finally, we perform experiments to demonstrate the capabilities and limitations of the system and highlight future research directions for Computer Vision based plant phenotyping.
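The growth measures used above, surface area and enclosed volume of a triangular mesh, have standard closed forms: area as a sum of per-triangle cross-product magnitudes, and volume via the divergence theorem over signed tetrahedra. A minimal sketch (not the thesis code), verified on a unit cube:

```python
import numpy as np

def mesh_area_volume(vertices, faces):
    """Surface area and enclosed volume of a closed triangular mesh.
    Area sums per-triangle cross-product magnitudes; volume sums signed
    tetrahedra against the origin (divergence theorem), so faces should
    be consistently oriented."""
    v = np.asarray(vertices, dtype=float)
    tri = v[np.asarray(faces)]                     # (n_faces, 3, 3)
    a, b, c = tri[:, 0], tri[:, 1], tri[:, 2]
    cross = np.cross(b - a, c - a)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    volume = np.abs(np.einsum('ij,ij->i', a, np.cross(b, c)).sum()) / 6.0
    return area, volume

# unit cube as 12 triangles; vertex index encodes its (x, y, z) bits
V = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
F = [(0, 1, 3), (0, 3, 2), (4, 6, 7), (4, 7, 5), (0, 4, 5), (0, 5, 1),
     (2, 3, 7), (2, 7, 6), (0, 2, 6), (0, 6, 4), (1, 5, 7), (1, 7, 3)]
area, vol = mesh_area_volume(V, F)   # expect area 6.0, volume 1.0
```

Tracking these two quantities per scan over a day recovers a diurnal growth curve directly from the aligned meshes.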

    A Survey of Surface Reconstruction from Point Clouds

    The area of surface reconstruction has seen substantial progress in the past two decades. The traditional problem addressed by surface reconstruction is to recover the digital representation of a physical shape that has been scanned, where the scanned data contain a wide variety of defects. While much of the earlier work focused on reconstructing a piecewise smooth representation of the original shape, recent work has adopted more specialized priors to address significantly challenging data imperfections, where the reconstruction can take on different representations, not necessarily the explicit geometry. We survey the field of surface reconstruction and provide a categorization with respect to priors, data imperfections, and reconstruction output. By taking a holistic view of surface reconstruction, we give a detailed characterization of the field, highlight similarities between diverse reconstruction techniques, and provide directions for future work in surface reconstruction.

    Animation in relational information visualization

    In order to navigate the world without memorizing every detail, the human brain builds a mental map of its environment. The mental map is a distorted and abstracted representation of the real environment: unimportant areas tend to be collapsed into a single entity, while important landmarks are overemphasized. When working with visualizations of data, we build a mental map of the data that is closely linked to the particular visualization. If the visualization changes significantly, due to changes in the data or in the way it is presented, we lose the mental map and have to rebuild it from scratch. The purpose of the research underlying this thesis was to investigate and devise methods for creating smooth transformations between visualizations of relational data that help users maintain or quickly update their mental map.
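A common baseline for such transitions is to interpolate node positions between the two layouts, usually with slow-in/slow-out easing rather than plain linear motion, since eased motion is easier to track visually. A minimal sketch (illustrative, not the thesis's method):

```python
import numpy as np

def tween_layouts(start, end, n_frames=10, ease=True):
    """Interpolate node positions between two layouts of the same graph.
    Smoothstep easing gives slow-in/slow-out motion, which helps viewers
    follow moving nodes and preserve their mental map."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)
        if ease:
            t = t * t * (3 - 2 * t)          # smoothstep easing
        frames.append((1 - t) * start + t * end)
    return frames

# two layouts of a 3-node graph
A = [[0, 0], [1, 0], [0, 1]]
B = [[2, 2], [3, 2], [2, 3]]
frames = tween_layouts(A, B, n_frames=5)
```

The first and last frames reproduce the source and target layouts exactly; intermediate frames move every node along a straight path with eased timing.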

    An Exploration of Controlling the Content Learned by Deep Neural Networks

    With the great success of the Deep Neural Network (DNN), how to obtain a trustworthy model is attracting more and more attention. Generally, raw data are provided to the DNN directly in training. However, the entire training process is a black box, in which the knowledge learned by the DNN is out of control, and this entails many risks. The most common one is overfitting. With deepening research on neural networks, additional and probably greater risks have been discovered recently. Related research shows that unknown clues can hide in the training data because of the randomness and the finite scale of the training data. Some of these clues build meaningless but explicit links between the input data and the output data, called "shortcuts". The DNN then makes its decisions based on these shortcuts, a phenomenon also called "network cheating". Shortcut knowledge learned by the DNN ruins the training and makes the performance of the DNN unreliable. Therefore, we need to control the raw data used in training. In this dissertation, we name the explicit raw data "content" and the implicit logic learned by the DNN "knowledge". By quantifying the information in the DNN's training, we find that the information learned by the network is much less than the information contained in the dataset. This indicates that it is unnecessary to train the neural network with all of the information: using partial information for training can achieve an effect similar to using the full information. In other words, it is possible to control the content fed into the DNN, and the strategy shown in this study can reduce the risks (e.g., overfitting and shortcuts) mentioned above. Moreover, using reconstructed data (with partial information) to train the network can reduce the complexity of the network and accelerate training.
We provide a pipeline to implement content control in DNN training, and use a series of experiments to prove its feasibility in two applications: human brain anatomy structure analysis, and human pose detection and classification.
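One simple way to feed a network "partial information", in the spirit of the reconstruction idea above, is to project the data onto its top principal components before training. The sketch below only illustrates the concept; it is not the dissertation's pipeline, and the data are synthetic.

```python
import numpy as np

def pca_reconstruct(X, k):
    """Reconstruct X keeping only its top-k principal components,
    i.e. produce a controlled, reduced-information version of the
    raw data to feed a model."""
    mu = X.mean(axis=0)
    Xc = X - mu
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mu + (Xc @ Vt[:k].T) @ Vt[:k]

rng = np.random.default_rng(1)
# 10-D data with 2 strong directions plus small isotropic noise
basis = rng.normal(size=(2, 10))
X = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 10))
X2 = pca_reconstruct(X, k=2)
# fraction of centered variance retained by the 2-component reconstruction
retained = 1 - np.linalg.norm(X - X2) ** 2 / np.linalg.norm(X - X.mean(0)) ** 2
```

Here two components retain almost all of the variance, so a model trained on X2 sees nearly the same signal while the discarded components (where dataset-specific shortcuts could hide) never reach it.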

    Shape analysis of the human brain.

    Autism is a complex developmental disability that has dramatically increased in prevalence, having a decisive impact on the health and behavior of children. Methods used to detect and recommend therapies have been much debated in the medical community because of the subjective nature of diagnosing autism. In order to provide an alternative method for understanding autism, the current work has developed a state-of-the-art 3-dimensional shape-based analysis of the human brain to aid in creating more accurate diagnostic assessments and guided risk analyses for individuals with neurological conditions, such as autism. Methods: The aim of this work was to assess whether the shape of the human brain can be used as a reliable source of information for determining whether an individual will be diagnosed with autism. The study was conducted using multi-center databases of magnetic resonance images of the human brain. The subjects in the databases were analyzed using a series of algorithms consisting of bias correction, skull stripping, multi-label brain segmentation, 3-dimensional mesh construction, spherical harmonic decomposition, registration, and classification. The software algorithms were developed as an original contribution of this dissertation in collaboration with the BioImaging Laboratory at the University of Louisville Speed School of Engineering. The classification of each subject was used to construct diagnoses and therapeutic risk assessments for each patient. Results: A reliable metric for making neurological diagnoses and constructing therapeutic risk assessments for individuals has been identified. The metric was explored in populations of individuals having autism spectrum disorders, dyslexia, Alzheimer's disease, and lung cancer.
Conclusion: Currently, the clinical applicability and benefits of the proposed software approach are being discussed by the broader community of doctors, therapists, and parents, for use in improving current methods by which autism spectrum disorders are diagnosed and understood.
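One step of the pipeline above, spherical harmonic decomposition, can be sketched for a star-shaped surface: express the radius as a function of direction on the unit sphere and fit harmonic coefficients by least squares. The degree-1 real basis and the toy surface below are illustrative assumptions, not the dissertation's implementation:

```python
import numpy as np

def real_sph_basis(dirs):
    """Real spherical harmonics up to degree 1 at unit vectors.
    Columns: Y_0^0, Y_1^{-1}, Y_1^0, Y_1^1 (Cartesian form)."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    c0 = np.sqrt(1 / (4 * np.pi))
    c1 = np.sqrt(3 / (4 * np.pi))
    return np.column_stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x])

rng = np.random.default_rng(2)
# random sample directions on the unit sphere
d = rng.normal(size=(500, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
# star-shaped toy "surface": radius bulges with z
r = 1.0 + 0.3 * d[:, 2]
# least-squares SPHARM coefficients of the radius function
B = real_sph_basis(d)
coef, *_ = np.linalg.lstsq(B, r, rcond=None)
```

Because the toy radius lies exactly in the degree-1 span, the fit recovers the constant term as sqrt(4*pi) and the Y_1^0 coefficient as 0.3/sqrt(3/(4*pi)); on real cortical meshes such coefficient vectors serve as the shape descriptors fed to the classifier.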

    Spatial relationship based scene analysis and synthesis

    In this thesis, we propose a new representation, which we name the Interaction Bisector Surface (IBS), that can describe the general nature of spatial relationships. We show that the IBS can be applied in 3D scene analysis, retrieval, and synthesis. Despite the fact that the spatial relationship between objects plays a significant role in describing context, few works have focused on a representation that can describe arbitrary interactions between different objects. Previous methods simply concatenate the individual state vectors to produce a joint space, or use only simple representations such as relative vectors or contacts to describe the context. Such representations do not contain detailed information about spatial relationships and cannot describe complex interactions such as hooking and enclosure. The IBS is a data structure rich in information about the interaction. It provides topological, geometric, and correspondence features that can be used to classify and recognize interactions. The topological features are at the most abstract level and can be used to recognize spatial relationships such as enclosure, hooking, and surrounding. The geometric features encode the fine details of interactions. The correspondence features describe which parts of the scene elements contribute to the interaction and are especially useful for recognizing character-object interactions. We show examples of successful classification and retrieval of different types of data, including indoor static scenes and dynamic scenes that contain character-object interactions. We also conduct an exhaustive comparison showing that our method outperforms existing approaches. Furthermore, we propose a novel approach to automatically synthesizing new interactions from example scenes and new objects. Given an example scene composed of two objects, the open space between the objects is abstracted by the IBS.
Then, a translation-, rotation-, and scale-equivariant feature called the shape coverage feature, which encodes how a point in the open space is surrounded by the environment, is computed near the IBS and around the open space of the new objects. Finally, a novel scene is synthesized by conducting a partial matching of the open space around the new objects with the IBS. Using our approach, new scenes can be automatically synthesized from example scenes and new objects without relying on label information, which is especially useful when the data of scenes and objects come from multiple sources.
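The IBS itself, the locus of points equidistant from two objects, can be approximated discretely by keeping sample points whose distances to the two objects nearly agree. A minimal 2D sketch (the objects, grid, and tolerance here are invented for illustration):

```python
import numpy as np

def approx_ibs(points_a, points_b, samples, tol=0.05):
    """Keep sample points whose distances to the two point-sampled
    objects are nearly equal: a discrete stand-in for the Interaction
    Bisector Surface between them."""
    def dist_to(obj, q):
        # distance from each query point to its nearest object point
        return np.linalg.norm(q[:, None, :] - obj[None, :, :], axis=2).min(axis=1)
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    da, db = dist_to(a, samples), dist_to(b, samples)
    return samples[np.abs(da - db) < tol]

# two parallel "objects": one at x=0, one at x=2; bisector sits at x=1
a = np.array([[0.0, y] for y in np.linspace(0, 1, 11)])
b = np.array([[2.0, y] for y in np.linspace(0, 1, 11)])
gx, gy = np.meshgrid(np.linspace(0, 2, 81), np.linspace(0, 1, 21))
grid = np.column_stack([gx.ravel(), gy.ravel()])
ibs = approx_ibs(a, b, grid, tol=0.03)
```

For these two parallel objects the retained samples form the expected mid-plane at x=1; for interacting shapes such as a hook and a ring, the same construction yields the curved sheet whose topology the thesis's features summarize.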