
    Monoplotting through Fusion of LIDAR Data and Low-Cost Digital Aerial Imagery

    Methodology and Algorithms for Pedestrian Network Construction

    With the advanced capabilities of mobile devices and the success of car navigation systems, interest in pedestrian navigation systems is on the rise. A critical component of any navigation system is a map database which represents a network (e.g., road networks in car navigation systems) and supports key functionality such as map display, geocoding, and routing. Road networks, mainly due to the popularity of car navigation systems, are well defined and publicly available. However, in pedestrian navigation systems, as well as in other applications including urban planning and physical activity studies, road networks do not adequately represent the paths that pedestrians usually travel. Currently, there are no techniques to automatically construct pedestrian networks, impeding research and development of applications requiring pedestrian data. This, coupled with the increased demand for pedestrian networks, is the prime motivation for this dissertation, which is focused on the development of a methodology and algorithms that can construct pedestrian networks automatically. A methodology involving three independent approaches was developed: network buffering (using existing road networks), collaborative mapping (using GPS traces collected by volunteers), and image processing (using high-resolution satellite and laser imagery). Experiments were conducted to evaluate the pedestrian networks constructed by these approaches against a pedestrian network baseline as ground truth. The results indicate that these three approaches, while differing in complexity and outcome, are all viable for automatically constructing pedestrian networks.
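    The network-buffering approach derives candidate pedestrian paths by offsetting existing road centerlines. A minimal sketch of that idea, assuming straight road segments and a fixed sidewalk offset (both simplifications not taken from the dissertation itself):

```python
import math

def buffer_road_segment(p1, p2, offset=3.0):
    """Generate two candidate sidewalk segments parallel to a road
    centerline, offset to either side (a crude network buffering)."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy)
    if length == 0:
        raise ValueError("degenerate segment")
    # Unit normal to the segment direction.
    nx, ny = -dy / length, dx / length
    left = ((x1 + nx * offset, y1 + ny * offset),
            (x2 + nx * offset, y2 + ny * offset))
    right = ((x1 - nx * offset, y1 - ny * offset),
             (x2 - nx * offset, y2 - ny * offset))
    return left, right

# A horizontal road from (0, 0) to (10, 0) yields candidate
# sidewalks offset to y = +3 and y = -3.
left, right = buffer_road_segment((0, 0), (10, 0))
```

    A production system would additionally clip the offset segments at intersections and snap them into a routable graph, which is where the real complexity of the approach lies.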

    Learning-Based Data-Driven and Vision Methodology for Optimized Printed Electronics

    Inkjet printing is an active domain of additive manufacturing and printed electronics due to its promising features: low cost, scalability, non-contact printing, and microscale on-demand pattern customization. Until now, mainstream research has made headway in ink material development and printing process optimization through traditional methods, with almost no work concentrated on machine learning and vision-based drop behavior prediction, pattern generation, and enhancement. In this work, we first carry out a systematic piezoelectric drop-on-demand inkjet drop generation and characterization study to structure our dataset, which is later used to develop a drop formation prediction module for diverse materials. Machine learning enables us to predict the drop speed and radius for a particular material and printer electrical-signal configuration. We verify our prediction results with an untested graphene oxide ink. Thereafter, we study automated pattern generation and evaluation algorithms for inkjet printing via a computer vision schema for several shapes and scales, and identify the best sequencing method in terms of comparative pattern quality, along with the underlying causes. In short, we develop and validate an automated vision methodology to optimize any given two-dimensional pattern. We show that traditional raster printing is inferior to other promising methods such as contour printing and segmented matrix printing, depending on the shape and dimensions of the designed pattern. Our proposed vision-based printing algorithm eliminates manual printing configuration workload and decides which segment of the pattern should be printed in which order and sequence. In addition, process defect monitoring and tracking has shown results equivalent to manual short-circuit, open-circuit, and sheet-resistance testing for deciding on pattern acceptance or rejection, with reduced device testing time.
    Drop behavior forecasting, automatic pattern optimization, and defect quantification against the designed image allow dynamic adaptation to varying material properties, substrates, and sophisticated designs, as established here; complex design features such as corners, edges, and miniature scales can be achieved.
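    As a minimal stand-in for the learned drop-behavior predictor, a one-variable least-squares fit can map a drive parameter to drop speed. The calibration numbers below are hypothetical, and the dissertation's model covers multiple materials and signal parameters rather than a single linear relation:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b for one predictor
    (e.g. drive voltage -> drop speed)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Hypothetical calibration data: drive voltage (V) vs. drop speed (m/s).
voltages = [18.0, 20.0, 22.0, 24.0]
speeds = [2.1, 2.5, 2.9, 3.3]
a, b = fit_linear(voltages, speeds)
predicted = a * 21.0 + b  # speed estimate at an untested 21 V setting
```

    A real predictor would use a multivariate regressor or neural network over waveform, viscosity, and surface-tension features; the structure of "fit on characterized drops, then predict for untested settings" is the same.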

    Direct occlusion handling for high level image processing algorithms

    Many high-level computer vision algorithms suffer in the presence of occlusions caused by multiple objects overlapping in a view. Occlusions remove the direct correspondence between visible areas of objects and the objects themselves by introducing ambiguity in the interpretation of the shape of the occluded object. Ignoring this ambiguity allows the perceived geometry of overlapping objects to be deformed or even fractured. Supplementing the raw image data with a vectorized structural representation which predicts object completions could stabilize high-level algorithms which currently disregard occlusions. Studies in the neuroscience community indicate that the feature points located at the intersection of junctions may be used by the human visual system to produce these completions. Geiger, Pao, and Rubin have successfully used these features in a purely rasterized setting to complete objects in a fashion similar to what is demonstrated by human perception. This work proposes using these features in a vectorized approach to solving the mid-level computer vision problem of object stitching. A system has been implemented which is able to extract L- and T-junctions directly from the edges of an image using scale-space and robust statistical techniques. The system is sensitive enough to isolate the corners on polygons with 24 sides or more, provided sufficient image resolution is available. Areas of promising development have been identified and several directions for further research are proposed.
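    As a rough illustration of vectorized junction labelling, the directions of the incident edges alone can distinguish an L-junction (two edges meeting at a corner) from a T-junction (three edges with one nearly collinear pair). The scale-space and robust-statistics machinery the thesis actually relies on is omitted here:

```python
import math

def classify_junction(point, neighbors, tol_deg=15.0):
    """Classify a vectorized junction as 'L' or 'T' from the directions
    of its incident edge segments (a simplified sketch only)."""
    angles = sorted(
        math.degrees(math.atan2(ny - point[1], nx - point[0])) % 360.0
        for nx, ny in neighbors
    )
    if len(angles) == 2:
        return "L"
    if len(angles) == 3:
        # A T-junction has two incident edges that are nearly collinear,
        # i.e. about 180 degrees apart.
        for i in range(3):
            for j in range(i + 1, 3):
                if abs(abs(angles[j] - angles[i]) - 180.0) < tol_deg:
                    return "T"
    return "other"

corner = classify_junction((0, 0), [(1, 0), (0, 1)])        # two edges
tee = classify_junction((0, 0), [(1, 0), (-1, 0), (0, 1)])  # collinear pair
```

    In practice the incident edge directions would themselves be estimated robustly from noisy edge pixels, which is where the distinction between the methods matters.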

    Artistic Content Representation and Modelling based on Visual Style Features

    This thesis aims to understand visual style in the context of computer science, using traditionally intangible artistic properties to enhance existing content manipulation algorithms and develop new content creation methods. The developed algorithms can be used to apply extracted properties to other drawings automatically; transfer a selected style; categorise images based upon perceived style; build 3D models using style features from concept artwork; and perform other style-based actions that change our perception of an object without changing our ability to recognise it. The research in this thesis aims to provide the style manipulation abilities that are missing from modern digital art creation pipelines.

    An Evolutionary Approach to Adaptive Image Analysis for Retrieving and Long-term Monitoring Historical Land Use from Spatiotemporally Heterogeneous Map Sources

    Land use changes have become a major contributor to the anthropogenic global change. The ongoing dispersion and concentration of the human species, unprecedented in their magnitude, have indisputably altered Earth’s surface and atmosphere. The effects are so salient and irreversible that a new geological epoch, following the interglacial Holocene, has been announced: the Anthropocene. While some scholars date its onset back to the Neolithic revolution, it is commonly placed in the late 18th century. The rapid development since the industrial revolution and its implications gave rise to an increasing awareness of the extensive anthropogenic land change and led to an urgent need for sustainable strategies for land use and land management. By preserving landscape and settlement patterns at discrete points in time, archival geospatial data sources such as remote sensing imagery and, in particular, historical geotopographic maps could give evidence of the dynamic land use change during this crucial period. In this context, this thesis set out to explore the potentials of retrospective geoinformation for monitoring, communicating, modeling and eventually understanding the complex and gradually evolving processes of land cover and land use change. Currently, large numbers of geospatial data sources such as archival maps are being made accessible online worldwide by libraries and national mapping agencies. Despite their abundance and relevance, the usage of historical land use and land cover information in research is still often hindered by the laborious visual interpretation, limiting the temporal and spatial coverage of studies. Thus, the core of the thesis is dedicated to the computational acquisition of geoinformation from archival map sources by means of digital image analysis. 
Based on a comprehensive review of the literature as well as the data and proposed algorithms, two major challenges for long-term retrospective information acquisition and change detection were identified: first, the diversity of geographical entity representations over space and time, and second, the uncertainty inherent to both the data source itself and its utilization for land change detection. To address the former challenge, image segmentation is considered a global non-linear optimization problem, and the segmentation methods and parameters are adjusted using a metaheuristic, evolutionary approach. For preserving adaptability in high-level image analysis, a hybrid model- and data-driven strategy, combining a knowledge-based and a neural net classifier, is recommended. To address the second challenge, a probabilistic object- and field-based change detection approach for modeling the positional, thematic, and temporal uncertainty inherent in both data and processing is developed. Experimental results indicate the suitability of the methodology in support of land change monitoring. In conclusion, potentials of application and directions for further research are given.
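    The evolutionary adjustment of segmentation parameters can be illustrated with a toy (1+4) evolution strategy tuning a single binarization threshold against labelled pixels. The single-parameter search space and the synthetic data are illustrative simplifications, not the thesis's actual optimizer:

```python
import random

def evolve_threshold(pixels, labels, generations=60, seed=7):
    """(1+4) evolution strategy tuning one segmentation threshold:
    keep a parent value, spawn mutated offspring, retain the best."""
    rng = random.Random(seed)

    def errors(t):
        # Misclassifications when pixels >= t are labelled foreground (1).
        return sum((p >= t) != bool(l) for p, l in zip(pixels, labels))

    parent = 0.5 * (min(pixels) + max(pixels))
    best = errors(parent)
    for _ in range(generations):
        for _ in range(4):  # four mutated offspring per generation
            child = parent + rng.gauss(0.0, 10.0)
            e = errors(child)
            if e <= best:
                parent, best = child, e
    return parent, best

# Synthetic image: background intensities near 40, foreground near 200.
pixels = [30, 35, 40, 45, 50, 190, 195, 200, 205, 210]
labels = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
threshold, err = evolve_threshold(pixels, labels)
```

    The real method searches a much larger joint space of segmentation methods and parameters, but the accept-if-no-worse mutation loop is the core of any such metaheuristic.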

    Information Preserving Processing of Noisy Handwritten Document Images

    Many pre-processing techniques that normalize artifacts and clean noise induce anomalies due to discretization of the document image, and important information that could be used at later stages may be lost. A proposed composite-model framework takes into account pre-printed information, user-added data, and digitization characteristics. Its benefits are demonstrated by experiments with statistically significant results. Separating pre-printed ruling lines from user-added handwriting shows how ruling lines impact people's handwriting and how they can be exploited for identifying writers. Ruling line detection based on multi-line linear regression reduces the mean error of counting them from 0.10 to 0.03, 6.70 to 0.06, and 0.13 to 0.02, compared to an HMM-based approach on three standard test datasets, thereby reducing human correction time by 50%, 83%, and 72% on average. On 61 page images from 16 rule-form templates, the precision and recall of form cell recognition are increased by 2.7% and 3.7%, compared to a cross-matrix approach. Compensating for and exploiting ruling lines during feature extraction rather than pre-processing raises the writer identification accuracy from 61.2% to 67.7% on a 61-writer noisy Arabic dataset. Similarly, counteracting page-wise skew by subtracting it or transforming contours in a continuous coordinate system during feature extraction improves the writer identification accuracy. An implementation study of contour-hinge features reveals that utilizing the full probability distribution function matrix improves the writer identification accuracy from 74.9% to 79.5%.
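    The abstract's ruling-line detector is based on multi-line linear regression. As a simplified stand-in that assumes zero skew (which the regression method does not require), a one-dimensional clustering of ink-pixel row coordinates can locate horizontal ruling lines:

```python
def locate_ruling_lines(ink_ys, n_lines, iterations=20):
    """1D k-means on ink-pixel row coordinates: a toy substitute for
    multi-line regression that locates horizontal ruling lines."""
    ys = sorted(ink_ys)
    # Initialise the line positions from evenly spaced quantiles.
    centers = [ys[(2 * k + 1) * len(ys) // (2 * n_lines)]
               for k in range(n_lines)]
    for _ in range(iterations):
        groups = [[] for _ in centers]
        for y in ys:
            k = min(range(len(centers)), key=lambda k: abs(y - centers[k]))
            groups[k].append(y)
        # Move each line to the mean row of the pixels assigned to it.
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers

# Synthetic page with three ruling lines near rows 100, 200, 300.
rows = [98, 100, 102, 199, 200, 201, 298, 300, 302]
centers = locate_ruling_lines(rows, 3)
```

    Fitting sloped lines jointly, as the regression formulation does, additionally recovers skew and tolerates handwriting strokes crossing the rulings.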

    Transforming the Reading Experience of Scientific Documents with Polymorphism

    Despite the opportunities created by digital reading, documents remain mostly static and mimic paper. Any improvement in the shape or form of documents has to come from authors, who contend with current digital formats, workflows, and software, and who impose a presentation on readers. Instead, I propose the concept of polymorphic documents: documents that can change in form to offer better representations of the information they contain. I believe that multiple representations of the same information can help readers, and that any document can be made polymorphic with no intervention from the original author. This thesis presents four projects investigating what information can be obtained from existing documents, how this information can be better represented, and how these representations can be generated using only the source document. To do so, I draw upon theories showing the benefit of presenting information using multiple representations; the design of interactive systems to support morphing representations; and user studies to evaluate system usability and the benefits of the new representations on reader comprehension.