
    Computer supported estimation of input data for transportation models

    Control and management of transportation systems frequently rely on optimization or simulation methods based on a suitable model. Such a model requires appropriate optimization or simulation procedures and correct input data. The input data define the transportation infrastructure and the transportation flows. Data acquisition is a costly process, so an efficient approach is highly desirable. The infrastructure can be recognized from drawn maps using segmentation, thinning, and vectorization; the accurate definition of network topology and node positions is the crucial part of this process. Transportation flows can be analyzed as vehicle behavior observed in video sequences of typical traffic situations. The resulting information consists of vehicle position, speed, and acceleration along the road section. Data for individual vehicles are statistically processed, and standard vehicle characteristics can be recommended for the vehicle generator in simulation models.
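    The infrastructure-recognition chain mentioned above (segmentation, thinning, vectorization with node detection) can be illustrated with a minimal sketch. The scikit-image and NumPy calls and the neighbour-counting rule for finding end points and junctions are assumptions made for the example, not the paper's implementation.

```python
# Minimal sketch of the pipeline: segmentation -> thinning -> node detection.
import numpy as np
from skimage import io, filters
from skimage.morphology import skeletonize

def extract_network_nodes(map_image_path):
    gray = io.imread(map_image_path, as_gray=True)

    # Segmentation: separate drawn road lines from the map background.
    threshold = filters.threshold_otsu(gray)
    binary = gray < threshold          # dark ink on light paper

    # Thinning: reduce every line to a 1-pixel-wide skeleton.
    skeleton = skeletonize(binary)

    # Vectorization, step 1: candidate topology nodes are skeleton pixels
    # with one neighbour (end points) or three or more (junctions).
    padded = np.pad(skeleton, 1)
    neighbour_count = sum(
        np.roll(np.roll(padded, dy, 0), dx, 1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )[1:-1, 1:-1]
    nodes = skeleton & ((neighbour_count == 1) | (neighbour_count >= 3))
    return skeleton, np.argwhere(nodes)
```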

    A complete hand-drawn sketch vectorization framework

    Vectorizing hand-drawn sketches is a challenging task of paramount importance for creating CAD-ready vectorized versions in fashion and creative workflows. This paper proposes a complete framework that automatically transforms noisy and complex hand-drawn sketches with different stroke types into a precise, reliable, and highly simplified vectorized model. The proposed framework includes a novel line extraction algorithm based on a multi-resolution application of Pearson's cross-correlation and a new unbiased thinning algorithm that copes with scribbles and variable-width strokes to obtain clean 1-pixel lines. Other contributions include variants of pruning, merging, and edge-linking procedures to post-process the obtained paths. Finally, a modification of the original Schneider's vectorization algorithm is designed to obtain fewer control points in the resulting Bézier splines. All the proposed steps of the framework have been extensively tested and compared with state-of-the-art algorithms, showing both qualitatively and quantitatively that the framework outperforms them.
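    The last stage mentioned above, reducing each extracted path to a few control points, can be sketched as a simplified least-squares cubic Bézier fit in the spirit of Schneider's algorithm; fixing the end points and using chord-length parameters are assumptions for the example, and the paper's actual modification may differ.

```python
# Fit one cubic Bezier segment to an ordered 1-pixel path (illustrative only).
import numpy as np

def fit_cubic_bezier(points):
    """points: (n, 2) array of ordered path pixels; returns 4 control points."""
    pts = np.asarray(points, dtype=float)
    p0, p3 = pts[0], pts[-1]

    # Chord-length parameterization of the samples in [0, 1].
    dists = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = dists / dists[-1]

    # Bernstein basis weights for the two free control points P1 and P2.
    b1 = 3 * (1 - t) ** 2 * t
    b2 = 3 * (1 - t) * t ** 2
    A = np.column_stack([b1, b2])

    # Subtract the fixed end-point contribution, then solve in least squares.
    rhs = pts - np.outer((1 - t) ** 3, p0) - np.outer(t ** 3, p3)
    (p1, p2), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.vstack([p0, p1, p2, p3])
```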

    Line tracking algorithm for scribbled drawings

    This paper describes a line tracking algorithm that may be used to extract lines from paper-based scribbles. The proposed algorithm improves on the performance of existing sparse-pixel line tracking techniques used in vectorization by introducing perceptual saliency and Kalman filtering concepts into the line tracking. Furthermore, an adaptive sampling size is used so that the size of the tracking step can be adjusted to reflect the stroke curvature.
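    A minimal sketch of the idea is given below: a constant-velocity Kalman filter that predicts the next sample along the stroke and adapts its step from the innovation. The state layout, the noise matrices, and the step-adaptation rule are illustrative assumptions, not the paper's tuned algorithm.

```python
# Constant-velocity Kalman tracker over stroke pixels with an adaptive step.
import numpy as np

class StrokeTracker:
    def __init__(self, x0, y0, step=5.0):
        self.state = np.array([x0, y0, 0.0, 0.0])       # position + velocity
        self.P = np.eye(4) * 10.0                        # state covariance
        self.Q = np.eye(4) * 0.1                         # process noise
        self.R = np.eye(2) * 1.0                         # measurement noise
        self.H = np.hstack([np.eye(2), np.zeros((2, 2))])
        self.step = step                                 # sampling step (px)

    def predict(self):
        F = np.eye(4)
        F[0, 2] = F[1, 3] = self.step                    # advance along velocity
        self.state = F @ self.state
        self.P = F @ self.P @ F.T + self.Q
        return self.state[:2]                            # predicted next pixel

    def update(self, measured_xy):
        z = np.asarray(measured_xy, dtype=float)
        y = z - self.H @ self.state                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.state = self.state + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        # Adaptive sampling: shrink the step when the innovation is large
        # (high curvature), grow it again on straight sections.
        self.step = np.clip(self.step * (1.5 if np.linalg.norm(y) < 1.0 else 0.5),
                            1.0, 10.0)
```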

    Deep Vectorization of Technical Drawings

    We present a new method for vectorization of technical line drawings, such as floor plans, architectural drawings, and 2D CAD images. Our method includes (1) a deep learning-based cleaning stage to eliminate the background and imperfections in the image and fill in missing parts, (2) a transformer-based network to estimate vector primitives, and (3) an optimization procedure to obtain the final primitive configurations. We train the networks on synthetic data (renderings of vector line drawings) and on manually vectorized scans of line drawings. Our method quantitatively and qualitatively outperforms a number of existing techniques on a collection of representative technical drawings.
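    Stage (3) has to reconcile primitives predicted independently for neighbouring patches. As a rough flavour of that kind of post-processing (not the paper's actual optimization procedure), the sketch below snaps nearly coincident line-segment endpoints; the tolerance value and the clustering rule are arbitrary assumptions.

```python
# Snap nearly coincident endpoints of predicted line segments (illustrative).
import numpy as np

def snap_endpoints(segments, tol=2.0):
    """segments: (n, 4) array of line segments (x0, y0, x1, y1)."""
    segs = np.asarray(segments, dtype=float)
    ends = segs.reshape(-1, 2).copy()               # every endpoint as a row
    for i in range(len(ends)):
        close = np.linalg.norm(ends - ends[i], axis=1) < tol
        ends[close] = ends[close].mean(axis=0)      # pull the cluster together
    return ends.reshape(-1, 4)
```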

    Building Footprint Extraction in Dense Areas using Super Resolution and Frame Field Learning

    Despite notable results on standard aerial datasets, current state-of-the-art methods fail to produce accurate building footprints in dense areas, owing to the challenging properties of these areas and limited data availability. In this paper, we propose a framework to address such issues in polygonal building extraction. First, super resolution is employed to enhance the spatial resolution of the aerial image, allowing finer details to be captured. This enhanced imagery serves as input to a multitask learning module consisting of a segmentation head and a frame field learning head to effectively handle irregular building structures. Our model is supervised with adaptive loss weighting, enabling the extraction of sharp edges and fine-grained polygons, which is difficult due to overlapping buildings and low data quality. Extensive experiments on a slum area in India that mimics a dense area demonstrate that our proposed approach outperforms the current state-of-the-art methods by a large margin. (Accepted at the 12th International Conference on Awareness Science and Technology.)
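    One way to realize the adaptive loss weighting mentioned above is uncertainty-based multitask weighting over the two heads; the sketch below assumes that formulation and PyTorch, and the paper's exact scheme may differ.

```python
# Learnable per-task weights balancing a segmentation head and a frame field head.
import torch
import torch.nn as nn

class AdaptiveMultitaskLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # One learnable log-variance per task; the optimizer tunes the balance.
        self.log_var_seg = nn.Parameter(torch.zeros(()))
        self.log_var_field = nn.Parameter(torch.zeros(()))
        self.seg_loss = nn.BCEWithLogitsLoss()

    def forward(self, seg_logits, seg_target, field_pred, field_target):
        l_seg = self.seg_loss(seg_logits, seg_target)
        l_field = nn.functional.mse_loss(field_pred, field_target)
        # Each task is down-weighted by its learned uncertainty and
        # regularized so the weights cannot collapse to zero.
        return (torch.exp(-self.log_var_seg) * l_seg + self.log_var_seg
                + torch.exp(-self.log_var_field) * l_field + self.log_var_field)
```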

    Performance Measure That Indicates Geometry Sufficiency of State Highways: Volume II—Clear Zones and Cross-Section Information Extraction

    The evaluation method employed by the Indiana Department of Transportation (INDOT) for proposed corridor projects considers road geometry improvements through a generalized categorization. A new method that considers the change in geometry improvements requires additional information regarding cross-section elements. Part of this information is readily available, but some information, such as embankment slopes and obstructions near the traveled way, needs to be acquired. This study investigates available data sources and methods to obtain cross-section and clear-zone information in a feasible way for this purpose. We employed color infrared (CIR) orthophotos, LiDAR point clouds, and digital elevation and surface models to extract the paved surface, average grade, embankment slopes, and obstructions near the traveled way such as trees and man-made structures. We propose a framework that first performs a support vector machine (SVM) classification of the paved surface, then determines the medial axis and reconstructs the paved surface. Once the paved surface is obtained, the clear zones are defined and the features within them are extracted by classification of the LiDAR point clouds. SVM classification of the paved surface from CIR orthophotos in the study area yields a classification accuracy of over 90%, which suggests the suitability of high-resolution CIR images for classifying the paved surface via SVM. A total of 21.3 miles of relevant road network was extracted. This corresponds to approximately 90% of the actual road network, the remainder being lost to missing parts in the paved-surface classification results and to parts removed during the cleaning, simplification, and generalization process. Branches due to connecting driveways, adjacent parking lots, etc. were also extracted together with the main road alignment as a by-product; with further effort to filter out pieces that do not correspond to actual branches, this information may also be utilized if found necessary. Based on the extracted centerline and classification results, we estimated the paved surface as observed on the orthophotos. Based on the estimated paved-surface centerline and width, we generated cross-section lines and calculated the side slopes. We also extracted the buildings and trees within the clear zones, which are likewise defined from the reconstruction of the paved surface. Among 86 objects detected as buildings, 14% were false positives due to confusion with bridges or trees that present a planar structure.
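    The first stage of the framework, SVM classification of paved-surface pixels from CIR orthophotos, can be sketched as follows; the band order, the NDVI feature, and the SVM parameters are assumptions for the example rather than the study's configuration.

```python
# Train an RBF-kernel SVM to label paved-surface pixels from CIR bands.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def train_paved_surface_classifier(cir_pixels, labels):
    """cir_pixels: (n, 3) NIR/red/green values; labels: 1 = paved, 0 = other."""
    nir, red = cir_pixels[:, 0].astype(float), cir_pixels[:, 1].astype(float)
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)   # vegetation index
    features = np.column_stack([cir_pixels, ndvi])

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    model.fit(features, labels)
    return model
```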

    Artistic Content Representation and Modelling based on Visual Style Features

    This thesis aims to understand visual style in the context of computer science, using traditionally intangible artistic properties to enhance existing content manipulation algorithms and to develop new content creation methods. The developed algorithms can be used to apply extracted properties to other drawings automatically; transfer a selected style; categorise images based upon perceived style; build 3D models using style features from concept artwork; and perform other style-based actions that change our perception of an object without changing our ability to recognise it. The research in this thesis aims to provide the style manipulation abilities that are missing from modern digital art creation pipelines.

    Correct and efficient accelerator programming

    This report documents the program and the outcomes of Dagstuhl Seminar 13142, “Correct and Efficient Accelerator Programming”. The aim of this Dagstuhl seminar was to bring together researchers from various sub-disciplines of computer science to brainstorm and discuss the theoretical foundations, design, and implementation of techniques and tools for correct and efficient accelerator programming.

    Extracting chemical structure from printed diagrams

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 117-118). Over the years, a vast amount of literature in the field of chemistry has accumulated, and searching for documents about specific molecules is a formidable task. To the extent that the literature is textual, services like Google enable relatively easy search. While such search indexes are very good at finding text, it is difficult to describe molecules completely using text, because text cannot easily convey molecular structure, and molecular structure defines chemical properties. ChemWARD is a system that extracts molecular structure from the printed diagrams that are ubiquitous in the chemistry literature and converts it to a machine-readable format, allowing chemists to search the literature by drawing a molecular structure instead of typing a chemical formula. We describe the architecture of the system and report on its performance, demonstrating an overall accuracy rate of 85.5% on printed diagrams extracted from published chemical literature. By Angelique Moscicki. M.Eng.
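    The abstract does not name ChemWARD's machine-readable format. Purely as an illustration of structure-based search over extracted molecules, the sketch below assumes the recovered structures are available as SMILES strings and uses RDKit to canonicalize and query them.

```python
# Substructure search over extracted molecules, assuming SMILES output.
from rdkit import Chem

def find_matches(extracted_smiles, query_smiles):
    query = Chem.MolFromSmiles(query_smiles)          # structure drawn by the chemist
    hits = []
    for smi in extracted_smiles:                      # structures recovered from diagrams
        mol = Chem.MolFromSmiles(smi)
        if mol is not None and mol.HasSubstructMatch(query):
            hits.append(Chem.MolToSmiles(mol))        # canonical form for indexing
    return hits
```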

    AUTOMATING DATA-LAYOUT DECISIONS IN DOMAIN-SPECIFIC LANGUAGES

    A long-standing challenge in High-Performance Computing (HPC) is the simultaneous achievement of programmer productivity and hardware computational efficiency. The challenge has been exacerbated by the onset of multi- and many-core CPUs and accelerators. Only a few expert programmers have been able to hand-code the domain-specific data transformations and vectorization schemes needed to extract the best possible performance on such architectures. In this research, we examined the possibility of automating these methods by developing a Domain-Specific Language (DSL) framework. Our DSL approach extends C++14 by embedding into it a high-level data-parallel array language and by using a domain-specific compiler to compile to hybrid-parallel code. We also implemented an array index-space transformation algebra within this high-level array language to manipulate array data layouts and data distributions. The compiler introduces a novel method for SIMD auto-vectorization based on array data layouts. Our new auto-vectorization technique is shown to outperform the default auto-vectorization strategy by up to 40% for stencil computations. The compiler also automates distributed data movement, overlapping local compute with remote data movement using polyhedral integer set analysis. Along with these main innovations, we developed a new technique using C++ template metaprogramming for developing embedded DSLs in C++, and we proposed a domain-specific compiler intermediate representation that simplifies data flow analysis of abstract DSL constructs. We evaluated our framework by constructing a DSL for the HPC grand-challenge domain of lattice quantum chromodynamics. Our DSL yielded performance gains of up to twice the flop rate of existing production C code for selected kernels, while using less than one-tenth the lines of code. The performance of this DSL was also competitive with the best hand-optimized and hand-vectorized code, and is an order of magnitude better than that of existing production DSLs. Doctor of Philosophy.
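    The framework itself extends C++14, but the data-layout idea it automates can be illustrated in a language-neutral way: the NumPy sketch below contrasts an array-of-structs layout with a struct-of-arrays layout, where the latter keeps each field contiguous and therefore friendly to SIMD vectorization. The field names and sizes are arbitrary choices for the example.

```python
# Array-of-structs vs. struct-of-arrays: why data layout matters for vectorization.
import numpy as np

n = 1_000_000

# Array-of-structs: the fields of one "site" are interleaved in memory.
aos = np.zeros(n, dtype=[("re", np.float64), ("im", np.float64)])

# Struct-of-arrays: each field is stored as its own contiguous array.
soa_re = np.ascontiguousarray(aos["re"])
soa_im = np.ascontiguousarray(aos["im"])

# The same complex magnitude computed over both layouts; on the SoA layout
# the loads are unit-stride, which is what hardware vector units prefer.
mag_aos = np.sqrt(aos["re"] ** 2 + aos["im"] ** 2)   # strided access
mag_soa = np.sqrt(soa_re ** 2 + soa_im ** 2)         # contiguous access
```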