
    Enhanced Query Processing on Complex Spatial and Temporal Data

    Innovative technologies in the areas of multimedia and mechanical engineering, as well as novel methods for data acquisition in scientific fields such as geoscience, environmental science, medicine, biology and astronomy, enable a more exact representation of data and thus a more precise data analysis. The resulting quantitative and qualitative growth of spatial and temporal data in particular leads to new challenges for the management and processing of complex structured objects and requires efficient and effective methods for data analysis. Spatial data describe objects in space by a well-defined extent, a specific location, and their relationships to other objects. Classical examples of complex structured spatial objects are three-dimensional CAD data from mechanical engineering and two-dimensional bounded regions from geography. For industrial applications, efficient collision and intersection queries are of great importance. Temporal data describe time-dependent processes, for instance the duration of specific events or the evolution of time-varying attributes of objects. Time series are among the most popular and complex types of temporal data and are the most important form of description for time-varying processes. An elementary type of query in time series databases is the similarity query, which serves as a basic query for data mining applications. The main goal of this thesis is to develop effective and efficient algorithms supporting collision queries on spatial data as well as similarity queries on temporal data, in particular time series. The presented concepts are based on the efficient management of interval sequences, which are suitable for both spatial and temporal data. The effective analysis of the underlying objects is efficiently supported by adequate access methods. First, this thesis deals with collision queries on complex spatial objects, which can be reduced to intersection queries on interval sequences. We introduce statistical methods for grouping subsequences; combined with the concept of multi-step query processing, these methods allow the query process to be accelerated drastically. Furthermore, we develop a cost model for the multi-step query processing of interval sequences in distributed systems, which successfully supports a cost-based query strategy. Second, we introduce a novel similarity measure for time series. It allows the user to focus on specific time series amplitudes for the similarity measurement. The new similarity model defines two time series to be similar iff they show similar temporal behavior with respect to being below or above a specific threshold. This type of query is primarily required in natural science applications. The main goal of this new query method is the detection of anomalies and the adaptation to new requirements in the area of data mining in time series databases. In addition, a semi-supervised cluster analysis method is presented which is based on the introduced similarity model for time series. The efficiency and effectiveness of the proposed techniques are extensively discussed, and their advantages over existing methods are experimentally demonstrated on datasets derived from real-world applications.
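
    To make the threshold-based similarity idea concrete, here is a minimal Python sketch (not taken from the thesis) that "intervalizes" two series at a threshold and scores them by the overlap of their above-threshold intervals; the Jaccard-style measure and all function names are illustrative assumptions, not the thesis's actual similarity model.

```python
def threshold_intervals(series, threshold):
    """Return half-open index intervals [start, end) where the series
    exceeds the threshold (a simple 'intervalization' of a time series)."""
    intervals, start = [], None
    for i, value in enumerate(series):
        if value > threshold and start is None:
            start = i
        elif value <= threshold and start is not None:
            intervals.append((start, i))
            start = None
    if start is not None:
        intervals.append((start, len(series)))
    return intervals


def overlap_length(a, b):
    """Total length of the intersection of two sorted interval lists."""
    total, i, j = 0, 0, 0
    while i < len(a) and j < len(b):
        total += max(0, min(a[i][1], b[j][1]) - max(a[i][0], b[j][0]))
        if a[i][1] < b[j][1]:   # advance the interval that ends first
            i += 1
        else:
            j += 1
    return total


def threshold_similarity(s1, s2, threshold):
    """Jaccard-style score of the above-threshold regions of two series."""
    a = threshold_intervals(s1, threshold)
    b = threshold_intervals(s2, threshold)
    inter = overlap_length(a, b)
    union = sum(e - s for s, e in a) + sum(e - s for s, e in b) - inter
    return 1.0 if union == 0 else inter / union
```

    Under this sketch, two series score as similar when they lie above the threshold over largely the same time intervals, mirroring the abstract's notion of similar temporal behavior with respect to a threshold.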

    APRIL: Approximating Polygons as Raster Interval Lists

    The spatial intersection join is an important spatial query operation, due to its popularity and high complexity. The spatial join pipeline takes as input two collections of spatial objects (e.g., polygons). In the filter step, pairs of object MBRs that intersect are identified and passed to the refinement step for verification of the join predicate on the exact object geometries. The bottleneck of spatial join evaluation is in the refinement step. We introduce APRIL, a powerful intermediate step in the pipeline, which is based on raster interval approximations of object geometries. Our technique applies a sequence of interval joins on 'intervalized' object approximations to determine whether the objects intersect or not. Compared to previous work, APRIL approximations are simpler, occupy much less space, and achieve similar pruning effectiveness at a much higher speed. Besides intersection joins between polygons, APRIL can be directly applied, with high effectiveness, to polygonal range queries, within joins, and polygon-linestring joins. By applying a lightweight compression technique, APRIL approximations may occupy even less space than object MBRs. Furthermore, APRIL can be customized to apply to partitioned data and to polygons of varying sizes, rasterized at different granularities. Our last contribution is a novel algorithm that computes the APRIL approximation of a polygon without having to rasterize it in full, which is orders of magnitude faster than the computation of other raster approximations. Experiments on real data demonstrate the effectiveness and efficiency of APRIL; compared to the state-of-the-art intermediate filter, APRIL occupies 2x-8x less space, is 3.5x-8.5x more time-efficient, and reduces the end-to-end join cost by up to 3 times.
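
    As a rough illustration of the interval-list idea, the Python sketch below compresses a polygon's covered raster cells into sorted intervals and tests two such lists for overlap with a single merge pass. It is a simplification: it assumes a row-major cell ordering and ignores APRIL's distinction between fully and partially covered cells, so an overlap here only marks a candidate pair for exact verification, while disjoint lists can be safely pruned.

```python
def cells_to_intervals(cell_ids):
    """Compress covered raster-cell ids into half-open intervals [start, end)
    of consecutive ids, i.e. an 'intervalized' raster approximation."""
    intervals = []
    for c in sorted(cell_ids):
        if intervals and intervals[-1][1] == c:
            intervals[-1][1] = c + 1        # extend the current run
        else:
            intervals.append([c, c + 1])    # start a new run
    return [tuple(iv) for iv in intervals]


def interval_lists_overlap(a, b):
    """Merge-style test: do two sorted interval lists share any cell?
    If they do not, the corresponding polygons cannot intersect."""
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i][1] <= b[j][0]:
            i += 1
        elif b[j][1] <= a[i][0]:
            j += 1
        else:
            return True
    return False
```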

    Sparse Volumetric Deformation

    Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, and therefore massive volumetric scenes can be rendered efficiently. The problem with this approach is that it currently only supports static scenes, because it is difficult to accurately deform massive amounts of volume elements and reconstruct the scene hierarchy in real time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location, and, similarly, gaps occur where deformation stretches the elements apart by more than one discrete location. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example a cage or skeleton) that maps to a set of features in order to accelerate the deformation process. The problems with this technique are that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; therefore, the control structure must also capture features hierarchically, according to the varying volumetric resolution. This thesis investigates the deformation and rendering of massive amounts of dynamic volumetric content. The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real time. This is achieved by automatically generating a control skeleton which is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
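
    The following Python fragment is a schematic sketch, with assumed names and a much-simplified node layout, of how a hierarchical volume can be sampled according to a region of interest: branches outside the region (or beyond the requested depth) contribute only their coarse payload, so only a partial region of the full dataset is processed. It is illustrative only and does not reproduce the thesis's actual data structure or deformation machinery.

```python
class VolumeNode:
    """Node of a hierarchical volume: an axis-aligned box, a coarse
    voxel payload at this resolution, and up to eight children."""
    def __init__(self, bounds, payload, children=None):
        self.bounds = bounds            # ((x0, y0, z0), (x1, y1, z1))
        self.payload = payload          # coarse samples covering this region
        self.children = children or []


def boxes_overlap(a, b):
    """Axis-aligned overlap test between two bounding boxes."""
    (ax0, ay0, az0), (ax1, ay1, az1) = a
    (bx0, by0, bz0), (bx1, by1, bz1) = b
    return (ax0 < bx1 and bx0 < ax1 and
            ay0 < by1 and by0 < ay1 and
            az0 < bz1 and bz0 < az1)


def collect_samples(node, roi, max_depth, depth=0):
    """Return fine payloads inside the region of interest and coarse
    payloads elsewhere, so only part of the dataset is touched."""
    inside = boxes_overlap(node.bounds, roi)
    if not node.children or depth == max_depth or not inside:
        return [node.payload]
    out = []
    for child in node.children:
        out.extend(collect_samples(child, roi, max_depth, depth + 1))
    return out
```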

    Deep Learning Assisted Intelligent Visual and Vehicle Tracking Systems

    Sensor fusion and tracking is the ability to bring together measurements from multiple sensors, at the current and past times, to estimate the current state of a system. The resulting state estimate is more accurate than the direct sensor measurement because it balances the state prediction based on the assumed motion model against the noisy sensor measurement. Systems can then use the information provided by the sensor fusion and tracking process to support more intelligent actions and achieve autonomy in a system like an autonomous vehicle. In the past, widely used sensor data were structured and could be used directly in the tracking system, e.g., distance, temperature, acceleration, and force. The measurements' uncertainty can be estimated from experiments. Currently, however, a large amount of unstructured data can be generated by sensors such as cameras and LiDAR, which brings new challenges to the fusion and tracking system. Traditional algorithms cannot directly use these unstructured data; another method or process is needed to “understand” them first. For example, if a system tries to track a particular person in a video sequence, it needs to understand where the person is in the first place, and traditional tracking methods cannot accomplish such a task. The measurement model for unstructured data is usually difficult to construct. Deep learning techniques provide promising solutions to this type of problem. A deep learning method can learn and understand the unstructured data to accomplish tasks such as object detection in images, object localization in LiDAR point clouds, and driver behavior prediction from the current traffic conditions. Deep learning architectures such as deep neural networks, deep belief networks, recurrent neural networks, and convolutional neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, and machine translation, where they have produced results comparable with human expert performance. How to incorporate information obtained via deep learning into our tracking system is one of the topics of this dissertation. Another challenging task is using learning methods to improve a tracking filter's performance. In a tracking system, many manually tuned parameters affect the tracking performance, e.g., the process noise covariance and measurement noise covariance in a Kalman filter (KF). These parameters used to be estimated by running the tracking algorithm several times and selecting the values that gave the optimal performance. How to learn the system parameters automatically from data, and how to use machine learning techniques directly to provide useful information to the tracking system, are critical to the proposed tracking system. The proposed research on the intelligent tracking system has two objectives. The first objective is to make a visual tracking filter smart enough to understand unstructured data sources. The second objective is to apply learning algorithms to improve a tracking filter's performance. The goal is to develop an intelligent tracking system that can understand the unstructured data and use the data to improve itself.
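
    As a small illustration of the hand-tuned parameters mentioned above, the sketch below runs one predict/update cycle of a one-dimensional constant-velocity Kalman filter; the scalars q and r set the process and measurement noise covariances, which are exactly the kind of quantities the dissertation proposes to learn from data (with detections from a deep network supplying the measurement z) rather than tune manually. The model and names are assumptions for illustration, not the dissertation's implementation.

```python
import numpy as np

def kalman_step(x, P, z, dt, q, r):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.
    x: state [position, velocity], P: state covariance,
    z: position measurement (length-1 array),
    q, r: process and measurement noise scales."""
    F = np.array([[1.0, dt], [0.0, 1.0]])             # constant-velocity motion model
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])               # process noise covariance
    H = np.array([[1.0, 0.0]])                        # we observe position only
    R = np.array([[r]])                               # measurement noise covariance

    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                                     # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```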

    Improving Collaborative Drawing using HTML5

    This research looks into improving online web-based collaborative drawing using HTML5. Although many systems have been developed over a number of years, none of the applications released have been satisfactory for many artists; the core drawing experience was too different from stand-alone drawing applications. Stand-alone drawing applications offer better freedom of control, with functions like undo, and allow artists to work efficiently with hotkeys. The advent of the HTML5 Canvas element and WebSockets in recent browsers has provided new opportunities for collaborative online interaction. This research used an incremental development approach to build a prototype HTML5 drawing application providing new functionality for online collaborative drawing. The project was supported by two experienced artists throughout investigation, design, implementation and testing. The project artists helped validate design decisions and evaluate the implementation. As a result, a robust HTML5 collaborative drawing application was built. The prototype contains core drawing functionality that existing applications lack. Features include: undo and redo, free canvas transformation, complex hotkey interaction, custom canvas size support, a colour wheel, and layers. All these features work smoothly in a fully synchronized network environment under a client-server model. The collaboration system uses an authoritative server with local prediction and re-synchronization to hide latency. Although the result is only a prototype, the evaluations from the project artists were very positive. Once more functionality targeted towards social interaction is built, the prototype will be ready for mass public testing. Although there are some issues caused by the immaturity of HTML5 technology, this project affirms its capability for collaborative web applications.
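
    To illustrate the local-prediction idea (the prototype itself is HTML5/JavaScript; this is a language-neutral Python sketch with assumed names, not the project's code), a client can apply its own strokes optimistically and reconcile them against the authoritative order broadcast by the server:

```python
class PredictiveCanvasState:
    """Client-side bookkeeping for an authoritative-server drawing session:
    strokes confirmed by the server plus locally predicted, unacknowledged ones."""
    def __init__(self):
        self.confirmed = []      # operations in the server's authoritative order
        self.pending = {}        # local op id -> op, applied optimistically

    def draw_locally(self, op_id, op, send):
        self.pending[op_id] = op   # predict: show the stroke immediately
        send(op_id, op)            # ...and ship it to the server

    def on_server_broadcast(self, op_id, op):
        self.confirmed.append(op)        # server decides the final order
        self.pending.pop(op_id, None)    # drop our prediction if it was ours

    def visible_ops(self):
        """Re-synchronized view: server history first, then unconfirmed local ops."""
        return self.confirmed + list(self.pending.values())
```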

    Effects of Aerial LiDAR Data Density on the Accuracy of Building Reconstruction

    Previous work has identified a positive relationship between the density of the aerial LiDAR input used for building reconstruction and the accuracy of the resulting reconstructed models. We hypothesize a point of diminishing returns beyond which higher data density no longer contributes meaningfully to higher accuracy in the end product. We investigate this relationship by subsampling a high-density dataset from the City of Surrey, BC to different densities and reconstructing from each subsampled dataset using two different reconstruction methods. We then determine the accuracy of the reconstruction against manually created reference data, in terms of both 2D footprint accuracy and 3D model accuracy. We find no quantitative evidence of meaningfully improved output accuracy at densities higher than 4 points/m² for either method, although aesthetic improvements at higher point cloud densities are noted for one method.
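
    The abstract does not spell out the subsampling procedure; as one simple, hypothetical way to produce the lower-density inputs, a tile could be randomly thinned to a target density such as 4 points/m², sketched below in Python.

```python
import random

def subsample_to_density(points, area_m2, target_density, seed=0):
    """Randomly thin an aerial LiDAR tile to roughly `target_density`
    points per square metre, given the tile's area in square metres."""
    target_count = int(target_density * area_m2)
    if target_count >= len(points):
        return list(points)              # already at or below the target density
    rng = random.Random(seed)            # fixed seed for reproducible subsamples
    return rng.sample(points, target_count)
```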

    Pixelating Vector Art

    Pixel art is a popular style of digital art often found in video games. It is typically characterized by its low resolution and use of limited colour palettes. Pixel art is created manually with little automation because it requires attention to pixel-level details. Working with individual pixels is a challenging and abstract task, whereas manipulating higher-level objects in vector graphics is much more intuitive. However, it is difficult to bridge this gap because, although many rasterization algorithms exist, they are not well suited to the particular needs of pixel artists, particularly at low resolutions. In this thesis, we introduce a class of rasterization algorithms called pixelation that is tailored to pixel art needs. We describe how our algorithm suppresses artifacts when pixelating vector paths and preserves shape-level features when pixelating geometric primitives. We also developed methods inspired by pixel art for drawing lines and angles more effectively at low resolutions. We compared our results to rasterization algorithms, rasterizers used in commercial software, and human subjects, both amateurs and pixel artists. Through formal analyses of our user studies and a close collaboration with professional pixel artists, we showed that, in general, our pixelation algorithms produce more visually appealing results than naïve rasterization algorithms do.
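
    For context, the naïve baseline that pixelation is compared against can be as simple as classic Bresenham line rasterization, sketched below in Python; at pixel-art resolutions its uneven step runs are the kind of artifact the thesis's algorithms aim to suppress. This is the standard textbook algorithm, not the thesis's method.

```python
def bresenham_line(x0, y0, x1, y1):
    """Classic integer Bresenham rasterization of a line segment,
    returning the list of pixels it covers (the naive baseline)."""
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pixels
```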