
    Multi-scale window specification over streaming trajectories

    Get PDF
    Enormous amounts of positional information are collected by monitoring applications in domains such as fleet management, cargo transport, and wildlife protection. With the advent of modern location-based services, processing such data mostly focuses on providing real-time responses to a variety of user requests in a continuous and scalable fashion. An important class of such queries concerns evolving trajectories that continuously trace the streaming locations of moving objects, such as GPS-equipped vehicles, commodities with RFIDs, and people with smartphones. In this work, we propose an advanced windowing operator that enables online, incremental examination of recent motion paths at multiple resolutions for numerous point entities. When applied against incoming positions, this window can abstract trajectories into coarser representations towards the past while retaining progressively finer features closer to the present. We explain the semantics of such multi-scale sliding windows through parameterized functions that reflect the sequential nature of trajectories and can effectively capture their spatiotemporal properties. Such a window specification goes beyond its usual role of non-blocking processing of multiple concurrent queries: it can offer concrete subsequences from each trajectory, thus preserving continuity in time and contiguity in space along the respective segments. Further, we suggest language extensions for expressing characteristic spatiotemporal queries using windows. Finally, we discuss algorithms for nested maintenance of multi-scale windows and evaluate their efficiency against streaming positional data, offering empirical evidence of their benefits to online trajectory processing.
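
    As a rough illustration of the windowing idea described above, the following Python sketch keeps recent positions at full resolution and progressively thins older ones. The class name, band boundaries, and sampling steps are illustrative assumptions, not the paper's actual operator or parameters.

    ```python
    from collections import deque

    class MultiScaleWindow:
        """Sketch of a multi-scale sliding window over one streaming trajectory.

        Positions are grouped into age bands; the newest band is kept at full
        resolution, older bands are thinned more aggressively, so recent motion
        stays detailed while distant history is abstracted.
        Band boundaries and steps below are illustrative, not the paper's.
        """

        def __init__(self, extent=3600.0, bands=((60.0, 1), (600.0, 10), (3600.0, 60))):
            self.extent = extent       # total window span in seconds
            self.bands = bands         # (max_age_in_seconds, keep_every_nth_sample)
            self.buffer = deque()      # (timestamp, x, y), oldest first

        def push(self, t, x, y):
            self.buffer.append((t, x, y))
            # evict positions that have fallen out of the full window extent
            while self.buffer and t - self.buffer[0][0] > self.extent:
                self.buffer.popleft()

        def snapshot(self, now):
            """Return a thinned view: coarser toward the past, full detail near the present."""
            view = []
            for i, (t, x, y) in enumerate(self.buffer):
                age = now - t
                # pick the finest band that still covers this point's age
                step = next((s for max_age, s in self.bands if age <= max_age), self.bands[-1][1])
                if i % step == 0:
                    view.append((t, x, y))
            return view

    # usage: stream one position every 5 s for an hour, then view at the latest timestamp
    w = MultiScaleWindow()
    for t in range(0, 3600, 5):
        w.push(float(t), 0.01 * t, 0.02 * t)
    multi_scale_view = w.snapshot(now=3595.0)
    ```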

    Spectromorphology and Spatiomorphology: Wave terrain synthesis as a framework for controlling timbre spatialisation in the frequency domain

    Get PDF
    This research project examines the scope of timbre spatialisation in the frequency domain as a technique that can be realised and controlled in live performance by a single performer. Existing implementations of timbre spatialisation take either a psychoacoustical approach, employing control-rate signals to determine azimuth and distance cues, or adopt abstract structures for determining frequency-space modulations. This research project aims to overcome the logistical constraints of real-time multi-parameter mapping by developing an overarching multi-signal framework for control: wave terrain synthesis, an interactive control-rate and audio-rate system. Due to the precise timing requirements of vector-based FFT processes, spectral control data are generated in frames. Implemented in Max/MSP, the project addresses notions of space and immersion using a practice-led methodology, contributing to the creation of a number of compositions, performance software, and an accompanying exegesis. In addition, the development and evaluation of timbre spatialisation software by the author is accompanied by a categorical definition of the spatial sound shapes generated.
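
    As a rough, non-realtime illustration of what frequency-domain timbre spatialisation involves, the sketch below pans each FFT bin of a mono frame to its own stereo position taken from a control curve (one "row" of a terrain-like surface). The function, control curve, and panning law are assumptions for illustration only and do not reproduce the author's Max/MSP framework.

    ```python
    import numpy as np

    def spatialise_frame(frame, pan_per_bin):
        """Pan each FFT bin of one mono frame to its own stereo position.

        frame       : mono time-domain frame (length N, already windowed)
        pan_per_bin : values in [0, 1] per rFFT bin (0 = left, 1 = right),
                      e.g. one row scanned from a control terrain
        Returns an (N, 2) stereo frame using constant-power panning per bin.
        """
        spectrum = np.fft.rfft(frame)
        left = np.fft.irfft(spectrum * np.cos(0.5 * np.pi * pan_per_bin), n=len(frame))
        right = np.fft.irfft(spectrum * np.sin(0.5 * np.pi * pan_per_bin), n=len(frame))
        return np.stack([left, right], axis=1)

    # illustrative use: one 1024-sample frame, pan positions read from a smooth control curve
    n = 1024
    frame = np.hanning(n) * np.random.randn(n)        # stand-in for an analysis frame
    bins = np.fft.rfftfreq(n)                         # normalised bin frequencies
    pan = 0.5 + 0.5 * np.sin(2 * np.pi * 3 * bins)    # low and high bins land at different positions
    stereo = spatialise_frame(frame, pan)
    ```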

    Computer graphics application in the engineering design integration system

    Get PDF
    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems are discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of aerospace vehicle preliminary designs: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by directly coupled, low-cost storage tube terminals with limited interactive capabilities, and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of computed results, slow line speed (300 baud), poor hard copy, and early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer-aided design.

    Deep Learning Methods for Vessel Trajectory Prediction based on Recurrent Neural Networks

    Full text link
    Data-driven methods open up unprecedented possibilities for maritime surveillance using Automatic Identification System (AIS) data. In this work, we explore deep learning strategies using historical AIS observations to address the problem of predicting future vessel trajectories with a prediction horizon of several hours. We propose novel sequence-to-sequence vessel trajectory prediction models based on encoder-decoder recurrent neural networks (RNNs) that are trained on historical trajectory data to predict future trajectory samples given previous observations. The proposed architecture combines Long Short-Term Memory (LSTM) RNNs for sequence modeling, which encode the observed data and generate future predictions, with different intermediate aggregation layers that capture space-time dependencies in sequential data. Experimental results on vessel trajectories from an AIS dataset made freely available by the Danish Maritime Authority show the effectiveness of deep learning methods for trajectory prediction based on sequence-to-sequence neural networks, which achieve better performance than baseline approaches based on linear regression or the Multi-Layer Perceptron (MLP) architecture. The comparative evaluation of results shows: i) the superiority of attention pooling over static pooling for this application, and ii) the remarkable performance improvement that can be obtained with labeled trajectories, i.e., when predictions are conditioned on a low-level context representation encoded from the sequence of past observations, as well as on additional inputs (e.g., port of departure or arrival) about the vessel's high-level intention, which may be available from AIS. Comment: Accepted for publication in IEEE Transactions on Aerospace and Electronic Systems; 17 pages, 9 figures.
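
    A compact PyTorch sketch of the generic encoder-decoder LSTM family the abstract refers to is given below; it omits the attention pooling and high-level intention inputs, and all layer sizes and names are illustrative rather than the paper's configuration.

    ```python
    import torch
    import torch.nn as nn

    class Seq2SeqTrajectoryPredictor(nn.Module):
        """Minimal encoder-decoder LSTM for trajectory prediction.

        Input/output features are per-step positions (e.g. lat/lon) or deltas.
        This is a generic sketch of the architecture family described in the
        abstract; it omits attention pooling and intention-related inputs.
        """

        def __init__(self, feat_dim=2, hidden_dim=128):
            super().__init__()
            self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
            self.decoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, feat_dim)

        def forward(self, past, horizon):
            # past: (batch, T_obs, feat_dim); predict `horizon` future steps autoregressively
            _, state = self.encoder(past)
            step = past[:, -1:, :]                 # start decoding from the last observation
            outputs = []
            for _ in range(horizon):
                out, state = self.decoder(step, state)
                step = self.head(out)              # (batch, 1, feat_dim)
                outputs.append(step)
            return torch.cat(outputs, dim=1)       # (batch, horizon, feat_dim)

    # usage with dummy AIS-like data: 12 observed steps, predict 8 future steps
    model = Seq2SeqTrajectoryPredictor()
    pred = model(torch.randn(4, 12, 2), horizon=8)     # -> shape (4, 8, 2)
    ```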

    Large-scale Continuous Gesture Recognition Using Convolutional Neural Networks

    Full text link
    This paper addresses the problem of continuous gesture recognition from sequences of depth maps using convolutional neural networks (ConvNets). The proposed method first segments individual gestures from a depth sequence based on quantity of movement (QOM). For each segmented gesture, an Improved Depth Motion Map (IDMM), which converts the depth sequence into a single image, is constructed and fed to a ConvNet for recognition. The IDMM effectively encodes both spatial and temporal information and allows fine-tuning of existing ConvNet models for classification without introducing millions of parameters to learn. The proposed method is evaluated on the Large-scale Continuous Gesture Recognition task of the ChaLearn Looking at People (LAP) challenge 2016. It achieved a Mean Jaccard Index of 0.2655 and ranked 3rd place in this challenge.
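
    The following NumPy sketch approximates the idea of collapsing a depth sequence into a single motion image. The exact IDMM weighting is not given in the abstract, so the accumulation used here is an assumption for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def improved_depth_motion_map(depth_frames):
        """Collapse a depth sequence into a single motion image.

        Rough approximation of the IDMM idea: accumulate absolute depth
        differences between consecutive frames so the whole gesture's spatial
        and temporal footprint lands in one image, which can then be fed to a
        2D ConvNet. The paper's actual weighting may differ; this is only an
        illustrative stand-in.
        """
        frames = np.asarray(depth_frames, dtype=np.float32)     # (T, H, W)
        motion = np.abs(np.diff(frames, axis=0)).sum(axis=0)    # per-pixel accumulated change
        motion -= motion.min()
        if motion.max() > 0:
            motion = 255.0 * motion / motion.max()              # rescale to an 8-bit-like image
        return motion.astype(np.uint8)

    # usage: 30 synthetic depth frames of 240x320 pixels
    idmm = improved_depth_motion_map(np.random.rand(30, 240, 320))
    print(idmm.shape, idmm.dtype)    # (240, 320) uint8
    ```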

    The engineering design integration (EDIN) system

    Get PDF
    A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand-access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user-established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.

    An extended modular processing pipeline for event-based vision in automatic visual inspection

    Get PDF
    Dynamic Vision Sensors differ from conventional cameras in that only intensity changes of individual pixels are perceived and transmitted as an asynchronous stream instead of entire frames. The technology promises, among other things, high temporal resolution as well as low latency and low data rates. While such sensors currently enjoy much scientific attention, there are only few publications on practical applications. One field of application that has hardly been considered so far, yet potentially fits the sensor principle well due to its special properties, is automatic visual inspection. In this paper, we evaluate current state-of-the-art processing algorithms in this new application domain. We further propose an algorithmic approach for the identification of ideal time windows within an event stream for object classification. For the evaluation of our method, we acquire two novel datasets that contain typical visual inspection scenarios, i.e., the inspection of objects on a conveyor belt and during free fall. The success of our algorithmic extension for data processing is demonstrated on these new datasets by showing that the classification accuracy of current algorithms is substantially increased. By making our new datasets publicly available, we intend to stimulate further research on the application of Dynamic Vision Sensors in machine vision.
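
    As a simplified illustration of selecting a time window from an event stream for classification, the sketch below slides a window over the event timestamps and keeps the densest one as an accumulated event frame. The density criterion and all names are assumptions, since the paper's actual window-selection rule is not detailed in the abstract.

    ```python
    import numpy as np

    def best_event_window(timestamps, xs, ys, width, height, win_len, step):
        """Pick a classification window from an event stream by event density.

        timestamps, xs, ys : per-event arrays from a Dynamic Vision Sensor
        win_len, step      : window length and slide step in the same time unit
        Returns (start_time, frame), where frame is the per-pixel event count of
        the densest window. Density is a stand-in selection criterion.
        """
        starts = np.arange(timestamps.min(), timestamps.max() - win_len, step)
        counts = [np.count_nonzero((timestamps >= s) & (timestamps < s + win_len)) for s in starts]
        best = starts[int(np.argmax(counts))]
        mask = (timestamps >= best) & (timestamps < best + win_len)
        frame = np.zeros((height, width), dtype=np.int32)
        np.add.at(frame, (ys[mask], xs[mask]), 1)     # accumulate events into an image
        return best, frame

    # usage with synthetic events on a 128x128 sensor over one second
    rng = np.random.default_rng(1)
    ts = np.sort(rng.uniform(0.0, 1.0, 50000))
    xs = rng.integers(0, 128, 50000)
    ys = rng.integers(0, 128, 50000)
    start, frame = best_event_window(ts, xs, ys, width=128, height=128, win_len=0.05, step=0.01)
    ```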

    Developing a flexible and expressive realtime polyphonic wave terrain synthesis instrument based on a visual and multidimensional methodology

    Get PDF
    The Jitter extended library for Max/MSP is distributed with a gamut of tools for the generation, processing, storage, and visual display of multidimensional data structures. With additional support for a wide range of media types, and for interaction between these media, the environment presents an ideal working ground for Wave Terrain Synthesis. This research details the practical development of a realtime Wave Terrain Synthesis instrument within the Max/MSP programming environment utilizing the Jitter extended library. Various graphical processing routines are explored in relation to their potential use for Wave Terrain Synthesis.
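
    For readers unfamiliar with the technique, a minimal offline NumPy sketch of Wave Terrain Synthesis follows: a 2D terrain function is scanned along an orbit defined by two sinusoids, and the terrain values along the orbit form the output signal. The terrain and orbit are textbook examples, not the Jitter matrices or processing routines used in the instrument.

    ```python
    import numpy as np

    def wave_terrain(duration=1.0, sr=44100, fx=220.0, fy=331.0):
        """Minimal (non-realtime) wave terrain synthesis sketch.

        Two sinusoids at different frequencies trace an orbit over a 2D
        terrain function; the terrain value along that orbit is the output
        signal. Terrain and orbit choices here are illustrative only.
        """
        t = np.arange(int(duration * sr)) / sr
        x = np.sin(2 * np.pi * fx * t)               # orbit x-coordinate
        y = np.cos(2 * np.pi * fy * t)               # orbit y-coordinate
        terrain = lambda a, b: (a - b) * (a - 1) * (a + 1) * (b - 1) * (b + 1)
        signal = terrain(x, y)
        return signal / np.max(np.abs(signal))       # normalise to [-1, 1]

    # usage: one second of audio that could be written to a WAV file or buffer
    audio = wave_terrain()
    ```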

    Multi-alternative decision-making with non-stationary inputs

    Get PDF
    One of the most widely implemented models for multi-alternative decision-making is the multi-hypothesis sequential probability ratio test (MSPRT). It is asymptotically optimal, straightforward to implement, and has found application in modelling biological decision-making. However, the MSPRT is limited in application to discrete (‘trial-based’), non-time-varying scenarios. By contrast, real-world situations are continuous and entail stimulus non-stationarity. In these circumstances, decision-making mechanisms (like the MSPRT) that work by accumulating evidence must be able to discard outdated evidence that becomes progressively irrelevant. To address this issue, we introduce a new decision mechanism by augmenting the MSPRT with a rectangular integration window and a transparent decision boundary. This allows selection and de-selection of options as their evidence changes dynamically. Performance was enhanced by adapting the window size to problem difficulty. Further, we present an alternative windowing method which exponentially decays evidence and does not significantly degrade performance, while greatly reducing the memory resources required. The methods presented have proven successful at allowing the MSPRT algorithm to function in a non-stationary environment.
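
    A minimal Python sketch of the exponentially decaying (leaky) variant is given below: per-sample log-likelihoods are accumulated with a forgetting factor, and a decision is taken once the normalised posterior of the leading hypothesis crosses a threshold. The decay value, threshold, and Gaussian example are illustrative assumptions, not the paper's parameterisation.

    ```python
    import numpy as np

    def leaky_msprt(loglik, decay=0.98, threshold=0.95):
        """Sketch of an MSPRT-style test with exponentially decayed evidence.

        loglik    : (T, K) per-sample log-likelihoods of K hypotheses
        decay     : forgetting factor; older evidence fades so the test can
                    track non-stationary inputs (cf. the leaky window above)
        threshold : decide once the normalised posterior of the best
                    hypothesis exceeds this value (flat prior assumed)
        Returns (decision_time, chosen_hypothesis) or (None, None).
        """
        evidence = np.zeros(loglik.shape[1])
        for t, ll in enumerate(loglik):
            evidence = decay * evidence + ll          # gradually discard outdated evidence
            posterior = np.exp(evidence - evidence.max())
            posterior /= posterior.sum()
            if posterior.max() >= threshold:
                return t, int(np.argmax(posterior))
        return None, None

    # usage: 3 Gaussian hypotheses; the true mean switches halfway through the stream
    rng = np.random.default_rng(0)
    obs = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.0, 1.0, 200)])
    means = np.array([0.0, 1.0, 2.0])
    loglik = -0.5 * (obs[:, None] - means[None, :]) ** 2   # log-likelihoods up to a constant
    print(leaky_msprt(loglik))
    ```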