
    The Discrete Representation of Continuously Moving Indeterminate Objects

    To incorporate indeterminacy into spatio-temporal database systems, the grey modeling method is used to compute discrete models of indeterminate two-dimensional continuously moving objects. The GM(1,1) grey model generated from the snapshot sequence reduces the randomness of the discrete snapshots and yields a holistic measure of the object's movement. Comparisons with traditional linear models show that, when information is limited, this model can be used for interpolation and near-future prediction of uncertain continuously moving spatio-temporal objects.
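    As a rough illustration of the kind of GM(1,1) calculation the abstract refers to, the sketch below fits a standard textbook GM(1,1) grey model to one coordinate of a snapshot sequence and extrapolates it a few steps ahead. It is not the paper's own implementation, and the sample data are made up.

```python
# Minimal GM(1,1) sketch (NumPy): smooth and extrapolate one coordinate of a
# moving object's snapshot sequence. Textbook formulation; illustrative only.
import numpy as np

def gm11_fit_predict(x0, n_ahead=1):
    """Fit GM(1,1) to a positive-valued series x0 and predict n_ahead further steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                           # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background (mean) values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # development and control coefficients
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])  # restore by differencing
    return x0_hat                                # fitted values, then n_ahead predictions

# Example: x-coordinates of an object at five snapshots (invented data), two steps ahead.
print(gm11_fit_predict([10.2, 11.0, 12.1, 13.5, 15.0], n_ahead=2))
```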

    Recovering metric properties of objects through spatiotemporal interpolation

    Spatiotemporal interpolation (STI) refers to the perception of complete objects from fragmentary information across gaps in both space and time. It differs from static interpolation in that the requirements for interpolation are not met in any static frame. STI has been found to produce objective performance advantages in a shape discrimination paradigm for both illusory and occluded objects when contours met conditions of spatiotemporal relatability. Here we report psychophysical studies testing whether spatiotemporal interpolation allows recovery of metric properties of objects. Observers viewed virtual triangles specified only by sequential partial occlusions of background elements by their vertices (the STI condition) and made forced-choice judgments of the object's size relative to a reference standard. We found that length could often be accurately recovered under conditions where fragments were relatable and formed illusory triangles. In the first control condition, three moving dots located at the vertices provided the same spatial and timing information as the virtual object in the STI condition but did not induce perception of interpolated contours or a coherent object. In the second control condition, oriented line segments were added at the dots and at the mid-points between the dots in a way that did not induce perception of interpolated contours. Control stimuli did not lead to accurate size judgments. We conclude that spatiotemporal interpolation can produce representations, from fragmentary information, of metric properties in addition to shape.

    Kinematic interpolation of movement data

    Mobile tracking technologies are facilitating the collection of increasingly large and detailed data sets on object movement. Movement data are collected by recording an object's location at discrete time intervals. Often it is of interest to estimate the unknown position of the object at unrecorded time points, in order to increase the temporal resolution of the data, to correct erroneous or missing data points, or to match the recorded times between multiple data sets. Estimating an object's unknown location between known locations is termed path interpolation. This paper introduces a new method for path interpolation termed kinematic interpolation, which incorporates object kinematics (i.e. velocity and acceleration) into the interpolation process. Six empirical data sets (two types of correlated random walks, caribou, cyclist, hurricane and athlete tracking data) are used to compare kinematic interpolation to other interpolation algorithms. Results showed kinematic interpolation to be a suitable interpolation method for fast-moving objects (e.g. the cyclist, hurricane and athlete tracking data), while other algorithms performed best with the correlated random walk and caribou data. Several issues associated with path interpolation tasks are discussed, along with potential applications where kinematic interpolation can be useful. Finally, code for performing path interpolation (for each method compared within) is provided using the statistical software R.
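    The abstract does not give the estimator itself, but interpolation that respects endpoint positions and velocities can be illustrated with a cubic Hermite segment between two fixes. The sketch below is that generic velocity-aware variant, not necessarily the paper's exact kinematic interpolation method (the authors' R code should be consulted for that), and the example fixes are invented.

```python
# Minimal sketch of velocity-aware path interpolation between two GPS fixes:
# a cubic Hermite segment constrained by position and velocity at both ends.
import numpy as np

def hermite_interpolate(t, t0, p0, v0, t1, p1, v1):
    """Position at time t given fixes (t0, p0, v0) and (t1, p1, v1); p*, v* are 2D vectors."""
    h = t1 - t0
    s = (t - t0) / h                             # normalised time in [0, 1]
    h00 = 2*s**3 - 3*s**2 + 1                    # Hermite basis functions
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    return (h00*np.asarray(p0) + h10*h*np.asarray(v0)
            + h01*np.asarray(p1) + h11*h*np.asarray(v1))

# Example: estimate a cyclist's position midway between two fixes taken 10 s apart.
print(hermite_interpolate(5.0, 0.0, [0.0, 0.0], [4.0, 0.0], 10.0, [35.0, 5.0], [3.0, 1.0]))
```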

    Cubic Spline Interpolation by Solving a Recurrence Equation Instead of a Tridiagonal Matrix

    The cubic spline interpolation method is probably the most widely used polynomial interpolation method for functions of one variable. However, the cubic spline method requires solving a tridiagonal matrix-vector equation with an O(n) computational time complexity, where n is the number of data measurements. Even an O(n) time complexity may be too much in some time-critical applications, such as continuously estimating and updating the flight paths of moving objects. This paper shows that under certain boundary conditions the tridiagonal matrix-solving step of the cubic spline method can be eliminated entirely, and the coefficients of the unknown cubic polynomials can instead be found by solving a single recurrence equation in much less time.
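    For context, the baseline the abstract refers to is sketched below: the second derivatives of a natural cubic spline obtained by an O(n) tridiagonal (Thomas) solve. This is the step the paper proposes to replace with a recurrence under certain boundary conditions; the recurrence itself is not reproduced here, and the sample data are illustrative.

```python
# Minimal sketch (NumPy): natural cubic spline second derivatives M_i via the
# classical tridiagonal system, solved with the O(n) Thomas algorithm.
import numpy as np

def natural_spline_second_derivatives(x, y):
    """Return second derivatives M_0..M_n of the natural cubic spline through (x_i, y_i)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    h = np.diff(x)
    n = len(x) - 1                               # number of intervals
    a = h[:-1].copy()                            # sub-diagonal coefficients (a[0] unused)
    b = 2.0 * (h[:-1] + h[1:])                   # main diagonal
    c = h[1:].copy()                             # super-diagonal coefficients (c[-1] unused)
    d = 6.0 * (np.diff(y[1:]) / h[1:] - np.diff(y[:-1]) / h[:-1])
    # Thomas algorithm: forward elimination, then back substitution, both O(n).
    for i in range(1, n - 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = np.zeros(n - 1)
    m[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        m[i] = (d[i] - c[i] * m[i + 1]) / b[i]
    return np.concatenate([[0.0], m, [0.0]])     # natural ends: M_0 = M_n = 0

# Example: spline through five samples of a one-dimensional path (invented data).
print(natural_spline_second_derivatives([0, 1, 2, 3, 4], [0.0, 0.8, 0.9, 0.1, -0.8]))
```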

    Trajectory Representation in Location-Based Services: Problems and Solution

    Recently, much work has been done in feasibility studies on services offered to moving objects in environments equipped with mobile telephony, network technology and GIS. However, despite all the work on GIS and databases, situations in which the whereabouts of objects are constantly monitored and stored for future analysis form an important class of problems that present-day database/GIS systems have difficulty handling. Given that data about the whereabouts of moving objects are acquired in a discrete way, providing the data when no observation is available is a must. The objective of this research is therefore to obtain a "faithful representation" of trajectories from a sufficient number of discrete (though possibly erroneous) data points.
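    The simplest way to provide data when no observation is available is piecewise-linear interpolation between the two enclosing fixes, sketched below. This is only a baseline illustration of the problem the abstract raises, not the faithful trajectory representation the authors pursue; the times and fixes in the example are invented.

```python
# Minimal sketch: answer "where was the object at time t?" from discrete fixes
# by piecewise-linear interpolation between the two enclosing observations.
from bisect import bisect_right

def position_at(t, times, points):
    """Linearly interpolate a 2D position at query time t from sorted fixes (times, points)."""
    if t <= times[0]:
        return points[0]
    if t >= times[-1]:
        return points[-1]
    i = bisect_right(times, t)                   # first fix strictly after t
    w = (t - times[i - 1]) / (times[i] - times[i - 1])
    (x0, y0), (x1, y1) = points[i - 1], points[i]
    return (x0 + w * (x1 - x0), y0 + w * (y1 - y0))

# Example: a device reported three fixes; estimate its position at t = 25 s.
print(position_at(25, [0, 20, 40], [(0.0, 0.0), (100.0, 10.0), (150.0, 60.0)]))
```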

    Real-time detection and tracking of multiple objects with partial decoding in H.264/AVC bitstream domain

    In this paper, we show that probabilistic spatiotemporal macroblock filtering (PSMF) and partial decoding can be applied to detect and track multiple objects effectively, in real time, in H.264/AVC bitstreams with a stationary background. Our contribution is that our method not only achieves fast processing times but also handles multiple moving objects that are articulated, change in size, or have internally uniform color, even though they contain a chaotic set of non-homogeneous motion vectors. In addition, our partial decoding process for H.264/AVC bitstreams makes it possible to improve the accuracy of object trajectories and to overcome long occlusions by using extracted color information. Comment: SPIE Real-Time Image and Video Processing Conference 200

    Motion Cooperation: Smooth Piece-Wise Rigid Scene Flow from RGB-D Images

    We propose a novel joint registration and segmentation approach to estimate scene flow from RGB-D images. Instead of assuming the scene to be composed of a number of independent rigidly-moving parts, we use non-binary labels to capture non-rigid deformations at transitions between the rigid parts of the scene. Thus, the velocity of any point can be computed as a linear combination (interpolation) of the estimated rigid motions, which provides better results than traditional sharp piecewise segmentations. Within a variational framework, the smooth segments of the scene and their corresponding rigid velocities are alternately refined until convergence. A K-means-based segmentation is employed as an initialization, and the number of regions is subsequently adapted during the optimization process to capture any arbitrary number of independently moving objects. We evaluate our approach with both synthetic and real RGB-D images that contain varied and large motions. The experiments show that our method estimates the scene flow more accurately than the most recent works in the field, and at the same time provides a meaningful segmentation of the scene based on 3D motion. Funding: Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech; the Spanish Government under grant programs FPI-MICINN 2012 and DPI2014-55826-R (co-funded by the European Regional Development Fund); and the EU ERC grant Convex Vision (grant agreement no. 240168).
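    The central idea of computing a point's velocity as a soft combination of rigid motions can be written down compactly. The sketch below blends K rigid motions with per-point non-binary labels; the motions and weights are assumed inputs for illustration, not the output of the paper's variational estimator.

```python
# Minimal sketch (NumPy): 3D flow of each point as a weighted interpolation of
# K rigid motions, using soft (non-binary) per-point labels.
import numpy as np

def blended_scene_flow(points, rotations, translations, weights):
    """points: (N,3); rotations: (K,3,3); translations: (K,3); weights: (N,K), rows sum to 1.
    Returns (N,3) flow vectors."""
    # Displacement of every point under every rigid motion: shape (K, N, 3).
    moved = np.einsum('kij,nj->kni', rotations, points) + translations[:, None, :]
    per_motion_flow = moved - points[None, :, :]
    # Blend the K candidate flows with the per-point soft labels.
    return np.einsum('nk,kni->ni', weights, per_motion_flow)

# Example: identity motion and a small translation, one point labelled half-and-half.
R = np.stack([np.eye(3), np.eye(3)])
t = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
w = np.array([[0.5, 0.5]])
print(blended_scene_flow(np.array([[1.0, 2.0, 3.0]]), R, t, w))   # -> [[0.05, 0., 0.]]
```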

    SceneFlowFields: Dense Interpolation of Sparse Scene Flow Correspondences

    While most scene flow methods use either variational optimization or a strong rigid-motion assumption, we show for the first time that scene flow can also be estimated by dense interpolation of sparse matches. To this end, we find sparse matches across two stereo image pairs that are detected without any prior regularization, and we perform dense interpolation that preserves geometric and motion boundaries by using edge information. A few iterations of variational energy minimization are performed to refine our results, which are thoroughly evaluated on the KITTI benchmark and additionally compared to the state of the art on MPI Sintel. For application in an automotive context, we further show that an optional ego-motion model helps to boost performance and blends smoothly into our approach to produce a segmentation of the scene into static and dynamic parts. Comment: IEEE Winter Conference on Applications of Computer Vision (WACV), 201
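    As a rough illustration of densifying sparse matches, the sketch below spreads sparse flow vectors over a dense grid with plain inverse-distance weighting over the k nearest seeds. The paper's interpolation additionally respects geometric and motion boundaries via edge information, which this simplified version omits; all names and sample values are illustrative.

```python
# Minimal sketch (NumPy): sparse-to-dense interpolation of flow vectors with
# inverse-distance weighting over the k nearest seed matches.
import numpy as np

def densify_sparse_flow(seed_xy, seed_flow, height, width, k=4, eps=1e-6):
    """seed_xy: (S,2) pixel coords; seed_flow: (S,C) flow vectors. Returns (H,W,C)."""
    ys, xs = np.mgrid[0:height, 0:width]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)      # (H*W, 2)
    d = np.linalg.norm(grid[:, None, :] - seed_xy[None, :, :], axis=2)   # (H*W, S)
    nearest = np.argsort(d, axis=1)[:, :k]                               # k nearest seeds per pixel
    nd = np.take_along_axis(d, nearest, axis=1)
    w = 1.0 / (nd + eps)
    w /= w.sum(axis=1, keepdims=True)                                    # normalised weights
    dense = (w[:, :, None] * seed_flow[nearest]).sum(axis=1)
    return dense.reshape(height, width, -1)

# Example: three sparse matches densified onto a tiny 4x4 image.
seeds = np.array([[0.0, 0.0], [3.0, 0.0], [1.0, 3.0]])
flows = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
print(densify_sparse_flow(seeds, flows, 4, 4).shape)   # (4, 4, 2)
```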

    Real Time Turbulent Video Perfecting by Image Stabilization and Super-Resolution

    Image and video quality in Long Range Observation Systems (LOROS) suffers from atmospheric turbulence, which causes small neighbourhoods in image frames to move chaotically in different directions and substantially hampers visual analysis of such image and video sequences. The paper presents a real-time algorithm for perfecting turbulence-degraded videos by means of stabilization and resolution enhancement, the latter achieved by exploiting the turbulent motion. The algorithm involves generating a reference frame; estimating, for each incoming video frame, a local image displacement map with respect to the reference frame; segmenting the displacement map into two classes, stationary and moving objects; and enhancing the resolution of stationary objects while preserving real motion. Experiments with synthetic and real-life sequences have shown that the enhanced videos, generated in real time, exhibit substantially better resolution and complete stabilization of stationary objects while retaining real motion. Comment: Submitted to The Seventh IASTED International Conference on Visualization, Imaging, and Image Processing (VIIP 2007), August 2007, Palma de Mallorca, Spain
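    Two of the steps listed above can be sketched under simplifying assumptions: a reference frame obtained as the per-pixel temporal median of the sequence, and a displacement estimate for one block obtained by exhaustive block matching against that reference. The paper's actual estimators, the segmentation step and the super-resolution stage are not reproduced here; block size, search range and the synthetic data are arbitrary choices.

```python
# Minimal sketch (NumPy): temporal-median reference frame and per-block
# displacement via exhaustive block matching. Illustrative simplification only.
import numpy as np

def reference_frame(frames):
    """frames: (T, H, W) grayscale stack -> (H, W) temporal-median reference frame."""
    return np.median(frames, axis=0)

def block_displacement(frame, ref, y, x, block=8, search=4):
    """Best-match displacement (dy, dx) of the block at (y, x) in `frame` against `ref`."""
    patch = frame[y:y + block, x:x + block]
    best_cost, best = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > ref.shape[0] or xx + block > ref.shape[1]:
                continue                         # candidate window falls outside the frame
            cost = np.abs(patch - ref[yy:yy + block, xx:xx + block]).sum()
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

# Example on synthetic data: 10 random frames of size 32x32.
frames = np.random.rand(10, 32, 32)
ref = reference_frame(frames)
print(block_displacement(frames[0], ref, 8, 8))
```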

    High-speed Video from Asynchronous Camera Array

    This paper presents a method for capturing high-speed video using an asynchronous camera array. Our method sequentially fires each sensor in the array with a small time offset and assembles the captured frames into a high-speed video according to their time stamps. The resulting video, however, suffers from parallax jittering caused by the viewpoint differences among the sensors in the array. To address this problem, we develop a dedicated novel-view-synthesis algorithm that transforms the video frames as if they had been captured by a single reference sensor. Specifically, for any frame from a non-reference sensor, we find the two temporally neighboring frames captured by the reference sensor. Using these three frames, we render a new frame with the same time stamp as the non-reference frame but from the viewpoint of the reference sensor: we segment the frames into super-pixels, apply local content-preserving warping to warp them into the new frame, and employ a multi-label Markov Random Field method to blend the warped frames. Our experiments show that our method can produce high-quality, high-speed video of a wide variety of scenes with large parallax, scene dynamics, and camera motion, and that it outperforms several baseline and state-of-the-art approaches. Comment: 10 pages, 82 figures, Published at IEEE WACV 201
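    The capture-and-assembly bookkeeping described above can be sketched as follows: sensors fire with staggered offsets, frames are ordered by time stamp, and each non-reference frame is bracketed by its two temporally neighbouring reference frames (the inputs to the view-synthesis step, which is not reproduced here). The sensor count, frame rate and offset scheme below are assumptions for illustration.

```python
# Minimal sketch: staggered firing schedule for an N-sensor array and lookup of
# the two reference-sensor frames that bracket a non-reference frame in time.
from bisect import bisect_left

def staggered_timestamps(n_sensors, fps, n_frames):
    """Time stamps per sensor when sensor i is offset by i / (n_sensors * fps) seconds."""
    period = 1.0 / fps
    return [[i / (n_sensors * fps) + k * period for k in range(n_frames)]
            for i in range(n_sensors)]

def reference_neighbours(t, ref_times):
    """Return the two reference-sensor time stamps bracketing t (None at the ends)."""
    j = bisect_left(ref_times, t)
    before = ref_times[j - 1] if j > 0 else None
    after = ref_times[j] if j < len(ref_times) else None
    return before, after

# Example: 4 sensors at 30 fps give an assembled 120 fps stream; bracket one frame
# from sensor 2 with frames from the reference sensor 0.
stamps = staggered_timestamps(4, 30.0, 5)
print(reference_neighbours(stamps[2][1], stamps[0]))
```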