
    Multi Stage based Time Series Analysis of User Activity on Touch Sensitive Surfaces in Highly Noise Susceptible Environments

    This article proposes a multi-stage framework for time series analysis of user activity on touch-sensitive surfaces in noisy environments. Multiple methods are combined within the framework, including moving average, moving median, linear regression, kernel density estimation, partial differential equations, and the Kalman filter. The proposed three-stage filter, consisting of partial-differential-equation-based denoising, a Kalman filter, and a moving average, provides ~25% better noise reduction than the other methods according to the Mean Squared Error (MSE) criterion in highly noise-susceptible environments. Apart from synthetic data, we also collected and validated our algorithms on real-world data, such as handwriting and finger/stylus drags on touch screens, in the presence of strong noise sources such as unauthorized-charger noise or display noise. Furthermore, the proposed algorithm performs qualitatively better than the existing solutions for the touch panels of high-end handheld devices available in the consumer electronics market.
    Comment: 9 pages (including 9 figures and 3 tables); International Journal of Computer Applications (published
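
    The stage composition described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the heat-equation denoiser, the noise covariances q and r, the window size w, and the synthetic test signal are all assumptions.

        import numpy as np

        def heat_denoise(x, iterations=20, dt=0.2):
            """Stage 1: PDE-based denoising via explicit heat-equation steps."""
            x = x.astype(float).copy()
            for _ in range(iterations):
                lap = np.zeros_like(x)
                lap[1:-1] = x[:-2] - 2 * x[1:-1] + x[2:]  # discrete Laplacian
                x += dt * lap                             # u_t = u_xx
            return x

        def kalman_1d(z, q=1e-4, r=1e-1):
            """Stage 2: scalar constant-state Kalman filter."""
            x_hat, p = z[0], 1.0
            out = np.empty_like(z, dtype=float)
            for i, meas in enumerate(z):
                p += q                       # predict
                k = p / (p + r)              # Kalman gain
                x_hat += k * (meas - x_hat)  # update
                p *= (1 - k)
                out[i] = x_hat
            return out

        def moving_average(x, w=5):
            """Stage 3: centered moving average."""
            return np.convolve(x, np.ones(w) / w, mode="same")

        def three_stage_filter(z):
            return moving_average(kalman_1d(heat_denoise(z)))

        # Synthetic touch trace corrupted by charger-like noise.
        t = np.linspace(0, 1, 200)
        clean = np.sin(2 * np.pi * t)
        noisy = clean + 0.3 * np.random.randn(t.size)
        print(f"MSE: {np.mean((three_stage_filter(noisy) - clean) ** 2):.4f}")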

    Robust and Efficient Recovery of Rigid Motion from Subspace Constraints Solved using Recursive Identification of Nonlinear Implicit Systems

    The problem of estimating rigid motion from projections may be characterized using a nonlinear dynamical system, composed of the rigid motion transformation and the perspective map. The time derivative of the output of such a system, also called the "motion field", is bilinear in the motion parameters and may be used to specify a subspace constraint on either the direction of translation or the inverse depth of the observed points. Estimating motion may then be formulated as an optimization task constrained to such a subspace. Heeger and Jepson [5], who first introduced this constraint, solve the optimization task using an extensive search over the possible directions of translation. We reformulate the optimization problem in a systems-theoretic framework as the identification of a dynamic system in exterior differential form with parameters on a differentiable manifold, and use techniques from nonlinear estimation and identification theory to perform the optimization task in a principled manner. The general technique for addressing such identification problems [14] has been used successfully on other problems in computational vision [13, 12]. Applying the general method [14] yields a recursive and pseudo-optimal solution of the motion problem, whose robustness properties are far superior to those of the other existing techniques we have implemented. By releasing the constraint that the visible points lie in front of the observer, we may explain some psychophysical effects on the nonrigid percept of rigidly moving shapes. Experiments on real and synthetic image sequences show very promising results in terms of robustness, accuracy and computational efficiency.
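
    For reference, the bilinear structure of the motion field mentioned above can be written out explicitly. The sketch below uses the classical perspective motion-field equations in the form popularized by Heeger and Jepson, with unit focal length; the sign conventions are one common choice and an assumption, not necessarily the paper's.

        import numpy as np

        def A(x, y):
            """Translational part of the motion field at image point (x, y);
            it enters the flow scaled by the inverse depth rho."""
            return np.array([[-1.0, 0.0, x],
                             [0.0, -1.0, y]])

        def B(x, y):
            """Rotational part of the motion field; independent of depth."""
            return np.array([[x * y, -(1.0 + x**2), y],
                             [1.0 + y**2, -x * y, -x]])

        def motion_field(x, y, rho, v, omega):
            """Bilinear motion field u = rho * A v + B omega: for fixed omega it
            is linear in (rho, v), which the subspace constraint exploits."""
            return rho * A(x, y) @ v + B(x, y) @ omega

        u = motion_field(0.1, -0.2, rho=0.5,
                         v=np.array([0.0, 0.0, 1.0]),       # forward translation
                         omega=np.array([0.0, 0.01, 0.0]))  # slow pan
        print(u)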

    SO(3)-invariant asymptotic observers for dense depth field estimation based on visual data and known camera motion

    In this paper, we use the known camera motion associated with a video sequence of a static scene to estimate and incrementally refine the surrounding depth field. We exploit the SO(3)-invariance of the brightness and depth field dynamics to customize standard image processing techniques. Inspired by the Horn-Schunck method, we propose an SO(3)-invariant cost to estimate the depth field. At each time step, this yields a diffusion equation on the unit Riemannian sphere that is numerically solved to obtain a real-time depth field estimate over the entire field of view. Two asymptotic observers are derived from the governing equations of the dynamics, based respectively on optical flow and depth estimates: implemented on noisy sequences of synthetic images as well as on real data, they perform more robust and accurate depth estimation. This approach is complementary to most methods employing state observers for range estimation, which deal only with single or isolated feature points.
    Comment: Submitted
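
    Since the proposed cost is described as Horn-Schunck-inspired, the classical planar Horn-Schunck iteration is a useful reference point. The sketch below is the flat-image version, not the paper's SO(3)-invariant formulation on the sphere; the smoothness weight alpha, the iteration count, and the 3x3 averaging are assumptions.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def horn_schunck(I1, I2, alpha=10.0, n_iter=200):
            """Planar Horn-Schunck flow: a quadratic brightness-constancy data
            term plus an alpha-weighted smoothness term, by Jacobi iteration."""
            I1, I2 = I1.astype(float), I2.astype(float)
            Ix = np.gradient(I1, axis=1)
            Iy = np.gradient(I1, axis=0)
            It = I2 - I1
            u = np.zeros_like(I1)
            v = np.zeros_like(I1)
            for _ in range(n_iter):
                u_bar = uniform_filter(u, size=3)  # neighborhood average acts
                v_bar = uniform_filter(v, size=3)  # as a discrete diffusion step
                num = Ix * u_bar + Iy * v_bar + It
                den = alpha**2 + Ix**2 + Iy**2
                u = u_bar - Ix * num / den
                v = v_bar - Iy * num / den
            return u, v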

    Synergy-Based Hand Pose Sensing: Optimal Glove Design

    In this paper we study the problem of improving the performance of human hand pose sensing devices by exploiting knowledge of how humans most frequently use their hands in grasping tasks. In a companion paper we studied the problem of maximizing the reconstruction accuracy of the hand pose from partial and noisy data provided by any given pose sensing device (a sensorized "glove"), taking into account statistical a priori information. In this paper we consider the dual problem of how to design pose sensing devices, i.e. how and where to place sensors on a glove, to get maximum information about the actual hand posture. We study the continuous case, in which individual sensing elements in the glove measure a linear combination of joint angles; the discrete case, in which each measurement corresponds to a single joint angle; and the most general hybrid case, in which both continuous and discrete sensing elements are available. The objective is to provide, for given a priori information and a fixed number of measurements, the optimal design minimizing the reconstruction error on average. Solutions relying on the geometrical definition of synergies as well as gradient-flow-based techniques are provided. Simulations of reconstruction performance show the effectiveness of the proposed optimal design.
    Comment: Submitted to International Journal of Robotics Research 201
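
    The design criterion can be made concrete in the Gaussian case: with a posture prior x ~ N(0, P) and linear measurements y = Hx + w, the average reconstruction error is the trace of the posterior covariance, and in the continuous case a natural design aligns the rows of H with the principal directions (synergies) of P. The sketch below is a generic Bayesian illustration of that idea, not the paper's algorithm; the prior, noise level, and dimensions are invented for the example.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, sigma2 = 20, 4, 1e-2  # joint angles, sensors, measurement noise

        # Hypothetical posture prior (in the paper this comes from recorded
        # grasp data); here a random full-rank covariance.
        G = rng.standard_normal((n, n))
        P = G @ G.T / n

        def expected_mse(H):
            """Trace of the posterior covariance for y = H x + w, x ~ N(0, P)."""
            post = np.linalg.inv(np.linalg.inv(P) + H.T @ H / sigma2)
            return np.trace(post)

        # Continuous design: align sensor rows with the top eigenvectors of P.
        eigvals, eigvecs = np.linalg.eigh(P)
        H_pca = eigvecs[:, -m:].T  # m principal "synergy" directions
        H_rand = rng.standard_normal((m, n))
        H_rand /= np.linalg.norm(H_rand, axis=1, keepdims=True)

        print(f"random design : {expected_mse(H_rand):.3f}")
        print(f"synergy design: {expected_mse(H_pca):.3f}")  # lower is better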

    Inertial-sensor bias estimation from brightness/depth images and based on SO(3)-invariant integro/partial-differential equations on the unit sphere

    Constant biases associated with the measured linear and angular velocities of a moving object can be estimated from measurements of a static scene by embedded brightness and depth sensors. We propose here a Lyapunov-based observer taking advantage of the SO(3)-invariance of the partial differential equations satisfied by the measured brightness and depth fields. The resulting asymptotic observer is governed by a nonlinear integro/partial differential system in which the two independent scalar variables indexing the pixels live on the unit sphere of 3D Euclidean space. The observer design and analysis are strongly simplified by coordinate-free differential calculus on the unit sphere equipped with its natural Riemannian structure. The observer's convergence is investigated under C^1 regularity assumptions on the object's motion and the scene. It relies on the Ascoli-Arzelà theorem and pre-compactness of the observer trajectories. It is proved that the estimated biases converge to the true ones if and only if the scene admits no cylindrical symmetry. The observer design can be adapted to realistic sensors where brightness and depth data are only available on a subset of the unit sphere. Preliminary simulations with synthetic brightness and depth images (corrupted by roughly 10% noise) indicate that such Lyapunov-based observers should be robust and convergent under much weaker regularity assumptions.
    Comment: 30 pages, 6 figures, submitted
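
    A finite-dimensional caricature may help fix ideas: the sketch below runs a Lyapunov-based observer that recovers a constant bias b in the scalar dynamics x_dot = u(t) + b from measurements of x, with Lyapunov function V = e^2/2 + (b - b_hat)^2/(2 k2). The gains, time step, and toy dynamics are assumptions; the paper's observer operates on the infinite-dimensional brightness/depth fields instead.

        import numpy as np

        dt, T = 1e-3, 10.0
        k1, k2 = 5.0, 10.0   # observer gains (assumed)
        b_true = 0.3         # constant bias to recover

        x, x_hat, b_hat = 0.0, 0.0, 0.0
        for i in range(int(T / dt)):
            u = np.sin(i * dt)               # known input
            e = x - x_hat                    # output error
            x += dt * (u + b_true)           # true biased dynamics
            x_hat += dt * (u + b_hat + k1 * e)
            b_hat += dt * (k2 * e)           # adaptation law: V_dot = -k1 e^2

        print(f"estimated bias: {b_hat:.3f} (true {b_true})")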

    Multi-Scale 3D Scene Flow from Binocular Stereo Sequences

    Scene flow methods estimate the three-dimensional motion field for points in the world using multi-camera video data. Such methods combine multi-view reconstruction with motion estimation. This paper describes an alternative formulation for dense scene flow estimation that provides reliable results using only two cameras, by fusing stereo and optical flow estimation into a single coherent framework. Internally, the proposed algorithm generates probability distributions for optical flow and disparity. Taking the uncertainty in the intermediate stages into account allows for more reliable estimation of the 3D scene flow than previous methods. To handle the aperture problem inherent in the estimation of optical flow and disparity, a multi-scale method along with a novel region-based technique is used within a regularized solution. This combined approach both preserves discontinuities and prevents over-regularization, two problems commonly associated with basic multi-scale approaches. Experiments with synthetic and real test data demonstrate the strength of the proposed approach.
    National Science Foundation (CNS-0202067, IIS-0208876); Office of Naval Research (N00014-03-1-0108)
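
    To see how stereo and flow combine into scene flow, consider the deterministic skeleton of the computation: triangulate a point from its disparity at two time instants and difference the results. The paper instead propagates full probability distributions over flow and disparity; the intrinsics (f, B, cx, cy) below are made-up values.

        import numpy as np

        def backproject(x, y, d, f, B, cx, cy):
            """Triangulate a 3D point from pixel (x, y) and stereo disparity d."""
            Z = f * B / d
            return np.array([(x - cx) * Z / f, (y - cy) * Z / f, Z])

        def scene_flow(x, y, d, du, dv, dd,
                       f=500.0, B=0.12, cx=320.0, cy=240.0):
            """3D motion of a point from optical flow (du, dv) and disparity
            change dd between two frames; a point-estimate simplification."""
            P0 = backproject(x, y, d, f, B, cx, cy)
            P1 = backproject(x + du, y + dv, d + dd, f, B, cx, cy)
            return P1 - P0

        print(scene_flow(300.0, 200.0, d=20.0, du=2.0, dv=-1.0, dd=0.5))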

    Vision and Learning for Deliberative Monocular Cluttered Flight

    Cameras provide a rich source of information while being passive, cheap, and lightweight for small and medium Unmanned Aerial Vehicles (UAVs). In this work we present the first implementation of receding horizon control, which is widely used in ground vehicles, with monocular vision as the only sensing mode for autonomous UAV flight in dense clutter. We make it feasible on UAVs via a number of contributions: a novel coupling of perception and control via relevant and diverse multiple interpretations of the scene around the robot, leveraging recent advances in machine learning for anytime budgeted cost-sensitive feature selection, and fast nonlinear regression for monocular depth prediction. We empirically demonstrate the efficacy of our novel pipeline via real-world experiments covering more than 2 km through dense trees with a quadrotor built from off-the-shelf parts. Moreover, our pipeline is designed to also combine information from other modalities, such as stereo and lidar, when available.
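
    The control loop itself has a simple receding-horizon skeleton: predict depth from the current monocular image, score a precomputed library of candidate trajectories against it, execute only the first segment of the cheapest one, and replan. The sketch below is a schematic of that loop under assumed interfaces; predict_depth is a stand-in for the learned regressor, and the cost function and trajectory representation are invented.

        import numpy as np

        def predict_depth(image):
            """Stand-in for the learned monocular depth regressor (hypothetical)."""
            return np.full((48, 64), 5.0)  # meters; dummy constant depth map

        def trajectory_cost(traj, depth):
            """Penalize trajectories that sweep through cells with low clearance."""
            clearance = min(depth[r, c] for r, c in traj)
            return 1.0 / max(clearance, 1e-3)

        def receding_horizon_step(image, library):
            """One control cycle: score the trajectory library against the
            current depth prediction; return the best trajectory's first cell."""
            depth = predict_depth(image)
            best = min(library, key=lambda t: trajectory_cost(t, depth))
            return best[0]  # execute one segment, then replan on the next frame

        # Trajectories as lists of (row, col) image cells they sweep through.
        library = [[(24, c) for c in range(8, 56, 8)],       # straight ahead
                   [(24, 32 + 3 * i) for i in range(1, 6)]]  # bank right
        print(receding_horizon_step(np.zeros((48, 64)), library))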