
    Balanced data assimilation for highly-oscillatory mechanical systems

    Data assimilation algorithms are used to estimate the states of a dynamical system from partial and noisy observations. The ensemble Kalman filter has become a popular data assimilation scheme due to its simplicity and robustness across a wide range of application areas. Nevertheless, the ensemble Kalman filter also has limitations due to its inherent Gaussian and linearity assumptions, which can manifest themselves in dynamically inconsistent state estimates. We investigate this issue for highly oscillatory Hamiltonian systems whose dynamical behavior satisfies certain balance relations. We first demonstrate that the standard ensemble Kalman filter can produce estimates which violate those balance relations, ultimately leading to filter divergence. We then propose two remedies: blended time-stepping schemes and ensemble-based penalty methods. The effects of these modifications to the standard ensemble Kalman filter are discussed and demonstrated numerically for two model scenarios: first, balanced motion for highly oscillatory Hamiltonian systems and, second, thermally embedded highly oscillatory Hamiltonian systems. The first scenario is relevant for applications in meteorology, while the second is relevant for applications of data assimilation to molecular dynamics.
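
    Neither the blended time-stepping schemes nor the penalty formulation are spelled out in the abstract, so the sketch below only shows, under stated assumptions, the standard (perturbed-observation) ensemble Kalman filter analysis step that the paper takes as its starting point. The function name, the toy two-state system, and the linear observation operator are illustrative, not taken from the paper.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Stochastic (perturbed-observation) EnKF analysis step.

    X : (n, N) ensemble of state vectors, one member per column
    y : (m,)   observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation-error covariance
    """
    n, N = X.shape
    x_mean = X.mean(axis=1, keepdims=True)
    A = X - x_mean                                      # ensemble anomalies
    P = A @ A.T / (N - 1)                               # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
    # one perturbed observation per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)

# toy example: two-dimensional state, only the first component observed
rng = np.random.default_rng(0)
X = rng.normal(size=(2, 50))          # prior ensemble of 50 members
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
y = np.array([0.5])
Xa = enkf_analysis(X, y, H, R, rng)
print("analysis mean:", Xa.mean(axis=1))
```

    A balance-aware variant would additionally pull each analysis member towards states satisfying the balance relations, which is, loosely speaking, the role the abstract assigns to the ensemble-based penalty method.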

    Online Video Deblurring via Dynamic Temporal Blending Network

    State-of-the-art video deblurring methods are capable of removing non-uniform blur caused by unwanted camera shake and/or object motion in dynamic scenes. However, most existing methods are based on batch processing and thus need access to all recorded frames, which makes them computationally demanding and time-consuming and limits their practical use. In contrast, we propose an online (sequential) video deblurring method based on a spatio-temporal recurrent network that allows for real-time performance. In particular, we introduce a novel architecture which extends the receptive field while keeping the overall size of the network small to enable fast execution. In doing so, our network is able to remove even large blur caused by strong camera shake and/or fast-moving objects. Furthermore, we propose a novel network layer that enforces temporal consistency between consecutive frames by dynamic temporal blending, which compares and adaptively (at test time) shares features obtained at different time steps. We show the superiority of the proposed method in an extensive experimental evaluation. (Comment: 10 pages)
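
    As an illustration only (the paper's actual architecture is not given in the abstract), the following PyTorch sketch shows one plausible form of a dynamic temporal blending layer: a small convolution predicts a per-pixel, per-channel weight from the current and previous feature maps, and the output is their weighted average. The class name, layer sizes, and weight network are assumptions.

```python
import torch
import torch.nn as nn

class DynamicTemporalBlend(nn.Module):
    """Hypothetical sketch: adaptively blend features from consecutive
    time steps using a learned, input-dependent weight map."""

    def __init__(self, channels):
        super().__init__()
        self.weight_net = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),                       # blending weight in (0, 1)
        )

    def forward(self, feat_t, feat_prev):
        w = self.weight_net(torch.cat([feat_t, feat_prev], dim=1))
        return w * feat_t + (1.0 - w) * feat_prev   # adaptive temporal blend

# toy usage with feature maps of shape (batch, channels, height, width)
blend = DynamicTemporalBlend(channels=16)
f_t, f_prev = torch.randn(1, 16, 64, 64), torch.randn(1, 16, 64, 64)
print(blend(f_t, f_prev).shape)   # torch.Size([1, 16, 64, 64])
```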

    Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts) and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combining different animation paradigms to enhance both naturalness and control.
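
    As a deliberately simplified illustration of combining motions on different body parts, the sketch below blends two poses with per-joint weights. It assumes poses given as per-joint Euler angles and uses naive linear interpolation; real systems would typically blend joint rotations as quaternions (e.g. with slerp). All joint names and values are made up for the example.

```python
import numpy as np

def blend_poses(pose_a, pose_b, weights):
    """Blend two poses (dict: joint name -> Euler angles in radians) with a
    per-joint weight in [0, 1]; 0 keeps pose_a, 1 keeps pose_b."""
    return {
        joint: (1.0 - weights.get(joint, 0.0)) * np.asarray(pose_a[joint])
               + weights.get(joint, 0.0) * np.asarray(pose_b[joint])
        for joint in pose_a
    }

# toy example: legs from a walk cycle, arms from a wave gesture
walk = {"hip": np.array([0.1, 0.0, 0.0]), "shoulder": np.array([0.0, 0.0, 0.0])}
wave = {"hip": np.array([0.0, 0.0, 0.0]), "shoulder": np.array([0.0, 1.2, 0.3])}
weights = {"hip": 0.0, "shoulder": 1.0}     # per-body-part control
print(blend_poses(walk, wave, weights))
```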

    Loss-resilient Coding of Texture and Depth for Free-viewpoint Video Conferencing

    Free-viewpoint video conferencing allows a participant to observe the remote 3D scene from any freely chosen viewpoint. An intermediate virtual viewpoint image is commonly synthesized via depth-image-based rendering (DIBR) from two pairs of transmitted texture and depth maps corresponding to two neighboring captured viewpoints. To maintain high quality in the synthesized images, it is imperative to contain the adverse effects of network packet losses that may arise during texture and depth video transmission. Towards this end, we develop an integrated approach that exploits the representation redundancy inherent in the multiple streamed videos: a voxel in the 3D scene visible to both captured views is sampled and coded twice, once in each view. In particular, at the receiver we first develop an error concealment strategy that adaptively blends corresponding pixels in the two captured views during DIBR, so that pixels from the more reliably transmitted view are weighted more heavily. We then couple it with a sender-side optimization of reference picture selection (RPS) during real-time video coding, so that blocks containing samples of voxels visible in both views are coded more error-resiliently in one view only, given that adaptive blending will erase errors in the other view. Further, the sensitivity of synthesized view distortion to texture versus depth errors is analyzed, so that the relative importance of texture and depth code blocks can be computed for system-wide RPS optimization. Experimental results show that the proposed scheme can outperform the use of a traditional feedback channel by up to 0.82 dB on average at an 8% packet loss rate, and by as much as 3 dB for particular frames.
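
    A minimal sketch of the receiver-side adaptive blending idea (not the paper's actual algorithm): corresponding pixels warped from the two captured views are averaged with weights proportional to an assumed per-pixel reliability score, so the more reliably received view dominates. The reliability inputs and function name are assumptions made for the example.

```python
import numpy as np

def blend_views(pix_left, pix_right, rel_left, rel_right):
    """Blend corresponding pixels from the left and right captured views
    into the virtual view, weighting the more reliable view more heavily.
    Reliabilities are non-negative per-pixel scores (e.g. higher when the
    corresponding block arrived intact)."""
    w = rel_left / (rel_left + rel_right + 1e-12)
    return w * pix_left + (1.0 - w) * pix_right

# toy example: the right view lost a packet, so its reliability is lower
left, right = np.full((4, 4), 120.0), np.full((4, 4), 80.0)
rel_l, rel_r = np.ones((4, 4)), np.full((4, 4), 0.25)
print(blend_views(left, right, rel_l, rel_r))   # ~112 everywhere: weighted towards the left view
```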

    Text-based Editing of Talking-head Video

    Editing talking-head video to change the speech content or to remove filler words is challenging. We propose a novel method to edit talking-head video based on its transcript, producing a realistic output video in which the dialogue of the speaker has been modified while maintaining a seamless audio-visual flow (i.e. no jump cuts). Our method automatically annotates an input talking-head video with phonemes, visemes, 3D face pose and geometry, reflectance, expression and scene illumination per frame. To edit a video, the user only has to edit the transcript; an optimization strategy then chooses segments of the input corpus as base material. The annotated parameters corresponding to the selected segments are seamlessly stitched together and used to produce an intermediate video representation in which the lower half of the face is rendered with a parametric face model. Finally, a recurrent video generation network transforms this representation into a photorealistic video that matches the edited transcript. We demonstrate a large variety of edits, such as the addition, removal, and alteration of words, as well as convincing language translation and full-sentence synthesis.
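
    The segment-selection step can be pictured with a deliberately naive sketch: given the viseme sequence of the edited transcript, greedily cover it with as few contiguous runs from the annotated corpus as possible. The paper describes an optimization strategy rather than this greedy heuristic, and the single-character "visemes" below are purely illustrative.

```python
def select_segments(corpus, target):
    """Greedily cover the target viseme sequence with contiguous runs from
    the corpus; each result entry is (start_index_in_corpus, length)."""
    segments, i = [], 0
    while i < len(target):
        best_start, best_len = None, 0
        for s in range(len(corpus)):
            run = 0
            while (s + run < len(corpus) and i + run < len(target)
                   and corpus[s + run] == target[i + run]):
                run += 1
            if run > best_len:
                best_start, best_len = s, run
        if best_len == 0:
            raise ValueError(f"viseme {target[i]!r} not found in corpus")
        segments.append((best_start, best_len))
        i += best_len
    return segments

# toy example with single-character "visemes"
corpus = list("AABCADBCAB")
target = list("BCABCA")
print(select_segments(corpus, target))   # [(6, 4), (3, 2)] -> "BCAB" + "CA"
```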

    Nonlinear dance motion analysis and motion editing using Hilbert-Huang transform

    Human motions, especially dance motions, are very noisy, which makes them hard to analyze and edit. To address this problem, we propose a new method to decompose and modify motions using the Hilbert-Huang transform (HHT). First, the HHT decomposes a chromatic signal into "monochromatic" signals, the so-called Intrinsic Mode Functions (IMFs), using Empirical Mode Decomposition (EMD) [6]. After applying the Hilbert transform to each IMF, the instantaneous frequencies of the "monochromatic" signals can be obtained. The HHT has the advantage over the FFT or wavelet transform that it can analyze non-stationary and nonlinear signals such as human joint motions. In the present paper, we propose a new framework to analyze and extract features from the dance motions of the well-known three-member Japanese pop group "Perfume", and compare them with Waltz and Salsa dances. Using the EMD, their dance motions can be decomposed into motion (choreographic) primitives, i.e. IMFs. We can therefore scale, combine, subtract, exchange, and modify those IMFs, and blend them into new dance motions self-consistently. Our analysis and framework lead to a motion editing and blending method for creating a new dance motion from different dance motions. (Comment: 6 pages, 10 figures, Computer Graphics International 2017, conference short paper)
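
    The Hilbert spectral step of the HHT can be sketched with SciPy. The EMD sifting itself is omitted (it would need a dedicated implementation or a package such as PyEMD); instead, a synthetic mono-component chirp stands in for one IMF extracted from a joint-motion channel, and its instantaneous frequency is recovered from the analytic signal.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic "IMF": a chirp whose frequency drifts from 1 Hz to 6 Hz,
# standing in for one intrinsic mode function of a joint-angle signal.
fs = 200.0                                     # samples per second
t = np.arange(0, 5, 1 / fs)
imf = np.sin(2 * np.pi * (1.0 + 0.5 * t) * t)

# Hilbert spectral analysis: analytic signal -> unwrapped phase ->
# instantaneous frequency.
analytic = hilbert(imf)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # Hz, one sample shorter than t

print("mean instantaneous frequency: %.2f Hz" % inst_freq.mean())
```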

    Virtual Exploration of Underwater Archaeological Sites : Visualization and Interaction in Mixed Reality Environments

    This paper describes the ongoing developments in photogrammetry and mixed reality for the Venus European project (Virtual ExploratioN of Underwater Sites, http://www.venus-project.eu). The main goal of the project is to provide archaeologists and the general public with virtual and augmented reality tools for exploring and studying deep underwater archaeological sites that are out of reach of divers. These sites have to be reconstructed in terms of environment (seabed) and content (artifacts) by performing bathymetric and photogrammetric surveys on the real site and matching points between geolocalized pictures. The idea behind using mixed reality techniques is to offer archaeologists and the general public new insights into the reconstructed archaeological sites, allowing archaeologists to study directly from within the virtual site and allowing the general public to immersively explore a realistic reconstruction of the sites. Both activities are based on the same VR engine but differ drastically in the way they present information: general-public activities emphasize the visually and aurally realistic aspect of the reconstruction, while archaeologists' activities emphasize functional aspects focused on the cargo study rather than realism, which led to the development of two parallel VR demonstrators. This paper focuses on several key points developed for the reconstruction process as well as on issues concerning both VR demonstrators (archaeological and general public). The first key point concerns the densification of seabed points obtained through photogrammetry in order to obtain a high-quality terrain reproduction. The second concerns the development of the virtual and augmented reality (VR/AR) demonstrators for archaeologists, designed to exploit the results of the photogrammetric reconstruction. The third concerns the development of the VR demonstrator for the general public, aimed at creating awareness both of the artifacts that were found and of the process by which they were discovered, by recreating the dive from ship to seabed.

    Blending in Gravitational Microlensing Experiments: Source Confusion And Related Systematics

    Gravitational microlensing surveys target very dense stellar fields in the Local Group. As a consequence, the microlensed source stars are often blended with nearby unresolved stars. The presence of 'blending' is a cause of major uncertainty when determining the lensing properties of events towards the Galactic centre. After demonstrating empirical cases of blending, we use Monte Carlo simulations to probe its effects. We generate artificial microlensing events using an HST luminosity function convolved to typical ground-based seeing, adopting a range of values for the stellar density and seeing. We find that a significant fraction of bright events are blended, contrary to the oft-quoted assumption that bright events should be free from blending. We probe the effect that this erroneous assumption has on both the observed event timescale distribution and the optical depth, using realistic detection criteria relevant to the different surveys. Importantly, under this assumption the optical depth appears to be largely unaffected across our adopted values of seeing and density. The timescale distribution, however, is biased towards smaller values, even for the least dense fields. The dominant source of blending is the lensing of faint source stars, rather than the lensing of bright source stars blended with nearby fainter stars. We also explore other issues, such as the centroid motion of blended events and the phenomenon of 'negative' blending. Furthermore, we briefly note that blending can affect the determination of the centre of the red clump giant region from an observed luminosity function. This has implications for a variety of studies, e.g. mapping extinction towards the bulge and attempts to constrain the parameters of the Galactic bar through red clump giant number counts. (Abridged) (Comment: 18 pages, 10 figures. MNRAS, in press)
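
    The effect of blending on a light curve can be reproduced with the standard point-source point-lens (Paczynski) model: only the source flux is magnified, while flux from unresolved neighbours dilutes the event. The sketch below compares an unblended event with a 50% blended one having otherwise identical parameters; all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def paczynski_A(t, t0, tE, u0):
    """Point-source point-lens magnification."""
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

def blended_flux(t, t0, tE, u0, f_source, f_blend):
    """Observed flux when the source is blended with unresolved neighbours:
    only the source flux is magnified."""
    return f_source * paczynski_A(t, t0, tE, u0) + f_blend

t = np.linspace(-30, 30, 601)                                   # days
unblended = blended_flux(t, t0=0, tE=20, u0=0.3, f_source=1.0, f_blend=0.0)
blended = blended_flux(t, t0=0, tE=20, u0=0.3, f_source=0.5, f_blend=0.5)
# Both curves share the same baseline flux of 1, but the blended event
# peaks lower and looks narrower, biasing a naive fit towards a smaller
# apparent impact parameter and a shorter apparent timescale.
print(unblended.max(), blended.max())
```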
