
    A new numerical strategy with space-time adaptivity and error control for multi-scale streamer discharge simulations

    This paper presents a new resolution strategy for multi-scale streamer discharge simulations based on second-order time-adaptive integration and space-adaptive multiresolution. A classical fluid model is used to describe plasma discharges, combining drift-diffusion equations with the computation of the electric field. The proposed numerical method controls the accuracy of the solution in both time and space, yielding an effectively accurate resolution independent of the fastest physical time scale. A substantial improvement in computational efficiency is achieved whenever the required time steps exceed the standard stability constraints associated with mesh size or source time scales for the resolution of the drift-diffusion equations, while the stability constraint related to the dielectric relaxation time scale is respected with second-order precision. Numerical illustrations show that the strategy can be applied efficiently to simulate the propagation of highly nonlinear ionizing waves such as streamer discharges, as well as highly multi-scale nanosecond repetitively pulsed discharges, consistently describing a broad spectrum of space and time scales and different physical scenarios for consecutive discharge/post-discharge phases that are out of reach of standard non-adaptive methods.
    Comment: Support of Ecole Centrale Paris is gratefully acknowledged for the several-month stay of Z. Bonaventura at Laboratory EM2C as visiting Professor. The authors express special thanks to Christian Tenaud (LIMSI-CNRS) for providing the basis of the multiresolution kernel of MR CHORUS, a code developed for the compressible Navier-Stokes equations (Déclaration d'Invention DI 03760-01). Accepted for publication; Journal of Computational Physics (2011) 1-2
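    The time-adaptivity with error control described in this abstract can be illustrated in miniature. The sketch below is a loose analogy, not the paper's actual scheme: it applies second-order step-doubling error control to a scalar stiff ODE, and all names (`heun_step`, `adaptive_integrate`) and tolerances are invented for illustration.

```python
def heun_step(f, t, y, dt):
    """One explicit second-order (Heun) step."""
    k1 = f(t, y)
    k2 = f(t + dt, y + dt * k1)
    return y + 0.5 * dt * (k1 + k2)

def adaptive_integrate(f, t0, y0, t_end, dt0=1e-3, tol=1e-6):
    """Step-doubling error control: compare one full step against two
    half steps and adapt dt so the local error estimate stays below tol."""
    t, y, dt = t0, y0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        y_full = heun_step(f, t, y, dt)
        y_half = heun_step(f, t + dt / 2,
                           heun_step(f, t, y, dt / 2), dt / 2)
        err = abs(y_half - y_full)
        if err <= tol:                       # accept the step
            t, y = t + dt, y_half
            # grow dt (3rd-order local error => exponent 1/3), capped at 2x
            dt *= min(2.0, 0.9 * (tol / max(err, 1e-30)) ** (1.0 / 3.0))
        else:                                # reject and retry with smaller dt
            dt *= 0.5
    return y

# Example: stiff linear decay y' = -50 y, y(0) = 1, integrated to t = 0.1
y_end = adaptive_integrate(lambda t, y: -50.0 * y, 0.0, 1.0, 0.1)
```

    The error estimate keeps the step size tied to the solution's local behavior rather than to a single global stability limit, which is the spirit of the strategy described above.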

    Spatial image polynomial decomposition with application to video classification

    This paper addresses the use of an orthogonal polynomial basis transform in video classification, motivated by its advantages for multiscale and multiresolution analysis, similar to the wavelet transform. Our approach exploits these advantages in three steps. First, the resolution of the video is reduced using a multiscale/multiresolution decomposition. Second, a new algorithm decomposes each color image into a geometry component and a texture component by projecting the image onto a bivariate polynomial basis, taking the partial reconstruction as the geometry component and the remaining part as the texture component. Finally, features (such as motion and texture) extracted from the reduced image sequences are modeled by projecting them onto a bivariate polynomial basis, yielding a hybrid polynomial motion-texture video descriptor. To evaluate the approach, we consider two visual recognition tasks: the classification of dynamic textures and the recognition of human actions. The experimental section shows that the proposed approach achieves a perfect recognition rate on the Weizmann database and the highest accuracy on the Dyntex++ database compared to existing methods
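    The geometry/texture split described in this abstract can be sketched as a least-squares projection onto a low-degree bivariate basis. This is only an illustrative approximation: the paper uses an orthogonal polynomial basis, whereas the sketch below uses plain monomials on a grayscale image, and the function name and default degree are assumptions.

```python
import numpy as np

def polynomial_decompose(img, degree=3):
    """Project a grayscale image onto a bivariate polynomial basis of the
    given total degree; geometry = partial reconstruction, texture = residual."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = (2.0 * xx / (w - 1) - 1.0).ravel()   # normalize coords to [-1, 1]
    y = (2.0 * yy / (h - 1) - 1.0).ravel()
    # Monomial basis x^i y^j with i + j <= degree (an orthogonal basis,
    # as in the paper, would be better conditioned; monomials keep it short)
    cols = [x**i * y**j for i in range(degree + 1)
                        for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    geometry = (A @ coeffs).reshape(h, w)    # smooth, low-frequency part
    texture = img - geometry                 # remaining high-frequency part
    return geometry, texture

# Example: a linear ramp is captured entirely by the geometry component
img = np.fromfunction(lambda i, j: i + 2.0 * j, (8, 8))
geometry, texture = polynomial_decompose(img)
```

    By construction the two components sum back to the original image, which is what makes the residual a usable "texture" channel for the descriptor.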

    Efficient multiscale regularization with applications to the computation of optical flow

    Includes bibliographical references (p. 28-31). Supported by the Air Force Office of Scientific Research (AFOSR-92-J-0002), the Draper Laboratory IR&D Program (DL-H-418524), the Office of Naval Research (N00014-91-J-1004), and the Army Research Office (DAAL03-92-G-0115). Mark R. Luettgen, W. Clem Karl, Alan S. Willsky

    Stochastic uncertainty models for the luminance consistency assumption

    In this paper, a stochastic formulation of the brightness consistency assumption used in many computer vision problems involving dynamic scenes (motion estimation or point tracking, for instance) is proposed. Usually this model, which assumes that the luminance of a point is constant along its trajectory, is expressed in differential form through the total derivative of the luminance function. This differential equation linearly links the point velocity to the spatial and temporal gradients of the luminance function. However, when dealing with images, the available information holds only at discrete times and on a discrete grid. In this paper we formalize the image luminance as a continuous function transported by a flow known only up to uncertainties related to this discretization process. Relying on stochastic calculus, we define a formulation of luminance preservation in which these uncertainties are taken into account. Within this framework, it can be shown that the usual deterministic optical flow constraint equation corresponds to our stochastic evolution under some strong constraints. These constraints can be relaxed by imposing a weaker temporal assumption on the luminance function and by introducing anisotropic intensity-based uncertainties. We additionally show that these uncertainties can be computed at each point of the image grid from the image data, and hence provide meaningful information on the reliability of the motion estimates. To demonstrate the benefit of this stochastic formulation of the brightness consistency assumption, we consider a local least-squares motion estimator relying on the new constraint. This new motion estimator significantly improves the quality of the results
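    The local least-squares estimator mentioned at the end of this abstract builds on the classical deterministic optical flow constraint. As a point of reference, a minimal Lucas-Kanade-style solver for that deterministic constraint, without the paper's stochastic uncertainty terms, might look like the sketch below; the names and test pattern are assumptions.

```python
import numpy as np

def local_flow(I1, I2):
    """Estimate a single velocity (u, v) over a window from the linearized
    brightness-constancy constraint Ix*u + Iy*v + It = 0, solved in the
    least-squares sense over all pixels of the window."""
    Iy, Ix = np.gradient(I1.astype(float))      # spatial gradients
    It = I2.astype(float) - I1.astype(float)    # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Example: a 1-pixel horizontal shift of a smooth periodic pattern
j = np.arange(32)
I1 = np.tile(np.sin(2.0 * np.pi * j / 32), (32, 1))
I2 = np.roll(I1, 1, axis=1)
u, v = local_flow(I1, I2)   # u should be close to 1, v close to 0
```

    The paper's contribution is precisely to augment this deterministic constraint with per-pixel anisotropic uncertainties, so the least squares above become weighted by the reliability of each measurement.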

    Convolutional Neural Network on Three Orthogonal Planes for Dynamic Texture Classification

    Dynamic Textures (DTs) are sequences of images of moving scenes that exhibit certain stationarity properties in time, such as smoke, vegetation and fire. The analysis of DTs is important for recognition, segmentation, synthesis and retrieval in a range of applications including surveillance, medical imaging and remote sensing. Deep learning methods have shown impressive results and are now the new state of the art for a wide range of computer vision tasks, including image and video recognition and segmentation. In particular, Convolutional Neural Networks (CNNs) have recently proven to be well suited to texture analysis, with a design similar to a filter-bank approach. In this paper, we develop a new approach to DT analysis based on a CNN method applied on three orthogonal planes xy, xt and yt. We train CNNs on spatial frames and temporal slices extracted from the DT sequences and combine their outputs to obtain a competitive DT classifier. Our results on a wide range of commonly used DT classification benchmark datasets prove the robustness of our approach. Significant improvement over the state of the art is shown on the larger datasets.
    Comment: 19 pages, 10 figures
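    The three orthogonal planes of a video volume are easy to make concrete: for a clip of shape (T, H, W), the xy planes are ordinary frames, while the xt and yt planes are temporal slices. A minimal sketch, with assumed names and axis conventions:

```python
import numpy as np

def orthogonal_planes(video):
    """Extract the three orthogonal slice families from a video volume
    of shape (T, H, W): xy frames plus xt and yt temporal slices."""
    T, H, W = video.shape
    xy = [video[t, :, :] for t in range(T)]   # spatial frames, each H x W
    xt = [video[:, y, :] for y in range(H)]   # time-vs-x slices, each T x W
    yt = [video[:, :, x] for x in range(W)]   # time-vs-y slices, each T x H
    return xy, xt, yt

video = np.arange(10 * 4 * 6).reshape(10, 4, 6)
xy, xt, yt = orthogonal_planes(video)
```

    Each slice family can then be fed to its own 2D CNN, which is what makes a plain image-texture architecture reusable for the temporal axes.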

    Visual Importance-Biased Image Synthesis Animation

    Present ray tracing algorithms are computationally intensive, requiring hours of computing time for complex scenes. Our previous work has dealt with the development of an overall approach to the application of visual attention to progressive and adaptive ray-tracing techniques. The approach facilitates large computational savings by modulating the supersampling rates in an image by the visual importance of the region being rendered. This paper extends the approach by incorporating temporal changes into the models and techniques developed, as it is expected that further efficiency savings can be reaped for animated scenes. Applications for this approach include entertainment, visualisation and simulation
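    The core mechanism of the approach, modulating supersampling rates by the visual importance of the region being rendered, can be sketched as a simple mapping from an importance map to per-pixel ray counts. The function name, bounds, and linear mapping below are assumptions for illustration, not the paper's model.

```python
import numpy as np

def samples_per_pixel(importance, n_min=1, n_max=16):
    """Map a per-pixel visual-importance map in [0, 1] to a supersampling
    rate: visually important regions get more rays, unimportant ones fewer."""
    imp = np.clip(importance, 0.0, 1.0)
    return (n_min + np.rint(imp * (n_max - n_min))).astype(int)

# Example: unimportant pixels get the minimum rate, salient ones the maximum
rates = samples_per_pixel(np.array([[0.0, 1.0]]))
```

    The savings come from the sum of this map being far smaller than uniform maximum-rate supersampling over the whole frame; the temporal extension in the paper would additionally reuse importance across animation frames.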

    Optical flow computation via multiscale regularization

    "EDICS category 1.11." Includes bibliographical references (p. 38-40). Supported by the Air Force Office of Scientific Research (AFOSR-88-0032), the Draper Laboratory IR&D Program (DL-H-418524), the Office of Naval Research (N00014-91-J-1004), and the Army Research Office (DAAL03-36-K-0171). Mark R. Luettgen, W. Clem Karl, Alan S. Willsky