174 research outputs found

    Stability properties of the ENO method

    We review the currently available stability properties of the ENO reconstruction procedure, such as its monotonicity and non-oscillatory properties, the sign property, upper bounds on cell interface jumps, and a total variation-type bound. We also outline how these properties can be applied to derive stability and convergence of high-order accurate schemes for conservation laws. Comment: To appear in Handbook of Numerical Methods for Hyperbolic Problems.
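    The mechanism behind these non-oscillatory properties is ENO's adaptive stencil selection: the interpolation stencil is grown one point at a time toward the side with the smaller divided difference, so stencils avoid crossing discontinuities. A minimal sketch (function names and the degree-2 default are illustrative, not taken from the paper):

```python
import numpy as np

def divided_diff(xs, fs):
    # Highest-order Newton divided difference of the given points.
    xs = np.asarray(xs, dtype=float)
    fs = np.asarray(fs, dtype=float).copy()
    for k in range(1, len(xs)):
        fs = (fs[1:] - fs[:-1]) / (xs[k:] - xs[:-k])
    return fs[0]

def eno_interpolate(x, f, xi, degree=2):
    # Locate the cell [x_i, x_{i+1}] containing xi.
    i = np.searchsorted(x, xi) - 1
    left, right = i, i + 1
    # Grow the stencil one point at a time toward the side whose
    # divided difference is smaller, i.e. the smoother side.
    for _ in range(degree - 1):
        dl = abs(divided_diff(x[left-1:right+1], f[left-1:right+1])) if left > 0 else np.inf
        dr = abs(divided_diff(x[left:right+2], f[left:right+2])) if right < len(x) - 1 else np.inf
        if dl < dr:
            left -= 1
        else:
            right += 1
    xs = np.asarray(x[left:right+1], dtype=float)
    coef = np.asarray(f[left:right+1], dtype=float).copy()
    # Newton divided-difference coefficients, computed in place.
    for k in range(1, len(xs)):
        coef[k:] = (coef[k:] - coef[k-1:-1]) / (xs[k:] - xs[:-k])
    # Horner-style evaluation of the Newton form at xi.
    p = coef[-1]
    for k in range(len(xs) - 2, -1, -1):
        p = p * (xi - xs[k]) + coef[k]
    return p
```

    On a step function the selected stencil stays on one side of the jump, which is the essence of the non-oscillatory behaviour reviewed above.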

    Quasilinear subdivision schemes with applications to ENO interpolation

    We analyze the convergence and smoothness of a certain class of nonlinear subdivision schemes. We study the stability properties of these schemes and apply the analysis to the specific class based on ENO and weighted-ENO interpolation techniques. Our interest in these techniques is motivated by their application to signal and image processing.
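    For context, the linear scheme underlying such ENO-type variants is the 4-point interpolatory (Deslauriers-Dubuc) subdivision scheme; the nonlinear versions replace its fixed mask with a data-dependent stencil near discontinuities. A sketch of one refinement step of the linear scheme, with boundary midpoints simply omitted (our simplification):

```python
import numpy as np

def dd4_refine(f):
    """One step of the linear 4-point interpolatory subdivision scheme.
    Old samples are kept; each new midpoint value uses the cubic
    interpolation mask (-1, 9, 9, -1)/16 on its four neighbours."""
    f = np.asarray(f, dtype=float)
    # Midpoints between f[i+1] and f[i+2], i = 0 .. len(f)-4.
    mid = (-f[:-3] + 9*f[1:-2] + 9*f[2:-1] - f[3:]) / 16
    keep = f[1:-1]                     # interior old samples are retained
    out = np.empty(2*len(keep) - 1)
    out[0::2] = keep                   # even slots: old samples
    out[1::2] = mid                    # odd slots: inserted midpoints
    return out
```

    The scheme reproduces cubic polynomials, so refining samples of a linear function yields exact midpoint values; the convergence and smoothness analysis in the paper controls how far the nonlinear (ENO-modified) rule may deviate from this linear baseline.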

    Smoothness of Nonlinear and Non-Separable Subdivision Schemes

    We study in this paper nonlinear subdivision schemes in a multivariate setting allowing an arbitrary dilation matrix. We investigate the convergence of such an iterative process to a limit function. Our analysis is based on conditions on the contractivity of the associated scheme for the differences. In particular, we show the regularity of the limit function in L^p and Sobolev spaces.

    Interpolatory Nonlinear and Non-Separable Multi-scale Representation: Application to Image Compression

    In this paper, we introduce the notion of nonlinear and non-separable multi-scale representations. We show how they can be derived from nonlinear and non-separable subdivision schemes associated with a non-diagonal dilation matrix. We focus on nonlinear multi-scale decompositions where the dilation matrix is either the quincunx or the hexagonal matrix. We then detail the encoding and decoding algorithms of the representation and, in particular, how the EZW (Embedded Zero-tree Wavelet) algorithm adapts to this context. Numerical experiments on image compression conclude the paper.
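    As a small illustration of the geometry involved: the quincunx dilation matrix has determinant of absolute value 2, so each refinement step splits the pixel grid into two checkerboard cosets rather than the four cosets of the usual dyadic matrix. A sketch (helper name is ours):

```python
import numpy as np

# Quincunx dilation matrix: |det M| = 2, so one decomposition level
# halves the number of samples instead of quartering it.
M = np.array([[1, 1],
              [1, -1]])

def quincunx_split(img):
    """Split an image into its two quincunx cosets (checkerboard
    pattern): pixels with even i+j lie on the subsampled grid M Z^2."""
    i, j = np.indices(img.shape)
    even = (i + j) % 2 == 0
    return img[even], img[~even]
```
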

    Nonlinear thresholding of multiresolution decompositions adapted to the presence of discontinuities

    A new nonlinear representation of multiresolution decompositions and a new thresholding strategy adapted to the presence of discontinuities are presented and analyzed. They are based on a nonlinear modification of the multiresolution details coming from an initial (linear or nonlinear) scheme and on a data-dependent thresholding. Stability results are derived, and the numerical advantages are demonstrated in various experiments.
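    The kind of stability result mentioned can be illustrated on a toy one-level interpolatory multiresolution with linear prediction: hard-thresholding the details perturbs the reconstruction by at most the threshold. This sketch is not the paper's nonlinear scheme, only the linear baseline it modifies:

```python
import numpy as np

def decompose(f):
    """One level of interpolatory multiresolution: keep the even
    samples, store each odd sample as a detail relative to a linear
    prediction from its even neighbours. Assumes len(f) is odd."""
    coarse = f[0::2]
    pred = 0.5*(coarse[:-1] + coarse[1:])   # predict odd samples
    detail = f[1::2] - pred
    return coarse, detail

def reconstruct(coarse, detail):
    pred = 0.5*(coarse[:-1] + coarse[1:])
    f = np.empty(2*len(coarse) - 1)
    f[0::2] = coarse
    f[1::2] = pred + detail                 # prediction plus stored detail
    return f
```

    Zeroing every detail smaller than a threshold eps changes each reconstructed sample by at most eps, the elementary stability bound that the paper's data-dependent thresholding generalizes.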

    Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for High Order Methods

    The recently developed essentially fourth-order or higher low dissipative shock-capturing scheme of Yee, Sandham, and Djomehri [25] aims at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears, and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed the artificial compression method (ACM) of Harten [4], but in an entirely different context than Harten originally intended. The ACM sensor depends on two tuning parameters and is highly problem dependent. To minimize parameter tuning and problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions, and they can completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability in all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat and Zhong [14]) used by Gerritsen and Olsson [3] in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten [5], converted into a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) on a chosen wavelet basis. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these sensors are scheme independent and can serve as stand-alone options for numerical algorithms other than the Yee et al. scheme.
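    As a rough illustration of such a sensor (far cruder than the wavelet-based Lipschitz-exponent estimate described above): flag points where the interpolation detail is large relative to its typical size. The function name and scaling constant are ours:

```python
import numpy as np

def shock_sensor(u, c=10.0):
    """Flag points where the deviation of a sample from the average of
    its neighbours (a first-level interpolation detail) is large
    relative to the median detail, i.e. where the local regularity
    appears low. Dissipation would be switched on only at flagged
    points and off elsewhere."""
    d = np.abs(u[1:-1] - 0.5*(u[:-2] + u[2:]))
    thresh = c * (np.median(d) + 1e-14)
    flag = np.zeros(len(u), dtype=bool)
    flag[1:-1] = d > thresh
    return flag
```

    For smooth data the detail decays like h^2, while at a jump it stays O(1), so the flags cluster at the discontinuity, which is the behaviour the wavelet sensors exploit.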

    Nonlinear numerical techniques for the processing of data with discontinuities

    In this PhD thesis we have tried to design algorithms capable of dealing with discontinuous data. We have centred our attention on three main applications:
    • Numerical integration plus correction terms. In this part of the thesis we constructed and analyzed a new nonlinear technique that allows obtaining accurate numerical integrations of any order using data that contain discontinuities, when the integrand is only known at grid points. The novelty of the technique consists in the inclusion of correction terms with a closed expression that depends on the size of the jumps of the function and its derivatives at the discontinuities, whose positions are supposed to be known. The addition of these terms allows recovering the accuracy of classical numerical integration formulas close to the discontinuities, as the correction terms account for the error that the classical formulas commit, up to their order of accuracy in smooth zones. Thus, the correction terms can be added during the integration or as a post-process, which is useful if the main calculation of the integral has already been done using classical formulas. Several numerical experiments confirmed the theoretical conclusions. The results of this part of the thesis were included in the article [1], published in Mathematics and Computers in Simulation, an international journal in the first quartile of the Journal Citation Reports.
    • Hermite interpolation plus correction terms. This technique (without correction terms) is classically used to reconstruct smooth data when the function and its first-order derivatives are available at certain nodes. If the first-order derivatives are not available, it is easy to set up a system of equations imposing regularity conditions at the nodes in order to obtain them. This process leads to the construction of a Hermite spline. The problem with the described Hermite spline is that accuracy is lost if the data contain singularities (we focus on discontinuities in the function or in the first derivative, although we also analyze what happens when there are discontinuities in the second derivative). The consequence is the appearance of oscillations, if there is a jump discontinuity in the function, which globally affects the accuracy of the spline, or the smearing of singularities, if the discontinuities are in the derivatives of the function. Our objective in this part of the thesis is the construction and analysis of a new technique that allows the computation of accurate first-order derivatives of a function close to singularities using a cubic Hermite spline. The idea is to correct the system of equations of the spline in order to attain the desired accuracy even close to the singularities. Once the first-order derivatives are computed with enough accuracy, a correction term is added to the Hermite spline in the intervals that contain a singularity. The aim is to reconstruct piecewise smooth functions with O(h^4) accuracy even close to the singularities. The adaptation process requires some knowledge of the position of the singularity and of the jumps of the function and some of its derivatives there. The whole process can be used as a post-process, where a correction term is added to the classical cubic Hermite spline. We obtained proofs of the accuracy and regularity of the corrected spline and its derivatives, analyzed the mechanism that eliminates the Gibbs phenomenon close to jump discontinuities in the function, and performed several numerical experiments confirming the theoretical results. The results of this part of the thesis were included in the article [2], published in the Journal of Scientific Computing, an international journal in the first quartile of the Journal Citation Reports.
    • Super resolution. Although it is presented last, this topic marked the beginning of the thesis, where we focused our attention on multi-resolution algorithms. Super resolution seeks to enhance the quality of low-resolution images and videos by adding finer details, resulting in a sharper and clearer output. These algorithms operate by analyzing different levels of image data and combining them to create a higher-resolution version. Applications can be found across industries, including surveillance, medical imaging, and media. Although the study of super resolution was the starting point of the thesis, we soon shifted our focus to other algorithms in the context of numerical approximation, as these alternative approaches proved more promising in terms of publishable results. Nevertheless, this first part of the research served to obtain the D.E.A.
    Escuela Internacional de Doctorado de la Universidad Politécnica de Cartagena. Programa Doctorado en Tecnologías Industriales.
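    The idea of the first application can be illustrated with a toy sketch (our own simplification, not the thesis' general formulas): for a single jump of known size and position, the composite trapezoidal rule integrates the Heaviside part of the cell containing the jump as jump*h/2, while the exact contribution is jump*(x_{j+1} - xi), so adding the difference restores second-order accuracy:

```python
import numpy as np

def corrected_trapz(fvals, x, jump, xi):
    """Composite trapezoidal rule on a uniform grid x, plus the leading
    correction term for one jump of size `jump` at known position xi
    (xi assumed strictly between two grid points)."""
    h = x[1] - x[0]
    # Classical composite trapezoidal rule.
    q = h*(0.5*fvals[0] + fvals[1:-1].sum() + 0.5*fvals[-1])
    j = np.searchsorted(x, xi) - 1      # cell [x_j, x_{j+1}] with the jump
    # The rule integrates the Heaviside part of that cell as jump*h/2;
    # the exact contribution is jump*(x_{j+1} - xi). Add the difference.
    return q + jump*((x[j+1] - xi) - 0.5*h)
```

    The correction can be applied as a post-process to an already computed trapezoidal sum, mirroring the thesis' observation that the correction terms can be added after the fact.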

    Development of low dissipative high order filter schemes for multiscale Navier–Stokes/MHD systems

    Recent progress in the development of a class of low dissipative high order (fourth-order or higher) filter schemes for multiscale Navier–Stokes, and ideal and non-ideal magnetohydrodynamics (MHD) systems is described. The four main features of this class of schemes are: (a) multiresolution wavelet decomposition of the computed flow data as sensors for adaptive numerical dissipation control, (b) a multistep filter to accommodate efficient application of different numerical dissipation models and different spatial high order base schemes, (c) a unique idea for solving the ideal conservative MHD system (a non-strictly hyperbolic conservation law) without having to deal with an incomplete eigensystem set, while at the same time ensuring that correct shock speeds and locations are computed, and (d) minimization of the numerical error in the divergence of the magnetic field. By design, the flow sensors, high order base schemes, and numerical dissipation models are stand-alone modules, so a whole class of low dissipative high order schemes can be derived with ease, making the resulting computer software very flexible and widely applicable. Performance on multiscale and multiphysics test cases is illustrated with several levels of grid refinement and comparison with commonly used schemes in the literature.

    A High-Order Scheme for Image Segmentation via a modified Level-Set method

    In this paper we propose a high-order accurate scheme for image segmentation based on the level-set method. In this approach, the curve evolution is described by the zero-level set of a representation function, but we modify the velocity that drives the curve to the boundary of the object in order to obtain a new velocity with additional properties that are extremely useful for developing a more stable high-order approximation at a small additional cost. The approximation scheme proposed here is the first 2D version of an adaptive "filtered" scheme recently introduced and analyzed by the authors in 1D. This approach is interesting since the implementation of the filtered scheme is rather efficient and easy. The scheme combines two building blocks (a monotone scheme and a high-order scheme) via a filter function and smoothness indicators that detect the regularity of the approximate solution, adapting the scheme automatically. Some numerical tests on synthetic and real images confirm the accuracy of the proposed method and the advantages given by the new velocity. Comment: Accepted version for publication in SIAM Journal on Imaging Sciences, 86 figures.
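    The filtered-scheme idea of blending a monotone update with a high-order one according to a smoothness indicator can be sketched in 1D for linear advection (a simplified analogue, not the authors' 2D level-set scheme; the indicator and threshold below are our own choices):

```python
import numpy as np

def filtered_step(u, cfl=0.5, tol=0.1):
    """One step of linear advection u_t + u_x = 0 on a periodic grid,
    blending a monotone upwind update with a high-order Lax-Wendroff
    update. A crude second-difference indicator switches the high-order
    scheme off where the solution does not look smooth."""
    um1, up1 = np.roll(u, 1), np.roll(u, -1)
    d2 = up1 - 2*u + um1
    mono = u - cfl*(u - um1)                        # monotone upwind
    high = u - 0.5*cfl*(up1 - um1) + 0.5*cfl**2*d2  # Lax-Wendroff
    smooth = np.abs(d2) < tol                       # crude smoothness flag
    return np.where(smooth, high, mono)
```

    Near a discontinuity the monotone branch keeps the update bounded, while in smooth regions the high-order branch keeps the accuracy, which is the division of labour the filter function formalizes.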