70 research outputs found

    Nonlinear numerical techniques for the processing of data with discontinuities

    In this PhD thesis we have tried to design algorithms capable of dealing with discontinuous data. We have centred our attention on three main applications:

    • Numerical integration plus correction terms. In this part of the thesis we constructed and analysed a new nonlinear technique that allows obtaining accurate numerical integrations of any order from data that contains discontinuities, when the integrand is only known at grid points. The novelty of the technique is the inclusion of correction terms with a closed expression that depends on the size of the jumps of the function and its derivatives at the discontinuities, whose positions are assumed to be known. Adding these terms recovers the accuracy of classical numerical integration formulas close to the discontinuities, since the correction terms account for the error that the classical formulas commit, up to their order of accuracy, in the smooth zones of the data. The correction terms can therefore be added during the integration or as a post-process, which is useful if the main calculation of the integral has already been done with classical formulas. During our research we carried out several numerical experiments that confirmed the theoretical conclusions; a minimal illustrative sketch of the correction idea follows this abstract. The results of this part of the thesis were included in the article [1], published in Mathematics and Computers in Simulation, an international journal in the first quartile of the Journal Citation Reports.

    • Hermite interpolation plus correction terms. This technique (without correction terms) is classically used to reconstruct smooth data when the function and its first-order derivatives are available at certain nodes. If the first-order derivatives are not available, it is easy to set up a system of equations that imposes regularity conditions at the nodes in order to obtain them. This process leads to the construction of a Hermite spline. The problem with the described Hermite spline is that accuracy is lost if the data contains singularities (we concentrate on discontinuities in the function or in the first derivative, although we also analyse what happens when there are discontinuities in the second derivative). The consequence is either the appearance of oscillations, if there is a jump discontinuity in the function, which globally affects the accuracy of the spline, or the smearing of the singularities, if the discontinuities are in the derivatives of the function. Our objective in this part of the thesis is the construction and analysis of a new technique that allows the accurate computation of first-order derivatives of a function close to singularities using a cubic Hermite spline. The idea is to correct the system of equations of the spline so that the desired accuracy is attained even close to the singularities. Once the first-order derivatives have been computed with enough accuracy, a correction term is added to the Hermite spline in the intervals that contain a singularity. The aim is to reconstruct piecewise smooth functions with O(h^4) accuracy even close to the singularities. The adaptation process requires some knowledge about the position of the singularity and about the jumps of the function and some of its derivatives at that position. The whole process can be used as a post-process, where a correction term is added to the classical cubic Hermite spline. During our research we obtained proofs of the accuracy and regularity of the corrected spline and its derivatives, analysed the mechanism that eliminates the Gibbs phenomenon close to jump discontinuities in the function, and performed several numerical experiments that confirmed the theoretical results. The results of this part of the thesis were included in the article [2], published in the Journal of Scientific Computing, an international journal in the first quartile of the Journal Citation Reports.

    • Super resolution. Although it is presented last, this topic marked the beginning of the thesis, when we focused our attention on multiresolution algorithms. Super resolution seeks to enhance the quality of low-resolution images and videos by adding finer detail, resulting in a sharper and clearer output. These algorithms analyse different levels of image data and combine them to create a higher-resolution version; applications can be found across industries, including surveillance, medical imaging, and media, to improve visual fidelity. Although the study of super resolution was the starting point of the thesis, we soon shifted our focus to other algorithms in the context of numerical approximation, which proved more promising in terms of publishable results. Nevertheless, this first part of the research served to obtain the D.E.A.

    Escuela Internacional de Doctorado, Universidad Politécnica de Cartagena. Programa de Doctorado en Tecnologías Industriales.
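
    To make the correction-term idea in the first point concrete, the sketch below corrects a composite trapezoidal rule in the single cell that contains a discontinuity at a known position xi, using the jumps of the function and its derivatives there. It is a minimal Python sketch derived from a generic Taylor-expansion argument, not the formulas of [1]; the function name, the choice of the trapezoidal rule, and the uniform grid are illustrative assumptions.

        import numpy as np

        def corrected_trapezoid(f_vals, x, xi, jumps):
            """Composite trapezoidal rule plus a jump correction in the cell containing xi.

            f_vals : samples of f on the uniform grid x (f smooth except at xi, x[0] < xi < x[-1])
            jumps  : sizes of the jumps [f], [f'], ... of f and its derivatives at xi
            """
            x = np.asarray(x, dtype=float)
            f_vals = np.asarray(f_vals, dtype=float)
            h = x[1] - x[0]
            integral = h * (f_vals.sum() - 0.5 * (f_vals[0] + f_vals[-1]))  # classical rule on raw data
            j = np.searchsorted(x, xi) - 1        # index of the cell [x[j], x[j+1]] containing xi
            d = x[j + 1] - xi                     # distance from xi to the right edge of that cell
            # Add the exact contribution of the jumps to the integral over that cell and
            # remove the spurious contribution the rule picks up from the sample at x[j+1].
            fact = 1.0
            for k, Jk in enumerate(jumps):
                fact *= (k + 1)                   # fact = (k+1)!
                integral += Jk * (d**(k + 1) - 0.5 * h * (k + 1) * d**k) / fact
            return integral

    For a pure jump in the function value ([f] = J, higher jumps zero) the correction reduces to J*(d - h/2), which cancels the O(h) error the uncorrected rule commits in the cell that contains the discontinuity.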

    Efficient Data Driven Multi Source Fusion

    Data/information fusion is an integral component of many existing and emerging applications; e.g., remote sensing, smart cars, the Internet of Things (IoT), and Big Data, to name a few. While fusion aims to achieve better results than any one individual input can provide, the challenge is often to determine the underlying mathematics of aggregation suitable for an application. In this dissertation, I focus on the following three aspects of aggregation: (i) efficient data-driven learning and optimization, (ii) extensions and new aggregation methods, and (iii) feature- and decision-level fusion for machine learning with applications to signal and image processing. The Choquet integral (ChI), a powerful nonlinear aggregation operator, is a parametric way (with respect to the fuzzy measure (FM)) to generate a wealth of aggregation operators. The FM has 2^N variables and N(2^N − 1) constraints for N inputs. As a result, learning the ChI parameters from data quickly becomes impractical for most applications. Herein, I propose a scalable learning procedure (linear with respect to training sample size) for the ChI that identifies and optimizes only data-supported variables. As such, the computational complexity of the learning algorithm is proportional to the complexity of the solver used. This method also includes an imputation framework to obtain scalar values for data-unsupported (i.e., missing) variables and a compression algorithm (lossy or lossless) for the learned variables. I also propose a genetic algorithm (GA) to optimize the ChI for non-convex, multi-modal, and/or analytical objective functions. This algorithm introduces two operators that automatically preserve the constraints; therefore there is no need to enforce the constraints explicitly, as is required by traditional GAs. In addition, this algorithm provides an efficient representation of the search space with a minimal set of vertices. Furthermore, I study different strategies for extending the fuzzy integral to missing data, and I propose a goal programming framework to aggregate inputs from heterogeneous sources for ChI learning. Last, my work in remote sensing involves visual-clustering-based band group selection and Lp-norm multiple kernel learning based feature-level fusion in hyperspectral image processing to enhance pixel-level classification.
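
    As background for the aggregation operator discussed above, the following Python sketch evaluates the discrete Choquet integral of N inputs with respect to a fuzzy measure using the standard sort-based definition; the dictionary representation of the measure and the function name are illustrative assumptions, not the dissertation's data structures.

        def choquet_integral(x, g):
            """Discrete Choquet integral of inputs x[0..N-1] with respect to a fuzzy measure g.

            g maps frozensets of input indices to [0, 1], with g[frozenset()] = 0,
            g[frozenset(range(len(x)))] = 1, and g monotone with respect to set inclusion.
            """
            order = sorted(range(len(x)), key=lambda i: x[i], reverse=True)  # inputs in descending order
            total, prev, subset = 0.0, 0.0, set()
            for i in order:
                subset.add(i)
                g_A = g[frozenset(subset)]
                total += x[i] * (g_A - prev)      # weight each input by the measure increment it induces
                prev = g_A
            return total

    With the cardinality-based measure g(A) = |A| / N this reduces to the arithmetic mean, which is one way to sanity-check an implementation against a learned fuzzy measure.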

    Glosarium Matematika

    273 p.; 24 cm

    (Multi)wavelets increase both accuracy and efficiency of standard Godunov-type hydrodynamic models

    This paper presents a scaled reformulation of a robust second-order Discontinuous Galerkin (DG2) solver for the Shallow Water Equations (SWE), with guiding principles on how it can be naturally extended to fit into the multiresolution analysis of multiwavelets (MW). Multiresolution analysis applied to the flow and topography data enables the creation of an adaptive MWDG2 solution on a non-uniform grid. The multiresolution analysis also permits control of the adaptive model error by a single user-prescribed parameter. This results in an adaptive MWDG2 solver that can fully exploit the local (de)compression of piecewise-linear modelled data, and from which a first-order finite volume version (FV1) is directly obtainable based on the Haar wavelet (HFV1) for local (de)compression of piecewise-constant modelled data. The behaviour of the adaptive HFV1 and MWDG2 solvers is systematically studied on a number of well-known hydraulic tests that cover all elementary aspects relevant to accurate, efficient and robust modelling. The adaptive solvers are run starting from a baseline mesh with a single element, and their accuracy and efficiency are measured referring to standard FV1 and DG2 simulations on the uniform grid involving the finest resolution accessible by the adaptive solvers. Our findings reveal that the MWDG2 solver can achieve the same accuracy as the DG2 solver but with a greater efficiency than the FV1 solver due to the smoothness of its piecewise-linear basis, which enables more aggressive coarsening than with the piecewise-constant basis in the HFV1 solver. This suggests a great potential for the MWDG2 solver to efficiently handle the depth and breadth in resolution variability, while also being a multiresolution mesh generator. Accompanying model software and simulation data are openly available online.
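
    The following Python sketch shows the Haar analysis that underlies the HFV1-type adaptivity described above: piecewise-constant cell averages are repeatedly split into parent means and detail coefficients, and details below a single user-prescribed threshold are discarded. The normalisation and the thresholding rule here are simplified assumptions, not the paper's exact error-control criterion.

        import numpy as np

        def haar_compress(cell_averages, eps):
            """Haar multiresolution analysis of piecewise-constant data with thresholding.

            Returns the coarsest mean and the per-level detail coefficients, with details
            below the single threshold eps zeroed out.
            """
            u = np.asarray(cell_averages, dtype=float)   # length must be a power of two
            details = []
            while u.size > 1:
                means = 0.5 * (u[0::2] + u[1::2])        # parent cell averages
                diffs = 0.5 * (u[0::2] - u[1::2])        # Haar detail coefficients
                details.append(np.where(np.abs(diffs) > eps, diffs, 0.0))  # keep significant details only
                u = means
            return u[0], details[::-1]                   # details ordered coarse to fine

        def haar_reconstruct(mean, details):
            """Invert haar_compress, rebuilding (thresholded) cell averages from coarse to fine."""
            u = np.array([mean])
            for d in details:
                fine = np.empty(2 * u.size)
                fine[0::2], fine[1::2] = u + d, u - d
                u = fine
            return u

    Cells whose details are discarded at every level can be represented on a coarser grid, which is the sense in which a single parameter eps simultaneously controls the (de)compression and the adaptive model error.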

    ADAPTIVE SEARCH AND THE PRELIMINARY DESIGN OF GAS TURBINE BLADE COOLING SYSTEMS

    This research concerns the integration of Adaptive Search (AS) techniques, such as Genetic Algorithms (GA), with knowledge-based software to develop a research prototype of an Adaptive Search Manager (ASM). The developed approach allows both quantitative and qualitative information to be used in engineering design decision making. A Fuzzy Expert System manipulates the AS software within a design environment for the preliminary design of gas turbine blade cooling systems. Steady-state cooling hole geometry models have been developed for the project in collaboration with Rolls Royce plc. The research prototype of the ASM uses a hybrid of Adaptive Restricted Tournament Selection (ARTS) and Knowledge Based Hill Climbing (KBHC) to identify multiple "good" design solutions as potential design options. ARTS is a GA technique that is particularly suitable for real-world problems with multiple sub-optima. KBHC uses information gathered during the ARTS search, as well as information from the designer, to perform a deterministic hill climb. Finally, a local stochastic hill climbing fine-tunes the "good" designs. Design solution sensitivity, design variable sensitivities and constraint sensitivities are calculated following Taguchi's methodology, which extracts sensitivity information with a very small number of model evaluations. Each potential design option is then qualitatively evaluated separately for manufacturability, choice of materials and the designer's special preferences using the knowledge of domain experts. To guarantee that the qualitative evaluation module can evaluate any design solution from the entire design space with a reasonably small number of rules, a novel knowledge representation technique is developed. The knowledge is first separated into three categories: inter-variable knowledge, intra-variable knowledge and heuristics. Inter-variable knowledge and intra-variable knowledge are then integrated using a concept of compromise. Information about the "good" design solutions is presented to the designer through a designer's interface for decision support.
    Rolls Royce plc., Bristol (UK)
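
    The niching step at the heart of ARTS can be illustrated with plain restricted tournament selection, the mechanism that ARTS adapts: each offspring competes only against the most similar member of a randomly drawn window, so several distinct "good" designs can survive in the population. The Python sketch below is a generic illustration; the signature, the maximisation convention, and the externally supplied distance function are assumptions, not the ASM implementation.

        import random

        def rts_replace(population, fitness, offspring, offspring_fitness, window_size, distance):
            """One restricted-tournament replacement step: the offspring replaces the most
            similar individual in a random window only if it has better (higher) fitness."""
            window = random.sample(range(len(population)), window_size)
            nearest = min(window, key=lambda i: distance(population[i], offspring))
            if offspring_fitness > fitness[nearest]:
                population[nearest] = offspring
                fitness[nearest] = offspring_fitness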

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications
