86 research outputs found

    Filtering in the numerical simulation of turbulent compressible flow with symmetry preserving discretizations

    The present thesis investigates how explicit filters can be useful in numerical simulations of turbulent, compressible flow with symmetry-preserving discretizations. Such explicit filters provide stability to simulations with shocks, provide stability to low-dissipation schemes on smooth flows, and are used as test filters in LES turbulence models such as the Variational Multi-Scale eddy viscosity model or regularization models. The present thesis is a step forward in four main aspects. First, a comparative study of symmetry-preserving schemes for compressible flow is conducted. It shows that Rozema’s scheme is more stable and accurate than the other schemes compiled from the literature. A slight modification of this scheme is presented and shown to be more stable and accurate on unstructured meshes, but less accurate and stable on uniform, structured meshes. Second, a theoretical analysis of the properties of filters for CFD and their consequences for the derivation of the LES equations is conducted. The analysis shows how the diffusive properties of filters are necessary for the consistency of the model. Third, a study of explicit filtering on discrete variables identifies the constraints necessary to fulfil the discrete counterparts of the filter properties, with emphasis on the different options available when requiring the filters to be diffusive. Building on this, a new family of filters is derived and tested in newly developed tests that allow each property to be studied independently. Last, an algorithm to couple adaptive filtering with time integration is reported and tested on the 2D isentropic vortex and on the Taylor-Green vortex problem. Filtering is shown to enhance stability at the cost of locally adding diffusion, which spares the simulation from being diffusive everywhere. The resulting methodology is also shown to be potentially useful for shock-capturing purposes, as demonstrated by the simulation of a shock tube on a fully unstructured mesh.
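    The abstract gives no code, but the kind of explicit, diffusive filter it discusses is easy to sketch. Below is a minimal Python illustration of a three-point diffusive filter coupled adaptively to a time integrator on a 1D periodic grid; the sensor, threshold, and stencil are illustrative assumptions, not the scheme developed in the thesis.

```python
import numpy as np

def explicit_filter(phi, strength=1.0):
    """Three-point diffusive filter on a periodic grid:
    phi_i + (strength/4) * (phi_{i-1} - 2*phi_i + phi_{i+1}).
    Conservative (the sum of phi is preserved) and purely diffusive for
    0 <= strength <= 1, damping the highest resolved wavenumber hardest."""
    lap = np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)
    return phi + 0.25 * strength * lap

def adaptive_filter_step(phi, dt, rhs, sensor_threshold=0.1):
    """One hypothetical coupled step: advance with a user-supplied RHS
    (forward Euler here), then filter only the cells where a simple
    wiggle sensor exceeds the threshold, so diffusion is added locally
    rather than everywhere."""
    phi_new = phi + dt * rhs(phi)
    wiggle = np.abs(np.roll(phi_new, 1) - 2.0 * phi_new + np.roll(phi_new, -1))
    mask = wiggle > sensor_threshold * (np.abs(phi_new).max() + 1e-12)
    phi_new[mask] = explicit_filter(phi_new)[mask]
    return phi_new
```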

    Characterising bias in regulatory risk and decision analysis: An analysis of heuristics applied in health technology appraisal, chemicals regulation, and climate change governance

    In many environmental and public health domains, heuristic methods of risk and decision analysis must be relied upon, because problem structures are ambiguous, reliable data are lacking, or decisions are urgent. This introduces an additional source of uncertainty beyond model and measurement error – uncertainty stemming from relying on inexact inference rules. Here we identify and analyse heuristics used to prioritise risk objects, to discriminate between signal and noise, to weight evidence, to construct models, to extrapolate beyond datasets, and to make policy. Some of these heuristics are based on causal generalisations, yet can misfire when these relationships are presumed rather than tested (e.g. surrogates in clinical trials). Others are conventions designed to confer stability on decision analysis, yet may introduce serious error when applied ritualistically (e.g. significance testing). Some heuristics can be traced back to formal justifications, but only under strong assumptions that are often violated in practical applications. Heuristic decision rules (e.g. feasibility rules) in principle act as surrogates for utility maximisation or distributional concerns, yet in practice may neglect costs and benefits, rest on arbitrary thresholds, and be prone to gaming. We highlight the problem of rule-entrenchment, where analytical choices that are in principle contestable are arbitrarily fixed in practice, masking uncertainty and potentially introducing bias. Strategies for making risk and decision analysis more rigorous include: formalising the assumptions and scope conditions under which heuristics should be applied; testing rather than presuming their underlying empirical or theoretical justifications; using sensitivity analysis, simulations, multiple bias analysis, and deductive systems of inference (e.g. directed acyclic graphs) to characterise rule uncertainty and refine heuristics; adopting “recovery schemes” to correct for known biases; and basing decision rules on clearly articulated values and evidence, rather than convention.
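    The paper is methodological rather than computational, but one remedy it names, sensitivity analysis over contestable analytic choices, is easy to make concrete. The Python sketch below (all figures hypothetical) shows how a feasibility-rule verdict can hinge on an arbitrarily entrenched threshold.

```python
import numpy as np

# Hypothetical appraisal: an intervention passes a feasibility-style
# heuristic if its incremental cost-effectiveness ratio (ICER) falls
# below a fixed threshold. A one-way sensitivity analysis over that
# threshold makes the entrenched choice visible.
cost = 24_000.0        # illustrative cost per patient
effect = 0.8           # illustrative health gain per patient
icer = cost / effect   # 30,000 per unit of health gain

thresholds = np.linspace(20_000, 50_000, 7)
for t in thresholds:
    verdict = "approve" if icer <= t else "reject"
    print(f"threshold {t:>9,.0f}: {verdict}")
# The verdict flips partway through the range: it is driven by the
# threshold choice, not by the evidence alone.
```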

    On parameterized deformations and unsupervised learning


    Wavelet analysis of non-stationary signals with applications

    The empirical mode decomposition (EMD) algorithm, introduced by N. E. Huang et al. in 1998, is arguably the most popular mathematical scheme for non-stationary signal decomposition and analysis. The objective of EMD is to separate a given signal into a number of components, called intrinsic mode functions (IMFs), after which the instantaneous frequency (IF) and amplitude of each IMF are computed through Hilbert spectral analysis (HSA). On the other hand, the synchrosqueezed wavelet transform (SST), introduced by I. Daubechies and S. Maes in 1996 and further developed by I. Daubechies, J. Lu and H.-T. Wu in 2011, first estimates the IFs of all components of the given signal with a single frequency reassignment rule, under the assumption that the components satisfy certain strict properties of the so-called adaptive harmonic model, and then recovers the components based on the estimated IFs. The objective of this dissertation is to develop a hybrid EMD-SST computational scheme by applying a modified SST to each IMF produced by a modified EMD, as an alternative to the original EMD-HSA method. While our modified SST ensures non-negative instantaneous frequencies of the IMFs, applying the EMD scheme eliminates both the dependence on a single frequency reassignment rule and the guesswork about the number of signal components in the original SST approach. Our modification of the SST consists of applying analytic vanishing-moment wavelets (introduced in a recent paper by C. K. Chui, Y.-T. Lin and H.-T. Wu) with stacked knots to process signals on bounded or half-infinite time intervals, together with spline curve fitting with optimal smoothing-parameter selection through generalized cross-validation. In addition, we modify EMD by formulating a local spline interpolation scheme for bounded intervals, enabling real-time realization of the EMD sifting process. This scheme improves on standard global cubic spline interpolation, both in quality and in computational cost, particularly when applied to bounded and half-infinite time intervals.
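    To ground the terminology, here is a minimal Python sketch of one classical EMD sifting pass: subtract the mean of cubic-spline envelopes through the local maxima and minima. It illustrates the baseline the dissertation improves upon; the modified local-spline EMD and the modified SST themselves are not reproduced here.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    """One classical EMD sifting pass: subtract the mean of the upper and
    lower cubic-spline envelopes. Uses standard global cubic splines; the
    dissertation's local spline scheme for bounded intervals is not
    reproduced here."""
    peaks, _ = find_peaks(x)
    troughs, _ = find_peaks(-x)
    if len(peaks) < 2 or len(troughs) < 2:
        return x, False                      # too few extrema: x is a residue
    upper = CubicSpline(t[peaks], x[peaks])(t)
    lower = CubicSpline(t[troughs], x[troughs])(t)
    return x - 0.5 * (upper + lower), True

def extract_imf(t, x, n_sift=10):
    """Crude IMF extraction: a fixed number of sifting passes (production
    implementations use a stopping criterion instead)."""
    h = x.copy()
    for _ in range(n_sift):
        h, ok = sift_once(t, h)
        if not ok:
            break
    return h

# Usage: the first IMF of a two-tone signal approximates its fast component.
t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imf1 = extract_imf(t, x)
```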

    Statistical methods & algorithms for autonomous immunoglobulin repertoire analysis

    Investigating the immunoglobulin repertoire is a means of understanding the adaptive immune response to infectious disease or vaccine challenge. The data examined are typically generated using high-throughput sequencing on samples of immunoglobulin variable-region genes present in blood or tissue collected from human or animal subjects. The analysis of these large, diverse collections provides a means of gaining insight into the specific molecular mechanisms involved in generating and maintaining a protective immune response. It involves the characterization of distinct clonal populations, specifically through the inference of founding alleles for germline gene segment recombination, as well as the lineage of accumulated mutations acquired during the development of each clone. Germline gene segment inference is currently performed by aligning immunoglobulin sequencing reads against an external reference database and assigning each read to the entry that provides the best score according to the metric used. The problem with this approach is that allelic diversity is greater than can be usefully accommodated in a static database. The absence of the alleles used from the database often leads to the misclassification of single-nucleotide polymorphisms as somatic mutations acquired during affinity maturation. This trend is especially evident with the rhesus macaque, but also affects the comparatively well-catalogued human databases, whose collections are biased towards samples from individuals of European descent. Our project presents novel statistical methods for immunoglobulin repertoire analysis that allow for the de novo inference of germline gene segment libraries directly from next-generation sequencing data, without the need for external reference databases. These methods follow a Bayesian paradigm, which uses an information-theoretic modelling approach to iteratively improve upon internal candidate gene segment libraries. Both candidate libraries and trial analyses given those libraries are incorporated as components of the machine learning evaluation procedure, allowing for the simultaneous optimization of model accuracy and simplicity. Finally, the proposed methods are evaluated using synthetic data designed to mimic known mechanisms for repertoire generation, with pre-designated parameters. We also apply these methods to known biological sources with unknown repertoire generation parameters, and conclude with a discussion of how this method can be used to identify potential novel alleles.
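    The thesis's Bayesian, information-theoretic machinery is considerably richer than anything shown here, but the accuracy-versus-simplicity trade-off it optimizes can be illustrated with a toy description-length score in Python (all costs and sequences hypothetical): a candidate germline library pays for its own size plus the mutations needed to explain each read from its best-matching allele.

```python
def hamming(a, b):
    """Number of mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def description_length(library, reads, lib_cost=2.0, mut_cost=4.0):
    """Toy score: bits to encode the candidate germline library itself,
    plus bits for the mutations needed to explain each read from its
    best-matching allele. Smaller is better; the costs are arbitrary."""
    lib_bits = sum(len(allele) for allele in library) * lib_cost
    read_bits = sum(min(hamming(read, allele) for allele in library) * mut_cost
                    for read in reads)
    return lib_bits + read_bits

# Usage: with these reads, adding a second allele costs more than it saves.
reads = ["ACGTACGT", "ACGTACGA", "ACGTACGT"]
print(description_length(["ACGTACGT"], reads))              # 16 + 4 = 20
print(description_length(["ACGTACGT", "ACGTACGA"], reads))  # 32 + 0 = 32
```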

    Spectrum analysis of LTI continuous-time systems with constant delays: A literature overview of some recent results

    In recent decades, increasingly intensive research attention has been given to dynamical systems containing delays and those affected by the after-effect phenomenon. Such research covers a wide range of human activities, and the solutions of related engineering problems often require interdisciplinary cooperation. Knowledge of the spectrum of these so-called time-delay systems (TDSs) is crucial for the analysis of their dynamical properties, especially stability, periodicity, and damping. A great volume of mathematical methods and techniques for analyzing the spectrum of TDSs has been developed and applied in recent times. Although a broad family of nonlinear, stochastic, sampled-data, time-variant or time-varying-delay systems has been considered, the study of the most fundamental continuous linear time-invariant (LTI) TDSs with fixed delays remains the dominant research direction, with ever-increasing new results and novel applications. This paper is primarily aimed at a systematic literature overview of recent (mostly published between 2013 and 2017) advances regarding the spectrum analysis of LTI-TDSs. Specifically, a total of 137 collected articles, those most closely related to the research area, are reviewed. This review has two main objectives: first, to provide the reader with a detailed literature survey of selected recent results on the topic, and second, to suggest possible future research directions to be tackled by scientists and engineers in the field.
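    The survey's subject can be made concrete in the simplest scalar case, where the infinite spectrum of an LTI-TDS has a closed form via the Lambert W function, one of the analysis techniques such overviews cover. A minimal Python sketch (system and parameters chosen purely for illustration):

```python
import numpy as np
from scipy.special import lambertw

def tds_spectrum(a, tau, branches=range(-5, 6)):
    """Characteristic roots of the scalar LTI time-delay system
        x'(t) = a * x(t - tau),
    whose characteristic equation s - a*exp(-s*tau) = 0 is solved
    branch-wise by s_k = W_k(a*tau) / tau (Lambert W function)."""
    return np.array([lambertw(a * tau, k) / tau for k in branches])

# Usage: for a = -1, tau = 1 (a*tau inside the stability region),
# every computed root lies strictly in the left half-plane.
roots = tds_spectrum(-1.0, 1.0)
print(max(roots.real))        # rightmost real part, approx -0.318
print(all(roots.real < 0))    # True
```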