
    Notes on the Discontinuous Galerkin methods for the numerical simulation of hyperbolic equations

    The roots of Discontinuous Galerkin (DG) methods are usually attributed to Reed and Hill, in a paper published in 1973 on the numerical approximation of the neutron transport equation [18]. In fact, the adventure really started with a rather thorough series of five papers by Cockburn and Shu in the late 80's [7, 5, 9, 6, 8]. The method, which can be seen as a compromise between Finite Elements (the core of the method being a weak formulation) and Finite Volumes (the basis functions are defined cell-wise, the cells being the elements of the primal mesh), then grew in fame and gradually and successfully spread to all domains of the numerical integration of Partial Differential Equations. In particular, one can cite the foundational papers for the common treatment of convection-diffusion equations [4, 3] or the treatment of pure elliptic equations [2, 17]. For more information on the history of Discontinuous Galerkin methods, please refer to section 1.1 of [15]. Today, DG methods are widely used in many different settings and have applications in almost all fields of applied mathematics. (TODO: cite applications and structured/unstructured meshes, steady/unsteady, etc...). The method is now mature enough to deserve entire textbooks, among which I cite a reference book on Nodal DG Methods by Hesthaven and Warburton [15], which covers the foundations of DG integration, the numerical analysis of its linear behavior, and its generalization to multiple dimensions. Lately, since 2010, thanks to the groundwork of Zhang and Shu [26, 27, 25, 28, 29], Discontinuous Galerkin methods are finally able to combine high order accuracy with the guaranteed preservation of convex constraints, such as the positivity of a given quantity. These new steps forward are very promising, since they bring us very close to the "Ultimate Conservative Scheme" [23, 1].
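    To make the compromise between Finite Elements and Finite Volumes concrete, here is the standard semi-discrete DG weak form for a 1D scalar conservation law (a textbook illustration in the spirit of [15], not a formula quoted from these notes): inside each cell the equation is tested against polynomials, as in Finite Elements, while neighboring cells communicate only through a numerical flux at the interfaces, as in Finite Volumes.
```latex
% Semi-discrete DG scheme for u_t + f(u)_x = 0 on cells K_j = [x_{j-1/2}, x_{j+1/2}]:
% find u_h, polynomial of degree k on each K_j, such that for every test polynomial v_h
\int_{K_j} \partial_t u_h \, v_h \,\mathrm{d}x
  - \int_{K_j} f(u_h)\, \partial_x v_h \,\mathrm{d}x
  + \hat{f}_{j+1/2}\, v_h(x_{j+1/2}^-)
  - \hat{f}_{j-1/2}\, v_h(x_{j-1/2}^+) = 0,
\qquad
\hat{f}_{j+1/2} = \hat{f}\!\left(u_h(x_{j+1/2}^-),\, u_h(x_{j+1/2}^+)\right),
```
    where \hat{f} is a consistent numerical flux (e.g. upwind or local Lax-Friedrichs) evaluated from the traces of u_h on both sides of each interface.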

    Artificial Sequences and Complexity Measures

    In this paper we exploit concepts of information theory to address the fundamental problem of identifying and defining the most suitable tools to extract, in an automatic and agnostic way, information from a generic string of characters. We introduce in particular a class of methods which use in a crucial way data compression techniques in order to define a measure of remoteness and distance between pairs of sequences of characters (e.g. texts) based on their relative information content. We also discuss in detail how specific features of data compression techniques could be used to introduce the notion of the dictionary of a given sequence and of an Artificial Text, and we show how these new tools can be used for information extraction purposes. We point out the versatility and generality of our method, which applies to any kind of corpora of character strings independently of the type of coding behind them. We consider as a case study linguistically motivated problems and we present results for automatic language recognition, authorship attribution and self-consistent classification. Comment: Revised version, with major changes, of the previous "Data Compression approach to Information Extraction and Classification" by A. Baronchelli and V. Loreto. 15 pages; 5 figures
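    As a rough illustration of this compression-based philosophy (this sketch uses the normalized compression distance built on zlib, a related but generic measure, not necessarily the exact remoteness measure the authors define), one can estimate how much information two texts share from the lengths of their compressed forms:
```python
import zlib


def compressed_size(data: bytes) -> int:
    """Length in bytes of the zlib-compressed representation of a byte string."""
    return len(zlib.compress(data, 9))


def ncd(x: str, y: str) -> float:
    """Normalized compression distance between two texts.

    Values near 0 suggest the sequences share most of their information;
    values near 1 suggest they are informationally unrelated.
    """
    bx, by = x.encode("utf-8"), y.encode("utf-8")
    cx, cy, cxy = compressed_size(bx), compressed_size(by), compressed_size(bx + by)
    return (cxy - min(cx, cy)) / max(cx, cy)


if __name__ == "__main__":
    italian = "Nel mezzo del cammin di nostra vita mi ritrovai per una selva oscura"
    english = "Midway upon the journey of our life I found myself within a forest dark"
    # Repeated text compresses well against itself: relatively small distance.
    print(ncd(italian, italian + " " + italian))
    # Texts in different languages share less structure: larger distance.
    print(ncd(italian, english))
```
    On realistic corpora (long texts rather than single sentences) such compression-based distances are what make automatic language recognition and authorship attribution possible without any hand-crafted linguistic features.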

    Modeling Sensor Knowledge of a National Hydrologic Information System

    In this paper we describe our experience in modeling and using sensor knowledge of a national hydrologic information system in Spain. We developed a web application called VSAIH supported by a knowledge-based system to analyze sensor data and to generate explanations that help users to make decisions based on hydrologic behavior. In the paper, we describe the characteristics of the infrastructure of hydrologic sensors and the representation we used to model sensor knowledge to provide support to the VSAIH application. We also describe semi-automatic procedures that we applied to construct the final model.
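    A hypothetical sketch of what a sensor-knowledge representation for a hydrologic network might look like (all class names, fields and identifiers below are illustrative assumptions, not the actual VSAIH model described in the paper): sensors are linked to the physical objects of the basin they observe and to the variable they measure, so that the knowledge-based system can reason over them.
```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class HydrologicObject:
    """A physical element of the basin a sensor can be attached to (illustrative)."""
    object_id: str
    kind: str          # e.g. "reservoir", "river_reach", "canal"
    name: str


@dataclass
class Sensor:
    """A single sensor of the national network (illustrative fields)."""
    sensor_id: str
    variable: str      # e.g. "water_level", "flow", "rainfall"
    unit: str          # e.g. "m", "m3/s", "mm/h"
    sampling_period_min: int
    attached_to: HydrologicObject


@dataclass
class SensorNetwork:
    """Collection of sensors plus a simple lookup helper."""
    sensors: List[Sensor] = field(default_factory=list)

    def measuring(self, variable: str) -> List[Sensor]:
        return [s for s in self.sensors if s.variable == variable]


# Example usage with made-up identifiers.
ebro_reach = HydrologicObject("R-001", "river_reach", "Ebro at Zaragoza")
network = SensorNetwork([
    Sensor("S-101", "water_level", "m", 15, ebro_reach),
    Sensor("S-102", "flow", "m3/s", 15, ebro_reach),
])
print([s.sensor_id for s in network.measuring("flow")])
```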

    Living Knowledge

    Diversity, especially as manifested in language and knowledge, is a function of local goals, needs, competences, beliefs, culture, opinions and personal experience. The Living Knowledge project considers diversity as an asset rather than a problem. With the project, foundational ideas emerged from the synergic contribution of different disciplines, methodologies (with which many partners were previously unfamiliar) and technologies, and flowed into concrete diversity-aware applications such as the Future Predictor and the Media Content Analyser, providing users with better structured information while coping with Web-scale complexities. The key notions of diversity, fact, opinion and bias have been defined in relation to three methodologies: Media Content Analysis (MCA), which operates from a social sciences perspective; Multimodal Genre Analysis (MGA), which operates from a semiotic perspective; and Facet Analysis (FA), which operates from a knowledge representation and organization perspective. A conceptual architecture that pulls all of them together has become the core of the tools for automatic extraction and of the way they interact. In particular, the conceptual architecture has been implemented in the Media Content Analyser application. The scientific and technological results obtained are described in the following.

    Measurement of statistical evidence on an absolute scale following thermodynamic principles

    Statistical analysis is used throughout biomedical research and elsewhere to assess strength of evidence. We have previously argued that typical outcome statistics (including p-values and maximum likelihood ratios) have poor measure-theoretic properties: they can erroneously indicate decreasing evidence as data supporting a hypothesis accumulate, and they are not amenable to calibration, which is necessary for meaningful comparison of evidence across different study designs, data types, and levels of analysis. We have also previously proposed that thermodynamic theory, which allowed for the first time the derivation of an absolute measurement scale for temperature (T), could be used to derive an absolute scale for evidence (E). Here we present a novel thermodynamically based framework in which measurement of E on an absolute scale, for which "one degree" always means the same thing, becomes possible for the first time. The new framework invites us to think about statistical analyses in terms of the flow of (evidential) information, placing this work in the context of a growing literature on connections among physics, information theory, and statistics. Comment: Final version of manuscript as published in Theory in Biosciences (2013)

    Getting the message across : ten principles for web animation

    The growing use of animation in Web pages testifies to the increasing ease with which such multimedia components can be created. This trend indicates a commitment to animation that is often unmatched by the skill of the implementers. The present paper details a set of ten commandments for web animation, intending to sensitise budding animators to key aspects that may impair the communicational effectiveness of their animation. These guidelines are drawn from an extensive literature survey coloured by personal experience of using Web animation packages. Our ten principles are further elucidated by a Web-based on-line tutorial.