    Digital waveguide modeling for wind instruments: building a state-space representation based on the Webster-Lokshin model

    This paper deals with digital waveguide modeling of wind instruments. It presents the application of state-space representations to the refined Webster-Lokshin acoustic model. This model describes the propagation of longitudinal waves in axisymmetric acoustic pipes with a varying cross-section and visco-thermal losses at the walls, without assuming planar or spherical waves. Moreover, three types of shape discontinuity can be taken into account (radius, slope, and curvature). The purpose of this work is to build low-cost digital simulations in the time domain based on the Webster-Lokshin model. First, decomposing a resonator into independent elementary parts and isolating the delay operators leads to a Kelly-Lochbaum network of input/output systems and delays. Second, to allow systematic assembly of the elements, their state-space representations are derived in discrete time. Standard tools of automatic control are then used to reduce the complexity of the digital simulations in the time domain. The method is applied to a real trombone, and simulation results are presented and compared with measurements. This approach appears promising in terms of modularity, computational cost, and accuracy for any acoustic resonator built from tubes.
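    As a rough illustration of the Kelly-Lochbaum structure mentioned above, the following Python sketch simulates two lossless cylindrical tube sections joined by a scattering junction, each section modeled as a pair of travelling-wave delay lines. It is a minimal stand-in, not the Webster-Lokshin elements themselves (no curvature, no visco-thermal losses), and all names and constants are illustrative.

        import numpy as np

        # Minimal Kelly-Lochbaum sketch: two lossless cylindrical tube
        # sections joined by a scattering junction. This is NOT the
        # Webster-Lokshin model; it only illustrates the network of
        # input/output systems and delays described in the abstract.
        def simulate(excitation, area1=1.0, area2=2.0, delay=10):
            # Junction reflection coefficient set by the area discontinuity.
            k = (area1 - area2) / (area1 + area2)
            n = len(excitation)
            # Forward/backward travelling-wave delay lines per section.
            fwd1 = np.zeros(delay); bwd1 = np.zeros(delay)
            fwd2 = np.zeros(delay); bwd2 = np.zeros(delay)
            out = np.zeros(n)
            for t in range(n):
                # Waves arriving at each end of the delay lines this sample.
                at_junc_from1 = fwd1[-1]   # forward wave reaching the junction
                at_junc_from2 = bwd2[-1]   # backward wave reaching the junction
                at_input = bwd1[-1]        # backward wave reaching the input end
                at_open_end = fwd2[-1]     # forward wave reaching the open end
                # Kelly-Lochbaum scattering at the area discontinuity.
                into2 = (1 + k) * at_junc_from1 - k * at_junc_from2
                into1 = k * at_junc_from1 + (1 - k) * at_junc_from2
                # Advance each delay line one sample and feed its write end.
                fwd1 = np.roll(fwd1, 1); fwd1[0] = excitation[t] + at_input
                bwd1 = np.roll(bwd1, 1); bwd1[0] = into1
                fwd2 = np.roll(fwd2, 1); fwd2[0] = into2
                bwd2 = np.roll(bwd2, 1); bwd2[0] = -0.9 * at_open_end  # lossy open end
                out[t] = at_open_end
            return out

        impulse = np.zeros(500); impulse[0] = 1.0
        response = simulate(impulse)

    Feeding an impulse through simulate yields the network's impulse response; the paper's refined approach would replace each lossless section with a discrete-time state-space system while keeping the same arrangement of junctions and delays.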

    Accelerating scientific codes by performance and accuracy modeling

    Scientific software is often driven by multiple parameters that affect both accuracy and performance. Since finding the optimal configuration of these parameters is a highly complex task, it is extremely common for the software to be used suboptimally. In a typical scenario, accuracy requirements are imposed and attained at the cost of suboptimal performance. In this paper, we present a methodology for the automatic selection of parameters for simulation codes, and a corresponding prototype tool. To be amenable to our methodology, the target code must expose the parameters affecting accuracy and performance, and formulas must be available for the error bounds and computational complexity of the underlying methods. As a case study, we consider the particle-particle particle-mesh (PPPM) method from the LAMMPS suite for molecular dynamics, and use our tool to identify configurations of the input parameters that achieve a given accuracy in the shortest execution time. When compared with the configurations suggested by expert users, the parameters selected by our tool yield reductions in time-to-solution ranging between 10% and 60%. In other words, for the typical scenario where a fixed number of core-hours is granted and simulations of a fixed number of timesteps are to be run, our tool may allow up to twice as many simulations. While we develop our ideas using LAMMPS as the computational framework and the PPPM method for dispersion as a case study, the methodology is general and valid for a range of software tools and methods.
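    To make the selection problem concrete, here is a minimal, hedged Python sketch: given closed-form error and cost models, pick the cheapest configuration that meets an accuracy target. The formulas and parameter names below are placeholders, not the actual PPPM bounds used in the paper.

        import itertools

        # Hypothetical error-bound and cost models exposed by a simulation
        # code; placeholders, not the paper's actual PPPM formulas.
        def error_bound(cutoff, grid_points, order):
            return 1.0 / (cutoff**2 * grid_points * order)

        def predicted_cost(cutoff, grid_points, order):
            # e.g. real-space work growing with the cutoff plus mesh work
            return cutoff**3 + grid_points * order**2

        def select_parameters(accuracy_target, search_space):
            # Brute-force grid search: cheapest configuration whose
            # predicted error meets the accuracy target.
            best, best_cost = None, float("inf")
            for config in itertools.product(*search_space.values()):
                params = dict(zip(search_space.keys(), config))
                if error_bound(**params) <= accuracy_target:
                    cost = predicted_cost(**params)
                    if cost < best_cost:
                        best, best_cost = params, cost
            return best

        space = {"cutoff": [2.0, 4.0, 6.0],
                 "grid_points": [16, 32, 64],
                 "order": [3, 5, 7]}
        print(select_parameters(1e-3, space))

    A real tool would replace the exhaustive sweep with a smarter search, but the structure of the problem, minimizing predicted cost subject to an error bound, is the same.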

    Delineating Parameter Unidentifiabilities in Complex Models

    Scientists use mathematical modelling to understand and predict the properties of complex physical systems. In highly parameterised models there often exist relationships between parameters over which model predictions are identical, or nearly so; these are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, and the functional relations defining them, for generic models. This allows for model simplification and an appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast-timescale subsystems, as well as the regimes in which such approximations are valid. We base our algorithm on a novel quantification of regional parametric sensitivity: multiscale sloppiness. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher Information Matrix; this is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even for simple (non)linear systems. Our algorithm provides a tractable alternative. We finally apply our methods to a large-scale benchmark Systems Biology model of NF-κB, uncovering previously unknown unidentifiabilities.
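    Below is a minimal sketch of the local, FIM-based analysis that the abstract contrasts with multiscale sloppiness. The toy model is illustrative, not from the paper: its two rate parameters enter only through their sum, so the Fisher Information Matrix has a near-zero eigenvalue along the unidentifiable direction.

        import numpy as np

        # Toy model y(t) = exp(-(k1 + k2) * t): the two rates are
        # structurally unidentifiable, since only k1 + k2 matters.
        def model(theta, t):
            k1, k2 = theta
            return np.exp(-(k1 + k2) * t)

        def fisher_information(theta, t, sigma=0.05, eps=1e-6):
            # Sensitivities dy/dtheta_i by central finite differences.
            J = np.zeros((len(t), len(theta)))
            for i in range(len(theta)):
                dp = np.zeros(len(theta)); dp[i] = eps
                J[:, i] = (model(theta + dp, t) - model(theta - dp, t)) / (2 * eps)
            # FIM for i.i.d. Gaussian measurement noise with std sigma.
            return J.T @ J / sigma**2

        t = np.linspace(0, 5, 50)
        fim = fisher_information(np.array([0.7, 0.3]), t)
        print("FIM eigenvalues:", np.linalg.eigvalsh(fim))
        # One eigenvalue is ~0: the direction k1 - k2 is unidentifiable.

    The abstract's point is that this local spectrum only characterises identifiability for infinitesimal measurement noise; multiscale sloppiness extends the diagnosis to the finite-uncertainty regime.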

    Towards a Smart World: Hazard Levels for Monitoring of Autonomous Vehicles’ Swarms

    This work explores the creation of quantifiable indices to monitor the safe operation and movement of families of autonomous vehicles (AVs) in restricted highway-like environments. Specifically, it explores the creation of ad-hoc rules for monitoring the lateral and longitudinal movement of multiple AVs based on behavior that mimics swarm and flock movement (or particle swarm motion). This exploratory work is sponsored by the Emerging Leader Seed grant program of the Mineta Transportation Institute and aims at investigating the feasibility of adapting particle swarm motion to control families of autonomous vehicles. Specifically, it explores how particle swarm approaches can be augmented with safety thresholds and fail-safe mechanisms to avoid collisions in off-nominal situations. This concept leverages the integration of the notion of hazard and danger levels (i.e., measures of the “closeness” to a given accident scenario, typically used in robotics) with the concepts of safety distance and separation/collision avoidance for ground vehicles. A draft implementation of four hazard-level functions indicates that safety thresholds can be set up to autonomously trigger lateral and longitudinal motion control through three main rules, based respectively on speed, heading, and braking distance, to steer the vehicle and maintain separation/avoid collisions in families of autonomous vehicles. The concepts presented here can be used to set up a high-level framework for developing artificial intelligence algorithms that serve as a back-up to standard machine learning approaches for the control and steering of autonomous vehicles. Although there are no constraints on the concept’s implementation, this work is expected to be most relevant for highly automated Level 4 and Level 5 vehicles, capable of communicating with each other and operating in the presence of a ground control center monitoring the swarm’s operations.
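    As a hedged sketch of how such threshold-triggered rules might be organized (the thresholds, formulas, and names below are illustrative, not the project’s actual design), consider hazard levels for a two-vehicle encounter based on braking distance, lateral separation, and heading:

        from dataclasses import dataclass

        # Illustrative hazard-level functions for a two-vehicle encounter.
        # Thresholds and formulas are placeholders, not the project's design.
        @dataclass
        class VehicleState:
            x: float        # longitudinal position (m)
            y: float        # lateral position (m)
            speed: float    # m/s
            heading: float  # rad, relative to the lane axis

        def braking_distance(speed, decel=6.0):
            # Kinematic stopping distance at constant deceleration.
            return speed**2 / (2 * decel)

        def longitudinal_hazard(lead, follower):
            # Rises toward 1 as the gap shrinks below stopping distance.
            gap = lead.x - follower.x
            needed = braking_distance(follower.speed)
            return min(1.0, max(0.0, needed / max(gap, 1e-6)))

        def lateral_hazard(a, b, lane_width=3.5):
            # Based on lateral separation relative to lane width.
            return min(1.0, max(0.0, 1.0 - abs(a.y - b.y) / lane_width))

        def heading_hazard(a, b, max_angle=0.2):
            # Based on converging headings.
            return min(1.0, abs(a.heading - b.heading) / max_angle)

        def control_action(lead, follower, threshold=0.7):
            # Trigger fail-safe maneuvers when any hazard crosses a threshold.
            if longitudinal_hazard(lead, follower) > threshold:
                return "brake"
            if (lateral_hazard(lead, follower) > threshold or
                    heading_hazard(lead, follower) > threshold):
                return "steer_away"
            return "maintain"

        lead = VehicleState(x=50.0, y=0.0, speed=25.0, heading=0.0)
        follower = VehicleState(x=20.0, y=0.3, speed=30.0, heading=0.02)
        print(control_action(lead, follower))  # -> "brake"

    In a swarm setting, each vehicle would evaluate such hazard levels against every neighbor, with the monitoring ground control center escalating when thresholds are crossed.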