
    Computational Study of a Generic T-tail Transport

    This paper presents a computational study of the static and dynamic stability characteristics of a generic transport T-tail configuration, performed under a NASA research program to improve stall models for civil transports. The NASA Tetrahedral Unstructured Software System (TetrUSS) was used to obtain both static and periodic dynamic solutions at low-speed conditions for three Reynolds number conditions at angles of attack up to 60 deg. The computational results are compared to experimental data. The dominant effects of Reynolds number for the static conditions were found to occur in the stall region. The pitch and roll damping coefficients compared well with experimental results up to 40 deg angle of attack, whereas the yaw damping coefficient agreed only up to 20 deg angle of attack.

    Characterization and Compensation of Network-Level Anomalies in Mixed-Signal Neuromorphic Modeling Platforms

    Advancing the size and complexity of neural network models leads to an ever-increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, specifically limited hardware resources, limited parameter configurability, and parameter variations. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures, by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond that required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.

    Negative tension of scroll wave filaments and turbulence in three-dimensional excitable media and application in cardiac dynamics

    Scroll waves are vortices that occur in three-dimensional excitable media. Scroll waves have been observed in a variety of systems including cardiac tissue, where they are associated with cardiac arrhythmias. The disorganization of scroll waves into chaotic behavior is thought to be the mechanism of ventricular fibrillation, whose lethality is widely known. One possible mechanism for this process of scroll wave instability is negative filament tension. It was discovered in 1987 in a simple two-variable model of an excitable medium. Since that time, negative filament tension of scroll waves and the resulting complex, often turbulent dynamics have been studied in many generic models of excitable media as well as in physiologically realistic models of cardiac tissue. In this article, we review the work in this area, from the first simulations in FitzHugh-Nagumo type models to recent studies involving detailed ionic models of cardiac tissue. We discuss the relation between negative filament tension and tissue excitability, and the effects of discreteness in the tissue on the filament tension. Finally, we consider the application of the negative tension mechanism to computational cardiology, where it may be regarded as a fundamental mechanism that explains differences in the onset of arrhythmias in thin and thick tissue.
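    The "simple two-variable model" referred to above is of FitzHugh-Nagumo type. As an illustrative sketch (not code from any of the reviewed studies), a zero-dimensional forward-Euler integration of the standard FitzHugh-Nagumo kinetics shows the excitable/oscillatory local dynamics on which such media are built; the parameter values here are conventional textbook choices, not those of the cited models.

    ```python
    import numpy as np

    def fhn_step(u, v, I, dt=0.05, a=0.7, b=0.8, eps=0.08):
        """One forward-Euler step of the FitzHugh-Nagumo kinetics:
        du/dt = u - u^3/3 - v + I,   dv/dt = eps * (u + a - b*v)."""
        du = u - u**3 / 3.0 - v + I
        dv = eps * (u + a - b * v)
        return u + dt * du, v + dt * dv

    def simulate(I=0.5, steps=4000):
        """Integrate from near the resting state under constant forcing I.
        For this I the fixed point is unstable and the system oscillates."""
        u, v = -1.2, -0.6
        trace = []
        for _ in range(steps):
            u, v = fhn_step(u, v, I)
            trace.append(u)
        return np.array(trace)
    ```

    Coupling many such units diffusively in three dimensions yields the excitable media in which scroll waves and filament dynamics are studied.
    
    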

    The 3-D inelastic analysis methods for hot section components: Brief description

    Advanced 3-D inelastic structural/stress analysis methods and solution strategies are being developed for more accurate yet more cost-effective analysis of components subjected to severe thermal gradients and mechanical loads with steep stress and strain gradients. Anisotropy, time- and temperature-dependent plasticity, and creep effects are also addressed. The approach is to develop four different theories, one linear and three higher order theories (polynomial function, special function, general function). The theories are progressively more complex from linear to general function in order to provide streamlined analysis capability with increasing accuracy for each hot section component, and for different parts of the same component, according to the severity of the local stress, strain, and temperature gradients associated with hot spots, cooling holes, and surface coating cracks. To further enhance the computational effectiveness, the higher order theories will have embedded singularities (cooling passages, for example) in the generic modeling region. Each of the four theories consists of three formulation models derivable from independent theoretical formulations. These formulation models are based on: (1) mechanics of materials; (2) special finite elements; and (3) an advanced formulation to be recommended by the contractor.

    High-Dimensional Bayesian Geostatistics

    With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations whose complexity increases in cubic order with the number of spatial locations and temporal points. This renders such models infeasible for large data sets. This article offers a focused review of two methods for constructing well-defined, highly scalable spatiotemporal stochastic processes. Both of these processes can be used as "priors" for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is ~n floating-point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings.
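    The NNGP construction can be illustrated with a small sketch (ours, not the authors' code): for ordered one-dimensional locations under an exponential covariance, conditioning each location on its m nearest preceding neighbors yields a sparse triangular factor of the precision matrix, replacing the O(n^3) dense inverse with O(n m^3) work. The function names, the 1-D setting, and the parameter values are illustrative assumptions.

    ```python
    import numpy as np

    def exp_cov(s, t, sigma2=1.0, phi=1.0):
        """Exponential covariance between 1-D location arrays s and t."""
        return sigma2 * np.exp(-phi * np.abs(s[:, None] - t[None, :]))

    def nngp_precision(locs, m=3, sigma2=1.0, phi=1.0):
        """NNGP-style (Vecchia) precision matrix for 1-D locations.

        Each ordered location is conditioned only on its m nearest
        *preceding* neighbors, so the triangular factor has at most m
        nonzeros per row and the cost is O(n m^3) instead of O(n^3).
        Returns the precision matrix in sorted-location order.
        """
        n = len(locs)
        order = np.argsort(locs)
        s = locs[order]
        A = np.zeros((n, n))   # sparse lower-triangular regression coeffs
        d = np.zeros(n)        # conditional variances
        d[0] = sigma2
        for i in range(1, n):
            nb = np.arange(max(0, i - m), i)            # preceding neighbors
            C_nn = exp_cov(s[nb], s[nb], sigma2, phi)    # neighbor covariance
            c_in = exp_cov(s[i:i+1], s[nb], sigma2, phi).ravel()
            a = np.linalg.solve(C_nn, c_in)              # kriging weights
            A[i, nb] = a
            d[i] = sigma2 - c_in @ a                     # conditional variance
        # Q = (I - A)^T D^{-1} (I - A); sparse because A has <= m nonzeros/row.
        IA = np.eye(n) - A
        return IA.T @ np.diag(1.0 / d) @ IA, order
    ```

    With m = n - 1 every location conditions on all of its predecessors, so the construction recovers the exact GP precision; smaller m trades a little accuracy for sparsity.
    
    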

    MultiBUGS: A Parallel Implementation of the BUGS Modeling Framework for Faster Bayesian Inference

    MultiBUGS is a new version of the general-purpose Bayesian modeling software BUGS that implements a generic algorithm for parallelizing Markov chain Monte Carlo (MCMC) algorithms to speed up posterior inference of Bayesian models. The algorithm parallelizes evaluation of the product-form likelihoods formed when a parameter has many children in the directed acyclic graph (DAG) representation, and parallelizes sampling of conditionally independent sets of parameters. A heuristic algorithm is used to decide which approach to use for each parameter and to apportion computation across computational cores. This enables MultiBUGS to automatically parallelize the broad range of statistical models that can be fitted using BUGS-language software, making the dramatic speed-ups of modern multi-core computing accessible to applied statisticians, without requiring any experience of parallel programming. We demonstrate the use of MultiBUGS on simulated data designed to mimic a hierarchical e-health linked-data study of methadone prescriptions including 425,112 observations and 20,426 random effects. Posterior inference for the e-health model takes several hours in existing software, but MultiBUGS can perform inference in only 28 minutes using 48 computational cores.
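    The core parallelization idea, splitting a product-form likelihood over a parameter's conditionally independent children and summing the partial log-likelihoods, can be sketched as follows. This is an illustrative Python analogue, not MultiBUGS code (MultiBUGS itself is BUGS-language software); the normal likelihood and the chunking scheme are assumptions made for the example.

    ```python
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def chunk_loglik(args):
        """Log-likelihood contribution of one chunk of children (here:
        normal observations with shared mean mu and unit variance)."""
        y_chunk, mu = args
        return -0.5 * np.sum((y_chunk - mu) ** 2)

    def parallel_loglik(y, mu, n_workers=4):
        """Evaluate a product-form likelihood by splitting a parameter's
        children across workers and summing the partial results.

        MultiBUGS distributes the chunks across separate cores; threads
        are used here only to keep the sketch self-contained (the NumPy
        reductions release the GIL).
        """
        chunks = np.array_split(y, n_workers)
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            parts = pool.map(chunk_loglik, [(c, mu) for c in chunks])
        return sum(parts)
    ```

    Because the children are conditionally independent given the parent, the chunked sum equals the serial log-likelihood exactly, so the MCMC targets the same posterior regardless of the number of workers.
    
    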
