    Characterization and Compensation of Network-Level Anomalies in Mixed-Signal Neuromorphic Modeling Platforms

    Advancing the size and complexity of neural network models leads to an ever-increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, specifically limited hardware resources, limited parameter configurability and parameter variations. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond that required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.

    FlexMix Version 2: Finite Mixtures with Concomitant Variables and Varying and Constant Parameters

    flexmix provides infrastructure for the flexible fitting of finite mixture models in R using the expectation-maximization (EM) algorithm or one of its variants. The functionality of the package has been enhanced: concomitant variable models as well as varying and constant parameters for the component-specific generalized linear regression models can now be fitted. The application of the package is demonstrated on several examples, the implementation is described, and examples are given to illustrate how new drivers for the component-specific models and the concomitant variable models can be defined.
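    As a sketch of the model class the package addresses (the notation below is assumed for illustration, not quoted from the paper), a finite mixture with a concomitant variable model lets the component weights depend on concomitant variables w through a multinomial logit, while each component is a generalized linear regression density for y given covariates x:

```latex
% Finite mixture with a concomitant variable model (assumed notation, for illustration):
% component weights depend on concomitant variables w via a multinomial logit,
% and each component k is a GLM density with parameters \beta_k.
h(y \mid x, w) \;=\; \sum_{k=1}^{K} \pi_k(w, \alpha)\, f_k(y \mid x, \beta_k),
\qquad
\pi_k(w, \alpha) \;=\; \frac{\exp(w^\top \alpha_k)}{\sum_{j=1}^{K} \exp(w^\top \alpha_j)} .
```

    "Varying and constant parameters" then corresponds to whether individual elements of the component-specific parameters ÎČ_k are allowed to differ across components or are constrained to a common value; EM alternates between computing posterior component probabilities and refitting these weighted regressions.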

    A Review of Multi-Compartment Infectious Disease Models

    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/156488/2/insr12402.pdf http://deepblue.lib.umich.edu/bitstream/2027.42/156488/1/insr12402_am.pd

    Survival of adults with HIV-1 infection or Type 2 diabetes in the South African private sector

    Background: The scale-up of combination antiretroviral therapy (ART), one of the greatest pharmacological interventions in human history, has reduced adult HIV-related deaths in South Africa by around 70% between the peak in 2005 and 2019, but it is unclear from published studies in South Africa and globally which subgroups of HIV-infected adults, defined by both baseline and current (time-updated) characteristics, may achieve HIV-uninfected levels of mortality and which subgroups have relative mortality that is within the insurance industry's threshold for insurability. Relative mortality estimates are important in insurance since insurability is measured by relative mortality, not absolute mortality or other measures such as life expectancy. As HIV-infected people survive to increasingly longer durations of ART, there is a need for patients, healthcare practitioners, ART programmes, other modellers, insurers and policymakers to understand the prognosis when measured from later durations on ART based on current characteristics. However, most South African studies are based on baseline characteristics, short follow-up times, and low patient volumes, and they lack an HIV-uninfected control selected from the same subpopulation for estimating relative mortality. At the time of initiating this research in 2013/2014, some insurers were declining HIV-infected South Africans applying for higher cover amounts spanning the whole of life. Further, other chronic conditions such as Type 2 Diabetes (DM2) had already been insurable for many years in South Africa. At the same time, the ART Cohort Collaboration (ART-CC) assessed the insurability of HIV-infected people starting ART in Europe and issued an urgent call for a corresponding study in South Africa. This study responds to this call and, to the author's knowledge, is the first study outside Europe to assess the insurability of HIV-infected adults starting ART, estimating the relative mortality of South African HIV-infected adults initiating ART against an HIV-uninfected control (comparator) chosen from the same subpopulation, measured from multiple time points on ART using both baseline and current characteristics, long follow-up times, large patient volumes and accurate mortality ascertainment. The study identifies patient subgroups with insurable levels of relative risk as well as subgroups that attain HIV-uninfected levels of all-cause mortality, and is fundamental for evaluating ART programmes and for informing evidence-based insurance decisions that are actuarially sound and treat insurance customers fairly.
    Methods: A retrospective cohort study is performed using patient data from a large medical scheme population and Aid for AIDS (AfA), a private-sector HIV managed care programme in South Africa. Three cohorts are extracted from the same medical scheme population: HIV-infected adults starting ART, patients with DM2 starting hypoglycaemic therapy, and an HIV-uninfected and DM2-negative control (comparator). Mortality is ascertained via linkage with the national death registry. Relative all-cause mortality risk (relative risk) is estimated using a generalized linear model (GLM) with a Poisson error distribution, with the expected numbers of deaths, derived from control-cohort mortality by age, gender and population group, specified as an offset. To meet insurers' needs for estimates of future relative risk that remain constant across the policy lifetime and incorporate current characteristics nearest to the time of applying for insurance, relative risk is estimated from each 6-month time point on ART over the remaining follow-up, according to the patient's length of time on ART at the time of applying for insurance, current CD4 count and viral load, and baseline CD4 count.
    Results: In the HIV cohort, 8,920 deaths were recorded in 77,325 patients starting ART between 2000 and 2013, followed for 315,341 person-years of observation (PYO) (median follow-up 3.23 years [IQR 2.04;5.30]). In the DM2 cohort, 7,970 deaths were recorded in 67,705 patients starting antihyperglycaemic therapy between 2000 and 2013, followed for 365,547 PYO (median follow-up 6.20 years [IQR 3.85;9.53]). In the control, 24,838 deaths were recorded in 512,940 patients followed for 3,276,501 PYO. The median CD4 count in the overall HIV cohort reached the lower limit of the CD4 count range in HIV-uninfected people (500 cells/ÎŒl) after 5 years on ART and, after 12 months on ART, 77% of patients were virologically suppressed (viral load ≀400 copies/ml), increasing to 80% after 10 years on ART. Within the first 6 months on ART, 21% of patients attained both a CD4 count above 200 cells/ÎŒl and a suppressed viral load, increasing to 49% in months 6-12, 68% in years 1-2 and 80% after 10 years on ART. In the overall HIV cohort, 90% of patients at risk from all time points 6 months or later since ART initiation were estimated to have relative risk within the insurance industry threshold (<5). Among patients attaining current CD4 counts of 200+ cells/ÎŒl and suppressed viral loads (≀400 copies/ml) at 6 months on ART or later, 100% of patients at risk corresponded to relative risk levels well below the insurance industry threshold (<5). 90% of patients at risk from 1 year of ART onwards had a relative risk lower than or comparable to that of the DM2 cohort, implying that the majority of patients on ART had relative risk comparable to those with a chronic condition that is already insurable. Baseline CD4 count was prognostic for relative risk only within the first three years of ART, after adjusting for the immunological and virological response to ART. Patients attaining a current CD4 count of 200+ cells/ÎŒl and a suppressed viral load (≀400 copies/ml) had the lowest relative risk, reducing with time on ART and approaching 1 after 3 years on ART in the black population group, indicating attainment of HIV-uninfected mortality levels. However, in the non-black population group, relative risk was 1.59 [95% CI 1.30;1.88] times higher than in the black population group, which, while still within the insurance industry threshold, is higher than HIV-uninfected levels of mortality. A further sub-analysis showed that while the immunological and virological response to ART was similar to that reported by the ART-CC in Europe, the level of relative risk was similar only in the non-black population group, and the effect of current age on relative risk was strongly modified by population group.
    Conclusions: The vast majority of this cohort of South African HIV-infected adults starting ART have both insurable levels of relative risk and relative risk comparable to that of DM2 when measured from multiple time points on ART by baseline and current characteristics. The only subgroup with relative risk exceeding the insurance industry threshold comprised patients with current CD4 counts <200 cells/ÎŒl and unsuppressed viral loads (>400 copies/ml). The vast majority of this cohort attained CD4 counts ≄200 cells/ÎŒl and suppressed viral loads (≀400 copies/ml), and their mortality approached HIV-uninfected levels after 3 years on ART. A novel analytics method is presented for modelling relative risk that better meets insurers' needs than existing studies, which report relative risk in defined intervals of ART using dated patient characteristics.
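    The relative-risk model described in the Methods, a Poisson GLM with the expected number of deaths entering as an offset, can be sketched as follows. The data frame, variable names and strata below are illustrative assumptions, not the study's dataset or code; exponentiated coefficients play the role of relative risk versus the HIV-uninfected control.

```python
# Hedged sketch: Poisson GLM for relative all-cause mortality with expected deaths
# as an offset (standardized-mortality-ratio style). All numbers below are made up.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative person-time table: observed deaths per stratum, plus the deaths expected
# from control-cohort rates in the matching age/gender/population-group stratum.
df = pd.DataFrame({
    "time_on_art": ["0.5-1y", "1-2y", "2-5y", "5y+"],
    "cd4_200plus_suppressed": [0, 1, 1, 1],   # current CD4 >= 200 and VL <= 400 (indicator)
    "observed_deaths": [310, 180, 240, 90],
    "expected_deaths": [45.0, 60.0, 130.0, 70.0],
})

X = sm.add_constant(df[["cd4_200plus_suppressed"]])
model = sm.GLM(
    df["observed_deaths"],
    X,
    family=sm.families.Poisson(),
    offset=np.log(df["expected_deaths"]),  # expected deaths enter as a log offset
)
result = model.fit()

# exp(coefficients) are relative risks versus the HIV-uninfected control cohort.
print(np.exp(result.params))
```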

    Fluctuation scaling in complex systems: Taylor's law and beyond

    Complex systems consist of many interacting elements which participate in some dynamical process. The activity of various elements is often different and the fluctuation in the activity of an element grows monotonically with the average activity. This relationship is often of the form "fluctuations ≈ const. × average^α", where the exponent α is predominantly in the range [1/2, 1]. This power law has been observed in a very wide range of disciplines, ranging from population dynamics through the Internet to the stock market, and it is often treated under the names Taylor's law or fluctuation scaling. This review attempts to show how general the above scaling relationship is by surveying the literature, as well as by reporting some new empirical data and model calculations. We also show some basic principles that can underlie the generality of the phenomenon. This is followed by a mean-field framework based on sums of random variables. In this context the emergence of fluctuation scaling is equivalent to some corresponding limit theorems. In certain physical systems fluctuation scaling can be related to finite size scaling. Comment: 33 pages, 20 figures, 2 tables, submitted to Advances in Physics.
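    As a minimal illustration of the scaling relation quoted above, the sketch below generates elements with very different average activity and estimates the exponent α from the slope of log(fluctuation) versus log(average). The synthetic Poisson data and the least-squares fit are assumptions of this sketch, not material from the review.

```python
# Hedged sketch: estimating the fluctuation-scaling (Taylor's law) exponent alpha
# from synthetic data, via a linear fit in log-log space.
import numpy as np

rng = np.random.default_rng(42)

avgs, flucts = [], []
for lam in np.logspace(0, 4, 30):            # elements with widely differing average activity
    activity = rng.poisson(lam, size=5000)   # independent Poisson-like events -> alpha near 1/2
    avgs.append(activity.mean())
    flucts.append(activity.std())            # "fluctuation" taken as the standard deviation

# Fit fluctuations ~ const * average**alpha by regression of log(fluct) on log(avg).
alpha, log_const = np.polyfit(np.log(avgs), np.log(flucts), deg=1)
print(f"estimated alpha = {alpha:.2f}")      # roughly 0.5 for this synthetic case
```

    Strongly coupled or externally driven elements push the estimated exponent toward 1, the other end of the range discussed in the review.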

    25 Years of Self-Organized Criticality: Solar and Astrophysics

    Shortly after the seminal paper "Self-Organized Criticality: An explanation of 1/f noise" by Bak, Tang, and Wiesenfeld (1987), the idea was applied to solar physics, in "Avalanches and the Distribution of Solar Flares" by Lu and Hamilton (1991). In the following years, an inspiring cross-fertilization from complexity theory to solar and astrophysics took place, where the SOC concept was initially applied to solar flares, stellar flares, and magnetospheric substorms, and later extended to the radiation belt, the heliosphere, lunar craters, the asteroid belt, the Saturn ring, pulsar glitches, soft X-ray repeaters, blazars, black-hole objects, cosmic rays, and boson clouds. The application of SOC concepts has been performed by numerical cellular automaton simulations, by analytical calculations of statistical (power-law-like) distributions based on physical scaling laws, and by observational tests of theoretically predicted size distributions and waiting time distributions. Attempts have been undertaken to import physical models into the numerical SOC toy models, such as the discretization of magneto-hydrodynamic (MHD) processes. The novel applications also stimulated vigorous debates about the discrimination between SOC models, SOC-like processes, and non-SOC processes, such as phase transitions, turbulence, random-walk diffusion, percolation, branching processes, network theory, chaos theory, fractality, multi-scale behaviour, and other complexity phenomena. We review SOC studies from the last 25 years and highlight new trends, open questions, and future challenges, as discussed during two recent ISSI workshops on this theme. Comment: 139 pages, 28 figures; review based on the ISSI workshops "Self-Organized Criticality and Turbulence" (2012, 2013, Bern, Switzerland).
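    For readers unfamiliar with the cellular-automaton simulations mentioned above, here is a minimal sketch of the Bak-Tang-Wiesenfeld sandpile, the prototypical SOC toy model. Grid size, drive length and the way avalanche size is counted are arbitrary choices of this sketch, not parameters taken from the review.

```python
# Minimal Bak-Tang-Wiesenfeld sandpile sketch (illustrative parameters only).
import numpy as np

def btw_avalanches(n=30, grains=10000, seed=0):
    rng = np.random.default_rng(seed)
    z = np.zeros((n, n), dtype=int)       # local "slope" at each lattice site
    sizes = []
    for _ in range(grains):
        i, j = rng.integers(0, n, size=2)
        z[i, j] += 1                      # slow external driving: drop one grain
        size = 0
        while True:
            unstable = np.argwhere(z >= 4)
            if unstable.size == 0:
                break
            for x, y in unstable:
                z[x, y] -= 4              # topple: redistribute to nearest neighbours
                size += 1
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    if 0 <= x + dx < n and 0 <= y + dy < n:
                        z[x + dx, y + dy] += 1   # grains crossing the edge are lost
        if size:
            sizes.append(size)
    return np.array(sizes)

# After a transient, avalanche sizes follow an approximate power-law distribution.
sizes = btw_avalanches()
```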

    Modeling the Spread of COVID-19 in Spatio-Temporal Context

    This study uses data provided by the Virginia Department of Public Health to illustrate changes in the trend of total COVID-19 cases since they were first recorded in the state. Each of the 93 counties in the state has its own COVID-19 dashboard to help inform decision makers and the public of spatial and temporal counts of total cases. Our analysis shows the differences in relative spread between the counties and compares their evolution in time using a Bayesian conditional autoregressive (CAR) framework. The models are fitted by Markov chain Monte Carlo (MCMC) and incorporate Moran spatial correlation. In addition, Moran's time series modelling techniques were applied to understand the incidence rates. The findings discussed may serve as a template for other studies of a similar nature.
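    As a sketch of the kind of Bayesian conditional autoregressive (CAR) specification referred to above (a generic form assumed for illustration; the study's exact prior and parameterization are not given in the abstract), county-level spatial effects φ_i can be given conditionals of the following type, entering a log-linear model for the expected count ÎŒ_it of county i at time t, with offset E_it (e.g. population) and a temporal term Îł_t:

```latex
% Generic proper CAR prior for county-level spatial effects (assumed form, for illustration;
% w_{ij} = 1 if counties i and j are neighbours, 0 otherwise).
\phi_i \mid \phi_{-i} \;\sim\; \mathcal{N}\!\left(
    \rho \, \frac{\sum_j w_{ij}\,\phi_j}{\sum_j w_{ij}},\;
    \frac{\tau^2}{\sum_j w_{ij}}
\right),
\qquad
\log \mu_{it} \;=\; \log E_{it} + \beta_0 + \phi_i + \gamma_t .
```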

    Approximate Data Analytics Systems

    Today, most modern online services make use of big data analytics systems to extract useful information from the raw digital data. The data normally arrives as a continuous data stream at high speed and in huge volumes. The cost of handling this massive data can be significant. Providing interactive latency when processing the data is often impractical because the data is growing exponentially, even faster than Moore's law predicts. To overcome this problem, approximate computing has recently emerged as a promising solution. Approximate computing is based on the observation that many modern applications are amenable to an approximate, rather than the exact, output. Unlike traditional computing, approximate computing tolerates lower accuracy to achieve lower latency by computing over a partial subset of the input data instead of the entire input. Unfortunately, the advancements in approximate computing are primarily geared towards batch analytics and cannot provide low-latency guarantees in the context of stream processing, where new data continuously arrives as an unbounded stream. In this thesis, we design and implement approximate computing techniques for processing and interacting with high-speed and large-scale stream data to achieve low latency and efficient utilization of resources. To achieve these goals, we have designed and built the following approximate data analytics systems:
    ‱ StreamApprox: a data stream analytics system for approximate computing. This system supports approximate computing for low-latency stream analytics in a transparent way and has the ability to adapt to rapid fluctuations of input data streams. In this system, we designed an online adaptive stratified reservoir sampling algorithm to produce approximate output with bounded error.
    ‱ IncApprox: a data analytics system for incremental approximate computing. This system adopts approximate and incremental computing in stream processing to achieve high throughput and low latency with efficient resource utilization. In this system, we designed an online stratified sampling algorithm that uses self-adjusting computation to produce an incrementally updated approximate output with bounded error.
    ‱ PrivApprox: a data stream analytics system for privacy-preserving and approximate computing. This system supports high-utility, low-latency data analytics while preserving users' privacy. The system is based on the combination of privacy-preserving data analytics and approximate computing.
    ‱ ApproxJoin: an approximate distributed joins system. This system improves the performance of joins, which are critical but expensive operations in big data systems. In this system, we employed a sketching technique (Bloom filter) to avoid shuffling non-joinable data items through the network, and proposed a novel sampling mechanism that executes during the join to obtain an unbiased representative sample of the join output.
    Our evaluation based on micro-benchmarks and real-world case studies shows that these systems can achieve significant performance speedup compared to state-of-the-art systems while tolerating negligible accuracy loss in the analytics output. In addition, our systems allow users to systematically make a trade-off between accuracy and throughput/latency and require no or only minor modifications to existing applications.
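    As a rough illustration of the stratified reservoir sampling idea behind StreamApprox (a simplified sketch, not the system's actual adaptive algorithm; the stratum keys, reservoir size and the sum aggregate are assumptions of this sketch), each substream keeps its own fixed-size reservoir and the aggregate is scaled back by the number of items seen per stratum:

```python
# Hedged sketch: stratified reservoir sampling over a keyed data stream.
import random
from collections import defaultdict

class StratifiedReservoir:
    """Simplified stratified reservoir sampler: one fixed-size reservoir per substream."""

    def __init__(self, per_stratum=100):
        self.k = per_stratum
        self.reservoirs = defaultdict(list)   # stratum -> sampled values
        self.counts = defaultdict(int)        # stratum -> items seen so far

    def add(self, stratum, value):
        self.counts[stratum] += 1
        reservoir = self.reservoirs[stratum]
        if len(reservoir) < self.k:
            reservoir.append(value)
        else:
            # Classic Algorithm R: keep the new item with probability k / n.
            j = random.randrange(self.counts[stratum])
            if j < self.k:
                reservoir[j] = value

    def estimate_sum(self):
        # Scale each stratum's sample mean by the number of items seen in that stratum.
        return sum(self.counts[s] * (sum(r) / len(r))
                   for s, r in self.reservoirs.items() if r)

# Usage sketch: stream of (substream id, value) pairs, e.g. per-source readings.
sampler = StratifiedReservoir(per_stratum=50)
for stratum, value in [("a", 1.0), ("b", 4.0), ("a", 2.0), ("b", 6.0)]:
    sampler.add(stratum, value)
print(sampler.estimate_sum())   # approximate sum over the whole stream
```

    Error bounds of the kind mentioned in the abstract come from the sampling theory of such stratified estimators; the actual systems additionally adapt the per-stratum sample sizes online.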

    Time Series Modelling

    The analysis and modeling of time series are of the utmost importance in various fields of application. This Special Issue is a collection of articles on a wide range of topics, covering stochastic models for time series as well as methods for their analysis, univariate and multivariate time series, real-valued and discrete-valued time series, applications of time series methods to forecasting and statistical process control, and software implementations of methods and models for time series. The proposed approaches and concepts are thoroughly discussed and illustrated with several real-world data examples.
    • 

    corecore