
    Optimizing simulation on shared-memory platforms: The smart cities case

    Modern advancements in computing architectures have been accompanied by new emergent paradigms for running Parallel Discrete Event Simulation models efficiently. Indeed, many new paradigms for effectively using the available underlying hardware have been proposed in the literature. Among these, the Share-Everything paradigm targets massively-parallel shared-memory machines, supporting speculative simulation while taking into account the limits and benefits of this family of architectures. Previous results have shown that this paradigm outperforms traditional speculative strategies (such as data-separated Time Warp systems) whenever the granularity of executed events is small. In this paper, we show the performance implications of this simulation-engine organization when the simulation models have a variable granularity. To this end, we have selected a traffic model tailored for smart city-oriented simulation. Our assessment illustrates the effects of the various tuning parameters of the approach, paving the way to a deeper understanding of this innovative paradigm.
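    A highly simplified sketch of the shared event pool idea behind the Share-Everything paradigm, assuming a single heap protected by a lock from which any worker thread may pick the next event; speculative execution and rollback are omitted, and all names and events are illustrative rather than taken from the paper:

```python
import heapq
import threading

# Shared event pool: all worker threads pull the lowest-timestamp event,
# regardless of which simulation object (e.g. a road intersection) it targets.
# This contrasts with data-separated engines, where each worker owns a
# disjoint partition of the model and its own event queue.
event_pool = []                       # heap of (timestamp, seq, target, payload)
pool_lock = threading.Lock()
seq = 0

def schedule(timestamp, target, payload):
    """Insert an event into the shared pool (hypothetical helper)."""
    global seq
    with pool_lock:
        heapq.heappush(event_pool, (timestamp, seq, target, payload))
        seq += 1

def worker():
    """Repeatedly take and process the earliest pending event."""
    while True:
        with pool_lock:
            if not event_pool:
                return
            timestamp, _, target, payload = heapq.heappop(event_pool)
        # In a real speculative engine this step would run optimistically
        # and be rolled back on timestamp-order violations.
        print(f"t={timestamp:.2f} {target}: {payload}")

schedule(0.0, "intersection_1", "vehicle arrives")
schedule(0.5, "intersection_2", "light turns green")
schedule(0.2, "intersection_1", "vehicle departs")

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```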

    Bayesian learning of models for estimating uncertainty in alert systems: application to air traffic conflict avoidance

    Alert systems detect critical events which can happen in the short term. Uncertainties in the data and in the models used for detection cause alert errors. In the case of air traffic control systems such as Short-Term Conflict Alert (STCA), uncertainty increases errors in alerts of separation loss. Statistical methods that are based on analytical assumptions can provide biased estimates of uncertainties. More accurate analysis can be achieved by using Bayesian Model Averaging, which provides estimates of the posterior probability distribution of a prediction. We propose a new approach to estimating the prediction uncertainty, based on the observation that uncertainty can be quantified by the variance of predicted outcomes. In our approach, predictions for which the variance of the posterior probabilities is above a given threshold are labelled as uncertain. To verify our approach, we calculate a probability of alert based on the extrapolation of the closest point of approach. Using Heathrow airport flight data, we found that alerts are often generated under different conditions, variations in which lead to alert detection errors. Achieving 82.1% accuracy in modelling the STCA system, which is a necessary condition for evaluating the uncertainty in prediction, we found that the proposed method is capable of reducing the uncertain component. Comparison with a bootstrap aggregation method demonstrated a significant reduction of uncertainty in predictions. Realistic estimates of uncertainties will open up new approaches to improving the performance of alert systems.
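    A minimal sketch of the variance-based flagging described above, assuming a hypothetical ensemble of models that each produce a posterior alert probability per prediction; the threshold value, array shapes, and toy numbers are illustrative, not taken from the paper:

```python
import numpy as np

def flag_uncertain(posterior_probs, threshold=0.05):
    """Flag predictions whose across-model variance exceeds a threshold.

    posterior_probs: array of shape (n_models, n_predictions) holding the
    alert probability produced by each model in the ensemble.
    Returns the model-averaged alert probability and a boolean 'uncertain' mask.
    """
    mean_prob = posterior_probs.mean(axis=0)   # model-averaged alert probability
    variance = posterior_probs.var(axis=0)     # spread across ensemble members
    uncertain = variance > threshold           # high variance => uncertain prediction
    return mean_prob, uncertain

# Toy example: 3 ensemble members, 4 aircraft encounters (values are made up)
probs = np.array([[0.90, 0.20, 0.55, 0.05],
                  [0.85, 0.25, 0.10, 0.04],
                  [0.95, 0.15, 0.90, 0.06]])
p, flags = flag_uncertain(probs, threshold=0.05)
print(p)      # averaged alert probabilities
print(flags)  # only the third encounter is flagged as uncertain
```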

    Analysis of Passive Charge Balancing for Safe Current-Mode Neural Stimulation

    Charge balancing has often been considered one of the most critical requirements for neural stimulation circuits. Over the years, several solutions have been proposed to precisely balance the charge transferred to the tissue during the anodic and cathodic phases. Elaborate dynamic current sources/sinks with improved matching, as well as feedback loops, have been proposed, at the cost of increased circuit complexity, area, or power consumption. Here we review the dominant assumptions in safe stimulation protocols and derive mathematical models to determine the effectiveness of passive charge balancing in a typical application scenario.
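    A minimal numeric sketch of the kind of model such an analysis rests on, assuming a simple electrode model (access resistance plus double-layer capacitance) that is passively discharged by shorting between pulses; all component values and the exponential decay model are illustrative assumptions, not results from the paper:

```python
import math

# Biphasic current-controlled pulse with a small anodic/cathodic mismatch
i_cathodic = 1.00e-3      # A, cathodic phase current
i_anodic   = 0.99e-3      # A, anodic phase current (1% mismatch, assumed)
t_phase    = 200e-6       # s, duration of each phase

# Net (unbalanced) charge injected per biphasic pulse
q_residual = (i_cathodic - i_anodic) * t_phase   # coulombs
print(f"residual charge per pulse: {q_residual * 1e9:.2f} nC")

# Passive charge balancing: short the electrode to ground between pulses,
# letting the residual voltage on the double-layer capacitance decay through
# the access resistance (simple RC model of the electrode-tissue interface).
c_dl = 100e-9             # F, double-layer capacitance (assumed)
r_s  = 10e3               # ohm, access/series resistance (assumed)
v_residual = q_residual / c_dl
tau = r_s * c_dl
t_short = 1e-3            # s, shorting window between pulses (assumed)
v_after = v_residual * math.exp(-t_short / tau)
print(f"residual voltage: {v_residual * 1e3:.1f} mV "
      f"-> {v_after * 1e3:.3f} mV after shorting")
```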

    Privacy Implications of In-Network Aggregation Mechanisms for VANETs

    Research on vehicular ad hoc networks (VANETs) is active and ongoing. Proposed applications range from safety and traffic-efficiency applications to entertainment applications. Common to many applications is the need to disseminate possibly privacy-sensitive information, such as location and speed, over larger distances. In-network aggregation is a promising technology that can help to make such privacy-sensitive information available only in the direct vicinity of vehicles instead of communicating it over larger areas. Further away, only aggregated information that is no longer privacy-relevant will be known. At the same time, aggregation mechanisms help to cope with the limited available wireless bandwidth. However, the exact privacy properties of aggregation mechanisms have still not been thoroughly researched. In this paper, we propose a metric to measure the privacy enhancements provided by in-network aggregation and use it to compare existing schemes.
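    A toy sketch of the in-network aggregation idea, assuming hypothetical per-vehicle reports that are reduced to segment-level statistics before being forwarded beyond the local neighbourhood; the record format, field names, and values are illustrative, not from the paper (which proposes a privacy metric rather than a specific aggregation scheme):

```python
from statistics import mean

# Hypothetical per-vehicle reports within one road segment: (id, speed km/h, position m)
reports = [
    ("veh_17", 42.0, 120.5),
    ("veh_23", 38.5, 310.0),
    ("veh_42", 47.2, 455.8),
]

# In-network aggregation: before forwarding beyond the local neighbourhood,
# individual identities and exact positions are dropped; only segment-level
# statistics travel further, limiting exposure of privacy-sensitive data.
aggregate = {
    "segment": "A4-km12",
    "vehicle_count": len(reports),
    "mean_speed_kmh": round(mean(speed for _, speed, _ in reports), 1),
}
print(aggregate)
```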

    A Bayesian analysis of beta testing

    In this article, we define a model for fault detection during the beta testing phase of a software design project. Given sampled data, we illustrate how to estimate the failure rate and the number of faults in the software using Bayesian statistical methods with various prior distributions. Secondly, given a suitable cost function, we also show how to optimise the duration of a further test period for each of the prior distribution structures considered.
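    A minimal sketch of one plausible Bayesian update for this setting, assuming a Poisson fault-arrival model with a conjugate Gamma prior on the failure rate; the prior parameters and observed counts are illustrative and not taken from the article:

```python
# Conjugate Gamma-Poisson update for the software failure rate during beta testing.
# Prior:     lambda ~ Gamma(a, b)   (shape a, rate b)
# Data:      n faults observed over a total test time T
# Posterior: lambda ~ Gamma(a + n, b + T)

a_prior, b_prior = 2.0, 10.0      # illustrative prior: mean rate 0.2 faults/day
n_faults, test_days = 7, 30.0     # illustrative beta-test observations

a_post = a_prior + n_faults
b_post = b_prior + test_days

post_mean = a_post / b_post
print(f"posterior mean failure rate: {post_mean:.3f} faults/day")

# Expected number of additional faults if the beta test is extended by t more days
t_extra = 14.0
print(f"expected faults in next {t_extra:.0f} days: {post_mean * t_extra:.1f}")
```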