20,282 research outputs found

    Minimizing Execution Duration in the Presence of Learning-Enabled Components


    Early error detection predicted by reduced pre-response control process: an ERP topographic mapping study

    Advanced ERP topographic mapping techniques were used to study error-monitoring functions in human adult participants, and to test whether proactive attentional effects during the pre-response period could later influence early error-detection mechanisms (as measured by the ERN component). Participants performed a speeded go/nogo task and made a substantial number of false alarms that did not differ from correct hits in behavioral speed or actual motor response. While errors clearly elicited an ERN component generated within the dACC following the onset of these incorrect responses, I also found that correct hits were associated with a different sequence of topographic events during the pre-response baseline period relative to errors. A main topographic transition from occipital to posterior parietal regions (primarily the precuneus) was evidenced for correct hits approximately 170-150 ms before the response, whereas this topographic change was markedly reduced for errors. The same topographic transition was found for correct hits that were eventually performed slower than either errors or fast (correct) hits, confirming the involvement of this distinctive posterior parietal activity in top-down attentional control rather than motor preparation. Control analyses further ensured that this pre-response topographic effect was not related to differences in stimulus processing. Furthermore, I found a reliable association between the magnitude of the ERN following errors and the duration of this differential precuneus activity during the pre-response baseline, suggesting a functional link between an anticipatory attentional control component subserved by the precuneus and early error-detection mechanisms within the dACC. These results suggest reciprocal links between proactive attentional control and decision-making processes during error monitoring.
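
    The functional link reported above rests on quantifying the ERN as a response-locked potential. Below is a minimal sketch of that kind of measurement, not the study's actual pipeline; the sampling rate, epoch bounds, and 0-100 ms measurement window are all illustrative assumptions.

```python
import numpy as np

FS = 500                     # sampling rate in Hz (assumed)
PRE, POST = 0.2, 0.4         # epoch bounds around each response, in seconds

def response_locked_erp(eeg, response_samples):
    """Average fixed-length epochs around each response onset.

    eeg: (n_channels, n_samples) continuous data
    response_samples: iterable of sample indices of button presses
    """
    pre, post = int(PRE * FS), int(POST * FS)
    epochs = [eeg[:, s - pre:s + post]
              for s in response_samples
              if s - pre >= 0 and s + post <= eeg.shape[1]]
    return np.mean(epochs, axis=0)       # (n_channels, pre + post)

def ern_amplitude(erp, channel):
    """Mean amplitude 0-100 ms after the response at one channel (assumed window)."""
    onset = int(PRE * FS)                # response onset within the epoch
    return erp[channel, onset:onset + int(0.1 * FS)].mean()
```

    Comparing ern_amplitude on error versus correct-hit epochs at a fronto-central channel yields the kind of ERN difference the abstract relates to the pre-response precuneus activity.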

    BRAHMA(+): A Framework for Resource Scaling of Streaming and ASAP Time-Varying Workflows

    Automatic scaling of complex software-as-a-service application workflows is one of the most important resource-management problems in clouds. In this paper, we study the automatic workflow resource scaling problem for streaming and ASAP workflows, and its time-varying variant in which the workflow resource requirements change over time. Service components of streaming workflows execute concurrently, while those of ASAP workflows execute sequentially. We propose an intelligent framework, BRAHMA(+), which learns the workflow behavior and constructs a knowledge base that serves as its decision-making engine. The proposed resource provisioning algorithms leverage the information curated in this knowledge base to make informed scaling decisions. Additionally, BRAHMA(+) employs online-learning strategies to keep the knowledge base up to date, thereby accommodating changes in the workflow resource requirements over time. We evaluate the proposed algorithms using CloudSim simulations. Results on streaming and ASAP workflows, with both static and time-varying resource requirements, show that the proposed algorithms are effective and produce good cost-quality trade-offs. The proactive and hybrid algorithms meet the service-level agreements and restrict deadline violations to a small fraction (3%-5% in the considered scenarios), while suffering only a marginal increase in average cost per component compared to the baseline algorithms.
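
    To make the knowledge-base idea concrete, here is a minimal sketch of a proactive scaling step in the spirit of BRAHMA(+); the dict-based profile, the (component, workload level) lookup key, and the 10% headroom factor are assumptions for illustration, not the paper's actual design.

```python
import math
from collections import defaultdict

class KnowledgeBase:
    """Maps (component, workload level) to the resource demands seen so far."""
    def __init__(self):
        self._profiles = defaultdict(list)

    def record(self, component, level, demand):
        """Online-learning update: fold a new observation into the base."""
        self._profiles[(component, level)].append(demand)

    def predict(self, component, level, default=1.0):
        obs = self._profiles.get((component, level))
        return max(obs) if obs else default   # conservative: worst demand seen

def proactive_scale(kb, component, forecast_level, headroom=1.1):
    """Instances to provision ahead of a forecast workload change."""
    return max(1, math.ceil(kb.predict(component, forecast_level) * headroom))
```

    Calling record() after every observation is what keeps the knowledge base current, playing the role of the online-learning updates described above.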

    The Safe and Effective Use of Learning-Enabled Components in Safety-Critical Systems

    Autonomous systems increasingly use components that incorporate machine learning and other AI-based techniques in order to achieve improved performance. The problem of assuring correctness in safety-critical systems that use such components is considered. A model is proposed in which components are characterized according to both their worst-case and their typical behaviors; it is argued that while safety must be assured under all circumstances, it is reasonable to be concerned with providing a high degree of performance only for typical behaviors. The problem of assuring safety while providing such improved performance is formulated as an optimization problem in which performance under typical circumstances is the objective function to be optimized, while safety is a hard constraint that must be satisfied. Algorithmic techniques are applied to derive an optimal solution to this optimization problem. This optimal solution is compared, via simulation experiments on synthetically generated workloads, with an alternative approach that optimizes for performance under worst-case conditions, as well as with some common-sense heuristics.
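
    The optimization the abstract formulates can be illustrated with a toy two-mode component model: each component offers a learning-enabled mode (fast typically, pessimistic worst case) and a conventional mode (predictable). The model and the brute-force search below are illustrative assumptions, not the paper's algorithm.

```python
from itertools import product
from typing import NamedTuple

class Mode(NamedTuple):
    typical: float   # duration under typical circumstances
    worst: float     # certified worst-case duration

def safe_optimal(components, deadline):
    """components: one list of candidate Modes per component.
    Returns the mode choice minimizing total typical duration among all
    choices whose summed worst-case durations meet the deadline, or None."""
    best, best_typ = None, float("inf")
    for choice in product(*components):
        if sum(m.worst for m in choice) <= deadline:   # hard safety constraint
            typ = sum(m.typical for m in choice)       # performance objective
            if typ < best_typ:
                best, best_typ = choice, typ
    return best

# Example: the ML mode is fast typically but has a large worst case;
# the conventional mode is predictable.
comp = [Mode(typical=1.0, worst=8.0), Mode(typical=3.0, worst=3.5)]
print(safe_optimal([comp, comp], deadline=12.0))
```

    With a 12-unit deadline, only one component can safely run its ML mode here; a purely worst-case-driven approach would run the conventional mode everywhere and forgo the typical-case speedup.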

    Efficient transfer entropy analysis of non-stationary neural time series

    Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage, and modification. In particular, the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy between two processes requires observing multiple realizations of these processes to estimate the associated probability density functions. To obtain these observations, available estimators assume stationarity of the processes, which allows observations to be pooled over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues showed theoretically that the stationarity assumption can be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble is often readily available in neuroscience experiments in the form of experimental trials. In this work, we therefore combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that deals with the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation on a graphics processing unit to handle the computationally heaviest aspects of the ensemble method. We test the performance and robustness of our implementation on data from simulated stochastic processes and demonstrate the method's applicability to magnetoencephalographic data. While we mainly evaluate the proposed method on neuroscientific data, we expect it to be applicable to a variety of fields concerned with the analysis of information transfer in complex biological, social, and artificial systems.
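
    The ensemble method's core move is to estimate probabilities across trials at a fixed time point rather than pooling over time, so stationarity in time is not required. A simplified sketch of that idea follows, using a plug-in estimator on binary signals with history length 1; the paper itself uses nearest-neighbour estimators on a GPU, which this sketch does not reproduce.

```python
import numpy as np

def ensemble_te(x, y, t, eps=1e-12):
    """Transfer entropy x -> y at time t, estimated across trials.

    x, y: (n_trials, n_samples) arrays with values in {0, 1}
    History length 1, so TE = I(y[t+1]; x[t] | y[t]).
    """
    states = np.stack([y[:, t + 1], y[:, t], x[:, t]], axis=1).astype(int)
    p = np.zeros((2, 2, 2))                  # joint p(y_next, y_past, x_past)
    for y_next, y_past, x_past in states:
        p[y_next, y_past, x_past] += 1
    p /= p.sum()
    p_yx = p.sum(axis=0, keepdims=True)      # p(y_past, x_past)
    p_yy = p.sum(axis=2, keepdims=True)      # p(y_next, y_past)
    p_y = p.sum(axis=(0, 2), keepdims=True)  # p(y_past)
    # TE = sum p * log[ p(y_next|y_past,x_past) / p(y_next|y_past) ]
    ratio = (p * p_y) / (p_yy * p_yx + eps)
    return float(np.sum(p * np.log(ratio + eps)))
```

    Scanning t (or averaging over a short window) yields a time-resolved estimate without assuming stationarity; at realistic trial counts the nearest-neighbour searches that this sketch avoids dominate the runtime, which is what motivates the GPU implementation.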