    Reporting ethics committee approval and patient consent by study design in five general medical journals.

    BACKGROUND: When research involves human participants, authors are required to describe in their manuscripts the ethical approval obtained from an appropriate committee and how consent was obtained from participants. OBJECTIVE: To assess the reporting of these protections across study designs in general medical journals. DESIGN: A consecutive series of research papers published in the Annals of Internal Medicine, BMJ, JAMA, Lancet and The New England Journal of Medicine between February and May 2003 was reviewed for the reporting of ethical approval and patient consent. Ethical approval, the name of the approving committee, the type of consent, the data source and whether the study used data collected as part of a study reported elsewhere were recorded. Differences in failure to report approval and consent by study design, journal and vulnerable study population were evaluated using multivariable logistic regression. RESULTS: Ethical approval and consent were not mentioned in 31% and 47% of manuscripts, respectively. Eighty-eight papers (27%) failed to report both approval and consent. Failure to mention ethical approval or consent was significantly more likely in all study designs (except case-control and qualitative studies) than in randomised controlled trials (RCTs). Failure to mention approval was most common in the BMJ and was significantly more likely there than in The New England Journal of Medicine. Failure to mention consent was also most common in the BMJ and was significantly more likely there than in all the other journals. No significant differences in approval or consent were found between studies of vulnerable and non-vulnerable participants. CONCLUSION: The reporting of ethical approval and consent in RCTs has improved, but journals are less consistent in reporting this information for other study designs. Journals should publish this information for all research on human participants.
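
    As a minimal sketch of the kind of multivariable logistic regression analysis described above, the following Python snippet models failure to report ethical approval as a function of study design and journal, with RCTs and The New England Journal of Medicine as reference categories. All data, column names, and effect sizes are hypothetical illustrations, not the study's data.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 300
    # Hypothetical per-paper records: 1 = ethical approval not mentioned.
    designs = rng.choice(["rct", "cohort", "case_series", "cross_sectional"], size=n)
    journals = rng.choice(["NEJM", "BMJ", "JAMA", "Lancet", "Annals"], size=n)
    logit_p = -1.0 + 0.8 * (designs != "rct") + 0.7 * (journals == "BMJ")
    no_approval = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    papers = pd.DataFrame({"no_approval": no_approval,
                           "design": designs, "journal": journals})
    model = smf.logit(
        "no_approval ~ C(design, Treatment('rct')) + C(journal, Treatment('NEJM'))",
        data=papers,
    ).fit(disp=0)
    print(np.exp(model.params))  # odds ratios vs. the RCT / NEJM reference groups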

    Symbolic Implementation of Connectors in BIP

    BIP is a component framework for constructing systems by superposing three layers of modeling: Behavior, Interaction, and Priority. Behavior is represented by labeled transition systems communicating through ports. Interactions are sets of ports. A synchronization between components is possible through the interactions specified by a set of connectors. When several interactions are possible, priorities restrict the non-determinism by choosing an interaction that is maximal according to some given strict partial order. The BIP component framework has been implemented in a language and a tool-set. The execution of a BIP program is driven by a dedicated engine, which has access to the program's set of connectors and priority model. A key performance issue is the computation of the set of possible interactions of the BIP program from a given state. Currently, the choice of the interaction to be executed involves a costly exploration of enumerative representations for connectors, which leads to considerable overhead in execution times. In this paper, we propose a symbolic implementation of the execution model of BIP that drastically reduces this overhead. The symbolic implementation is based on computing Boolean representations for components, connectors, and priorities with an existing BDD package.
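
    The contrast between enumerative and symbolic representations of connectors can be sketched in a few lines of Python. The ports, connector, and enabledness predicate below are illustrative stand-ins, not the BIP tool-set's API; a BDD package would represent the same Boolean function canonically instead of enumerating assignments as this toy does.

    from itertools import product

    PORTS = ["p", "q", "r"]

    # A connector's feasible interactions as a Boolean function over port
    # variables: "p must synchronize with q; r may optionally join".
    def connector(p: bool, q: bool, r: bool) -> bool:
        return p and q

    # Ports currently enabled by each component's local state (illustrative).
    enabled = {"p": True, "q": True, "r": False}

    # Enumerate satisfying assignments restricted to enabled ports.
    interactions = [
        {v for v, bit in zip(PORTS, bits) if bit}
        for bits in product([False, True], repeat=len(PORTS))
        if connector(*bits)
        and all(enabled[v] for v, bit in zip(PORTS, bits) if bit)
    ]
    print(interactions)  # e.g. [{'p', 'q'}]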

    Model Checking with the Sweep-Line Method

    Explicit-state model checking is a formal software verification technique that differs from peer review and unit testing in that it performs an exhaustive state space search. With model checking, one takes a system model, traverses all reachable states, and checks these against formally stated properties over the variables in the model. The properties can be expressed with linear temporal logic or computation tree logic, and can, for example, state that the value of some variable x should always be positive. When conducting an explicit state space exploration, one is guaranteed that the complete state space is checked against the given property. This is not the case in, for instance, unit testing, where only fragments of a system are tested. If a property is violated, the model checking algorithm should present an error trace. The error trace represents an execution path of the model, demonstrating why it does not satisfy the property. The main disadvantage of model checking is that the number of reachable states may grow exponentially in the number of variables. This is known as the state explosion problem. This thesis focuses on explicit-state model checking using the sweep-line method. To combat the state explosion problem, the sweep-line method exploits the notion of progress that a system makes and is able to delete states from memory on-the-fly during the verification process. The notion of progress is captured by progress measures. Since the standard model checking algorithms rely on having the whole state space in memory, they are not directly compatible with the sweep-line method. We survey the differences between standard model checking algorithms and the sweep-line method, and present previous research on verifying properties and providing error traces with the sweep-line method. The new contributions of this thesis are as follows: (1) we develop a new general technique for providing an error trace for linear temporal logic properties verified using the sweep-line method; (2) we give a new algorithm for verifying two key computation tree logic properties on models limited to monotonic progress measures; (3) we implement a unified library for the sweep-line method containing the algorithms developed in this thesis together with the previously developed algorithms for verifying safety properties and checking linear temporal logic properties. All implemented algorithms are validated by checking properties on a model of a stop-and-wait communication protocol.
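
    To make the on-the-fly deletion idea concrete, here is a minimal sketch of sweep-line exploration in Python, assuming a monotonic progress measure so that states behind the sweep front can never be reached again. The toy model and safety property are placeholders, not the thesis's library.

    import heapq

    def sweep_line(initial, successors, progress, check):
        """Explore states in order of progress; with a monotonic measure,
        states behind the current front are unreachable, so the 'seen' set
        is purged on-the-fly to save memory."""
        frontier = [(progress(initial), initial)]
        seen = {initial}
        front = progress(initial)
        while frontier:
            prog, state = heapq.heappop(frontier)
            if prog > front:  # the sweep-line moved forward: purge old states
                seen = {s for s in seen if progress(s) >= prog}
                front = prog
            if not check(state):
                return state  # property violated at this state
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(frontier, (progress(nxt), nxt))
        return None

    # Toy model: a protocol whose sequence number only increases.
    violation = sweep_line(
        initial=0,
        successors=lambda s: [s + 1] if s < 100 else [],
        progress=lambda s: s,
        check=lambda s: s != 42,  # "safety property": state 42 is an error
    )
    print(violation)  # 42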

    Exploring Two Novel Features for EEG-based Brain-Computer Interfaces: Multifractal Cumulants and Predictive Complexity

    In this paper, we introduce two new features for the design of electroencephalography (EEG) based Brain-Computer Interfaces (BCI): one feature based on multifractal cumulants, and one feature based on the predictive complexity of the EEG time series. The multifractal cumulants feature measures the signal regularity, while the predictive complexity measures the difficulty of predicting the future of the signal from its past, and hence how complex it is. We have conducted an evaluation of the performance of these two novel features on EEG data corresponding to motor imagery. We also compared them to the most successful features used in the BCI field, namely the band-power features. We evaluated these three kinds of features and their combinations on EEG signals from 13 subjects. The results show that our novel features can lead to BCI designs with improved classification performance, notably when the three kinds of features (band power, multifractal cumulants, predictive complexity) are combined.
    Comment: Updated with more subjects. Separated out the band-power comparisons into a companion article after reviewer feedback. Source code and the companion article are available at http://nicolas.brodu.numerimoire.net/en/recherche/publication
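
    For context, the band-power features used as the baseline above are straightforward to compute. The sketch below estimates band power from Welch's power spectral density; the sampling rate, frequency bands, and synthetic channel are illustrative assumptions, not the paper's data or code.

    import numpy as np
    from scipy.signal import welch

    def band_power(signal, fs, lo, hi):
        """Approximate power of `signal` in the [lo, hi] Hz band via Welch's PSD."""
        freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 256))
        mask = (freqs >= lo) & (freqs <= hi)
        return float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))

    fs = 250.0                               # Hz, a common EEG sampling rate
    t = np.arange(0, 4, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # fake channel

    # Mu (8-12 Hz) and beta (18-25 Hz) bands, typical choices for motor imagery.
    features = [band_power(eeg, fs, 8, 12), band_power(eeg, fs, 18, 25)]
    print(features)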

    Dynamic Control of Explore/Exploit Trade-Off In Bayesian Optimization

    Bayesian optimization offers the possibility of optimizing black-box operations not accessible through traditional techniques. The success of Bayesian optimization methods such as Expected Improvement (EI) is significantly affected by the degree of trade-off between exploration and exploitation. Too much exploration can lead to inefficient optimization protocols, whilst too much exploitation leaves the protocol open to strong initial biases and a high chance of getting stuck in a local minimum. Typically, a constant margin is used to control this trade-off, which results in yet another hyper-parameter to be optimized. We propose contextual improvement as a simple yet effective heuristic to counter this, achieving a one-shot optimization strategy. Our proposed heuristic can be swiftly calculated and improves both the speed and robustness of discovery of optimal solutions. We demonstrate its effectiveness on both synthetic and real-world problems, and explore the unaccounted-for uncertainty in the pre-determination of the search hyperparameters controlling the explore/exploit trade-off.
    Comment: Accepted for publication in the proceedings of the 2018 Computing Conference
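
    As background for the constant margin discussed above, the sketch below shows standard Expected Improvement for minimization with a fixed exploration margin xi; it is not the paper's contextual-improvement heuristic, which (as described) replaces this constant with a context-dependent quantity. The toy posterior values are assumptions.

    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, best, xi=0.01):
        """EI(x) = E[max(0, best - f(x) - xi)] under a Gaussian posterior
        N(mu, sigma^2), minimizing f; larger xi biases toward exploration."""
        sigma = np.maximum(sigma, 1e-12)  # guard against zero variance
        z = (best - mu - xi) / sigma
        return (best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

    # Toy GP posterior over five candidates; choose the next point to evaluate.
    mu = np.array([0.2, 0.5, 0.1, 0.8, 0.3])
    sigma = np.array([0.05, 0.4, 0.02, 0.6, 0.1])
    ei = expected_improvement(mu, sigma, best=0.15, xi=0.05)
    print(int(np.argmax(ei)))  # index of the most promising candidate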

    Active Virtual Network Management Prediction: Complexity as a Framework for Prediction, Optimization, and Assurance

    Research into active networking has provided the incentive to revisit what have traditionally been classified as distinct properties and characteristics of information transfer, such as protocol versus service; at a more fundamental level, this paper considers the blending of computation and communication by means of complexity. The specific service examined in this paper is network self-prediction enabled by Active Virtual Network Management Prediction. Computation/communication is analyzed via Kolmogorov Complexity. The result is a mechanism to understand and improve the performance of active networking, and of Active Virtual Network Management Prediction in particular. The Active Virtual Network Management Prediction mechanism allows information, in various states of algorithmic and static form, to be transported in the service of prediction for network management. The results are generally applicable to algorithmic transmission of information. Kolmogorov Complexity is used, and experimentally validated, as a theory describing the relationship among algorithmic compression, complexity, and prediction accuracy within an active network. Finally, the paper concludes with a complexity-based framework for Information Assurance that attempts to take a holistic view of vulnerability analysis.
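
    Kolmogorov complexity is uncomputable, but the compressed size of a string under a real compressor upper-bounds it, which is how compression-based studies of this kind typically proceed. A minimal sketch of such a proxy, with the zlib level and the toy "traffic" below as assumptions rather than the paper's setup:

    import os
    import zlib

    def complexity_estimate(data: bytes) -> float:
        """Compressed size per input byte: lower ratios indicate more
        regularity and hence, heuristically, more predictable data."""
        return len(zlib.compress(data, 9)) / max(len(data), 1)

    regular = b"ab" * 500          # highly regular "traffic"
    random_ = os.urandom(1000)     # essentially incompressible "traffic"
    print(complexity_estimate(regular))   # small ratio
    print(complexity_estimate(random_))   # ratio near (or slightly above) 1.0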