    Towards Vulnerability Discovery Using Staged Program Analysis

    Eliminating vulnerabilities from low-level code is vital for securing software. Static analysis is a promising approach for discovering vulnerabilities, since it can give developers early feedback on the code they write. However, it presents multiple challenges, not the least of which is understanding what makes a bug exploitable and conveying this information to the developer. In this paper, we present the design and implementation of a practical vulnerability assessment framework called Melange. Melange performs data and control flow analysis to diagnose potential security bugs, and outputs well-formatted bug reports that help developers understand and fix them. Based on the intuition that real-world vulnerabilities manifest themselves across multiple parts of a program, Melange performs both local and global analyses. To scale up to large programs, global analysis is demand-driven. Our prototype detects multiple vulnerability classes in C and C++ code, including type confusion and garbage memory reads. We have evaluated Melange extensively. Our case studies show that Melange scales up to large codebases such as Chromium, is easy to use, and, most importantly, is capable of discovering vulnerabilities in real-world code. Our findings indicate that static analysis is a viable reinforcement to the software testing tool set. Comment: a revised version to appear in the proceedings of the 13th Conference on Detection of Intrusions and Malware & Vulnerability Assessment (DIMVA), July 2016.
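    To make the staged design concrete, here is a minimal Python sketch of the local-then-global pattern the abstract describes: each function is first summarized in isolation, and the global (whole-program) analysis follows call edges only on demand, starting from locally flagged candidates. Every name below (FunctionSummary, local_pass, global_query) is hypothetical; Melange itself analyzes C/C++ and this is not its implementation.

    # Minimal sketch of staged program analysis: cheap per-function summaries
    # first, demand-driven global (interprocedural) checking only for flagged
    # candidates. All structures here are hypothetical, not Melange's design.
    from dataclasses import dataclass, field

    @dataclass
    class FunctionSummary:
        name: str
        may_return_uninit: bool = False      # local stage: garbage-read candidate?
        callees: list = field(default_factory=list)

    def local_pass(functions):
        """Stage 1: summarize each function in isolation (scales linearly)."""
        return {f.name: f for f in functions}

    def global_query(summaries, entry, seen=None):
        """Stage 2: walk call edges on demand, only from a candidate entry."""
        seen = seen if seen is not None else set()
        if entry in seen:
            return False
        seen.add(entry)
        summary = summaries[entry]
        if summary.may_return_uninit:
            return True
        return any(global_query(summaries, c, seen) for c in summary.callees)

    # Usage: report main() only if some reachable callee may return garbage.
    funcs = [FunctionSummary("main", callees=["parse"]),
             FunctionSummary("parse", may_return_uninit=True)]
    print(global_query(local_pass(funcs), "main"))  # True -> candidate bug report

    The staging is what keeps the cost manageable: the expensive interprocedural work is paid only for the small set of candidates the local pass flags, which is consistent with the abstract's point that global analysis is demand-driven.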

    Neural indicators of fatigue in chronic diseases: A systematic review of MRI studies

    The authors would like to thank the Sir Jules Thorn Charitable Trust for their financial support. Peer reviewed. Publisher PDF.

    Lack of associations between female hormone levels and visuospatial working memory, divided attention and cognitive bias across two consecutive menstrual cycles

    Background: Interpretation of observational studies on associations between prefrontal cognitive functioning and hormone levels across the female menstrual cycle is complicated by small sample sizes and poor replicability. Methods: This observational multisite study comprised data from n = 88 menstruating women from Hannover, Germany, and Zurich, Switzerland, assessed during a first cycle, of whom n = 68 were re-assessed during a second cycle to rule out practice effects and false-positive chance findings. We assessed visuospatial working memory, attention, cognitive bias and hormone levels at four consecutive time-points across both cycles. In addition to inter-individual differences, we examined intra-individual change over time (i.e., within-subject effects). Results: Estrogen, progesterone and testosterone did not relate to inter-individual differences in cognitive functioning. There was a significant negative association between intra-individual change in progesterone and change in working memory from the pre-ovulatory to the mid-luteal phase during the first cycle, but that association did not replicate in the second cycle. Intra-individual change in testosterone related negatively to change in cognitive bias from the menstrual to the pre-ovulatory phase as well as from the pre-ovulatory to the mid-luteal phase in the first cycle, but these associations did not replicate in the second cycle. Conclusions: There is no consistent association between women's hormone levels, in particular estrogen and progesterone, and attention, working memory and cognitive bias. That is, anecdotal findings observed during the first cycle did not replicate in the second cycle, suggesting that they are false positives attributable to random variation and systematic biases such as practice effects. Due to methodological limitations, positive findings in the published literature must be interpreted with reservation.
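    As an illustration of the within-subject approach the abstract describes, the sketch below computes intra-individual change scores between two cycle phases and correlates hormonal change with cognitive change. The data frame and column names are invented for illustration; this is not the study's actual analysis pipeline.

    # Hedged sketch of a within-subject (intra-individual change) analysis:
    # correlate change in progesterone with change in working memory between
    # the pre-ovulatory and mid-luteal phases. Columns and values are invented.
    import pandas as pd
    from scipy.stats import pearsonr

    # Long format: one row per participant per cycle phase.
    df = pd.DataFrame({
        "id":             [1, 1, 2, 2, 3, 3],
        "phase":          ["pre_ovulatory", "mid_luteal"] * 3,
        "progesterone":   [1.2, 9.8, 0.9, 7.5, 1.5, 11.0],
        "working_memory": [0.71, 0.64, 0.80, 0.77, 0.66, 0.60],
    })

    wide = df.pivot(index="id", columns="phase",
                    values=["progesterone", "working_memory"])
    d_prog = wide["progesterone"]["mid_luteal"] - wide["progesterone"]["pre_ovulatory"]
    d_wm = wide["working_memory"]["mid_luteal"] - wide["working_memory"]["pre_ovulatory"]

    r, p = pearsonr(d_prog, d_wm)  # change-change (within-subject) association
    print(f"r = {r:.2f}, p = {p:.3f}")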

    Hypothesis exploration with visualization of variance.

    Background: The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes, exploring whether they are linked to syndromes including ADHD, bipolar disorder, and schizophrenia. An aim of the consortium was to move from traditional categorical approaches to psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics, the wide-scale, systematic study of phenotypes, to neuropsychiatry research. Results: This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles (patterns of values across phenotypes) that characterize groups. Visualization enables screening and refinement of hypotheses about the variance structure of sets of phenotypes. Conclusions: The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports 'natural selection' on a pool of hypotheses, and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics.
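    To give a feel for the VISOVA flavor of analysis (visual screening of variance structure across phenotype profiles), the sketch below runs a one-way ANOVA per phenotype across diagnostic groups and plots group-mean profiles. It is a toy analogue on invented data, not the ViVA/VISOVA system itself.

    # Toy analogue of the VISOVA idea: a one-way ANOVA per phenotype across
    # groups, plus group-mean "phenotype profiles". All data are invented.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import f_oneway

    rng = np.random.default_rng(0)
    phenotypes = ["memory", "response_inhibition", "attention"]
    groups = {"control": rng.normal(0.0, 1.0, (30, 3)),
              "ADHD":    rng.normal(0.4, 1.0, (30, 3)),
              "bipolar": rng.normal(0.8, 1.0, (30, 3))}

    # Screen: F-test of between-group variance for each phenotype.
    for j, ph in enumerate(phenotypes):
        F, p = f_oneway(*[g[:, j] for g in groups.values()])
        print(f"{ph}: F = {F:.2f}, p = {p:.4f}")

    # Visualize: one mean profile per group across the phenotype axis.
    for name, g in groups.items():
        plt.plot(phenotypes, g.mean(axis=0), marker="o", label=name)
    plt.ylabel("standardized score")
    plt.legend()
    plt.show()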

    Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) for the diagnosis of dementia within community dwelling populations

    Background: Various tools exist for initial assessment of possible dementia, with no consensus on the optimal assessment method. Instruments that use collateral sources to assess change in cognitive function over time may have particular utility. The most commonly used informant dementia assessment is the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE). A synthesis of the available data regarding IQCODE accuracy will help inform cognitive assessment strategies for clinical practice, research and policy.

    Objectives: Our primary objective was to determine the diagnostic accuracy of the informant-based questionnaire IQCODE for detection of all-cause (undifferentiated) dementia in community-dwelling adults with no previous cognitive assessment. We sought to describe the accuracy of the IQCODE (the index test) against a clinical diagnosis of dementia (the reference standard). Our secondary objective was to describe the effect of heterogeneity on the summary estimates. We were particularly interested in the traditional 26-item scale versus the 16-item short form, and in the language of administration. We explored the effect of varying the threshold IQCODE score used to define 'test positivity'.

    Search methods: We searched the following sources on 28 January 2013: ALOIS (Cochrane Dementia and Cognitive Improvement Group), MEDLINE (OvidSP), EMBASE (OvidSP), PsycINFO (OvidSP), BIOSIS Previews (ISI Web of Knowledge), Web of Science with Conference Proceedings (ISI Web of Knowledge), and LILACS (BIREME). We also searched sources relevant or specific to diagnostic test accuracy: MEDION (Universities of Maastricht and Leuven), DARE (York University), and ARIF (Birmingham University). We used sensitive search terms based on MeSH terms and other controlled vocabulary.

    Selection criteria: We selected studies performed in community settings that used (not necessarily exclusively) the IQCODE to assess for the presence of dementia and in which dementia diagnosis was confirmed with clinical assessment. Our intention in limiting the search to a 'community' setting was to include those studies closest to population-level assessment. Within our predefined community inclusion criteria, there were relevant papers that fulfilled our definition of community dwelling but represented a selected population, for example stroke survivors. We included these studies but performed sensitivity analyses to assess the effects of these less representative populations on the summary results.

    Data collection and analysis: We screened all titles generated by the electronic database searches and reviewed abstracts of all potentially relevant studies. Two independent assessors checked full papers for eligibility and extracted data. For quality assessment (risk of bias and applicability) we used the QUADAS-2 tool. We included test accuracy data on the IQCODE used at predefined diagnostic thresholds. Where data allowed, we performed meta-analyses to calculate summary values of sensitivity and specificity with corresponding 95% confidence intervals (CIs). We pre-specified analyses to describe the effect of IQCODE format (traditional or short form) and language of administration.

    Main results: From 16,144 citations, 71 papers described IQCODE test accuracy. We included 10 papers (11 independent datasets) representing data from 2644 individuals (n = 379 (14%) with dementia). Using IQCODE cut-offs commonly employed in clinical practice (3.3, 3.4, 3.5, 3.6), the sensitivity and specificity of the IQCODE for diagnosis of dementia across the studies were generally above 75%. Taking an IQCODE threshold of 3.3 (or the closest available), sensitivity was 0.80 (95% CI 0.75 to 0.85), specificity was 0.84 (95% CI 0.78 to 0.90), the positive likelihood ratio was 5.2 (95% CI 3.7 to 7.5), and the negative likelihood ratio was 0.23 (95% CI 0.19 to 0.29). Comparative analysis suggested no significant difference in the test accuracy of the 16- and 26-item IQCODE, and no significant difference in test accuracy by language of administration. There was little difference in sensitivity across our predefined diagnostic cut-points. There was substantial heterogeneity in the included studies. Sensitivity analyses removing potentially unrepresentative populations made little difference to the pooled estimates. The majority of included papers had potential for bias, particularly around participant selection and sampling. The quality of reporting was suboptimal, particularly regarding the timing of assessments and descriptors of reproducibility and inter-observer variability.

    Authors' conclusions: Published data suggest that, when using the IQCODE for community-dwelling older adults, the 16-item IQCODE may be preferable to the traditional scale due to lower test burden and no obvious difference in accuracy. Although IQCODE test accuracy is in a range that many would consider 'reasonable', in community or population settings the use of the IQCODE alone would result in substantial misdiagnosis and false reassurance. Across the included studies there were issues with heterogeneity, several potential biases, and suboptimal reporting quality.
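    To make the accuracy figures concrete, the short calculation below applies the reported likelihood ratios (LR+ = 5.2, LR- = 0.23) to the prevalence in the pooled data (about 14%) to obtain post-test probabilities. It is a standard odds-times-likelihood-ratio calculation, not an analysis from the review itself.

    # Worked example: post-test probability of dementia after a positive or
    # negative IQCODE (threshold 3.3), using the review's pooled figures:
    # prevalence ~14%, LR+ = 5.2, LR- = 0.23.

    def post_test_prob(pretest_p, lr):
        """Pretest probability -> post-test probability via odds x LR."""
        odds = pretest_p / (1 - pretest_p)
        post_odds = odds * lr
        return post_odds / (1 + post_odds)

    prevalence = 0.14  # 379 / 2644 from the pooled datasets
    print(f"P(dementia | positive) = {post_test_prob(prevalence, 5.2):.2f}")   # ~0.46
    print(f"P(dementia | negative) = {post_test_prob(prevalence, 0.23):.2f}")  # ~0.04

    At community prevalence, even a positive result leaves the post-test probability below 50%, which illustrates the review's point that the IQCODE alone would produce substantial misdiagnosis and false reassurance.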

    Prospect patents, data markets, and the commons in data-driven medicine: openness and the political economy of intellectual property rights

    Scholars who point to political influences and the regulatory function of patent courts in the USA have long questioned the courts’ subjective interpretation of what ‘things’ can be claimed as inventions. The present article sheds light on a different but related facet: the role of the courts in regulating knowledge production. I argue that the recent cases decided by the US Supreme Court and the Federal Circuit, which made diagnostics and software very difficult to patent and which attracted criticism for a wealth of different reasons, are fine case studies of the current debate over the proper role of the state in regulating the marketplace and knowledge production in the emerging information economy. The article explains that these patents are prospect patents that may be used by a monopolist to collect data that everybody else needs in order to compete effectively. As such, they raise familiar concerns about failures of coordination that emerge when a monopolist controls a resource, such as datasets, that others need and cannot replicate. In effect, the courts regulated the market, primarily focusing on ensuring the free flow of data in the emerging marketplace, very much in the spirit of the ‘free the data’ language in various policy initiatives, yet at the same time with an eye to boosting downstream innovation. In doing so, these decisions essentially endorse practices of personal information processing that constitute a new type of public domain: a source of raw materials that are there for the taking and that have become among the most important inputs to commercial activity. From this vantage point, the legal interpretation of the private and the shared legitimizes a model of data extraction from individuals, the raw material of information capitalism, that will fuel the next generation of data-intensive therapeutics in the field of data-driven medicine.

    Dynamic reconfiguration of human brain networks during learning

    Human learning is a complex phenomenon requiring flexibility to adapt existing brain function and precision in selecting new neurophysiological activities to drive desired behavior. These two attributes, flexibility and selection, must operate over multiple temporal scales as performance of a skill changes from being slow and challenging to being fast and automatic. Such selective adaptability is naturally provided by modular structure, which plays a critical role in evolution, development, and optimal network function. Using functional connectivity measurements of brain activity acquired from initial training through mastery of a simple motor skill, we explore the role of modularity in human learning by identifying dynamic changes of modular organization spanning multiple temporal scales. Our results indicate that flexibility, which we measure by the allegiance of nodes to modules, in one experimental session predicts the relative amount of learning in a future session. We also develop a general statistical framework for the identification of modular architectures in evolving systems, which is broadly applicable to disciplines where network adaptability is crucial to the understanding of system performance. Comment: main text: 19 pages, 4 figures; supplementary materials: 34 pages, 4 figures, 3 tables.
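    In related published work, node flexibility is typically quantified as the fraction of time steps on which a node changes its module allegiance. The sketch below computes that quantity from a toy sequence of module assignments; the assignment matrix is invented, and this illustrates the measure rather than the paper's full multilayer community-detection pipeline.

    # Sketch of node "flexibility": the fraction of consecutive time windows
    # in which a node switches module allegiance. Labels here are invented;
    # in practice they come from multilayer community detection on fMRI data.
    import numpy as np

    # assignments[t, i] = module label of node i in time window t.
    assignments = np.array([[0, 0, 1, 1],
                            [0, 1, 1, 1],
                            [0, 1, 0, 1],
                            [0, 1, 0, 0]])

    def flexibility(assign):
        """Per-node fraction of window-to-window module changes."""
        changes = assign[1:] != assign[:-1]  # boolean array, shape (T-1, N)
        return changes.mean(axis=0)

    print(flexibility(assignments))  # node 0 never switches -> 0.0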