
    The first NINDS/NIBIB consensus meeting to define neuropathological criteria for the diagnosis of chronic traumatic encephalopathy.

    Chronic traumatic encephalopathy (CTE) is a neurodegenerative disease characterized by the abnormal accumulation of hyperphosphorylated tau protein within the brain. Like many other neurodegenerative conditions, CTE can at present be definitively diagnosed only by post-mortem examination of brain tissue. As the first part of a series of consensus panels funded by the NINDS/NIBIB to define the neuropathological criteria for CTE, preliminary neuropathological criteria were used by 7 neuropathologists to blindly evaluate 25 cases of various tauopathies, including CTE, Alzheimer's disease, progressive supranuclear palsy, argyrophilic grain disease, corticobasal degeneration, primary age-related tauopathy, and parkinsonism dementia complex of Guam. The results demonstrated good agreement among the neuropathologists who reviewed the cases (Cohen's kappa, 0.67) and even better agreement between reviewers and the diagnosis of CTE (Cohen's kappa, 0.78). Based on these results, the panel defined the pathognomonic lesion of CTE as an accumulation of abnormal hyperphosphorylated tau (p-tau) in neurons and astroglia distributed around small blood vessels at the depths of cortical sulci and in an irregular pattern. The group also defined supportive but non-specific p-tau-immunoreactive features of CTE as: pretangles and neurofibrillary tangles (NFTs) affecting superficial layers (II-III) of cerebral cortex; pretangles, NFTs, or extracellular tangles in CA2 and pretangles and proximal dendritic swellings in CA4 of the hippocampus; neuronal and astrocytic aggregates in subcortical nuclei; thorn-shaped astrocytes at the glial limitans of the subpial and periventricular regions; and large grain-like and dot-like structures. Supportive non-p-tau pathologies include TDP-43-immunoreactive neuronal cytoplasmic inclusions and dot-like structures in the hippocampus, anteromedial temporal cortex, and amygdala. The panel also recommended a minimum blocking and staining scheme for pathological evaluation and made recommendations for future study. This study provides the first step towards the development of validated neuropathological criteria for CTE and paves the way for future clinical and mechanistic studies.
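The two inter-rater agreement figures above are Cohen's kappa values, which correct raw agreement for agreement expected by chance. As a minimal illustration of how such a statistic is computed (the ratings below are hypothetical, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of cases on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical diagnoses by two reviewers over six cases.
a = ["CTE", "CTE", "AD", "PSP", "CTE", "AD"]
b = ["CTE", "AD",  "AD", "PSP", "CTE", "AD"]
print(round(cohens_kappa(a, b), 3))  # → 0.739
```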

    Programming with a Differentiable Forth Interpreter

    Given that in practice training data is scarce for all but a small set of problems, a core question is how to incorporate prior knowledge into a model. In this paper, we consider the case of prior procedural knowledge for neural networks, such as knowing how a program should traverse a sequence, but not what local actions should be performed at each step. To this end, we present an end-to-end differentiable interpreter for the programming language Forth which enables programmers to write program sketches with slots that can be filled with behaviour trained from program input-output data. We can optimise this behaviour directly through gradient descent techniques on user-specified objectives, and also integrate the program into any larger neural computation graph. We show empirically that our interpreter is able to effectively leverage different levels of prior program structure and learn complex behaviours such as sequence sorting and addition. When connected to the outputs of an LSTM and trained jointly, our interpreter achieves state-of-the-art accuracy for end-to-end reasoning about quantities expressed in natural language stories.
    Comment: 34th International Conference on Machine Learning (ICML 2017).
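The sketch-with-slots idea can be illustrated in miniature, far below the fidelity of the paper's actual differentiable Forth interpreter: fix the program structure (a traversal that folds each element into an accumulator) and leave the local action as a slot with a trainable parameter, fitted by gradient descent on input/output examples. Everything below is a hedged toy, not the authors' system.

```python
def run_sketch(seq, b):
    """Program sketch: the traversal is fixed prior knowledge;
    the per-element action `acc + b * x` is the trainable slot."""
    acc = 0.0
    for x in seq:
        acc = acc + b * x
    return acc

# Input/output examples whose hidden target behaviour is summation (b = 1).
examples = [([1.0, 2.0, 3.0], 6.0), ([4.0, 5.0], 9.0), ([7.0], 7.0)]

b, lr = 0.0, 0.01
for _ in range(200):
    grad = 0.0
    for seq, target in examples:
        err = run_sketch(seq, b) - target
        grad += 2 * err * sum(seq)  # d(err^2)/db for this linear slot
    b -= lr * grad / len(examples)

print(round(b, 3))  # converges toward 1.0
```

The slot here is linear so the gradient has a closed form; the paper's contribution is making an entire Forth machine differentiable so arbitrary slots can be trained inside larger neural computation graphs.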

    Stepping Stones to Inductive Synthesis of Low-Level Looping Programs

    Inductive program synthesis, from input/output examples, can provide an opportunity to automatically create programs from scratch without presupposing the algorithmic form of the solution. For induction of general programs with loops (as opposed to loop-free programs, or synthesis for domain-specific languages), the state of the art is at the level of introductory programming assignments. Most problems that require algorithmic subtlety, such as fast sorting, have remained out of reach without the benefit of significant problem-specific background knowledge. A key challenge is to identify cues that are available to guide search towards correct looping programs. We present MAKESPEARE, a simple delayed-acceptance hillclimbing method that synthesizes low-level looping programs from input/output examples. During search, delayed acceptance bypasses small gains to identify significantly-improved stepping stone programs that tend to generalize and enable further progress. The method performs well on a set of established benchmarks, and succeeds on the previously unsolved "Collatz Numbers" program synthesis problem. Additional benchmarks include the problem of rapidly sorting integer arrays, in which we observe the emergence of comb sort (a Shell sort variant that is empirically fast). MAKESPEARE has also synthesized a record-setting program on one of the puzzles from the TIS-100 assembly language programming game.
    Comment: AAAI 2019.
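The acceptance strategy described above is related in spirit to late-acceptance hill-climbing, where a candidate is accepted if it beats either the current solution or the solution from a fixed number of steps ago, letting the search step through mild losses toward better stepping stones. A generic sketch on a toy objective (not the paper's program synthesizer, whose search space is low-level programs):

```python
import random

def late_acceptance_hillclimb(init, mutate, cost, history_len=50,
                              steps=5000, seed=0):
    """Generic late-acceptance hill-climbing: accept a candidate if it is
    no worse than the current solution, or no worse than the solution
    held `history_len` steps ago."""
    rng = random.Random(seed)
    cur, cur_cost = init, cost(init)
    history = [cur_cost] * history_len
    best, best_cost = cur, cur_cost
    for t in range(steps):
        cand = mutate(cur, rng)
        c = cost(cand)
        if c <= cur_cost or c <= history[t % history_len]:
            cur, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
        history[t % history_len] = cur_cost
    return best, best_cost

# Toy objective: recover a target integer vector by single-coordinate nudges.
target = [3, -1, 4, 1, 5]

def cost(v):
    return sum((a - b) ** 2 for a, b in zip(v, target))

def mutate(v, rng):
    w = list(v)
    w[rng.randrange(len(w))] += rng.choice((-1, 1))
    return w

best, best_cost = late_acceptance_hillclimb([0] * 5, mutate, cost)
print(best_cost)  # → 0
```

The `history_len` parameter controls how much backsliding is tolerated; MAKESPEARE's delayed-acceptance rule plays the analogous role of skipping small gains in favour of significantly improved programs.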

    Data-Oblivious Data Structures


    Feat: Functional Enumeration of Algebraic Types

    In mathematics, an enumeration of a set S is a bijective function from (an initial segment of) the natural numbers to S. We define "functional enumerations" as efficiently computable such bijections. This paper describes a theory of functional enumeration and provides an algebra of enumerations closed under sums, products, guarded recursion and bijections. We partition each enumerated set into numbered, finite subsets. We provide a generic enumeration such that the number of each part corresponds to the size of its values (measured in the number of constructors). We implement our ideas in a Haskell library called testing-feat, and make the source code freely available. Feat provides efficient "random access" to enumerated values. The primary application is property-based testing, where it is used to define both random sampling (for example QuickCheck generators) and exhaustive enumeration (in the style of SmallCheck). We claim that functional enumeration is the best option for automatically generating test cases from large groups of mutually recursive syntax tree types. As a case study we use Feat to test the pretty-printer of the Template Haskell library (uncovering several bugs).
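The core idea of size-indexed parts with random access can be sketched outside Haskell. Below is a hedged Python illustration (not the testing-feat library) for the algebraic type `Tree = Leaf | Node Tree Tree`, where the size of a value is its number of `Node` constructors: each part has a computable cardinality, and any value can be retrieved by its index within its part.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def card(n):
    """Cardinality of the part of size n: trees with exactly n Node
    constructors (the Catalan numbers 1, 1, 2, 5, 14, ...)."""
    if n == 0:
        return 1  # just Leaf
    # A Node splits the remaining n-1 constructors between two subtrees.
    return sum(card(l) * card(n - 1 - l) for l in range(n))

def index(n, i):
    """Random access: the i-th tree (0-based) in the part of size n."""
    assert 0 <= i < card(n)
    if n == 0:
        return "Leaf"
    for l in range(n):  # left subtree gets l constructors, right gets n-1-l
        block = card(l) * card(n - 1 - l)
        if i < block:
            li, ri = divmod(i, card(n - 1 - l))
            return ("Node", index(l, li), index(n - 1 - l, ri))
        i -= block
    raise AssertionError("index out of range")

print(card(4))  # → 14
print(index(2, 0))
```

Because `card` and `index` are computable without materializing earlier values, both exhaustive enumeration and uniform random sampling within a size class fall out directly, which is what makes the approach attractive for property-based testing.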

    The Design of a System for Online Psychosocial Care: Balancing Privacy and Accountability in Sensitive Online Healthcare Environments

    The design of sensitive online healthcare systems must balance the requirements of privacy and accountability for the good of individuals, organizations, and society. Via a design science research approach, we build and evaluate a sophisticated software system for the online provision of psychosocial healthcare to distributed and vulnerable populations. Multidisciplinary research capabilities are embedded within the system to investigate the effectiveness of online treatment protocols. Throughout the development cycles of the system, we build an emergent design theory of scrutiny that applies a multi-layer protocol to support governance of privacy and accountability in sensitive online applications. The design goal is to balance stakeholder privacy protections with the need to provide for accountable interventions in critical and well-defined care situations. The research implications for the development and governance of online applications in numerous privacy-sensitive application areas are explored.

    Proof pearl: Abella formalization of the lambda-calculus cube property

    In 1994 Gérard Huet formalized in Coq the cube property of lambda-calculus residuals. His development is based on a clever idea: a beautiful inductive definition of residuals. However, his formalization contains a lot of noise concerning the representation of terms with binders. We re-interpret his work in Abella, a recent proof assistant based on higher-order abstract syntax and equipped with a nominal quantifier. By revisiting Huet's approach and exploiting the features of Abella, we obtain a strikingly compact and natural development, which makes Huet's idea really shine.

    XAI-TRIS: Non-linear benchmarks to quantify ML explanation performance

    The field of 'explainable' artificial intelligence (XAI) has produced highly cited methods that seek to make the decisions of complex machine learning (ML) methods 'understandable' to humans, for example by attributing 'importance' scores to input features. Yet, a lack of formal underpinning leaves it unclear what conclusions can safely be drawn from the results of a given XAI method, and has so far hindered the theoretical verification and empirical validation of XAI methods. This means that challenging non-linear problems, typically solved by deep neural networks, presently lack appropriate remedies. Here, we craft benchmark datasets for three different non-linear classification scenarios, in which the important class-conditional features are known by design, serving as ground truth explanations. Using novel quantitative metrics, we benchmark the explanation performance of a wide set of XAI methods across three deep learning model architectures. We show that popular XAI methods are often unable to significantly outperform random performance baselines and edge detection methods. Moreover, we demonstrate that explanations derived from different model architectures can be vastly different; thus, prone to misinterpretation even under controlled conditions.
    Comment: Under review.
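When the truly important features are known by design, explanation performance can be scored directly against that ground truth. The sketch below illustrates one simple metric of this kind (the name and details are illustrative, not XAI-TRIS's actual definitions): the fraction of the k highest-magnitude attributions that fall on genuinely important features, compared for a hypothetical XAI method and a random baseline.

```python
def explanation_precision_at_k(attributions, truth_mask, k):
    """Fraction of the k highest-|attribution| features that belong to
    the ground-truth important set (truth_mask is 0/1 per feature)."""
    ranked = sorted(range(len(attributions)),
                    key=lambda i: abs(attributions[i]), reverse=True)
    return sum(truth_mask[i] for i in ranked[:k]) / k

# Hypothetical example: 8 features, 3 of them important by design.
truth = [1, 1, 0, 0, 1, 0, 0, 0]
xai_scores = [0.9, 0.7, 0.1, 0.0, 0.6, 0.2, 0.0, 0.1]   # good attribution
random_scores = [0.3, 0.1, 0.8, 0.9, 0.2, 0.7, 0.4, 0.5]  # baseline

print(explanation_precision_at_k(xai_scores, truth, 3))     # → 1.0
print(explanation_precision_at_k(random_scores, truth, 3))  # → 0.0
```

Comparing an XAI method's score against such baselines is exactly the kind of test the paper reports many popular methods failing on its non-linear benchmarks.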