
    A programmable BIST architecture for clusters of Multiple-Port SRAMs

    This paper presents a BIST architecture, based on a single microprogrammable BIST processor and a set of memory wrappers, designed to simplify the test of a system containing many distributed multi-port SRAMs differing in size (number of bits, number of words), access protocol (asynchronous, synchronous), and timing.
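    The abstract does not spell out the processor's microcode, so the following is only a schematic sketch of the pattern it describes: a programmable test engine feeding March-style operations to a memory port through a wrapper. The `MemoryWrapper` API and the instruction encoding are illustrative assumptions; the March C- sequence itself is a standard memory test.

```python
# Hypothetical sketch: a BIST "microprogram" as a list of March elements,
# executed against a simple memory-wrapper interface. The wrapper API and
# instruction encoding are illustrative assumptions, not the paper's design.

class MemoryWrapper:
    """Wraps one SRAM port: translates generic BIST commands to the port."""
    def __init__(self, n_words):
        self.mem = [0] * n_words
        self.n_words = n_words

    def write(self, addr, bit):
        self.mem[addr] = bit

    def read(self, addr):
        return self.mem[addr]

# March C- as (direction, [operations]) elements:
#   'u' = ascending addresses, 'd' = descending,
#   ('r', x) = read and expect x, ('w', x) = write x.
MARCH_C_MINUS = [
    ('u', [('w', 0)]),
    ('u', [('r', 0), ('w', 1)]),
    ('u', [('r', 1), ('w', 0)]),
    ('d', [('r', 0), ('w', 1)]),
    ('d', [('r', 1), ('w', 0)]),
    ('d', [('r', 0)]),
]

def run_bist(wrapper, program):
    """Execute a March program; return addresses where a read miscompared."""
    failures = []
    for direction, ops in program:
        addrs = range(wrapper.n_words)
        if direction == 'd':
            addrs = reversed(addrs)
        for addr in addrs:
            for op, val in ops:
                if op == 'w':
                    wrapper.write(addr, val)
                elif wrapper.read(addr) != val:
                    failures.append(addr)
    return failures

if __name__ == '__main__':
    print(run_bist(MemoryWrapper(16), MARCH_C_MINUS))  # [] for a fault-free SRAM
```

    In the architecture the paper describes, a single BIST processor would drive many such wrappers, one per distributed SRAM, adapting addressing and timing to each memory.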

    The relationship between CSF tau markers, hippocampal volume and delayed primacy performance in cognitively intact elderly individuals.

    BACKGROUND: Primacy performance in recall has been shown to predict cognitive decline in cognitively intact elderly individuals, and conversion from mild cognitive impairment (MCI) to Alzheimer's disease (AD). Delayed primacy performance, but not delayed non-primacy performance, has been shown to be associated with hippocampal volume in cognitively intact older individuals. Since the presence of neurofibrillary tangles is an early sign of AD-related pathology, we set out to test whether cerebrospinal fluid (CSF) levels of tau had an effect on delayed primacy performance, while controlling for hippocampal volume and CSF Aβ 1-42 levels. METHODS: Forty-seven cognitively intact individuals aged 60 or older underwent a multi-session study including lumbar puncture, an MRI scan of the head, and memory testing. RESULTS: Our regression analyses show that CSF levels of hyperphosphorylated (P) tau are associated with reduced delayed primacy performance only when hippocampal volumes are smaller. CONCLUSION: Our findings suggest that hippocampal size may play a protective role against the negative effects of P-tau on memory.
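    For readers who want to see the shape of the analysis: the reported result (a P-tau effect that appears only at smaller hippocampal volumes) is the signature of an interaction term in a regression model. A minimal sketch with statsmodels, using synthetic data and hypothetical variable names rather than the study's actual dataset:

```python
# Illustrative sketch, not the study's analysis script: the effect of CSF
# P-tau on delayed primacy recall is allowed to depend on hippocampal volume
# via an interaction term. All variable names and the synthetic data are
# hypothetical; only the model form mirrors the reported analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 47                                    # sample size reported in the abstract
df = pd.DataFrame({
    'ptau': rng.normal(50, 15, n),        # CSF P-tau (arbitrary units)
    'hippocampal_volume': rng.normal(7.5, 0.8, n),   # hypothetical, cm^3
    'abeta42': rng.normal(800, 200, n),   # CSF Abeta 1-42 (arbitrary units)
})
# Synthetic outcome in which P-tau hurts recall more at smaller volumes:
df['delayed_primacy'] = (
    10 - 0.05 * df.ptau + 0.004 * df.ptau * df.hippocampal_volume
    + rng.normal(0, 1, n)
)

# 'ptau * hippocampal_volume' expands to both main effects plus interaction.
model = smf.ols('delayed_primacy ~ ptau * hippocampal_volume + abeta42',
                data=df).fit()
print(model.summary())
# A significant ptau:hippocampal_volume coefficient would indicate that the
# P-tau effect is conditional on hippocampal size, as the abstract reports.
```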

    PDE-Foam - a probability-density estimation method using self-adapting phase-space binning

    Probability Density Estimation (PDE) is a multivariate discrimination technique based on sampling signal and background densities defined by event samples from data or Monte Carlo (MC) simulations in a multi-dimensional phase space. In this paper, we present a modification of the PDE method that uses a self-adapting binning method to divide the multi-dimensional phase space into a finite number of hyper-rectangles (cells). The binning algorithm adjusts the size and position of a predefined number of cells inside the multi-dimensional phase space, minimising the variance of the signal and background densities inside the cells. The implementation of the binning algorithm, PDE-Foam, is based on the MC event-generation package Foam. We present performance results for representative examples (toy models) and discuss the dependence of the obtained results on the choice of parameters. The new PDE-Foam shows improved classification capability for small training samples and reduced classification time compared to the original PDE method based on range searching. (Comment: 19 pages, 11 figures; replaced with revised version accepted for publication in NIM A; corrected typos in the description of Fig. 7.)
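    A much-simplified sketch of the self-adapting binning idea, not the Foam algorithm itself: repeatedly split the cell whose sampled density varies the most, at the median of its widest dimension. The real PDE-Foam explores candidate split positions to minimise the variance of the signal and background densities inside the cells; this toy version only conveys the mechanism.

```python
# Toy self-adapting binning in the spirit of PDE-Foam (assumed simplification,
# not the published algorithm): grow a fixed number of hyper-rectangular cells
# by always splitting the cell with the largest density variance.
import numpy as np

def adaptive_binning(points, values, n_cells):
    """points: (N, D) sample positions; values: (N,) sampled density values."""
    cells = [np.arange(len(points))]            # one cell holding every sample
    while len(cells) < n_cells:
        # split the cell whose contained density values vary the most
        i = max(range(len(cells)),
                key=lambda j: values[cells[j]].var() if len(cells[j]) > 1 else 0.0)
        cell = cells.pop(i)
        dim = np.ptp(points[cell], axis=0).argmax()     # widest dimension
        cut = np.median(points[cell, dim])
        left = cell[points[cell, dim] <= cut]
        right = cell[points[cell, dim] > cut]
        if len(left) == 0 or len(right) == 0:           # degenerate split: stop
            cells.append(cell)
            break
        cells += [left, right]
    return cells

rng = np.random.default_rng(0)
pts = rng.uniform(size=(1000, 2))
vals = np.exp(-10.0 * ((pts - 0.5) ** 2).sum(axis=1))   # a peaked toy density
print([len(c) for c in adaptive_binning(pts, vals, 8)])  # cells shrink near the peak
```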

    Multi-core job submission and grid resource scheduling for ATLAS AthenaMP

    AthenaMP is the multi-core implementation of the ATLAS software framework and allows the efficient sharing of memory pages between multiple worker processes. This has now been validated for production and delivers a significant reduction in the overall application memory footprint with negligible CPU overhead. Before AthenaMP can be routinely run on the LHC Computing Grid, it must be determined how the computing resources available to ATLAS can best exploit the notable improvements delivered by switching to this multi-process model. A study into the effectiveness and scalability of AthenaMP in a production environment will be presented. Best practices for configuring the main LRMS implementations currently used by grid sites will be identified in the context of multi-core scheduling optimisation.
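    The memory-page sharing that AthenaMP exploits comes from the copy-on-write semantics of fork(): read-only pages initialised in the parent are shared by all forked workers until one of them writes to them. A minimal POSIX-only sketch of that pattern (illustrative; not AthenaMP's actual event loop):

```python
# Minimal sketch of fork-based copy-on-write memory sharing, the mechanism
# behind AthenaMP's reduced footprint. POSIX only; all names are illustrative.
import os

# Large read-only state (think conditions data, geometry, ...) initialised
# once in the parent; after fork(), children share these pages until written.
shared_state = list(range(1_000_000))

N_WORKERS, N_EVENTS = 4, 100

def worker(worker_id, n_events):
    # Round-robin event assignment; read-only access triggers no page copies.
    for event in range(worker_id, n_events, N_WORKERS):
        _ = shared_state[event % len(shared_state)]
    os._exit(0)                      # child exits without running parent code

pids = []
for w in range(N_WORKERS):
    pid = os.fork()
    if pid == 0:                     # child process
        worker(w, N_EVENTS)
    pids.append(pid)                 # parent keeps track of children
for pid in pids:
    os.waitpid(pid, 0)
print("all workers done")
```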

    Recurrent Saliency Transformation Network: Incorporating Multi-Stage Visual Cues for Small Organ Segmentation

    We aim at segmenting small organs (e.g., the pancreas) from abdominal CT scans. As the target often occupies a relatively small region of the input image, deep neural networks can be easily confused by the complex and variable background. To alleviate this, researchers proposed a coarse-to-fine approach, which used the prediction from the first (coarse) stage to indicate a smaller input region for the second (fine) stage. Despite its effectiveness, this algorithm dealt with the two stages individually, lacking a global energy function to optimize, which limited its ability to incorporate multi-stage visual cues; the missing contextual information led to unsatisfactory convergence across iterations, so that the fine stage sometimes produced even lower segmentation accuracy than the coarse stage. This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration into spatial weights and applies these weights to the current iteration. This brings two-fold benefits: in training, it allows joint optimization over the deep networks dealing with different input scales; in testing, it propagates multi-stage visual information through the iterations to improve segmentation accuracy. Experiments on the NIH pancreas segmentation dataset demonstrate state-of-the-art accuracy, outperforming the previous best by an average of over 2%. Much higher accuracies are also reported on several small organs in a larger dataset that we collected. In addition, our approach enjoys better convergence properties, making it more efficient and reliable in practice. (Comment: Accepted to CVPR 2018; 10 pages, 6 figures.)
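    A numpy sketch of the saliency-transformation loop described above, with a stand-in `segment` function in place of the paper's segmentation network: the previous iteration's probability map is converted into spatial weights that modulate the input to the next iteration.

```python
# Illustrative sketch of the saliency-transformation idea; the weighting
# function and the toy "network" are assumptions, not the paper's modules.
import numpy as np

def saliency_transform(prob_map, floor=0.1):
    """Turn a probability map into multiplicative spatial weights."""
    return floor + (1.0 - floor) * prob_map   # keep a minimum weight everywhere

def recurrent_segmentation(image, segment, n_iters=3):
    prob = segment(image)                     # coarse stage: whole input
    for _ in range(n_iters):
        weighted = image * saliency_transform(prob)
        prob = segment(weighted)              # fine stage on re-weighted input
    return prob

def toy_segment(x):
    # Stand-in network: sigmoid of mean-centred intensity as a "probability".
    return 1.0 / (1.0 + np.exp(-(x - x.mean())))

img = np.random.default_rng(1).random((64, 64))
print(recurrent_segmentation(img, toy_segment).shape)   # (64, 64)
```

    In the paper the two stages are real deep networks trained jointly through this loop; here the loop structure is the only part being illustrated.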

    An Adaptive Sampling Scheme to Efficiently Train Fully Convolutional Networks for Semantic Segmentation

    Deep convolutional neural networks (CNNs) have shown excellent performance in object recognition tasks and dense classification problems such as semantic segmentation. However, training deep neural networks on large and sparse datasets is still challenging and can require large amounts of computation and memory. In this work, we address the task of performing semantic segmentation on large datasets, such as three-dimensional medical images. We propose an adaptive sampling scheme that uses a posteriori error maps, generated throughout training, to focus sampling on difficult regions, resulting in improved learning. Our contribution is threefold: 1) we give a detailed description of the proposed sampling algorithm to speed up and improve learning performance on large images; 2) we propose a deep dual-path CNN that captures information at fine and coarse scales, resulting in a network with a large field of view and high-resolution outputs; 3) we show that our method is able to attain new state-of-the-art results on the VISCERAL Anatomy benchmark.
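    A minimal sketch of the error-guided sampling step (assumed details, not the paper's exact scheme): patch centres are drawn with probability proportional to the current per-voxel error map, so difficult regions are sampled more often.

```python
# Hypothetical sketch of adaptive, error-guided patch sampling; the epsilon
# floor and the sampling granularity are assumptions for illustration.
import numpy as np

def sample_patch_centres(error_map, n_samples, rng):
    """Draw patch centres with probability proportional to current error."""
    p = error_map.ravel() + 1e-8        # small floor keeps every voxel reachable
    p /= p.sum()
    flat = rng.choice(error_map.size, size=n_samples, p=p)
    return np.unravel_index(flat, error_map.shape)

rng = np.random.default_rng(0)
err = np.zeros((32, 32))
err[8:12, 8:12] = 1.0                   # pretend the model currently fails here
ys, xs = sample_patch_centres(err, 5, rng)
print(list(zip(ys, xs)))                # centres cluster in the hard region
```

    During training, the error map would be refreshed from the network's recent predictions so that sampling keeps tracking whatever regions remain difficult.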