
    Learning to Generate Posters of Scientific Papers

    Researchers often summarize their work in the form of posters. Posters provide a coherent and efficient way to convey core ideas from scientific papers. Generating a good scientific poster, however, is a complex and time-consuming cognitive task, since such posters need to be readable, informative, and visually aesthetic. In this paper, for the first time, we study the challenging problem of learning to generate posters from scientific papers. To this end, a data-driven framework that utilizes graphical models is proposed. Specifically, given the content to display, the key elements of a good poster, including the panel layout and the attributes of each panel, are learned and inferred from data. Then, given the inferred layout and attributes, the composition of graphical elements within each panel is synthesized. To learn and validate our model, we collect and make public a Poster-Paper dataset, which consists of scientific papers and corresponding posters with exhaustively labelled panels and attributes. Qualitative and quantitative results indicate the effectiveness of our approach. (Comment: in Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI'16), Phoenix, AZ, 2016.)

    2-D iteratively reweighted least squares lattice algorithm and its application to defect detection in textured images

    In this paper, a 2-D iteratively reweighted least squares lattice algorithm, which is robust to outliers, is introduced and applied to the defect detection problem in textured images. First, the philosophy of using different optimization functions that result in a weighted least squares solution in the theory of 1-D robust regression is extended to 2-D. Then a new algorithm is derived which combines 2-D robust regression concepts with the 2-D recursive least squares lattice algorithm. With this approach, whatever the probability distribution of the prediction error may be, small weights are assigned to the outliers, so that the least squares algorithm becomes less sensitive to them. Implementation of the proposed iteratively reweighted least squares lattice algorithm for defect detection in textured images is then considered. The performance evaluation, in terms of defect detection rate, demonstrates the value of the proposed algorithm in reducing the effect of the outliers, which generally correspond to false alarms when classifying textures as defective or nondefective.
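    The 1-D iteratively reweighted least squares idea at the core of this extension can be sketched in a few lines; this is an illustrative robust-regression sketch using Huber-style weights, not the authors' 2-D lattice algorithm (the function name `irls_huber` and all parameter values are assumptions):

```python
import numpy as np

def irls_huber(X, y, delta=1.0, n_iter=20):
    """Iteratively reweighted least squares with Huber-style weights.

    Outlying residuals receive weights < 1, so the fit is far less
    sensitive to them than an ordinary least squares fit would be.
    """
    w = np.ones(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        # Weighted least squares step: solve (X^T W X) beta = X^T W y
        Xw = X * w[:, None]
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        # Re-derive weights from the residuals: 1 inside [-delta, delta],
        # delta/|r| outside, so gross outliers are heavily downweighted
        r = np.maximum(np.abs(y - X @ beta), 1e-12)
        w = np.where(r <= delta, 1.0, delta / r)
    return beta

# Toy example: a noisy line with one gross outlier
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 3.0 * x + 0.01 * rng.standard_normal(50)
y[10] += 10.0  # the outlier
beta = irls_huber(X, y, delta=0.1)
```

    The first iteration (all weights equal to one) is plain least squares; each subsequent pass shrinks the influence of points with large residuals, which is exactly the mechanism the abstract describes for suppressing false alarms.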

    Numerical methods for coupled reconstruction and registration in digital breast tomosynthesis.

    Digital Breast Tomosynthesis (DBT) provides an insight into the fine details of normal fibroglandular tissues and abnormal lesions by reconstructing a pseudo-3D image of the breast. In this respect, DBT overcomes a major limitation of conventional X-ray mammography by reducing the confounding effects caused by the superposition of breast tissue. In a breast cancer screening or diagnostic context, a radiologist is interested in detecting change, which might be indicative of malignant disease. To help automate this task, image registration is required to establish spatial correspondence between time points. Typically, images, such as MRI or CT, are first reconstructed and then registered. This approach can be effective if reconstructing using a complete set of data. However, for ill-posed, limited-angle problems such as DBT, estimating the deformation is complicated by the significant artefacts associated with the reconstruction, leading to severe inaccuracies in the registration. This paper presents a mathematical framework which couples the two tasks and jointly estimates both image intensities and the parameters of a transformation. Under this framework, we compare an iterative method and a simultaneous method, both of which tackle the problem of comparing DBT data by combining reconstruction of a pair of temporal volumes with their registration. We evaluate our methods using various computational digital phantoms, uncompressed breast MR images, and in-vivo DBT simulations. Firstly, we compare both iterative and simultaneous methods to the conventional, sequential method using an affine transformation model. We show that jointly estimating image intensities and parametric transformations gives superior results with respect to reconstruction fidelity and registration accuracy. Also, we incorporate a non-rigid B-spline transformation model into our simultaneous method. The results demonstrate a visually plausible recovery of the deformation with preservation of the reconstruction fidelity.
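    A coupled objective of the general shape described above can be written schematically as follows; the notation is illustrative (a generic system matrix $A$, projection data $\mathbf{y}_1, \mathbf{y}_2$, transformation $T_{\boldsymbol{\theta}}$, and regularizer $R$ are our assumptions, not the paper's exact formulation):

```latex
\min_{\mathbf{f},\,\boldsymbol{\theta}}\;
  \underbrace{\|A\mathbf{f} - \mathbf{y}_1\|_2^2}_{\text{reconstruction of volume 1}}
  + \underbrace{\|A\,T_{\boldsymbol{\theta}}(\mathbf{f}) - \mathbf{y}_2\|_2^2}_{\text{registration to volume 2}}
  + \lambda R(\mathbf{f})
```

    Estimating $\mathbf{f}$ and $\boldsymbol{\theta}$ jointly, either simultaneously or by alternating the two subproblems, is what distinguishes the coupled approach from the sequential reconstruct-then-register pipeline criticized above.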

    Ensemble of random forests One vs. Rest classifiers for MCI and AD prediction using ANOVA cortical and subcortical feature selection and partial least squares.

    Background: Alzheimer’s disease (AD) is the most common cause of dementia in the elderly and affects approximately 30 million individuals worldwide. Mild cognitive impairment (MCI) is very frequently a prodromal phase of AD, and existing studies have suggested that people with MCI tend to progress to AD at a rate of about 10% to 15% per year. However, the ability of clinicians and machine learning systems to predict AD from MRI biomarkers at an early stage is still a challenging problem, and solving it could have a great impact on improving treatments. Method: The proposed system, developed by the SiPBA-UGR team for this challenge, is based on feature standardization, ANOVA feature selection, partial least squares feature dimension reduction, and an ensemble of one vs. rest random forest classifiers. With the aim of improving its performance when discriminating healthy controls (HC) from MCI, a second binary classification level was introduced that reconsiders the HC and MCI predictions of the first level. Results: The system was trained and evaluated on ADNI datasets consisting of T1-weighted MRI morphological measurements from HC, stable MCI, converter MCI, and AD subjects. The proposed system yields a 56.25% classification score on the test subset, which consists of 160 real subjects. Comparison with Existing Method(s): The classifier yielded the best performance when compared to: i) One vs. One (OvO), One vs. Rest (OvR), and error-correcting output codes (ECOC) as strategies for reducing the multiclass classification task to multiple binary classification problems; ii) support vector machines, gradient boosting classifiers, and random forests as base binary classifiers; and iii) bagging ensemble learning. Conclusions: A robust method has been proposed for the international challenge on MCI prediction based on MRI data. This work was supported by the MINECO/FEDER under the TEC2015-64718-R project, the Consejería de Economía, Innovación, Ciencia y Empleo of the Junta de Andalucía under the P11-TIC-7103 Excellence Project, and the Salvador de Madariaga Mobility Grants 2017.

    Optimized One vs One approach in multiclass classification for early Alzheimer’s Disease and Mild Cognitive Impairment diagnosis

    The detection of Alzheimer’s Disease in its early stages is crucial for patient care and drug development. Motivated by this fact, the neuroimaging community has extensively applied machine learning techniques to the early diagnosis problem, with promising results. The organization of challenges has helped the community to address the different problems raised and to standardize the approaches to them. In this work we use the data from the international challenge for automated prediction of MCI from MRI data to address the multiclass classification problem. We propose a novel multiclass classification approach that addresses the outlier detection problem, uses pairwise t-test feature selection, projects the selected features onto a Partial Least Squares multiclass subspace, and applies one-versus-one error-correcting output codes classification. The proposed method yields an accuracy of 67% in the multiclass classification, outperforming all the proposals of the competition. Funding: Ministerio de Innovación y Ciencia, project DEEP-NEUROMAPS RTI2018-098913-B100; Consejería de Economía, Innovación, Ciencia y Empleo of the Junta de Andalucía, A-TIC-080-UGR18 TIC FRONTERA; German Research Foundation (DFG), FPU 18/04902; United States Department of Health & Human Services, National Institutes of Health (NIH), National Institute of Neurological Disorders & Stroke (NINDS), U01 AG024904; DOD ADNI, Department of Defense, W81XWH-12-2-001.
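    The two decomposition strategies named here, one-versus-one and error-correcting output codes, are both available off the shelf in scikit-learn; the sketch below demonstrates them on the Iris data with a logistic-regression base learner (a stand-in for illustration, not the paper's PLS-based pipeline):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsOneClassifier, OutputCodeClassifier

X, y = load_iris(return_X_y=True)
base = LogisticRegression(max_iter=500)

# One-vs-one: one binary classifier per pair of classes
# (3 pairwise problems for 3 classes)
ovo = OneVsOneClassifier(base).fit(X, y)

# Error-correcting output codes: each class is assigned a binary code
# word; code_size controls the number of binary problems per class, and
# prediction picks the class whose code word is closest to the outputs
ecoc = OutputCodeClassifier(base, code_size=2, random_state=0).fit(X, y)

acc_ovo, acc_ecoc = ovo.score(X, y), ecoc.score(X, y)
```

    Combining the two, as the abstract does, means running the pairwise problems through an ECOC-style decoding of their joint outputs rather than simple majority voting.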

    Parkinson’s Disease Detection Using Isosurfaces-Based Features and Convolutional Neural Networks

    Computer-aided diagnosis systems based on brain imaging are an important tool to assist in the diagnosis of Parkinson’s disease, whose ultimate goal is the detection of the disease by automatically recognizing the patterns that characterize it. In recent times Convolutional Neural Networks (CNN) have proved to be remarkably useful for that task. The drawback, however, is that 3D brain images contain a huge amount of information that leads to complex CNN architectures. When these architectures become too complex, classification performance often degrades because of the limitations of the training algorithm and overfitting. Thus, this paper proposes the use of isosurfaces as a way to reduce this amount of data while keeping the most relevant information. These isosurfaces are then used to implement a classification system based on two of the most well-known CNN architectures, LeNet and AlexNet, which classifies DaTScan images with an average accuracy of 95.1% and AUC = 97%, values comparable to (and slightly better than) those obtained for most recently proposed systems. It can be concluded, therefore, that the computation of isosurfaces reduces the complexity of the inputs significantly, resulting in high classification accuracies with a reduced computational burden. Funding: MINECO/FEDER under the TEC2015-64718-R, PSI2015-65848-R, PGC2018-098813-B-C32, and RTI2018-098913-B-100 projects.
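    Isosurface extraction itself is a standard operation; the sketch below shows it with scikit-image's marching cubes on a synthetic distance volume (a stand-in for a DaTScan image, with the grid size and the intensity level chosen arbitrarily for illustration):

```python
import numpy as np
from skimage import measure

# Synthetic 3-D volume: distance from the centre of a 33x33x33 grid
z, y, x = np.mgrid[-16:17, -16:17, -16:17]
vol = np.sqrt(x**2 + y**2 + z**2).astype(float)

# Marching cubes extracts the isosurface at a chosen intensity level as
# a triangle mesh; here level=10 recovers roughly a sphere of radius 10
verts, faces, normals, values = measure.marching_cubes(vol, level=10.0)

# A fixed-size rendering or rasterization of (verts, faces), rather than
# the full volume, is what would be fed to a 2-D CNN such as LeNet or AlexNet
```

    The dimensionality reduction the abstract claims comes from replacing the dense voxel grid with this much sparser surface representation.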

    Feature Extraction

    Feature extraction is a procedure aimed at selecting and transforming a data set in order to increase the performance of a pattern recognition or machine learning system. Nowadays, since the amount of data available and its dimension are growing exponentially, it is a fundamental procedure for avoiding overfitting and the curse of dimensionality, while, in some cases, allowing an interpretative analysis of the data. The topic itself is a thriving discipline of study, and it is difficult to address every single feature extraction algorithm. Therefore, we provide an overview of the topic, introducing widely used techniques while at the same time presenting some domain-specific feature extraction algorithms. Finally, as a case study, we illustrate the vastness of the field by analysing the usage and impact of feature extraction in neuroimaging.
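    As one concrete instance of the "widely used techniques" mentioned above, principal component analysis fits in a few lines of NumPy; the data here are synthetic (a 2-D latent signal embedded in 50 dimensions) and the function name `pca` is our own:

```python
import numpy as np

def pca(X, k):
    """Project X onto its top-k principal components via the SVD."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal axes, ordered by explained variance
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
# 200 samples in 50 dimensions, but variance concentrated in 2 directions
latent = rng.standard_normal((200, 2))
A = rng.standard_normal((2, 50))
X = latent @ A + 0.01 * rng.standard_normal((200, 50))

Z = pca(X, 2)  # 2-D extracted features capture almost all the variance
```

    This is exactly the overfitting/curse-of-dimensionality trade-off the abstract describes: 50 raw dimensions collapse to 2 informative features before any classifier sees the data.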

    A Review on Data Fusion of Multidimensional Medical and Biomedical Data

    Data fusion aims to provide a more accurate description of a sample than any single source of data alone. At the same time, data fusion minimizes the uncertainty of the results by combining data from multiple sources. Both of these goals improve the characterization of samples and might improve clinical diagnosis and prognosis. In this paper, we present an overview of the advances achieved over the last decades in data fusion approaches in the context of the medical and biomedical fields. We collected approaches for interpreting multiple sources of data in different combinations: image to image, image to biomarker, spectra to image, spectra to spectra, spectra to biomarker, and others. We found that the most prevalent combination is image-to-image fusion and that most data fusion approaches were applied together with deep learning or machine learning methods.