
    Multi-Planar Deep Segmentation Networks for Cardiac Substructures from MRI and CT

    Non-invasive detection of cardiovascular disorders from radiology scans requires quantitative image analysis of the heart and its substructures. There are well-established measurements that radiologists use for disease assessment, such as ejection fraction, volume of the four chambers, and myocardium mass. These measurements are derived as outcomes of precise segmentation of the heart and its substructures. The aim of this paper is to provide such measurements through an accurate image segmentation algorithm that automatically delineates seven substructures of the heart from MRI and/or CT scans. Our proposed method is based on multi-planar deep convolutional neural networks (CNN) with an adaptive fusion strategy in which we automatically utilize complementary information from different planes of the 3D scans for improved delineations. For CT and MRI, we have separately designed three CNNs (with the same architectural configuration) for the three planes, and have trained the networks from scratch for voxel-wise labeling of the following cardiac structures: myocardium of the left ventricle (Myo), left atrium (LA), left ventricle (LV), right atrium (RA), right ventricle (RV), ascending aorta (Ao), and main pulmonary artery (PA). We have evaluated the proposed method with 4-fold cross-validation on the multi-modality whole heart segmentation challenge (MM-WHS 2017) dataset. Precision and Dice index of 0.93 and 0.90 were achieved for CT images, and 0.87 and 0.85 for MR images, respectively. With the GPU/CUDA implementation, a CT volume was segmented in about 50 seconds and an MRI scan in around 17 seconds. Comment: The paper is accepted to STACOM 201
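    The abstract describes per-plane CNN predictions combined by an adaptive fusion strategy and evaluated with the Dice index. Below is a minimal sketch of those two pieces, assuming NumPy probability volumes as inputs; the fixed fusion weights and function names are illustrative stand-ins, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): fusing per-plane CNN probability maps
# and scoring a labeling with the Dice index, assuming NumPy arrays as inputs.
import numpy as np

def fuse_planes(prob_axial, prob_sagittal, prob_coronal, weights=(1/3, 1/3, 1/3)):
    """Weighted average of per-plane class-probability volumes.

    Each input has shape (num_classes, D, H, W). The fixed weights stand in for
    the paper's adaptive fusion strategy and are purely illustrative.
    """
    w_a, w_s, w_c = weights
    fused = w_a * prob_axial + w_s * prob_sagittal + w_c * prob_coronal
    return fused.argmax(axis=0)              # voxel-wise label map

def dice_index(pred, truth, label):
    """Dice overlap for one cardiac substructure label (e.g. LV, RA, Ao)."""
    p, t = (pred == label), (truth == label)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0
```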

    Automatic Detection and Categorization of Election-Related Tweets

    With the rise in popularity of public social media and micro-blogging services, most notably Twitter, people have found a venue to hear and be heard by their peers without an intermediary. As a consequence, and aided by the public nature of Twitter, political scientists now potentially have the means to analyse and understand the narratives that organically form, spread, and decline among the public in a political campaign. However, the volume and diversity of the conversation on Twitter, combined with its noisy and idiosyncratic nature, make this a hard task. Thus, advanced data mining and language processing techniques are required to process and analyse the data. In this paper, we present and evaluate a technical framework, based on recent advances in deep neural networks, for identifying and analysing election-related conversation on Twitter on a continuous, longitudinal basis. Our models can detect election-related tweets with an F-score of 0.92 and can categorize these tweets into 22 topics with an F-score of 0.90. Comment: ICWSM'16, May 17-20, 2016, Cologne, Germany. In Proceedings of the 10th AAAI Conference on Weblogs and Social Media (ICWSM 2016). Cologne, Germany
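    The framework is a two-stage text-classification pipeline: detect election-related tweets, then assign one of 22 topics, evaluating each stage with the F-measure. The sketch below illustrates that setup with a TF-IDF plus logistic-regression stand-in on toy data; the paper itself uses deep neural networks, and all data and labels here are hypothetical.

```python
# Minimal sketch of the two-stage setup: stage 1 detects election-related tweets,
# stage 2 would categorize them into 22 topics. The classifier and toy data are
# illustrative stand-ins for the paper's deep neural networks.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

tweets = ["vote for the candidate", "great debate tonight", "lovely weather today",
          "my cat is sleeping", "polling stations open early", "new movie was fun"]
is_election = [1, 1, 0, 0, 1, 0]                 # stage-1 labels (toy data)

detector = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
detector.fit(tweets, is_election)
pred = detector.predict(tweets)
print("detection F1:", f1_score(is_election, pred))

# Stage 2 would be an analogous classifier trained only on election-related
# tweets, with 22 topic labels and a macro-averaged F1 for evaluation.
```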

    A weighted belief-propagation algorithm to estimate volume-related properties of random polytopes

    In this work we introduce a novel weighted message-passing algorithm based on the cavity method to estimate volume-related properties of random polytopes, properties which are relevant in various research fields ranging from metabolic networks, to neural networks, to compressed sensing. Unlike the usual approach of approximating the real-valued cavity marginal distributions by a few parameters, we propose an algorithm that faithfully represents the entire marginal distribution. We explain various alternatives for implementing the algorithm and benchmark the theoretical findings by showing concrete applications to random polytopes. The results obtained with our approach are found to be in very good agreement with the estimates produced by the Hit-and-Run algorithm, which is known to produce uniform samples. Comment: 17 pages, 6 figures
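    The abstract benchmarks against the Hit-and-Run algorithm, a standard Markov chain for uniform sampling from a convex body. As a point of reference, here is a minimal Hit-and-Run sketch for a bounded polytope {x : Ax <= b}; the interface and the unit-cube example are assumptions for illustration, not code from the paper.

```python
# Minimal Hit-and-Run sketch (assumed setup): sample from {x : A x <= b},
# the reference method the abstract compares against. Assumes a bounded polytope.
import numpy as np

def hit_and_run(A, b, x0, n_samples, rng=np.random.default_rng(0)):
    """Uniform-ish samples from {x : A x <= b}; x0 must be strictly feasible."""
    x, samples = np.asarray(x0, float), []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)                    # random direction on the sphere
        ad, slack = A @ d, b - A @ x              # feasible step range along d
        t_max = np.min(slack[ad > 1e-12] / ad[ad > 1e-12])
        t_min = np.max(slack[ad < -1e-12] / ad[ad < -1e-12])
        x = x + rng.uniform(t_min, t_max) * d     # jump to a random feasible point
        samples.append(x.copy())
    return np.array(samples)

# Example: the 3-dimensional unit cube 0 <= x_i <= 1.
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.concatenate([np.ones(3), np.zeros(3)])
pts = hit_and_run(A, b, x0=np.full(3, 0.5), n_samples=1000)
print("estimated centroid:", pts.mean(axis=0))
```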

    Adaptive Online Sequential ELM for Concept Drift Tackling

    A machine learning method needs to adapt to changes in the environment over time. Such changes are known as concept drift. In this paper, we propose a concept drift tackling method as an enhancement of the Online Sequential Extreme Learning Machine (OS-ELM) and Constructive Enhancement OS-ELM (CEOS-ELM) by adding adaptive capability for classification and regression problems. The scheme is named adaptive OS-ELM (AOS-ELM). It is a single-classifier scheme that handles real drift, virtual drift, and hybrid drift well. The AOS-ELM also works well for sudden drift and recurrent context change types. The scheme is a simple unified method implemented in a few lines of code. We evaluated AOS-ELM on regression and classification problems using public concept drift data sets (SEA and STAGGER) and other public data sets such as MNIST, USPS, and IDS. Experiments show that our method gives a higher kappa value than a multi-classifier ELM ensemble. Even though AOS-ELM in practice does not need an increase in hidden nodes, we address some issues related to increasing the number of hidden nodes, such as the error condition and rank values. We propose taking the rank of the pseudoinverse matrix as an indicator parameter to detect the underfitting condition. Comment: Hindawi Publishing. Computational Intelligence and Neuroscience, Volume 2016 (2016), Article ID 8091267, 17 pages. Received 29 January 2016, Accepted 17 May 2016. Special Issue on "Advances in Neural Networks and Hybrid-Metaheuristics: Theory, Algorithms, and Novel Engineering Applications". Academic Editor: Stefan Hauf
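    AOS-ELM builds on the OS-ELM recursive least-squares update of the output weights, and the abstract proposes the rank of the pseudoinverse of the hidden-layer matrix as an underfitting indicator. The sketch below shows a generic OS-ELM in that spirit; the class layout and sigmoid activation are assumptions, not the authors' AOS-ELM code.

```python
# Minimal generic OS-ELM sketch (not the authors' AOS-ELM), showing the
# sequential output-weight update that the adaptive variant builds on.
import numpy as np

class OSELM:
    def __init__(self, n_inputs, n_hidden, n_outputs, rng=np.random.default_rng(0)):
        self.W = rng.normal(size=(n_inputs, n_hidden))    # random input weights
        self.b = rng.normal(size=n_hidden)                # random hidden biases
        self.beta = np.zeros((n_hidden, n_outputs))       # output weights
        self.P = None                                     # inverse correlation matrix

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activations

    def init_fit(self, X, T):                             # initial batch phase
        H = self._hidden(X)
        self.P = np.linalg.pinv(H.T @ H)
        self.beta = self.P @ H.T @ T
        # A low rank of the pseudoinverse signals underfitting, as the abstract notes.
        return np.linalg.matrix_rank(np.linalg.pinv(H))

    def partial_fit(self, X, T):                          # sequential update per chunk
        H = self._hidden(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P -= self.P @ H.T @ K @ H @ self.P
        self.beta += self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy usage: initialize on a first chunk, then update on a later chunk.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
T = (X.sum(axis=1, keepdims=True) > 0).astype(float)
net = OSELM(n_inputs=4, n_hidden=20, n_outputs=1)
net.init_fit(X[:50], T[:50])
net.partial_fit(X[50:], T[50:])
```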

    Siamese hierarchical attention networks for extractive summarization

    In this paper, we present an extractive approach to document summarization based on Siamese Neural Networks. Specifically, we propose the use of Hierarchical Attention Networks to select the most relevant sentences of a text to make its summary. We train Siamese Neural Networks using document-summary pairs to determine whether the summary is appropriate for the document or not. By means of a sentence-level attention mechanism, the most relevant sentences in the document can be identified. Hence, once the network is trained, it can be used to generate extractive summaries. The experimentation carried out using the CNN/DailyMail summarization corpus shows the adequacy of the proposal. In summary, we propose a novel end-to-end neural network to address extractive summarization as a binary classification problem, which obtains promising results in line with the state of the art on the CNN/DailyMail corpus. This work has been partially supported by the Spanish MINECO and FEDER funds under project AMIC (TIN2017-85854-C4-2-R). The work of Jose-Angel Gonzalez is also financed by Universitat Politecnica de Valencia under grant PAID-01-17. González-Barba, JÁ.; Segarra Soriano, E.; García-Granada, F.; Sanchís Arnal, E.; Hurtado Oliver, LF. (2019). Siamese hierarchical attention networks for extractive summarization. Journal of Intelligent & Fuzzy Systems, 36(5), 4599-4607. https://doi.org/10.3233/JIFS-179011
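    The core idea is a shared (Siamese) encoder with sentence-level attention: the document and a candidate summary are mapped to vectors by the same encoder, the match between them supervises training, and the attention weights rank the document's sentences for extraction. The PyTorch sketch below is a minimal illustration under assumed architectural details (layer sizes, GRU sentence encoder, cosine scoring), not the authors' network.

```python
# Minimal Siamese sentence-attention scorer (assumed architecture details):
# a shared encoder attends over sentence embeddings; document-summary similarity
# is the training signal, and the attention weights rank sentences for extraction.
import torch
import torch.nn as nn

class AttentiveEncoder(nn.Module):
    def __init__(self, emb_dim, hidden):
        super().__init__()
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)

    def forward(self, sent_embs):                     # (batch, n_sents, emb_dim)
        h, _ = self.rnn(sent_embs)                    # contextual sentence states
        a = torch.softmax(self.attn(h), dim=1)        # sentence-level attention
        return (a * h).sum(dim=1), a.squeeze(-1)      # document vector, weights

class SiameseSummaryScorer(nn.Module):
    def __init__(self, emb_dim=300, hidden=128):
        super().__init__()
        self.encoder = AttentiveEncoder(emb_dim, hidden)   # shared (Siamese) weights

    def forward(self, doc_sents, sum_sents):
        d, attn = self.encoder(doc_sents)
        s, _ = self.encoder(sum_sents)
        score = torch.sigmoid(torch.cosine_similarity(d, s, dim=-1))
        return score, attn        # attn ranks document sentences for extraction
```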

    Optimization of the initial weights of artificial neural networks via genetic algorithm applied to hip bone fracture prediction

    This paper aims to find the optimal set of initial weights to enhance the accuracy of artificial neural networks (ANNs) by using genetic algorithms (GA). The sample in this study included 228 patients with a first low-trauma hip fracture and 215 patients without hip fracture, all of whom were interviewed with 78 questions. We used logistic regression to select 5 important factors (i.e., bone mineral density, experience of fracture, average hand grip strength, intake of coffee, and peak expiratory flow rate) for building artificial neural networks to predict the probabilities of hip fractures. Three-layer (one hidden layer) ANN models with back-propagation training algorithms were adopted. The purpose of this paper is to find the optimal initial weights of the neural networks via genetic algorithm to improve the predictability. The area under the ROC curve (AUC) was used to assess the performance of the neural networks. The study results showed the genetic algorithm obtained an AUC of 0.858±0.00493 on modeling data and 0.802±0.03318 on testing data. They were slightly better than the results of our previous study (0.868±0.00387 and 0.796±0.02559, respectively). Thus, this preliminary study using only a simple GA has proved effective for improving the accuracy of artificial neural networks. This research was supported by the National Science Council (NSC) of Taiwan (Grant no. NSC98-2915-I-155-005), the Department of Education grant of the Excellent Teaching Program of Yuan Ze University (Grant no. 217517), and the Center for Dynamical Biomarkers and Translational Medicine supported by the National Science Council (Grant no. NSC 100-2911-I-008-001)
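    The method described is a GA search over candidate initial weight vectors, with each candidate scored by training the network briefly and measuring its AUC. The sketch below illustrates that loop on synthetic data with an assumed tiny one-hidden-layer network; the population size, mutation scale, and all data are placeholders, not the study's settings.

```python
# Minimal sketch (synthetic data, assumed setup): a genetic algorithm searches
# over initial weight vectors for a small one-hidden-layer network, and fitness
# is the ROC AUC after a short training run from those initial weights.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                        # 5 predictors, as in the study
y = (X @ rng.normal(size=5) + 0.3 * rng.normal(size=200) > 0).astype(float)

n_in, n_hid = 5, 4
n_w = n_in * n_hid + n_hid                           # flattened weight vector length

def unpack(w):
    return w[:n_in * n_hid].reshape(n_in, n_hid), w[n_in * n_hid:]

def fitness(w, epochs=50, lr=0.1):
    """Train briefly from the given initial weights; return AUC on the data."""
    W1, w2 = unpack(w.copy())
    for _ in range(epochs):                          # crude gradient descent
        h = np.tanh(X @ W1)
        p = 1 / (1 + np.exp(-(h @ w2)))
        g = p - y                                    # logistic-loss gradient
        grad_w2 = h.T @ g / len(y)
        grad_W1 = X.T @ ((g[:, None] * w2) * (1 - h**2)) / len(y)
        w2 -= lr * grad_w2
        W1 -= lr * grad_W1
    p = 1 / (1 + np.exp(-(np.tanh(X @ W1) @ w2)))
    return roc_auc_score(y, p)

pop = rng.normal(size=(30, n_w))                     # initial population of weight vectors
for gen in range(20):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the fittest individuals
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(n_w) < 0.5                 # uniform crossover
        children.append(np.where(mask, a, b) + 0.05 * rng.normal(size=n_w))  # mutation
    pop = np.vstack([parents, children])
print("best AUC:", max(fitness(ind) for ind in pop))
```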

    Neuroplasticity of language networks in aphasia: advances, updates, and future challenges

    Since the early 19th century, researchers have sought to understand how language is processed in the brain, how brain damage affects language abilities, and what can be expected during the recovery period. In this review, we first discuss mechanisms of damage and plasticity in the post-stroke brain, both in the acute and the chronic phase of recovery. We then review factors that are associated with recovery. First, we review organism-intrinsic variables that influence language recovery, such as age, lesion volume and location, and structural integrity. Next, we review organism-extrinsic factors that influence language recovery, such as treatment. Here, we discuss recent advances in our understanding of language recovery and highlight recent work that emphasizes a network perspective of language recovery. Finally, we propose our interpretation of the principles of neuroplasticity, originally proposed by Kleim and Jones (1), in the context of the extant literature on aphasia recovery and rehabilitation. Ultimately, we encourage researchers to propose sophisticated intervention studies that bring us closer to the goal of providing precision treatment for patients with aphasia and a better understanding of the neural mechanisms that underlie successful neuroplasticity. P50 DC012283 - NIDCD NIH HHS. Published version