
    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply to medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables
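As a rough illustration of the "stack of non-linear layers" idea described in the abstract above (not code from the reviewed paper), a minimal feed-forward network in PyTorch might look like the sketch below; the layer sizes and the binary diagnosis output are assumptions chosen only for the example:

```python
import torch
import torch.nn as nn

# A minimal representation-learning stack: each Linear + ReLU pair
# applies a non-linear transformation, building progressively more
# abstract features of the input (e.g. structured clinical variables).
model = nn.Sequential(
    nn.Linear(64, 128),   # raw features -> first hidden representation
    nn.ReLU(),
    nn.Linear(128, 32),   # second, more compact representation
    nn.ReLU(),
    nn.Linear(32, 1),     # final layer: a single diagnostic score
    nn.Sigmoid(),         # squash to a probability-like output
)

x = torch.randn(8, 64)    # a batch of 8 synthetic patient records
y_hat = model(x)          # shape (8, 1): predicted probabilities
print(y_hat.shape)
```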

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing
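One of the challenges listed in the abstract above, transfer learning, can be illustrated with a short sketch: a network pretrained on natural images is frozen and only a new classification head is trained on remote-sensing classes. The torchvision ResNet-18 backbone and the class count of 10 are assumptions made for the example, not details from the survey:

```python
import torch.nn as nn
from torchvision import models

# Illustrative transfer learning for an assumed 10-class
# remote-sensing scene classification task.
NUM_RS_CLASSES = 10  # hypothetical number of scene classes

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new, trainable head.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_RS_CLASSES)

# Only the new head's parameters are optimized during fine-tuning.
trainable = [p for p in backbone.parameters() if p.requires_grad]
```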

    Deep learning for image-based liver analysis — A comprehensive review focusing on malignant lesions

    Deep learning-based methods, in particular convolutional neural networks and fully convolutional networks, are now widely used in the medical image analysis domain. The scope of this review focuses on deep learning-based analysis of focal liver lesions, with a special interest in hepatocellular carcinoma and metastatic cancer, and of structures like the parenchyma or the vascular system. Here, we address several neural network architectures used for analyzing the anatomical structures and lesions in the liver from various imaging modalities such as computed tomography, magnetic resonance imaging and ultrasound. Image analysis tasks like segmentation, object detection and classification for the liver, liver vessels and liver lesions are discussed. Based on the qualitative search, 91 papers were selected for the survey, including journal publications and conference proceedings. The papers reviewed in this work are grouped into eight categories based on the methodologies used. By comparing the evaluation metrics, hybrid models performed better for both the liver and the lesion segmentation tasks, ensemble classifiers performed better for the vessel segmentation tasks, and combined approaches performed better for both the lesion classification and detection tasks. Performance was measured using the Dice score for segmentation, and accuracy for the classification and detection tasks, which are the most commonly used metrics.
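The Dice score mentioned in the abstract above measures the overlap between a predicted segmentation mask and the ground-truth mask, Dice = 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch, illustrative and not tied to any specific paper in the review, could be:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Toy example: two 4x4 masks that partially overlap.
pred = np.zeros((4, 4), dtype=int)
target = np.zeros((4, 4), dtype=int)
pred[1:3, 1:3] = 1     # predicted lesion region (4 pixels)
target[2:4, 1:3] = 1   # ground-truth lesion region (4 pixels)
print(dice_score(pred, target))  # 0.5: 2*2 / (4 + 4)
```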

    Modeling and debiasing feedback loops in collaborative filtering recommender systems.

    Artificial Intelligence (AI)-driven recommender systems have been gaining increasing ubiquity and influence in our daily lives, especially during time spent online on the World Wide Web or smart devices. The influence of recommender systems on who and what we can find and discover, our choices, and our behavior has thus never been more concrete. AI can now predict and anticipate, with varying degrees of accuracy, the news article we will read, the music we will listen to, the movies we will watch, the transactions we will make, the restaurants we will eat in, the online courses we will be interested in, and the people we will connect with for various ends and purposes. For all these reasons, the automated predictions and recommendations made by AI can influence and change human opinions, behavior, and decision making. When the AI predictions are biased, these influences can have unfair consequences for society, ranging from social polarization to the amplification of misinformation and hate speech. For instance, bias in recommender systems can affect decision making and shift consumer behavior in an unfair way due to a phenomenon known as the feedback loop. The feedback loop is an inherent component of recommender systems because the latter are dynamic systems that involve continuous interactions with the users, whereby the data collected to train a recommender system model is usually affected by the outputs of a previously trained model. This feedback loop is expected to affect the performance of the system. For instance, it can amplify initial bias in the data or model and can lead to other phenomena such as filter bubbles, polarization, and popularity bias. Up to now, it has been difficult to understand the dynamics of recommender system feedback loops, and equally challenging to evaluate the bias and filter bubbles emerging from recommender system models within such an iterative closed-loop environment. In this dissertation, we study the feedback loop in the context of Collaborative Filtering (CF) recommender systems. CF systems comprise the leading family of recommender systems and rely mainly on mining the patterns of interaction between the users and items to train models that aim to predict future user interactions. Our research contributions target three aspects of recommendation, namely modeling, debiasing and evaluating feedback loops. Our research advances the state of the art in Fairness in Artificial Intelligence on several fronts: (1) We propose and validate a new theoretical model, based on Martingale differences, to model the recommender system feedback loop and allow a better understanding of the dynamics of filter bubbles and user discovery. (2) We propose a Transformer-based deep learning architecture and algorithm to learn diverse representations for users and items in order to increase the diversity of the recommendations. Our evaluation experiments on real-world datasets demonstrate that our transformer model recommends 14% more diverse items and improves the novelty of the recommendations by more than 20%. (3) We propose a new simulation and experimentation framework that allows studying and tracking the evolution of bias metrics in a feedback loop setting, for a variety of recommendation modeling algorithms.
Our preliminary findings using the new simulation framework show that recommender systems are deeply affected by the feedback loop and that, without an adequate debiasing or exploration strategy, this feedback loop limits user discovery and increases the disparity in exposure between the items that can be recommended. To help the research and practice community in studying recommender system fairness, all the tools developed to model, debias, and evaluate recommender systems are made available to the public as open source software libraries (https://github.com/samikhenissi/TheoretUserModeling). (4) We propose a novel learnable dynamic debiasing strategy that learns an optimal rescaling parameter for the predicted rating and achieves a better trade-off between accuracy and debiasing. We focus on mitigating the popularity bias of the items, test our method using our proposed simulation framework, and show the effectiveness of using a learnable debiasing degree to produce better results.
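As a rough, hypothetical sketch of what a learnable rescaling of predicted ratings could look like (this is not the dissertation's actual algorithm; the matrix-factorization backbone and the popularity-based shrinkage below are assumptions made only for illustration), one might attach a single learnable parameter that controls how strongly the scores of popular items are shrunk:

```python
import torch
import torch.nn as nn

class RescaledMF(nn.Module):
    """Matrix factorization whose predictions are rescaled by a learnable
    degree of popularity debiasing (illustrative sketch only)."""

    def __init__(self, n_users: int, n_items: int, dim: int = 32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        # Learnable debiasing degree; sigmoid keeps it in (0, 1).
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, users, items, item_popularity):
        # Base predicted rating from the dot product of embeddings.
        base = (self.user_emb(users) * self.item_emb(items)).sum(dim=-1)
        # Rescale: items with higher normalized popularity (in [0, 1])
        # are shrunk more as the learned degree grows.
        degree = torch.sigmoid(self.alpha)
        return base * (1.0 - degree * item_popularity)

# Toy usage with random data (hypothetical sizes).
model = RescaledMF(n_users=100, n_items=50)
users = torch.randint(0, 100, (16,))
items = torch.randint(0, 50, (16,))
pop = torch.rand(16)            # normalized item popularity in [0, 1]
scores = model(users, items, pop)
```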