
    Fully Automatic Karyotyping via Deep Convolutional Neural Networks

    Chromosome karyotyping is an important yet labor-intensive procedure for diagnosing genetic diseases. Automating the procedure drastically reduces the manual work of cytologists and increases the precision of congenital disease diagnosis. Researchers have contributed to chromosome segmentation and classification for decades; however, very few studies integrate the two tasks into a unified, fully automatic procedure or achieve promising performance on it. This paper addresses the gap by presenting: 1) a novel chromosome segmentation module named ChrRender, which renders chromosome instances by combining rich global features from the backbone with coarse mask predictions from Mask R-CNN; 2) a chromosome classification module named ChrNet4, which attends to channel-wise dependencies in aggregated informative features and recalibrates channel interdependence; 3) an integrated Render-Attention-Architecture that accomplishes fully automatic karyotyping with the segmentation and classification modules; 4) a strategy for eliminating differences between the training data and the segmentation outputs to be classified. The proposed methods are implemented in three ways on the public Q-band BioImLab dataset and a private G-band dataset. The results indicate promising performance: 1) on the joint karyotyping task, which predicts a karyotype image by first segmenting an original microscopic image and then classifying each segmented instance, precision reaches 89.75% and 94.22% on the BioImLab and private datasets, respectively; 2) on the separate tasks, ChrRender obtains an AP50 of 96.652% and 96.809% for segmentation, and ChrNet4 achieves 95.24% and 94.07% for classification, respectively. The COCO-format annotation files for BioImLab used in this paper are available at https://github.com/Alex17swim/BioImLab. The study introduces an integrated workflow to predict a karyotype image from a microscopic chromosome image; with state-of-the-art performance on a public dataset, the proposed Render-Attention-Architecture accomplishes fully automatic chromosome karyotyping.
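    The channel recalibration described for ChrNet4 resembles squeeze-and-excitation style attention. The following is a minimal PyTorch sketch of that general mechanism, not the authors' implementation; the module name, reduction ratio and layer sizes are illustrative assumptions.

        import torch
        import torch.nn as nn

        class ChannelAttention(nn.Module):
            """Squeeze-and-excitation style recalibration of channel interdependence (illustrative)."""
            def __init__(self, channels: int, reduction: int = 16):
                super().__init__()
                self.pool = nn.AdaptiveAvgPool2d(1)   # aggregate spatial information per channel
                self.fc = nn.Sequential(
                    nn.Linear(channels, channels // reduction),
                    nn.ReLU(inplace=True),
                    nn.Linear(channels // reduction, channels),
                    nn.Sigmoid(),                      # per-channel weights in (0, 1)
                )

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                b, c, _, _ = x.shape
                w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
                return x * w                           # reweight feature maps channel-wise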

    IQ Classification via Brainwave Features: Review on Artificial Intelligence Techniques

    The study of intelligence is a keystone for distinguishing individual differences in cognitive psychology. Conventional psychometric tests are limited in terms of assessment time and suffer from bias issues. Apart from that, there is still a lack of knowledge on classifying IQ from EEG signals using intelligent signal processing (ISP) techniques. The purpose of ISP is to extract as much information as possible from signal and noise data using learning and/or other smart techniques. Therefore, as a first attempt at classifying IQ features through a scientific approach, it is important to identify a relevant technique with a prominent paradigm suitable for this area of application. This article therefore reviews several ISP approaches to provide a consolidated source of information, focusing in particular on prominent paradigms suitable for pattern classification in the biomedical area. The review leads to the selection of artificial neural networks (ANNs), since they have been widely implemented for pattern classification in biomedical engineering.
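    As a rough illustration of the ANN paradigm the review converges on, the sketch below trains a small multilayer perceptron on pre-extracted EEG features with scikit-learn; the feature matrix, labels and network size are placeholder assumptions rather than details from the reviewed studies.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Placeholder data: 200 subjects x 16 EEG band-power features, 3 IQ classes.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 16))
        y = rng.integers(0, 3, size=200)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        # Standardise the features, then fit a small feed-forward ANN classifier.
        clf = make_pipeline(StandardScaler(),
                            MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0))
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))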

    Recent Advances in Embedded Computing, Intelligence and Applications

    The latest proliferation of Internet of Things deployments and edge computing, combined with artificial intelligence, has led to exciting new application scenarios in which embedded digital devices are essential enablers. Moreover, new powerful and efficient devices are appearing to cope with workloads formerly reserved for the cloud, such as deep learning. These devices allow processing close to where data are generated, avoiding bottlenecks due to communication limitations. The efficient integration of hardware, software and artificial intelligence capabilities deployed in real sensing contexts empowers the edge intelligence paradigm, which will ultimately contribute to fostering the offloading of processing functionalities to the edge. In this Special Issue, researchers have contributed nine peer-reviewed papers covering a wide range of topics in the area of edge intelligence. Among them are hardware-accelerated implementations of deep neural networks, IoT platforms for extreme edge computing, neuro-evolvable and neuromorphic machine learning, and embedded recommender systems.

    U-Net and its variants for medical image segmentation: theory and applications

    U-net is an image segmentation technique developed primarily for medical image analysis that can precisely segment images using a scarce amount of training data. These traits give U-net very high utility within the medical imaging community and have resulted in its extensive adoption as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use across all major imaging modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, there have been instances of its use in other applications. As the potential of U-net is still increasing, in this review we look at the various developments that have been made in the U-net architecture and provide observations on recent trends. We examine the various innovations that have been made in deep learning and discuss how these tools facilitate U-net. Furthermore, we look at the imaging modalities and application areas to which U-net has been applied. Comment: 42 pages, in IEEE Access.
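    To make the encoder-decoder idea concrete, the sketch below is a deliberately small U-Net-style network in PyTorch with two down-sampling stages and skip connections; the channel counts and depth are illustrative choices, not a reproduction of the original architecture.

        import torch
        import torch.nn as nn

        def conv_block(in_ch, out_ch):
            # Two 3x3 convolutions, as in the classic contracting/expanding U-Net blocks.
            return nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            )

        class TinyUNet(nn.Module):
            def __init__(self, in_ch=1, num_classes=2):
                super().__init__()
                self.enc1 = conv_block(in_ch, 32)
                self.enc2 = conv_block(32, 64)
                self.bottleneck = conv_block(64, 128)
                self.pool = nn.MaxPool2d(2)
                self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
                self.dec2 = conv_block(128, 64)    # 64 skip channels + 64 upsampled channels
                self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
                self.dec1 = conv_block(64, 32)     # 32 skip channels + 32 upsampled channels
                self.head = nn.Conv2d(32, num_classes, 1)

            def forward(self, x):
                e1 = self.enc1(x)                  # full resolution
                e2 = self.enc2(self.pool(e1))      # 1/2 resolution
                b = self.bottleneck(self.pool(e2)) # 1/4 resolution
                d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
                d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
                return self.head(d1)               # per-pixel class logits

        # Example: TinyUNet()(torch.randn(1, 1, 128, 128)) has shape (1, 2, 128, 128).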

    Digital image processing for prognostic and diagnostic clinical pathology

    When digital imaging and image processing methods are applied to clinical diagnostic and prognostic needs, they can increase human understanding and provide objective measurements. Most current clinical applications are limited to providing subjective information to healthcare professionals rather than objective measures. This thesis details methods and systems that have been developed for both objective and subjective microscopy applications. A system framework is presented that provides a base for the development of microscopy imaging systems. This practical framework is based on currently available hardware and developed with standard software development tools. Image processing methods are applied to counter the optical limitations of the bright-field microscope, automating the system and allowing for unsupervised image capture and analysis. Current literature provides evidence that 3D visualisation has given increased insight and found application in many clinical areas. There have been recent advancements in the use of 3D visualisation for the study of soft tissue structures, but its clinical application within histology remains limited. Methods and applications have been researched and further developed that allow for the 3D reconstruction and visualisation of soft tissue structures from microtomed serial histological sections. A system suitable for this need has been developed and is presented, with consideration given to image capture, data registration and 3D visualisation requirements. The developed system has been used to explore and increase 3D insight on clinical samples. Automated objective image quantification of microscope slides offers the prospect of replacing existing objective and subjective methods, increasing accuracy and reducing manual burden. One such existing objective test is DNA Image Ploidy, which seeks to characterise cancer by measuring the DNA content of individual cell nuclei, an accepted but manually burdensome method. The main novelty of the work completed lies in the development of an automated system for DNA Image Ploidy measurement, combining methods for automatic specimen focus, segmentation and parametric extraction with the implementation of an automated cell-type classification system. A consideration for any clinical image processing system is the correct sampling of the tissue under study. While the image capture requirements for objective and subjective systems are similar, there is also an important link to the 3D structure of the tissue: 3D understanding can aid decisions regarding the sampling criteria of objective tests, because although many tests are completed in 2D, the clinical samples are 3D objects. Cancers such as prostate and breast cancer are known to be multi-focal, with seemingly physically independent areas of disease within a single site. It is not possible to understand the true 3D nature of the samples using 2D microtomed sections in isolation from each other. The 3D systems described in this report provide a platform for exploring the true multi-focal nature of diseased soft tissue structures, allowing the sampling criteria of objective tests such as DNA Image Ploidy to be correctly set. For the automated DNA Image Ploidy system and the 3D reconstruction and visualisation system, clinical review has been completed to test the increased insight provided.
    Datasets reconstructed from microtomed serial sections and visualised with the developed 3D system are presented. For the automated DNA Image Ploidy system, the developed system is compared with the existing manual method to assess the quality of data capture, operational speed and correctness of nuclei classification. Conclusions are presented for the work that has been completed, and discussion is given of future areas of research that could be undertaken, extending the areas of study and increasing both clinical insight and practical application.
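    At its core, DNA Image Ploidy measurement integrates stain-related optical density over each segmented nucleus as a proxy for relative DNA content. The sketch below shows one way such a measurement could be made with scikit-image; the thresholding step, size filter and 8-bit calibration are illustrative assumptions, not the thesis implementation.

        import numpy as np
        from skimage import filters, measure, morphology

        def integrated_optical_density(gray_image):
            """Segment nuclei in an 8-bit bright-field image and return one IOD value per nucleus."""
            # Optical density: darkly stained pixels map to high values.
            od = -np.log10(np.clip(gray_image.astype(float) / 255.0, 1e-3, 1.0))

            # Illustrative segmentation: Otsu threshold on OD, small-object removal, labelling.
            mask = od > filters.threshold_otsu(od)
            mask = morphology.remove_small_objects(mask, min_size=50)
            labels = measure.label(mask)

            # Integrated OD per labelled nucleus approximates its relative DNA content.
            return [region.intensity_image[region.image].sum()
                    for region in measure.regionprops(labels, intensity_image=od)]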

    Reinforcement Learning

    Brains rule the world, and brain-like computation is increasingly used in computers and electronic devices. Brain-like computation is about processing and interpreting data, or directly proposing and performing actions. Learning is a very important aspect of it. This book is about reinforcement learning, which involves performing actions to achieve a goal. The first 11 chapters of this book describe and extend the scope of reinforcement learning. The remaining 11 chapters show that there is already wide usage in numerous fields. Reinforcement learning can tackle control tasks that are too complex for traditional, hand-designed, non-learning controllers. As learning computers can deal with technical complexities, the task of human operators remains to specify goals at increasingly higher levels. This book shows that reinforcement learning is a very dynamic area in terms of theory and applications, and it should stimulate and encourage new research in this field.
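    The core loop of performing actions to achieve a goal can be captured in a few lines of tabular Q-learning; the environment interface (a classic Gym-style reset/step) and the hyperparameters below are generic assumptions, not material from any chapter of the book.

        import numpy as np

        def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
            """Tabular Q-learning for a small discrete environment (classic Gym-style API assumed)."""
            q = np.zeros((env.observation_space.n, env.action_space.n))
            for _ in range(episodes):
                state, done = env.reset(), False
                while not done:
                    # Epsilon-greedy action selection balances exploration and exploitation.
                    if np.random.rand() < epsilon:
                        action = env.action_space.sample()
                    else:
                        action = int(np.argmax(q[state]))
                    next_state, reward, done, _ = env.step(action)
                    # Temporal-difference update toward the one-step bootstrapped target.
                    q[state, action] += alpha * (reward + gamma * np.max(q[next_state]) - q[state, action])
                    state = next_state
            return q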

    A Landsat-based analysis of tropical forest dynamics in the Central Ecuadorian Amazon : Patterns and causes of deforestation and reforestation

    Tropical deforestation constitutes a major threat to the Amazon rainforest. Monitoring forest dynamics is therefore necessary for sustainable management of forest resources in this region. However, cloudiness results in scarce good-quality satellite observations and is therefore a major challenge for monitoring deforestation and for detecting subtle processes such as reforestation. Furthermore, varying human pressure highlights the importance of understanding the underlying forces behind these processes at multiple scales, but also from an inter- and transdisciplinary perspective. Against this background, this study analyzes and recommends different methodologies for accomplishing these goals, exemplifying their use with Landsat time series and socioeconomic data. The case studies were located in the Central Ecuadorian Amazon (CEA), an area characterized by different deforestation and reforestation processes and by its socioeconomic and landscape settings. Three objectives guided this research. First, processing and time-series analysis algorithms for monitoring forest dynamics in areas with limited Landsat data were evaluated, using an innovative approach based on genetic algorithms. Second, a methodology based on image compositing, multi-sensor data fusion and post-classification change detection is proposed to address the limitations observed when monitoring forest dynamics with time-series analysis algorithms. Third, the underlying driving forces of deforestation and reforestation in the CEA are evaluated using a novel modelling technique, geographically weighted ridge regression, which improves the processing and analysis of socioeconomic data. The forest dynamics monitoring methodology demonstrates that, despite abundant data gaps in the Landsat archive for the CEA, historical patterns of deforestation and reforestation can still be reported biennially with overall accuracies above 70%. Furthermore, the improved methodology for analyzing the underlying driving forces of forest dynamics identified local drivers and specific socioeconomic settings that better explain the high deforestation and reforestation rates in the CEA. The results indicate that the proposed methodologies are an alternative for monitoring and analyzing forest dynamics, particularly in areas where data scarcity and landscape complexity require more specialized approaches.
    Landsat-based analysis of tropical forest dynamics in the Central Ecuadorian Amazon: patterns and causes of deforestation and reforestation. Tropical deforestation poses a major threat to the Amazon rainforest. Monitoring forest dynamics is therefore a necessary measure to ensure sustainable management of forest resources in this region. However, cloud cover degrades the quality of satellite observations and is the main challenge both for monitoring deforestation and for detecting accompanying processes such as reforestation. Moreover, varying human pressure shows how important it is to understand the underlying forces ("drivers") behind these processes at multiple scales and from an inter- and transdisciplinary perspective. Building on this, the present study analyzes and recommends different methods that contribute to achieving these goals using Landsat time series and socioeconomic data. The study areas are located in the Central Ecuadorian Amazon (CEA), an area shaped on the one hand by distinct deforestation and reforestation processes and on the other by its socioeconomic and landscape conditions. The research project has three objectives. First, genetic-algorithm-based procedures for processing and time-series analysis are evaluated for monitoring forest dynamics in areas for which only limited Landsat data were available. Second, a method based on image compositing, multi-sensor data fusion and change detection is developed to address the limitations observed when monitoring forest dynamics with time-series algorithms. Third, the causes of deforestation in the CEA are assessed using geographically weighted ridge regression, which contributes to an improved analysis of the socioeconomic information. The forest dynamics monitoring methodology shows that, despite extensive data gaps in the Landsat archive for the CEA, historical deforestation and reforestation patterns can be reported every two years with accuracies above 70%. An improved analysis method furthermore helps to identify the driving forces responsible for forest dynamics and to single out local drivers and specific socioeconomic conditions that better explain the high deforestation and reforestation rates in the CEA. The results make clear that the proposed methods are an alternative for monitoring and analyzing forest dynamics, particularly in areas where data scarcity and landscape complexity call for specialized approaches.
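    The general idea behind geographically weighted ridge regression can be sketched as fitting one ridge model per location, with observations down-weighted by spatial distance. The snippet below is a minimal illustration of that idea with scikit-learn; the Gaussian kernel, bandwidth and regularization strength are illustrative assumptions, not the thesis implementation.

        import numpy as np
        from sklearn.linear_model import Ridge

        def gw_ridge(coords, X, y, bandwidth=10.0, alpha=1.0):
            """Fit one ridge regression per location, weighting observations by spatial proximity.

            coords: (n, 2) positions; X: (n, p) socioeconomic predictors; y: (n,) response
            (e.g. local deforestation rate). Returns an (n, p) array of local coefficients.
            """
            coefs = np.zeros((len(y), X.shape[1]))
            for i, c in enumerate(coords):
                d = np.linalg.norm(coords - c, axis=1)
                w = np.exp(-(d / bandwidth) ** 2)   # Gaussian distance-decay weights
                coefs[i] = Ridge(alpha=alpha).fit(X, y, sample_weight=w).coef_
            return coefs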