555 research outputs found

    SIGNIFICANCE OF DIMENSIONALITY REDUCTION IN IMAGE PROCESSING

    The aim of this paper is to present a comparative study of two linear dimension reduction methods, namely PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to transform the high-dimensional input space into a feature space in which the maximal variance is displayed. Feature selection in traditional LDA is obtained by maximizing the difference between classes and minimizing the distance within classes. PCA finds the axes with maximum variance for the whole data set, whereas LDA tries to find the axes that give the best class separability. A neural network is trained on the reduced feature set (obtained using PCA or LDA) of the images in the database, using the back-propagation algorithm, for fast searching of images from the database. The proposed method is evaluated on a general image database using Matlab. The performance of these systems is assessed by precision and recall measures. Experimental results show that PCA gives better performance, in terms of higher precision and recall values with lower computational complexity, than LDA.
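
    A minimal Python sketch of the PCA-versus-LDA comparison described above. The paper itself uses Matlab and a back-propagation network trained on image features; here scikit-learn's PCA, LDA, and MLPClassifier on synthetic data stand in for that pipeline, and every dataset size and parameter is illustrative rather than taken from the paper:

        # Illustrative PCA-vs-LDA comparison (not the authors' Matlab implementation).
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import precision_score, recall_score

        # Synthetic stand-in for the image feature vectors in the database.
        X, y = make_classification(n_samples=1000, n_features=64, n_informative=20,
                                   n_classes=4, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        reducers = {
            "PCA": PCA(n_components=3),                         # unsupervised: maximal-variance axes
            "LDA": LinearDiscriminantAnalysis(n_components=3),  # supervised: best class separability
        }

        for name, reducer in reducers.items():
            # LDA needs the class labels to fit; PCA simply ignores them.
            Z_train = reducer.fit_transform(X_train, y_train)
            Z_test = reducer.transform(X_test)
            # Back-propagation classifier on the reduced features.
            clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                random_state=0).fit(Z_train, y_train)
            pred = clf.predict(Z_test)
            print(name,
                  "precision:", round(precision_score(y_test, pred, average="macro"), 3),
                  "recall:", round(recall_score(y_test, pred, average="macro"), 3))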

    Artificial intelligence methodologies and their application to diabetes

    In the past decade, diabetes management has been transformed by the addition of continuous glucose monitoring and insulin pump data. More recently, a wide variety of functions and physiologic variables, such as heart rate, hours of sleep, number of steps walked and movement, have become available through wristbands or watches. New data, such as hydration, geolocation, and barometric pressure, among others, will be incorporated in the future. All these parameters, when analyzed, can be helpful for decision support for patients and doctors. Similar new scenarios have appeared in most medical fields, and in recent years there has been increased interest in the development and application of artificial intelligence (AI) methods for decision support and knowledge acquisition. Multidisciplinary research teams that bring together computer engineers and doctors are more and more frequent, mirroring the need for cooperation in this new field. AI, as a science, can be defined as the ability to make computers do things that would require intelligence if done by humans. Increasingly, diabetes-related journals have been incorporating publications focused on AI tools applied to diabetes. In summary, diabetes management scenarios have undergone a deep transformation that forces diabetologists to incorporate skills from new areas. This newly needed knowledge includes AI tools, which have become part of diabetes health care. The aim of this article is to explain, in an easy and plain way, the most commonly used AI methodologies, in order to promote the involvement of health care providers (doctors and nurses) in this field.

    A Survey of Multimodal Information Fusion for Smart Healthcare: Mapping the Journey from Data to Wisdom

    Multimodal medical data fusion has emerged as a transformative approach in smart healthcare, enabling a comprehensive understanding of patient health and personalized treatment plans. In this paper, the journey from data to information to knowledge to wisdom (DIKW) is explored through multimodal fusion for smart healthcare. We present a comprehensive review of multimodal medical data fusion focused on the integration of various data modalities. The review explores different approaches, such as feature selection, rule-based systems, machine learning, deep learning, and natural language processing, for fusing and analyzing multimodal data. This paper also highlights the challenges associated with multimodal fusion in healthcare. By synthesizing the reviewed frameworks and theories, it proposes a generic framework for multimodal medical data fusion that aligns with the DIKW model. Moreover, it discusses future directions related to the four pillars of healthcare: Predictive, Preventive, Personalized, and Participatory approaches. The components of the comprehensive survey presented in this paper form the foundation for more successful implementation of multimodal fusion in smart healthcare. Our findings can guide researchers and practitioners in leveraging the power of multimodal fusion with state-of-the-art approaches to revolutionize healthcare and improve patient outcomes.
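
    A minimal sketch of one of the fusion patterns the survey reviews, feature-level (early) fusion: per-modality feature vectors are concatenated and fed to a single classifier. The modalities, dimensions, and random data below are illustrative placeholders, not drawn from the paper:

        # Illustrative feature-level (early) fusion of three modalities.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_patients = 500
        imaging = rng.normal(size=(n_patients, 128))   # e.g. image-derived features
        signals = rng.normal(size=(n_patients, 32))    # e.g. wearable/vital-sign features
        notes = rng.normal(size=(n_patients, 64))      # e.g. clinical-note embeddings
        labels = rng.integers(0, 2, size=n_patients)   # e.g. outcome to predict

        # Early fusion: one concatenated feature vector per patient.
        fused = np.concatenate([imaging, signals, notes], axis=1)

        clf = LogisticRegression(max_iter=1000)
        scores = cross_val_score(clf, fused, labels, cv=5)
        print("fused-modality CV accuracy:", round(scores.mean(), 3))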

    IoT-Based Alzheimer’s Disease Diagnosis Model for Providing Security Using Lightweight Hybrid Cryptography

    Security in the Internet of Things (IoT) is a broad yet active research area that focuses on securing the sensitive data circulated in the network. The data involved in an IoT network comes from various organizations, hospitals, etc., and requires a high level of protection from attacks and breaches. The common solution for security attacks is to use traditional cryptographic algorithms that protect the content through encryption and decryption operations. The existing solutions suffer from major drawbacks, including computational complexity, time and space complexity, and slow encryption. Therefore, to overcome such drawbacks, this paper introduces an efficient lightweight cryptographic mechanism to secure the images of Alzheimer’s disease (AD) transmitted in the network. The mechanism involves major stages such as edge detection, key generation, encryption, and decryption. For edge detection, edge maps are detected using the Prewitt edge detection technique. Then a hybrid elliptic curve cryptography (HECC) algorithm is proposed to encrypt and secure the images transmitted in the network. For encryption, the HECC algorithm combines Blowfish with the elliptic curve algorithm to attain a higher level of security. Another significant advantage of the proposed method is the selection of the ideal private key, which is achieved using the enhanced seagull optimization (ESO) algorithm. The proposed work has been implemented in Python, its performance is evaluated on an Alzheimer’s dataset, and the outcomes prove its efficacy over the compared methods.
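
    A minimal Python sketch of the first stage of the pipeline described above, the Prewitt edge map; the key-generation, HECC, and ESO stages are not reproduced here. The threshold value and the synthetic test image are illustrative assumptions, not values from the paper:

        # Prewitt edge-detection stage only (illustrative, not the authors' code).
        import numpy as np
        from scipy.ndimage import prewitt

        def prewitt_edge_map(image: np.ndarray, threshold: float = 0.2) -> np.ndarray:
            """Binary edge map from a grayscale image with values in [0, 1]."""
            gx = prewitt(image, axis=0)            # gradient along rows
            gy = prewitt(image, axis=1)            # gradient along columns
            magnitude = np.hypot(gx, gy)           # gradient magnitude
            magnitude /= magnitude.max() + 1e-12   # normalise to [0, 1]
            return (magnitude > threshold).astype(np.uint8)

        # Example on a synthetic image containing a bright square.
        img = np.zeros((64, 64))
        img[16:48, 16:48] = 1.0
        edges = prewitt_edge_map(img)
        print("edge pixels:", int(edges.sum()))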

    Comparative Study for Image Fusion using Various Deep Learning Algorithms


    Automatic Threshold Selections by exploration and exploitation of optimization algorithm in Record Deduplication

    A deduplication process uses a similarity function to decide whether two entries are duplicates by setting a threshold. Setting this threshold is an important issue for achieving higher accuracy, and it relies heavily on human intervention. Swarm intelligence algorithms such as PSO and ABC have been used for automatic detection of the threshold to find duplicate records. Although these algorithms perform well, there is still an insufficiency in the solution search equation, which is used to generate new candidate solutions based on the information of previous solutions. The proposed work addresses two problems: first, it finds the optimal equation using a Genetic Algorithm (GA); next, it adopts a modified Artificial Bee Colony (ABC) algorithm to obtain the optimal threshold so that duplicate records are detected more accurately, which also reduces human intervention. The CORA dataset is used to analyze the proposed algorithm.
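
    A minimal sketch of the underlying task the GA/ABC search optimises: choosing the similarity threshold that best separates duplicate from non-duplicate record pairs. The GA and modified ABC themselves are not reproduced here; instead a brute-force scan over candidate thresholds maximises F1 on a tiny labelled set, and the Jaccard token similarity and example records are illustrative stand-ins for CORA-style citations:

        # Brute-force threshold selection on labelled pairs (illustrative only).
        from sklearn.metrics import f1_score

        def jaccard(a: str, b: str) -> float:
            """Token-set Jaccard similarity between two record strings."""
            ta, tb = set(a.lower().split()), set(b.lower().split())
            return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

        # (record_1, record_2, is_duplicate) pairs.
        pairs = [
            ("j smith neural nets 1998", "smith j neural nets 1998", 1),
            ("j smith neural nets 1998", "a jones decision trees 2001", 0),
            ("deep learning survey lecun", "deep learning a survey lecun", 1),
            ("record linkage with abc", "record deduplication with pso", 0),
        ]

        sims = [jaccard(a, b) for a, b, _ in pairs]
        labels = [d for _, _, d in pairs]

        # Pick the candidate threshold that maximises F1 on the labelled pairs.
        best = max(
            (round(t * 0.05, 2) for t in range(1, 20)),
            key=lambda t: f1_score(labels, [int(s >= t) for s in sims]),
        )
        print("selected threshold:", best)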
    • …