65 research outputs found

    Digital twin for healthcare immersive services: fundamentals, architectures, and open issues

    Digital Twin (DT) and Immersive Services (XR) technologies are revolutionizing the medical sector by enabling applications that combine virtual representation with interactive reality. The two technologies leverage one another to advance healthcare services, providing professionals with a virtual environment where they can interact with their patients' digital information more conveniently. Integrating DT and XR enables the creation of advanced 3D models of patients (e.g., organs or the whole body) from accurate real-world data gathered and processed by the DT, improving traditional healthcare services such as telemedicine, training, and consultation. This chapter introduces DT technology in immersive healthcare services and presents its benefits to the medical sector. It discusses the requirements and protocols for building immersive DT models using advanced Artificial Intelligence (AI) and Machine Learning (ML)-based mechanisms. The chapter also proposes several paradigms that enable rapid deployment of these models while meeting the strict demands of the medical sector for efficiency, accuracy, and precision.

    Embed and Conquer: Scalable Embeddings for Kernel k-Means on MapReduce

    The kernel k-means is an effective method for data clustering which extends the commonly used k-means algorithm to work on a similarity matrix over complex data structures. The kernel k-means algorithm is, however, computationally very complex, as it requires the complete kernel matrix to be calculated and stored. Further, the kernelized nature of the kernel k-means algorithm hinders the parallelization of its computations on modern infrastructures for distributed computing. In this paper, we define a family of kernel-based low-dimensional embeddings that allows for scaling kernel k-means on MapReduce via an efficient and unified parallelization strategy. We then propose two methods for low-dimensional embedding that adhere to our definition of the embedding family. Exploiting the proposed parallelization strategy, we present two scalable MapReduce algorithms for kernel k-means. We demonstrate the effectiveness and efficiency of the proposed algorithms through an empirical evaluation on benchmark data sets.
    Comment: Appears in Proceedings of the SIAM International Conference on Data Mining (SDM), 201
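    The embed-then-cluster idea in this abstract can be illustrated with a minimal sketch. This is not the paper's embedding family: as a stand-in, random Fourier features (Rahimi and Recht) approximate the RBF kernel so that plain Lloyd's k-means can run on the low-dimensional embedding; the per-point assignment step is what parallelizes naturally on MapReduce. All function names and parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_embed(X, dim, gamma, rng):
    """Random Fourier features whose inner products approximate the
    RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, dim))
    b = rng.uniform(0, 2 * np.pi, size=dim)
    return np.sqrt(2.0 / dim) * np.cos(X @ W + b)

def kmeans(Z, k, iters=50, rng=None):
    """Plain Lloyd's k-means on the embedded points; the assignment
    step is embarrassingly parallel (the map stage on MapReduce)."""
    centers = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(iters):
        d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = Z[labels == j].mean(0)
    return labels

# Two well-separated blobs; kernel k-means reduces to linear k-means
# on the embedding.
X = np.vstack([rng.normal(0, 0.3, (50, 2)),
               rng.normal(5, 0.3, (50, 2))])
Z = rff_embed(X, dim=200, gamma=0.5, rng=rng)
labels = kmeans(Z, k=2, rng=rng)
```

    The key point is that once the data are embedded, neither the kernel matrix nor the kernelized distance computations are needed, so the memory and communication bottlenecks of exact kernel k-means disappear.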

    Eigenvalue and Generalized Eigenvalue Problems: Tutorial

    This paper is a tutorial on eigenvalue and generalized eigenvalue problems. We first introduce the eigenvalue problem, eigen-decomposition (spectral decomposition), and the generalized eigenvalue problem. Then, we describe the optimization problems that lead to eigenvalue and generalized eigenvalue problems. We also provide examples from machine learning, including principal component analysis, kernel supervised principal component analysis, and Fisher discriminant analysis, which result in eigenvalue and generalized eigenvalue problems. Finally, we introduce the solutions to both eigenvalue and generalized eigenvalue problems.
    Comment: 8 pages, tutorial paper
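    As a concrete instance of the generalized eigenvalue problem this tutorial covers, the following sketch (assuming a symmetric A and a symmetric positive-definite B, a standard textbook reduction rather than anything specific to the paper) reduces A v = λ B v to an ordinary symmetric eigenproblem via a Cholesky factorization of B:

```python
import numpy as np

def generalized_eigh(A, B):
    """Solve A v = lambda B v for symmetric A and SPD B.
    With B = L L^T, solve the standard symmetric problem
    (L^{-1} A L^{-T}) u = lambda u, then map back via v = L^{-T} u."""
    L = np.linalg.cholesky(B)
    Linv = np.linalg.inv(L)
    C = Linv @ A @ Linv.T          # symmetric, same eigenvalues
    evals, U = np.linalg.eigh(C)   # ascending eigenvalues
    V = Linv.T @ U                 # B-orthonormal eigenvectors
    return evals, V

A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[4.0, 0.0], [0.0, 1.0]])
evals, V = generalized_eigh(A, B)
```

    Setting B to the identity recovers the ordinary eigenvalue problem, which is why the generalized problem subsumes the standard one.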

    Fisher and Kernel Fisher Discriminant Analysis: Tutorial

    This is a detailed tutorial paper which explains Fisher Discriminant Analysis (FDA) and kernel FDA. We start with projection and reconstruction. Then, one- and multi-dimensional FDA subspaces are covered. Scatters in two- and multi-class settings are explained in FDA. Then, we discuss the rank of the scatter matrices and the dimensionality of the subspace. A real-life example is also provided for interpreting FDA. Then, the possible singularity of the scatter matrix is discussed to introduce robust FDA. PCA and FDA directions are also compared. We also prove that FDA and linear discriminant analysis are equivalent. The Fisher forest is introduced as an ensemble of Fisher subspaces, useful for handling data with different features and dimensionality. Afterwards, kernel FDA is explained for both one- and multi-dimensional subspaces with both two- and multi-class settings. Finally, some simulations are performed on the AT&T face dataset to illustrate FDA and compare it with PCA.
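    The two-class Fisher direction at the core of FDA has the well-known closed form w ∝ S_w⁻¹(m₁ − m₀), where S_w is the within-class scatter and m₀, m₁ are the class means. A minimal sketch of that formula follows; the synthetic data and every parameter choice here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def fisher_direction(X0, X1):
    """Two-class Fisher discriminant: maximize the Rayleigh quotient
    (w^T S_b w) / (w^T S_w w); the optimum is w ∝ S_w^{-1} (m1 - m0)."""
    m0, m1 = X0.mean(0), X1.mean(0)
    # Within-class scatter = sum of the two class scatter matrices.
    S_w = (np.cov(X0, rowvar=False) * (len(X0) - 1)
           + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(S_w, m1 - m0)
    return w / np.linalg.norm(w)

# Two Gaussian classes separated along the first axis.
X0 = rng.normal([0.0, 0.0], [1.0, 0.2], (200, 2))
X1 = rng.normal([2.0, 0.0], [1.0, 0.2], (200, 2))
w = fisher_direction(X0, X1)
```

    Projecting both classes onto w and thresholding at the midpoint of the projected means gives a simple linear classifier, which is the sense in which FDA and linear discriminant analysis coincide in the two-class case.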