27 research outputs found

    INVARIANT OPERATOR RANGES AND SIMILARITY DOMINANCE IN BANACH AND VON NEUMANN ALGEBRAS

    Suppose $\mathcal{M}$ is a von Neumann algebra. An operator range in $\mathcal{M}$ is the range of an operator in $\mathcal{M}$. When $\mathcal{M}=B(H)$, the algebra of operators on a Hilbert space $H$, R. Douglas and C. Foiaş proved that if $S,T\in B(H)$, $T$ is not algebraic, and $S$ leaves invariant every $T$-invariant operator range, then $S=f(T)$ for some entire function $f$. In the first part of this thesis, we prove versions of this result when $B(H)$ is replaced with a factor von Neumann algebra $\mathcal{M}$ and $T$ is normal. Then, using direct integral theory, we extend our result to an arbitrary von Neumann algebra. In the second part of the thesis, we investigate the notion of similarity dominance. Suppose $\mathcal{A}$ is a unital Banach algebra and $S,T\in\mathcal{A}$. We say that $T$ sim-dominates $S$ provided that, for every $R>0$,
    \[ \sup\left(\left\{\,\lVert A^{-1}SA\rVert : A\in\mathcal{A},\ A\text{ invertible},\ \lVert A^{-1}TA\rVert\le R\,\right\}\right)<\infty. \]
    When $\mathcal{A}$ is the algebra $B(H)$, J. B. Conway and D. Hadwin proved that if $T$ sim-dominates $S$, then $S=\varphi(T)$ for some entire function $\varphi$. We prove this for a large class of operators in a type III factor von Neumann algebra. We also prove that, for any unital Banach algebra $\mathcal{A}$, if $T$ sim-dominates $S$, then $S$ is in the approximate double commutant of $T$ in $\mathcal{A}$. Moreover, we prove that sim-domination is preserved under approximate similarity.
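
    As a side note, not part of the abstract itself, the "easy" direction of the sim-dominance definition can be checked directly: any $S=f(T)$ with $f$ entire is automatically sim-dominated by $T$. A short sketch of the estimate, using only the power-series bound for an entire function:

    % Assume f(z) = \sum_{k\ge 0} c_k z^k is entire and A is invertible with \|A^{-1}TA\| \le R.
    % Since A^{-1} f(T) A = f(A^{-1}TA),
    \[
      \lVert A^{-1} S A \rVert
        = \bigl\lVert f\bigl(A^{-1} T A\bigr)\bigr\rVert
        \le \sum_{k\ge 0} \lvert c_k\rvert\,\lVert A^{-1} T A\rVert^{k}
        \le \sum_{k\ge 0} \lvert c_k\rvert\, R^{k} < \infty,
    \]
    % because the power series of an entire function converges absolutely for every radius R.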

    Semi-supervised Vector-Quantization in Visual SLAM using HGCN

    In this paper, two semi-supervised appearance-based loop closure detection techniques, HGCN-FABMAP and HGCN-BoW, are introduced. Furthermore, an extension to the current state-of-the-art localization SLAM algorithm, ORB-SLAM, is presented. The proposed HGCN-FABMAP method is implemented in an offline manner, incorporating a Bayesian probabilistic scheme for loop-closure decision making. Specifically, we let a Hyperbolic Graph Convolutional Neural Network (HGCN) operate over the SURF feature graph space and perform the vector quantization part of the SLAM procedure, a step previously carried out in an unsupervised manner using algorithms such as HKmeans and k-means++. The main advantage of using an HGCN is that it scales linearly in the number of graph edges. Experimental results show that the HGCN-FABMAP algorithm needs far more cluster centroids than HGCN-ORB; otherwise it fails to detect loop closures. We therefore consider HGCN-ORB more efficient in terms of memory consumption, and we conclude that HGCN-BoW and HGCN-FABMAP are superior to the other algorithms considered.
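
    To make the vector-quantization step concrete, the following is a minimal sketch, in our own Python rather than the authors' code, of the unsupervised baseline that the HGCN replaces: clustering SURF descriptors into visual words and building a bag-of-words histogram per image. The vocabulary size and the use of scikit-learn's KMeans are illustrative assumptions; in the HGCN variants, the nearest-centroid assignment would instead come from semi-supervised node classification over a graph built on the descriptors.

    # Illustrative sketch only (not the paper's implementation).
    import numpy as np
    from sklearn.cluster import KMeans

    def build_vocabulary(descriptors, n_words=500, seed=0):
        # descriptors: (N, 64) array of SURF descriptors pooled from the mapping run
        km = KMeans(n_clusters=n_words, random_state=seed, n_init=10)
        km.fit(descriptors)
        return km.cluster_centers_

    def bow_histogram(image_descriptors, vocabulary):
        # Assign each descriptor to its nearest visual word, then count occurrences.
        dists = np.linalg.norm(image_descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
        words = dists.argmin(axis=1)
        hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
        return hist / max(hist.sum(), 1.0)  # L1-normalize so images are comparable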

    Self-supervised Vector-Quantization in Visual SLAM using Deep Convolutional Autoencoders

    In this paper, we introduce AE-FABMAP, a new self-supervised bag-of-words-based SLAM method. We also present AE-ORB-SLAM, a modified version of the current state-of-the-art BoW-based localization algorithm, in which a deep convolutional autoencoder is used to find loop closures. In the context of bag-of-words visual SLAM, vector quantization (VQ) is considered the most time-consuming part of the SLAM procedure, and it is usually performed in the offline phase of the SLAM algorithm using unsupervised algorithms such as k-means++. We address the loop closure detection part of BoW-based SLAM methods in a self-supervised manner by integrating an autoencoder to perform the vector quantization. This approach can increase the accuracy of large-scale SLAM, where plenty of unlabeled data is available. The main advantage of a self-supervised method is that it reduces the amount of labeling required. Furthermore, experiments show that autoencoders are far more efficient than semi-supervised methods such as graph convolutional neural networks in terms of speed and memory consumption. We integrated this method into FABMAP2, the state-of-the-art long-range appearance-based visual bag-of-words SLAM algorithm, as well as into ORB-SLAM. Experiments on indoor and outdoor datasets demonstrate the superiority of this approach over regular FABMAP2 in all cases, achieving higher accuracy in loop closure detection and trajectory generation.
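
    As a rough illustration of the idea, and under our own assumptions about patch size and architecture rather than anything stated in the abstract, a small convolutional autoencoder can be trained to reconstruct image patches, and its bottleneck codes can then stand in for raw descriptors before quantization. The objective is purely self-supervised, which matches the labeling-reduction argument above.

    # Illustrative PyTorch sketch (hypothetical sizes; not the authors' network).
    import torch
    import torch.nn as nn

    class ConvAutoencoder(nn.Module):
        def __init__(self, code_dim=64):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
                nn.Flatten(),
                nn.Linear(32 * 8 * 8, code_dim),
            )
            self.decoder = nn.Sequential(
                nn.Linear(code_dim, 32 * 8 * 8), nn.ReLU(),
                nn.Unflatten(1, (32, 8, 8)),
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            code = self.encoder(x)
            return self.decoder(code), code

    # Training sketch: reconstruct 32x32 grayscale patches; the learned codes
    # replace raw descriptors before the vector-quantization step.
    model = ConvAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    patches = torch.rand(256, 1, 32, 32)  # placeholder batch of patches
    for _ in range(10):
        recon, _ = model(patches)
        loss = loss_fn(recon, patches)
        opt.zero_grad()
        loss.backward()
        opt.step()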

    Pan-cancer classifications of tumor histological images using deep learning

    Histopathological images are essential for the diagnosis of cancer type and selection of optimal treatment. However, the current clinical process of manual inspection of images is time consuming and prone to intra- and inter-observer variability. Here we show that key aspects of cancer image analysis can be performed by deep convolutional neural networks (CNNs) across a wide spectrum of cancer types. In particular, we implement CNN architectures based on Google Inception v3 transfer learning to analyze 27,815 H&E slides from 23 cohorts in The Cancer Genome Atlas in studies of tumor/normal status, cancer subtype, and mutation status. For 19 solid cancer types we are able to classify tumor/normal status of whole slide images with extremely high AUCs (0.995 ± 0.008). We are also able to classify cancer subtypes within 10 tissue types with AUC values well above random expectations (micro-average 0.87 ± 0.1). We then perform a cross-classification analysis of tumor/normal status across tumor types. We find that classifiers trained on one type are often effective in distinguishing tumor from normal in other cancer types, with the relationships among classifiers matching known cancer tissue relationships. For the more challenging problem of mutational status, we are able to classify TP53 mutations in three cancer types with AUCs ranging from 0.65 to 0.80 using a fully trained CNN, and with similar cross-classification accuracy across tissues. These studies demonstrate the power of CNNs not only for classifying histopathological images in diverse cancer types, but also for revealing shared biology between tumors. We have made the software available at https://github.com/javadnoorb/HistCNN.
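
    For readers who want to see what Inception v3 transfer learning looks like in code, the following is a minimal, generic Keras sketch; the tile size, frozen base, and single sigmoid head are our illustrative choices, not a description of the authors' actual training setup.

    # Generic transfer-learning sketch (illustrative, not the HistCNN pipeline).
    import tensorflow as tf

    base = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg")
    base.trainable = False  # reuse ImageNet features on H&E tiles

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # per-tile tumor vs. normal score
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    # Slide-level calls would then aggregate per-tile probabilities (e.g., by averaging).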

    Semisupervised Vector Quantization in Visual SLAM Using HGCN

    We present a novel vector quantization (VQ) module for two state-of-the-art long-range simultaneous localization and mapping (SLAM) algorithms. The VQ task in SLAM is generally performed using unsupervised methods. We provide an alternative approach through embedding a semisupervised hyperbolic graph convolutional neural network (HGCN) in the VQ step of the SLAM process. The SLAM platforms we have utilized for this purpose are fast appearance-based mapping (FABMAP) and oriented FAST and rotated BRIEF (ORB), both of which rely on extracting the features of the captured images in their loop closure detection (LCD) module. For the first time, we have considered the space formed by the SURF features, robust image descriptors, as a graph, enabling us to apply an HGCN in the VQ section, which results in improved LCD performance. The HGCN vector quantizes the SURF feature space, leading to a bag-of-words (BoW) representation of the images. This representation is subsequently used to determine LCD accuracy and recall. Our approaches in this study are referred to as HGCN-FABMAP and HGCN-ORB. The main advantage of using the HGCN in the LCD section is that it scales linearly as features are accumulated. Benchmarking experiments show the superiority of our methods in terms of both trajectory-generation accuracy on small-scale paths and LCD accuracy and recall for large-scale problems.
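
    Since the BoW representation is ultimately used to determine LCD accuracy and recall, the following small sketch, with an arbitrary cosine score and thresholds of our own choosing, shows how loop-closure candidates can be scored from such histograms.

    # Illustrative sketch only; FABMAP itself uses a Bayesian observation model rather than this score.
    import numpy as np

    def cosine_similarity(h1, h2, eps=1e-12):
        return float(np.dot(h1, h2) / (np.linalg.norm(h1) * np.linalg.norm(h2) + eps))

    def detect_loop_closures(histograms, threshold=0.8, min_gap=30):
        # Flag frame pairs whose BoW histograms are similar enough, skipping
        # temporally adjacent frames to avoid trivial matches.
        closures = []
        for i in range(len(histograms)):
            for j in range(i + min_gap, len(histograms)):
                if cosine_similarity(histograms[i], histograms[j]) >= threshold:
                    closures.append((i, j))
        return closures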

    CUDA and OpenMP Implementation of Boolean Matrix Product with Applications in Visual SLAM

    In this paper, the concept of ultrametric structure is intertwined with the SLAM procedure. A set of pre-existing transformations has been used to create a new simultaneous localization and mapping (SLAM) algorithm. We have developed two new parallel algorithms that implement the time-consuming Boolean transformations of the space dissimilarity matrix. The resulting matrix is an important input to the vector quantization (VQ) step in SLAM processes. These algorithms, written in Compute Unified Device Architecture (CUDA) and Open Multi-Processing (OpenMP) pseudo-code, make the Boolean transformation computationally feasible on a real-world-size dataset. We expect our newly introduced SLAM algorithm, ultrametric Fast Appearance Based Mapping (FABMAP), to outperform regular FABMAP2, since ultrametric spaces are more clusterable than regular Euclidean spaces. Another aim of the presented research is the development of a novel measure of ultrametricity, along with the creation of an Ultrametric-PAM clustering algorithm. Since current measures have a computational time complexity of order O(n³), a new measure with lower time complexity, O(n²), is of potential significance.
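
    As a hedged illustration of what a Boolean matrix product can do for a dissimilarity matrix, the sketch below computes the OR-of-ANDs product and iterates it to a transitive closure; thresholding the dissimilarity matrix and closing it level by level is one standard route to an ultrametric structure. This is our reading of the abstract, not the authors' CUDA/OpenMP kernels.

    import numpy as np

    def boolean_matrix_product(A, B):
        # C[i, j] = OR over k of (A[i, k] AND B[k, j]) for 0/1 matrices.
        return (A.astype(bool)[:, :, None] & B.astype(bool)[None, :, :]).any(axis=1)

    def transitive_closure(adjacency, max_iter=64):
        # Iterate the Boolean product until the reachability matrix stabilizes.
        R = adjacency.astype(bool)
        for _ in range(max_iter):
            R_next = R | boolean_matrix_product(R, R)
            if np.array_equal(R_next, R):
                break
            R = R_next
        return R

    # Example: adjacency = (D <= t) for a dissimilarity matrix D and threshold t;
    # the connected components of transitive_closure(adjacency) are the clusters at level t.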

    Deep learning-based cross-classifications reveal conserved spatial behaviors within tumor histological images.

    Histopathological images are a rich but incompletely explored data type for studying cancer. Manual inspection is time consuming, making it challenging to use for image data mining. Here we show that convolutional neural networks (CNNs) can be systematically applied across cancer types, enabling comparisons to reveal shared spatial behaviors. We develop CNN architectures to analyze 27,815 hematoxylin and eosin scanned images from The Cancer Genome Atlas for tumor/normal, cancer subtype, and mutation classification. Our CNNs are able to classify TCGA pathologist-annotated tumor/normal status of whole slide images (WSIs) in 19 cancer types with consistently high AUCs (0.995 ± 0.008), as well as subtypes with lower but significant accuracy (AUC 0.87 ± 0.1). Remarkably, tumor/normal CNNs trained on one tissue are effective in others (AUC 0.88 ± 0.11), with classifier relationships also recapitulating known adenocarcinoma, carcinoma, and developmental biology. Moreover, classifier comparisons reveal intra-slide spatial similarities, with an average tile-level correlation of 0.45 ± 0.16 between classifier pairs. Breast cancers, bladder cancers, and uterine cancers have spatial patterns that are particularly easy to detect, suggesting these cancers can be canonical types for image analysis. Patterns for TP53 mutations can also be detected, with WSI self- and cross-tissue AUCs ranging from 0.65 to 0.80. Finally, we comparatively evaluate CNNs on 170 breast and colon cancer images with pathologist-annotated nuclei, finding that both cellular and intercellular regions contribute to CNN accuracy. These results demonstrate the power of CNNs not only for histopathological classification, but also for cross-comparisons to reveal conserved spatial behaviors across tumors.
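
    To illustrate what the cross-classification comparison involves, here is a short sketch under our own assumptions (generic scikit-learn-style classifiers and per-tissue datasets held in a dict) of building the tissue-by-tissue AUC matrix described above.

    # Illustrative sketch only; names and data layout are hypothetical.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def cross_auc_matrix(models, datasets):
        # models[t] is a fitted binary classifier for tissue t; datasets[t] = (X, y).
        tissues = sorted(datasets)
        M = np.zeros((len(tissues), len(tissues)))
        for i, train_tissue in enumerate(tissues):
            for j, test_tissue in enumerate(tissues):
                X, y = datasets[test_tissue]
                scores = models[train_tissue].predict_proba(X)[:, 1]
                M[i, j] = roc_auc_score(y, scores)
        return tissues, M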

    Understanding pore formation and the effect on mechanical properties of high speed sintered polyamide-12 parts: A focus on energy input

    High Speed Sintering is a novel powder-bed fusion Additive Manufacturing technique that uses an infrared lamp to provide intensive thermal energy to sinter polymer powders. The amount of thermal energy is critical to particle-coalescence-related defects such as porosity. This study investigates the effect of energy input on porosity and the resulting mechanical properties of polyamide-12 parts. Samples were produced at different lamp speeds, generating varying amounts of energy input from a low to a high level. They were then scanned using X-ray computed tomography, following which they were subjected to tensile testing. A strong correlation between energy input, porosity, and mechanical properties was found, whereby pore formation was fundamentally caused by insufficient energy input. A greater amount of energy input resulted in a reduced porosity level, which in turn led to improved mechanical properties. Using the standard parameters, the porosity, ultimate tensile strength, and elongation achieved were 0.58%, 42.4 MPa, and 10.0%, respectively. Further increasing the energy input resulted in the lowest porosity of 0.14% and the highest ultimate tensile strength and elongation of 44.4 MPa and 13.5%, respectively. Pore morphology, volume, number density, and spatial distribution were also investigated and found to be closely linked with energy input and mechanical properties.
