4,235 research outputs found

    Neural Collaborative Subspace Clustering

    Full text link
    We introduce Neural Collaborative Subspace Clustering, a neural model that discovers clusters of data points drawn from a union of low-dimensional subspaces. In contrast to previous attempts, our model runs without the aid of spectral clustering, which makes it one of the few such algorithms that can gracefully scale to large datasets. At its heart, our neural model benefits from a classifier that determines whether a pair of points lies on the same subspace. Essential to our model is the construction of two affinity matrices, one from the classifier and the other from a notion of subspace self-expressiveness, to supervise training in a collaborative scheme. We thoroughly assess and contrast the performance of our model against various state-of-the-art clustering algorithms, including deep subspace-based ones.
    Comment: Accepted to ICML 2019
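
    The collaborative supervision described above can be illustrated with a small sketch. The snippet below (Python with NumPy, not the authors' implementation) builds the two affinity matrices the abstract mentions: one from ridge-regularized subspace self-expressiveness in closed form, and one from a stand-in classifier's softmax outputs; the regularizer lam, the toy subspace data, and the random logits are all illustrative assumptions.

        import numpy as np

        def self_expressive_affinity(X, lam=1e-2):
            """Affinity from subspace self-expressiveness: solve
            min_C ||X - X C||_F^2 + lam ||C||_F^2 in closed form
            (columns of X are points), then symmetrize |C|."""
            n = X.shape[1]
            G = X.T @ X
            C = np.linalg.solve(G + lam * np.eye(n), G)
            np.fill_diagonal(C, 0.0)  # a point should not explain itself
            return 0.5 * (np.abs(C) + np.abs(C).T)

        def classifier_affinity(logits):
            """Affinity from a cluster classifier: entry (i, j) is the
            probability that points i and j receive the same label."""
            p = np.exp(logits - logits.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            return p @ p.T

        # Toy usage: 60 points drawn from two 2-D subspaces embedded in 10-D.
        rng = np.random.default_rng(0)
        B1, B2 = rng.standard_normal((10, 2)), rng.standard_normal((10, 2))
        X = np.hstack([B1 @ rng.standard_normal((2, 30)),
                       B2 @ rng.standard_normal((2, 30))])
        A_expr = self_expressive_affinity(X)
        A_cls = classifier_affinity(rng.standard_normal((60, 3)))  # stand-in logits
        # In the collaborative scheme, confident entries of each matrix would
        # supervise the other; here we only check that the shapes agree.
        print(A_expr.shape, A_cls.shape)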

    The Unreasonable Effectiveness of Deep Features as a Perceptual Metric

    Full text link
    While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions that fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on ImageNet classification are remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new dataset of human perceptual similarity judgments. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins on our dataset. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
    Comment: Accepted to CVPR 2018; code and data available at https://www.github.com/richzhang/PerceptualSimilarity
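
    The core idea, comparing images in a deep feature space rather than pixel space, can be sketched briefly. The snippet below is a rough illustration in Python (assuming torchvision >= 0.13), not the authors' released LPIPS code, which additionally learns per-channel weights; the chosen layer indices and the random test images are assumptions for illustration.

        import torch
        import torchvision

        # Pretrained VGG16 feature extractor (downloads ImageNet weights).
        vgg = torchvision.models.vgg16(
            weights=torchvision.models.VGG16_Weights.IMAGENET1K_V1).features.eval()
        LAYERS = {3, 8, 15, 22, 29}  # relu1_2 ... relu5_3 activations (assumed here)

        def deep_feature_distance(x, y):
            """Mean squared distance between unit-normalized VGG features of
            two image batches x, y of shape (N, 3, H, W)."""
            dists = []
            with torch.no_grad():
                for i, layer in enumerate(vgg):
                    x, y = layer(x), layer(y)
                    if i in LAYERS:
                        xn = x / (x.norm(dim=1, keepdim=True) + 1e-10)
                        yn = y / (y.norm(dim=1, keepdim=True) + 1e-10)
                        dists.append(((xn - yn) ** 2).mean(dim=(1, 2, 3)))
            return torch.stack(dists).mean(dim=0)

        # Usage: a slightly noisy copy of an image stays "perceptually" close.
        a = torch.rand(1, 3, 64, 64)
        print(deep_feature_distance(a, a + 0.05 * torch.randn_like(a)))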

    Evolving Ensemble Models for Image Segmentation Using Enhanced Particle Swarm Optimization

    Get PDF
    In this paper, we propose particle swarm optimization (PSO)-enhanced ensemble deep neural networks and hybrid clustering models for skin lesion segmentation. A PSO variant is proposed which embeds diverse search actions, including simulated annealing, Lévy flight, helix behavior, modified PSO, and differential evolution operations with spiral search coefficients. These search actions work in a cascaded manner, not only equipping each individual with different search operations throughout the search process but also assigning distinct search actions to different particles simultaneously in every single iteration. The proposed PSO variant is used to optimize the learning hyper-parameters of convolutional neural networks (CNNs) and the cluster centroids of classical Fuzzy C-Means clustering, respectively, to overcome performance barriers. Ensemble deep networks and hybrid clustering models are subsequently constructed from the optimized CNN and hybrid clustering segmenters for lesion segmentation. We evaluate the proposed ensemble models using three skin lesion databases, i.e., PH2, ISIC 2017, and the Dermofit Image Library, and a blood cancer data set, i.e., ALL-IDB2. The empirical results indicate that our models outperform other hybrid ensemble clustering models combined with advanced PSO variants, as well as state-of-the-art deep networks in the literature, on diverse challenging image segmentation tasks.
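
    To make the optimization step concrete, the sketch below shows a plain PSO loop in Python, not the enhanced variant proposed in the paper (which cascades simulated annealing, Lévy flight, helix, modified PSO, and differential evolution moves); the swarm positions encode two hypothetical hyper-parameters (learning rate and dropout rate), and the objective is a stand-in for validation error.

        import numpy as np

        def pso(objective, bounds, n_particles=20, n_iters=50,
                w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimize objective over box bounds (shape (d, 2)) with
            standard inertia-weight particle swarm optimization."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            x = rng.uniform(lo, hi, size=(n_particles, len(lo)))  # positions
            v = np.zeros_like(x)                                  # velocities
            pbest = x.copy()
            pbest_val = np.array([objective(p) for p in x])
            gbest = pbest[pbest_val.argmin()].copy()
            for _ in range(n_iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([objective(p) for p in x])
                better = vals < pbest_val
                pbest[better], pbest_val[better] = x[better], vals[better]
                gbest = pbest[pbest_val.argmin()].copy()
            return gbest, pbest_val.min()

        # Stand-in objective: pretend validation error is minimized at a
        # learning rate of 1e-3 and a dropout rate of 0.25.
        def fake_val_error(p):
            lr, dropout = p
            return (np.log10(lr) + 3.0) ** 2 + (dropout - 0.25) ** 2

        best, err = pso(fake_val_error, np.array([[1e-5, 1e-1], [0.0, 0.5]]))
        print("best hyper-parameters:", best, "objective:", err)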

    Transfer learning for radio galaxy classification

    Full text link
    In the context of radio galaxy classification, most state-of-the-art neural network algorithms have focused on single-survey data. Whether these trained algorithms can identify sources across surveys, or can be adapted to develop classification networks for future surveys, remains unclear. One possible solution is transfer learning, which re-uses elements of existing machine learning models for different applications. Here we present radio galaxy classification based on a 13-layer Deep Convolutional Neural Network (DCNN), using transfer learning between different radio surveys. We find that our machine learning models trained from a random initialization achieve accuracies comparable to those found elsewhere in the literature. When using transfer learning, we find that inheriting model weights pre-trained on FIRST images can boost performance when re-training on lower-resolution NVSS data, but that inheriting weights pre-trained on NVSS and re-training on FIRST data impairs the classifier's performance. We consider the implications of these results for future radio surveys planned for next-generation radio telescopes such as ASKAP, MeerKAT, and SKA1-MID.
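
    The weight-inheritance step the abstract describes can be sketched generically. The snippet below is a toy transfer-learning loop in PyTorch, not the paper's 13-layer DCNN: the small network, the checkpoint path "first_pretrained.pt", and the random tensors standing in for survey cutouts are all illustrative assumptions.

        import torch
        import torch.nn as nn

        def make_cnn(n_classes=2):
            # Toy classifier for 1x64x64 cutouts (the paper uses 13 layers).
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(), nn.Linear(32 * 16 * 16, n_classes),
            )

        model = make_cnn()
        # Inherit weights pre-trained on the source survey (hypothetical path):
        # model.load_state_dict(torch.load("first_pretrained.pt"))

        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # small LR for fine-tuning
        loss_fn = nn.CrossEntropyLoss()

        # Re-train on target-survey batches (random tensors stand in for cutouts).
        for _ in range(3):
            images = torch.randn(8, 1, 64, 64)
            labels = torch.randint(0, 2, (8,))
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
        print("fine-tuning loss:", float(loss))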