Joint Projection Learning and Tensor Decomposition Based Incomplete Multi-view Clustering
Incomplete multi-view clustering (IMVC) has received increasing attention
since some views of samples are often incomplete in reality. Most
existing methods learn similarity subgraphs from the original incomplete multi-view
data and seek complete graphs by exploring the incomplete subgraphs of each
view for spectral clustering. However, the graphs constructed on the original
high-dimensional data may be suboptimal due to feature redundancy and noise.
Besides, previous methods generally ignore the graph noise caused by
inter-class and intra-class structure variation during the transformation
between incomplete and complete graphs. To address these problems, we propose a
novel Joint Projection Learning and Tensor Decomposition Based method (JPLTD)
for IMVC. Specifically, to alleviate the influence of redundant features and
noise in high-dimensional data, JPLTD introduces an orthogonal projection
matrix to project the high-dimensional features into a lower-dimensional space
for compact feature learning. Meanwhile, in this lower-dimensional space,
the similarity graphs corresponding to instances of different views are
learned, and JPLTD stacks these graphs into a third-order low-rank tensor to
explore the high-order correlations across different views. We further consider
the graph noise of projected data caused by missing samples and use a
tensor-decomposition-based graph filter for robust clustering. JPLTD decomposes
the original tensor into an intrinsic tensor and a sparse tensor. The intrinsic
tensor models the true data similarities. An effective optimization algorithm
is adopted to solve the JPLTD model. Comprehensive experiments on several
benchmark datasets demonstrate that JPLTD outperforms the state-of-the-art
methods. The code of JPLTD is available at https://github.com/weilvNJU/JPLTD.
Comment: IEEE Transactions on Neural Networks and Learning Systems, 202
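The tensor decomposition at the core of JPLTD splits the stacked graph tensor into an intrinsic (low-rank) part and a sparse noise part. A minimal numpy sketch of this robust-PCA-style split, using illustrative function names and a simple alternating-proximal scheme rather than the authors' actual optimizer:

```python
import numpy as np

def soft_threshold(x, tau):
    # elementwise shrinkage: proximal operator of the l1 norm
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt(mat, tau):
    # singular value thresholding: proximal operator of the nuclear norm
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt

def decompose_graph_tensor(G, lam=0.5, n_iter=50):
    """Split a stacked graph tensor G (views x n x n) into an intrinsic
    low-rank tensor L and a sparse noise tensor S by alternating
    proximal updates (a simplified stand-in for JPLTD's solver)."""
    L = np.zeros_like(G)
    S = np.zeros_like(G)
    for _ in range(n_iter):
        for v in range(G.shape[0]):          # slice-wise low-rank update
            L[v] = svt(G[v] - S[v], 1.0 / lam)
        S = soft_threshold(G - L, lam)        # sparse residual update
    return L, S
```

Spectral clustering would then be run on the slices of the intrinsic tensor, which model the true data similarities.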
Structure fusion based on graph convolutional networks for semi-supervised classification
Owing to the diversity and complexity of multi-view data in
semi-supervised classification, most existing graph convolutional networks
focus on network architecture construction or salient graph structure
preservation, and ignore the contribution of the complete graph structure to
semi-supervised classification. To mine a more complete distribution structure
from multi-view data with the consideration of the specificity and the
commonality, we propose structure fusion based on graph convolutional networks
(SF-GCN) for improving the performance of semi-supervised classification.
SF-GCN can not only retain the special characteristic of each view data by
spectral embedding, but also capture the common style of multi-view data via a
distance metric between multi-graph structures. Assuming a linear relationship
between multi-graph structures, we construct the optimization function of the
structure fusion model by balancing the specificity loss and the commonality
loss. By solving this function, we simultaneously obtain the fused
spectral embedding of the multi-view data and the fused structure, which
serves as the adjacency matrix input to graph convolutional networks for
semi-supervised classification. Experiments demonstrate that SF-GCN
outperforms the state of the art on three challenging citation-network
datasets: Cora, Citeseer, and Pubmed.
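The fusion step described above amounts to a weighted linear combination of the per-view graph structures, followed by the usual GCN renormalization of the fused adjacency matrix. A minimal sketch with illustrative names, where fixed weights stand in for the weights the model learns by balancing the specificity and commonality losses:

```python
import numpy as np

def fuse_structures(adjs, weights=None):
    """Fuse per-view adjacency matrices by a convex (normalized)
    linear combination, following the assumed linear relationship
    between multi-graph structures."""
    if weights is None:
        weights = np.ones(len(adjs))
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = sum(wi * np.asarray(a, dtype=float) for wi, a in zip(w, adjs))
    return 0.5 * (fused + fused.T)  # keep the fused graph symmetric

def gcn_normalize(adj):
    """Standard GCN renormalization D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
```

The normalized fused matrix is what a GCN layer would consume in place of a single-view adjacency matrix.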
A Deep Learning Reconstruction Framework for Differential Phase-Contrast Computed Tomography with Incomplete Data
Differential phase-contrast computed tomography (DPC-CT) is a powerful
analysis tool for soft-tissue and low-atomic-number samples. Limited by the
implementation conditions, DPC-CT with incomplete projections happens quite
often. Conventional reconstruction algorithms do not handle incomplete data
easily: they usually involve complicated parameter selection, are sensitive to
noise, and are time-consuming. In this paper, we report a new deep learning
reconstruction framework for incomplete-data DPC-CT. It tightly couples a deep
neural network with the DPC-CT
reconstruction algorithm in the phase-contrast projection sinogram domain. The
network estimates the complete phase-contrast projection sinogram rather than
the artifacts caused by the incomplete data. Once trained, the framework is
fixed and can reconstruct the final DPC-CT images for a given incomplete
phase-contrast projection sinogram. Taking the sparse-view DPC-CT as an
example, this framework has been validated and demonstrated with synthetic and
experimental data sets. By embedding the DPC-CT reconstruction step, the
framework naturally encapsulates the physical imaging model of DPC-CT systems
and can easily be extended to other challenges. This work helps push the
application of state-of-the-art deep learning to the field of DPC-CT.
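The framework's key design choice is that the estimation target is the complete sinogram itself, not the artifacts. The data flow can be illustrated with a toy stand-in in which plain angular interpolation plays the role of the trained network (everything here is illustrative; the real framework learns this mapping):

```python
import numpy as np

def complete_sinogram(sparse_sino, view_mask):
    """Fill in missing projection views along the angular axis.
    Linear interpolation stands in for the trained network that,
    in the paper's framework, performs this estimation in the
    phase-contrast sinogram domain."""
    n_views, n_dets = sparse_sino.shape
    known = np.flatnonzero(view_mask)          # indices of measured views
    full = np.empty((n_views, n_dets))
    for d in range(n_dets):                    # interpolate each detector bin
        full[:, d] = np.interp(np.arange(n_views), known,
                               sparse_sino[known, d])
    return full
```

The completed sinogram would then be passed to a conventional DPC-CT reconstruction step (e.g. filtered back-projection) to form the final image.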
High-Resolution Shape Completion Using Deep Neural Networks for Global Structure and Local Geometry Inference
We propose a data-driven method for recovering missing parts of 3D shapes.
Our method is based on a new deep learning architecture consisting of two
sub-networks: a global structure inference network and a local geometry
refinement network. The global structure inference network incorporates a long
short-term memorized context fusion module (LSTM-CF) that infers the global
structure of the shape based on multi-view depth information provided as part
of the input. It also includes a 3D fully convolutional (3DFCN) module that
further enriches the global structure representation according to volumetric
information in the input. Under the guidance of the global structure network,
the local geometry refinement network takes as input local 3D patches around
missing regions, and progressively produces a high-resolution, complete surface
through a volumetric encoder-decoder architecture. Our method jointly trains
the global structure inference and local geometry refinement networks in an
end-to-end manner. We perform qualitative and quantitative evaluations on six
object categories, demonstrating that our method outperforms existing
state-of-the-art work on shape completion.
Comment: 8 pages paper, 11 pages supplementary material, ICCV spotlight paper
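The local geometry refinement stage consumes 3D patches gathered around missing regions of the volume. A small illustrative sketch of that patch-extraction step (not the authors' pipeline; names are hypothetical):

```python
import numpy as np

def patches_around_missing(vol, missing_mask, size=4):
    """Extract cubic patches of side `size` centered on missing voxels
    (missing_mask == True), clipped to the volume bounds; these local
    patches are the kind of input a refinement network would consume."""
    half = size // 2
    patches, centers = [], []
    for z, y, x in zip(*np.nonzero(missing_mask)):
        # clamp the patch origin so the cube stays inside the volume
        z0 = min(max(z - half, 0), vol.shape[0] - size)
        y0 = min(max(y - half, 0), vol.shape[1] - size)
        x0 = min(max(x - half, 0), vol.shape[2] - size)
        patches.append(vol[z0:z0 + size, y0:y0 + size, x0:x0 + size])
        centers.append((z, y, x))
    return np.array(patches), centers
```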
Can k-NN imputation improve the performance of C4.5 with small software project data sets? A comparative evaluation
Missing data is a widespread problem that can affect the ability to use data to construct effective prediction systems. We investigate a common machine learning technique that can tolerate missing values, namely C4.5, to predict cost using six real-world software project databases. We analyze the predictive performance after applying the k-NN missing data imputation technique, to see whether it is better to tolerate missing data or to impute missing values and then apply the C4.5 algorithm. For the investigation, we simulated three missingness mechanisms, three missing data patterns, and five missing data percentages. We found that k-NN imputation can improve the prediction accuracy of C4.5. At the same time, both C4.5 and k-NN are little affected by the missingness mechanism, but the missing data pattern and the missing data percentage have a strong negative impact upon prediction (or imputation) accuracy, particularly if the missing data percentage exceeds 40%.
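The imputation step studied here can be sketched in a few lines: for each row with missing entries, find the k nearest fully observed rows (Euclidean distance over the row's observed features) and average their values. This is a simplified illustration, not the exact variant used in the study:

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill NaNs in each incomplete row with the mean of the k nearest
    complete rows, where distance is computed over the features the
    incomplete row actually observes."""
    X = np.asarray(X, dtype=float)
    out = X.copy()
    donors = np.flatnonzero(~np.isnan(X).any(axis=1))  # fully observed rows
    for i in range(X.shape[0]):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        obs = ~miss
        # distance from row i to every donor over i's observed features
        dists = np.linalg.norm(X[donors][:, obs] - X[i, obs], axis=1)
        nearest = donors[np.argsort(dists)[:k]]
        out[i, miss] = X[nearest][:, miss].mean(axis=0)
    return out
```

The completed matrix can then be handed to any learner that requires complete data, such as a C4.5-style decision tree.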
Semantic Visual Localization
Robust visual localization under a wide range of viewing conditions is a
fundamental problem in computer vision. Handling the difficult cases of this
problem is not only very challenging but also of high practical relevance,
e.g., in the context of life-long localization for augmented reality or
autonomous robots. In this paper, we propose a novel approach based on a joint
3D geometric and semantic understanding of the world, enabling it to succeed
under conditions where previous approaches failed. Our method leverages a novel
generative model for descriptor learning, trained on semantic scene completion
as an auxiliary task. The resulting 3D descriptors are robust to missing
observations by encoding high-level 3D geometric and semantic information.
Experiments on several challenging large-scale localization datasets
demonstrate reliable localization under extreme viewpoint, illumination, and
geometry changes.