SNE: Signed Network Embedding
Several network embedding models have been developed for unsigned networks.
However, these skip-gram-based models cannot be applied to signed networks
because they can only handle one type of link. In this paper, we present our
signed network embedding model, SNE. SNE adopts the log-bilinear model, uses
the representations of all nodes along a given path, and further incorporates
two signed-type vectors to capture the positive or negative relationship of
each edge along the path. We conduct two experiments, node
classification and link prediction, on both directed and undirected signed
networks and compare with four baselines including a matrix factorization
method and three state-of-the-art unsigned network embedding models. The
experimental results demonstrate the effectiveness of our signed network
embedding.
Comment: To appear in PAKDD 201
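The paper's own implementation is not shown here; as a minimal sketch of the mechanism the abstract describes, the code below scores a candidate target node given a path of nodes and edge signs. All names (`node_emb`, `sign_vec`) and the element-wise combination are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 100, 16

node_emb = rng.normal(size=(n_nodes, dim))    # source-role representations
target_emb = rng.normal(size=(n_nodes, dim))  # target-role representations
sign_vec = {+1: rng.normal(size=dim),         # signed-type vectors: one shared
            -1: rng.normal(size=dim)}         # vector per edge sign

def score(path, signs, target):
    """Log-bilinear score of `target` given the nodes and edge signs on a path."""
    # Modulate each node vector element-wise by its edge's sign-type vector,
    # then average into a single context vector.
    ctx = np.mean([node_emb[v] * sign_vec[s] for v, s in zip(path, signs)], axis=0)
    return ctx @ target_emb[target]

# A softmax over all candidate targets yields the predictive distribution.
path, signs = [3, 17, 42], [+1, -1, +1]
logits = np.array([score(path, signs, t) for t in range(n_nodes)])
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```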
SANet: Structure-Aware Network for Visual Tracking
Convolutional neural networks (CNNs) have drawn increasing interest in visual
tracking owing to their strength in feature extraction. Most existing
CNN-based trackers treat tracking as a classification problem. However, these
trackers are sensitive to similar distractors because their CNN models mainly
focus on inter-class classification. To address this problem, we use the
self-structure information of the object to distinguish it from distractors.
Specifically, we utilize a recurrent neural network (RNN) to model object
structure and incorporate it into the CNN to improve its robustness to similar
distractors. Considering that convolutional layers at different levels
characterize the object from different perspectives, we use multiple RNNs to
model object structure at each level. Extensive experiments
on three benchmarks, OTB100, TC-128 and VOT2015, show that the proposed
algorithm outperforms other methods. Code is released at
http://www.dabi.temple.edu/~hbling/code/SANet/SANet.html.
Comment: In CVPR Deep Vision Workshop, 201
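SANet's exact architecture is not given in the abstract; the PyTorch sketch below only illustrates the general idea of running an RNN over a convolutional feature map to encode object self-structure and fusing the result back into the CNN features. The module name and the row-wise sequence layout are assumptions for illustration.

```python
import torch
import torch.nn as nn

class StructureAwareBlock(nn.Module):
    """Sketch: model the spatial structure of a conv feature map with an RNN.

    Feature-map rows are treated as a sequence so the RNN can encode how
    object parts relate spatially; its output is fused back into the CNN
    features via a residual connection.
    """
    def __init__(self, channels, width):
        super().__init__()
        self.rnn = nn.GRU(input_size=channels * width,
                          hidden_size=channels * width, batch_first=True)

    def forward(self, fmap):                                 # fmap: (B, C, H, W)
        b, c, h, w = fmap.shape
        seq = fmap.permute(0, 2, 1, 3).reshape(b, h, c * w)  # rows as a sequence
        out, _ = self.rnn(seq)
        out = out.reshape(b, h, c, w).permute(0, 2, 1, 3)
        return fmap + out                                    # residual fusion

# One block per conv level, since different levels capture different structure.
feat = torch.randn(1, 8, 14, 14)
fused = StructureAwareBlock(channels=8, width=14)(feat)
```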
Linear-scaling kernels for protein sequences and small molecules outperform deep learning while providing uncertainty quantitation and improved interpretability
Gaussian processes (GPs) are Bayesian models that provide several advantages
for regression tasks in machine learning, such as reliable quantitation of
uncertainty and improved interpretability. Their adoption has been hindered by
their excessive computational cost and by the difficulty of adapting them to
analyze sequences (e.g. amino acid and nucleotide sequences) and graphs (e.g.
those representing small molecules). In this study, we develop efficient and
scalable approaches for fitting GP models as well as fast convolution kernels
which scale linearly with graph or sequence size. We implement these
improvements by building an open-source Python library called xGPR. We compare
the performance of xGPR with the reported performance of various deep learning
models on 20 benchmarks, including small molecule, protein sequence and tabular
data. We show that xGPR achieves highly competitive performance with much
shorter training time. Furthermore, we also develop new kernels for sequence
and graph data and show that xGPR generally outperforms convolutional neural
networks on predicting key properties of proteins and small molecules.
Importantly, xGPR provides uncertainty information not available from typical
deep learning models. Additionally, xGPR provides a representation of the input
data that can be used for clustering and data visualization. These results
demonstrate that xGPR provides a powerful and generic tool that can be broadly
useful in protein engineering and drug discovery.
Comment: This is a revised version of the original manuscript with additional
experiments.
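xGPR's own API is not reproduced here; the sketch below illustrates the generic random-feature approximation that lets a GP posterior mean be fit in time linear in the number of samples, the kind of scalability the abstract describes. All variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 1, 256                               # input dim, number of random features

W = rng.normal(size=(d, m))                 # random frequencies (RBF kernel)
b = rng.uniform(0, 2 * np.pi, size=m)

def rff(X):
    """Random Fourier feature map: rff(x) @ rff(x') approximates K(x, x')."""
    return np.sqrt(2.0 / m) * np.cos(X @ W + b)

# Toy regression problem: y = sin(x) + noise.
X = rng.uniform(-3, 3, size=(500, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)

Z = rff(X)                                  # (n, m) feature matrix, m fixed
lam = 1e-2                                  # noise variance / ridge term
A = Z.T @ Z + lam * np.eye(m)               # (m, m) system: cost linear in n
w = np.linalg.solve(A, Z.T @ y)             # approximate GP posterior mean
pred = rff(np.array([[0.5]])) @ w
```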
Higher-order Link Prediction Using Graph Embeddings
Link prediction is an emerging field that predicts whether two nodes in a network are likely to be connected in the near future. Networks model real-world systems using pairwise interactions between nodes. However, many of these interactions may involve more than two nodes or entities simultaneously. For example, social interactions often occur in groups of people, research collaborations involve more than two authors, and biological networks describe interactions among groups of proteins. An interaction that consists of more than two entities is called a higher-order structure. Predicting the occurrence of such higher-order structures helps us solve problems in various disciplines, such as social network analysis, drug combination research, and news topic connections. Moreover, our methods can be used to gain insight into news topic connections during the COVID-19 pandemic.
Higher-order link prediction can be accomplished using neural networks and other machine learning techniques. The primary focus of this project is to explore representations of three-node interactions, called triangles (a special case of higher-order structures). We propose new methods to embed triangles: by generalizing the node2vec algorithm, using different operators to learn an embedding for a triangle, and by using 1-hop subgraphs of the triangles to learn embeddings with the graph2vec algorithm and graph neural networks. The performance of these techniques is evaluated against benchmark scores on various datasets used in the bibliography. From the results, we observe that the node2vec-based triangle embedding algorithm performs better than or comparably to the benchmark models on most datasets.
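As a rough illustration of the node2vec-style operator approach (not the project's actual code), the sketch below combines three precomputed node vectors into a triangle embedding and feeds it to a classifier. The embeddings and labels are random stand-ins; in practice the vectors would come from node2vec and the labels from observed triangle closures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim, n_nodes = 64, 200
emb = rng.normal(size=(n_nodes, dim))   # stand-in for node2vec embeddings

def triangle_embedding(u, v, w, op="hadamard"):
    """Combine three node vectors into one triangle vector via an operator,
    generalizing node2vec's edge operators to three-node interactions."""
    vecs = emb[[u, v, w]]
    if op == "hadamard":
        return vecs.prod(axis=0)
    if op == "mean":
        return vecs.mean(axis=0)
    raise ValueError(op)

# Toy task: does a candidate triple (u, v, w) form a triangle?
triples = rng.integers(0, n_nodes, size=(300, 3))
labels = rng.integers(0, 2, size=300)   # placeholder targets for the sketch
X = np.array([triangle_embedding(*t) for t in triples])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
```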
0.5 Petabyte Simulation of a 45-Qubit Quantum Circuit
Near-term quantum computers will soon reach sizes that are challenging to
directly simulate, even when employing the most powerful supercomputers. Yet,
the ability to simulate these early devices using classical computers is
crucial for calibration, validation, and benchmarking. In order to make use of
the full potential of systems featuring multi- and many-core processors, we use
automatic code generation and optimization of compute kernels, which also
enables performance portability. We apply a scheduling algorithm to quantum
supremacy circuits in order to reduce the required communication and simulate a
45-qubit circuit on the Cori II supercomputer using 8,192 nodes and 0.5
petabytes of memory. To our knowledge, this constitutes the largest quantum
circuit simulation to date. Our highly tuned kernels, in combination with the
reduced communication requirements, allow an improvement in time-to-solution
over state-of-the-art simulations by more than an order of magnitude at every
scale.
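For context, a 45-qubit state vector holds 2^45 complex amplitudes; at 16 bytes each (double-precision complex) that is roughly 0.56 PB, consistent with the 0.5 petabytes quoted above. The sketch below shows the core state-vector update that such simulators optimize, written in plain NumPy rather than the generated, cache-tuned kernels the paper describes.

```python
import numpy as np

def apply_1q_gate(state, gate, k, n):
    """Apply a 2x2 gate to qubit k of an n-qubit state vector.

    Reshaping isolates the target qubit's axis; production simulators
    replace this contraction with fused, architecture-tuned kernels.
    """
    psi = state.reshape((2,) * n)
    psi = np.tensordot(gate, psi, axes=([1], [k]))   # contract the target axis
    psi = np.moveaxis(psi, 0, k)                     # restore axis ordering
    return psi.reshape(-1)

n = 20                                               # 2^20 amplitudes ~ 16 MB
state = np.zeros(2 ** n, dtype=np.complex128)
state[0] = 1.0                                       # |0...0>
H = np.array([[1, 1], [1, -1]], dtype=np.complex128) / np.sqrt(2)
state = apply_1q_gate(state, H, k=0, n=n)            # Hadamard on qubit 0
```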
Quantum Approaches to Data Science and Data Analytics
This thesis explores different research directions related both to the use of classical data analysis techniques for the study of quantum systems and to the employment of quantum computing to speed up hard machine learning tasks.