Learning from Ontology Streams with Semantic Concept Drift
Data stream learning has been widely studied for extracting knowledge
structures from continuous and rapid data records. In the Semantic Web, data is
interpreted in ontologies and its ordered sequence is represented as an
ontology stream. Our work exploits the semantics of such streams to tackle the
problem of concept drift, i.e., unexpected changes in data distribution that
cause most models to become less accurate as time passes. To this end, we
revisited (i) semantic inference in the context of supervised stream learning,
and (ii) models with semantic embeddings. The experiments show accurate
prediction with data from Dublin and Beijing.
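The paper's semantics-aware reasoning over ontology streams is beyond a short
sketch, but the supervised stream-learning setting it builds on can be
illustrated with a plain sliding-window learner: refitting on recent records is
a standard, non-semantic way to keep accuracy from decaying under concept
drift. The window size and base classifier below are assumptions, not the
authors' choices.

# Minimal sketch of drift-aware supervised stream learning (illustrative
# only; it does not reproduce the paper's semantic inference or embeddings).
from collections import deque
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class WindowedStreamLearner:
    def __init__(self, window_size=500):
        self.window = deque(maxlen=window_size)  # keeps only recent records
        self.model = DecisionTreeClassifier()

    def update(self, x, y):
        # Add the newest labeled record, then refit on the window so that
        # examples from before a drift eventually fall out of the model.
        self.window.append((x, y))
        X = np.array([xi for xi, _ in self.window])
        Y = np.array([yi for _, yi in self.window])
        self.model.fit(X, Y)

    def predict(self, x):
        return self.model.predict(np.asarray(x).reshape(1, -1))[0]

Any refittable classifier could replace the decision tree; the point is only
that stale pre-drift records drop out of the training window over time.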
Video retrieval based on deep convolutional neural network
Recently, with the enormous growth of online videos, fast video retrieval
research has received increasing attention. As an extension of image hashing
techniques, traditional video hashing methods mainly depend on hand-crafted
features and transform the real-valued features into binary hash codes. As
videos provide far more diverse and complex visual information than images,
extracting features from videos is much more challenging than from images.
Therefore, high-level semantic features are needed to represent videos, rather
than low-level hand-crafted ones. In this paper, a deep convolutional neural
network is proposed to extract high-level semantic features and a binary hash
function is then integrated into this framework to achieve an end-to-end
optimization. In particular, our approach combines a triplet loss function,
which preserves the relative similarity and difference between videos, with a
classification loss function as the optimization objective. Experiments have
been performed on two public datasets, and the results demonstrate the
superiority of the proposed method over other state-of-the-art video retrieval
methods.
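The combined objective described here (a triplet term over relaxed binary
codes plus a classification term) can be sketched in PyTorch. The linear
encoder stands in for the paper's deep CNN, and the feature dimension, code
length, margin, class count, and trade-off weight alpha are all assumptions.

# Hedged sketch of a joint triplet + classification objective for hashing.
import torch
import torch.nn as nn

class HashNet(nn.Module):
    def __init__(self, feat_dim=2048, hash_bits=64, num_classes=101):
        super().__init__()
        # Stand-in for the deep CNN feature extractor in the paper.
        self.encoder = nn.Linear(feat_dim, hash_bits)
        self.classifier = nn.Linear(hash_bits, num_classes)

    def forward(self, x):
        code = torch.tanh(self.encoder(x))  # relaxed binary codes in (-1, 1)
        return code, self.classifier(code)

triplet = nn.TripletMarginLoss(margin=1.0)  # margin is an assumption
xent = nn.CrossEntropyLoss()
alpha = 0.5                                 # assumed trade-off weight

def combined_loss(model, anchor, positive, negative, labels):
    a, logits = model(anchor)  # anchor code and class logits
    p, _ = model(positive)     # code of a similar video
    n, _ = model(negative)     # code of a dissimilar video
    # Triplet term preserves relative similarity; classification term
    # keeps the codes semantically discriminative.
    return triplet(a, p, n) + alpha * xent(logits, labels)

At retrieval time the relaxed codes would be binarized, e.g. with torch.sign,
before matching by Hamming distance.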
Stratified Transfer Learning for Cross-domain Activity Recognition
In activity recognition, it is often expensive and time-consuming to acquire
sufficient activity labels. To solve this problem, transfer learning leverages
the labeled samples from the source domain to annotate the target domain, which
has few or no labels. Existing approaches typically learn a global domain shift
while ignoring the intra-affinity between classes, which hinders the
performance of the algorithms. In this paper, we propose a
novel and general cross-domain learning framework that can exploit the
intra-affinity of classes to perform intra-class knowledge transfer. The
proposed framework, referred to as Stratified Transfer Learning (STL), can
dramatically improve the classification accuracy for cross-domain activity
recognition. Specifically, STL first obtains pseudo labels for the target
domain via a majority voting technique. Then, it iteratively performs
intra-class knowledge transfer to transform both domains into the same
subspaces. Finally, the labels of the target domain are obtained via a second
round of annotation. To
evaluate the performance of STL, we conduct comprehensive experiments on three
large public activity recognition datasets (i.e., OPPORTUNITY, PAMAP2, and UCI
DSADS), which demonstrate that STL significantly outperforms other
state-of-the-art methods w.r.t. classification accuracy (an improvement of
7.68%). Furthermore, we extensively investigate the performance of STL across
different degrees of similarity and activity levels between domains. We also
discuss the potential of STL in other pervasive computing applications to
provide empirical experience for future research.
Comment: 10 pages; accepted by IEEE PerCom 2018; full paper (camera-ready
version).
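STL's first step, pseudo-labeling the unlabeled target domain by majority
voting over source-trained classifiers, can be sketched with off-the-shelf
learners. The base classifiers below are assumptions, and the paper's
subsequent intra-class subspace transfer is not reproduced here.

# Hedged sketch of pseudo-labeling a target domain via majority voting.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def pseudo_label(Xs, ys, Xt):
    # Train several classifiers on the labeled source domain and let them
    # vote on each unlabeled target sample (voter choice is illustrative).
    voters = [KNeighborsClassifier(), SVC(), RandomForestClassifier()]
    preds = np.stack([clf.fit(Xs, ys).predict(Xt) for clf in voters])
    # Majority vote per target sample; assumes integer class labels >= 0.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, preds)

Each target sample keeps the label most voters agree on, which STL then
refines through iterative intra-class knowledge transfer.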