Avoiding vincular patterns on alternating words
A word $w_1 w_2 \cdots w_n$ is alternating if either $w_1<w_2>w_3<w_4>\cdots$
(when the word is up-down) or $w_1>w_2<w_3>w_4<\cdots$ (when the word is
down-up). The study of alternating words avoiding classical permutation
patterns was initiated by the authors in~\cite{GKZ}, where, in particular, it
was shown that 123-avoiding up-down words of even length are counted by the
Narayana numbers.
However, little was understood about the structure of 123-avoiding up-down
words. In this paper, we fill this gap by introducing the notion of a
cut-pair, which allows us to subdivide the set of words in question into
equivalence classes. We provide a combinatorial argument to show that the
number of equivalence classes is given by the Catalan numbers, which induces an
alternative (combinatorial) proof of the corresponding result in~\cite{GKZ}.
Further, we extend the enumerative results in~\cite{GKZ} to the case of
alternating words avoiding a vincular pattern of length 3. We show that it is
sufficient to enumerate up-down words of even length avoiding the consecutive
pattern and up-down words of odd length avoiding the
consecutive pattern to answer all of our enumerative
questions. The former of the two key cases is enumerated by the Stirling
numbers of the second kind.

Comment: 25 pages; to appear in Discrete Mathematics
Pattern-avoiding alternating words
A word $w_1 w_2 \cdots w_n$ is alternating if either $w_1<w_2>w_3<w_4>\cdots$
(when the word is up-down) or $w_1>w_2<w_3>w_4<\cdots$ (when the word is
down-up). In this paper, we initiate the study of (pattern-avoiding)
alternating words. We enumerate up-down (equivalently, down-up) words by
finding a bijection with order ideals of a certain poset. Further, we show that
the number of 123-avoiding up-down words of even length is given by the
Narayana numbers; we show bijectively that the same holds for 132-avoiding
up-down words of even length. We also give formulas enumerating all other
cases of avoidance of a permutation pattern of length 3 on alternating words.
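The Narayana-number claims above are easy to sanity-check by brute force for small cases. A minimal sketch, assuming words are over the alphabet $\{1,\dots,k\}$ and that containment of a classical pattern means containing an order-isomorphic (strict) subsequence; the particular Narayana indexing matched below is my own observation on small cases, not taken from the paper:

```python
from itertools import combinations, product
from math import comb

def is_up_down(w):
    """w_1 < w_2 > w_3 < w_4 > ... (strict inequalities, alternating)."""
    return all((w[i] < w[i + 1]) if i % 2 == 0 else (w[i] > w[i + 1])
               for i in range(len(w) - 1))

def contains(w, p):
    """True if w contains the classical pattern p, i.e. some subsequence of w
    is order-isomorphic to p (with strict inequalities)."""
    for idx in combinations(range(len(w)), len(p)):
        vals = [w[i] for i in idx]
        if all((vals[a] < vals[b]) == (p[a] < p[b]) and
               (vals[a] > vals[b]) == (p[a] > p[b])
               for a in range(len(p)) for b in range(a + 1, len(p))):
            return True
    return False

def count_avoiders(pattern, length, k):
    """Count up-down words of the given length over {1,...,k} avoiding pattern."""
    return sum(1 for w in product(range(1, k + 1), repeat=length)
               if is_up_down(w) and not contains(w, pattern))

def narayana(n, j):
    """Narayana number N(n, j) = (1/n) C(n, j) C(n, j-1)."""
    return comb(n, j) * comb(n, j - 1) // n
```

For example, over a 3-letter alphabet the eight up-down words of length 4 include exactly six 123-avoiders and six 132-avoiders, both matching a Narayana number.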
Mining Mid-level Features for Action Recognition Based on Effective Skeleton Representation
Recently, mid-level features have shown promising performance in computer
vision. Mid-level features learned by incorporating class-level information are
potentially more discriminative than traditional low-level local features. In
this paper, an effective method is proposed to extract mid-level features from
Kinect skeletons for 3D human action recognition. Firstly, the orientations of
limbs, each connecting two skeleton joints, are computed, and each orientation
is encoded into one of 27 states indicating the spatial relationship of the
joints. Secondly, limbs are combined into parts and the limbs' states are
mapped into part states. Finally, frequent pattern mining is employed to mine
the most frequent and relevant (discriminative, representative and
non-redundant) part states over several consecutive frames. These parts are
referred to as Frequent Local Parts or FLPs. The FLPs allow us to build a
powerful bag-of-FLP action representation. This new representation yields
state-of-the-art results on the MSR DailyActivity3D and MSR ActionPairs3D
datasets.
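The 27-state limb encoding can be made concrete with a small sketch. The abstract does not specify the quantization, so the scheme below (the sign of each coordinate difference quantized into $\{-1,0,1\}$, giving $3^3 = 27$ states) is a plausible reconstruction, not the paper's exact definition:

```python
def limb_state(joint_a, joint_b, eps=1e-3):
    """Encode the orientation of the limb from joint_a to joint_b as one of
    27 states by quantizing the sign of each coordinate difference.
    joint_a, joint_b: (x, y, z) tuples; eps: dead zone treated as 'equal'.
    NOTE: this 27-state scheme is a plausible reconstruction of the idea in
    the abstract, not the paper's exact definition."""
    def sign(d):
        if d > eps:
            return 1
        if d < -eps:
            return -1
        return 0
    sx, sy, sz = (sign(b - a) for a, b in zip(joint_a, joint_b))
    # Map (sx, sy, sz) in {-1, 0, 1}^3 to a single state index in 0..26.
    return (sx + 1) * 9 + (sy + 1) * 3 + (sz + 1)
```

State 13 (the center of the cube) corresponds to two joints at essentially the same position; the other 26 states discretize the limb's direction.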
Multilabel Consensus Classification
In the era of big data, a large amount of noisy and incomplete data can be
collected from multiple sources for prediction tasks. Combining multiple models
or data sources helps to counteract the effects of low data quality and the
bias of any single model or data source, and thus can improve the robustness
and the performance of predictive models. Due to privacy, storage, and
bandwidth considerations, in certain circumstances one has to combine the
predictions from multiple models or data sources to obtain the final
predictions without accessing the raw data. Consensus-based prediction
combination algorithms are effective in such situations. However, current
research on prediction combination focuses on the single-label setting, where an instance can have one
and only one label. Nonetheless, data nowadays are often multilabeled, so
that more than one label has to be predicted at the same time. Directly
applying existing prediction combination methods to multilabel settings
can lead to degraded performance. In this paper, we address the challenges
of combining predictions from multiple multilabel classifiers and propose two
novel algorithms, MLCM-r (MultiLabel Consensus Maximization for ranking) and
MLCM-a (MLCM for microAUC). These algorithms can capture label correlations
that are common in multilabel classifications, and optimize corresponding
performance metrics. Experimental results on popular multilabel classification
tasks verify the theoretical analysis and effectiveness of the proposed
methods.
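To make the problem setting concrete, here is a deliberately naive consensus baseline that simply averages per-label scores from several multilabel classifiers. It ignores label correlations, which is precisely the limitation MLCM-r and MLCM-a are designed to address, so it illustrates the setting rather than the authors' algorithms:

```python
import numpy as np

def consensus_scores(predictions):
    """Naive consensus: average per-label scores from several multilabel
    classifiers. predictions: list of (n_instances, n_labels) score arrays
    with entries in [0, 1]. This baseline treats every label independently
    and is NOT the MLCM-r/MLCM-a method from the paper."""
    return np.mean(np.stack(predictions), axis=0)

def consensus_labels(predictions, threshold=0.5):
    """Binarize the averaged scores into final multilabel predictions."""
    return (consensus_scores(predictions) >= threshold).astype(int)
```

Note that only the score matrices cross model boundaries here; the raw training data never does, which matches the privacy constraint described above.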
Large-scale Continuous Gesture Recognition Using Convolutional Neural Networks
This paper addresses the problem of continuous gesture recognition from
sequences of depth maps using convolutional neural networks (ConvNets). The
proposed method first segments individual gestures from a depth sequence based
on quantity of movement (QOM). For each segmented gesture, an Improved Depth
Motion Map (IDMM), which converts the depth sequence into one image, is
constructed and fed to a ConvNet for recognition. The IDMM effectively encodes
both spatial and temporal information and allows the fine-tuning with existing
ConvNet models for classification without introducing millions of parameters to
learn. The proposed method is evaluated on the Large-scale Continuous Gesture
Recognition task of the ChaLearn Looking at People (LAP) challenge 2016. It
achieved a Mean Jaccard Index of 0.2655 in this challenge.
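The idea of collapsing a depth sequence into one image that a ConvNet can consume can be sketched as follows. The actual IDMM construction is more involved than this accumulated-difference placeholder, which is only meant to illustrate how temporal motion can be folded into a single spatial map:

```python
import numpy as np

def depth_motion_map(depth_frames):
    """Collapse a depth sequence into a single motion image by accumulating
    absolute differences between consecutive frames.
    depth_frames: array of shape (T, H, W).
    NOTE: a simplified stand-in for the IDMM, not the paper's exact recipe."""
    frames = np.asarray(depth_frames, dtype=np.float64)
    motion = np.abs(np.diff(frames, axis=0)).sum(axis=0)  # shape (H, W)
    # Normalize to [0, 255] so the result resembles an ordinary grayscale
    # image and can be fed to an image-pretrained ConvNet.
    if motion.max() > 0:
        motion = motion / motion.max() * 255.0
    return motion
```

Because the output is a single image, off-the-shelf image-classification ConvNets can be fine-tuned on it directly, which is the point the abstract makes about avoiding millions of new parameters.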
Large-scale Isolated Gesture Recognition Using Convolutional Neural Networks
This paper proposes three simple, compact yet effective representations of
depth sequences, referred to respectively as Dynamic Depth Images (DDI),
Dynamic Depth Normal Images (DDNI) and Dynamic Depth Motion Normal Images
(DDMNI). These dynamic images are constructed from a sequence of depth maps
using bidirectional rank pooling to effectively capture the spatial-temporal
information. Such image-based representations enable us to fine-tune
existing ConvNet models trained on image data for classification of depth
sequences without introducing a large number of parameters to learn. Building
on the proposed representations, a convolutional neural network (ConvNet)
based method is developed for gesture recognition and evaluated on the
Large-scale Isolated Gesture Recognition task of the ChaLearn Looking at
People (LAP) challenge 2016. The method achieved 55.57\% classification
accuracy in this challenge and came very close to the best performance even
though only depth data were used.

Comment: arXiv admin note: text overlap with arXiv:1608.0633
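Bidirectional rank pooling can be sketched with the standard approximate rank pooling of Bilen et al., which reduces to a weighted sum of frames with coefficients $\alpha_t = 2t - T - 1$; whether the paper uses this exact variant is an assumption on my part:

```python
import numpy as np

def approx_rank_pool(frames):
    """Approximate rank pooling: a weighted sum of frames with coefficients
    alpha_t = 2t - T - 1 for t = 1..T, approximating the parameters of a
    linear ranking function over time (Bilen et al.'s 'dynamic image')."""
    frames = np.asarray(frames, dtype=np.float64)
    T = frames.shape[0]
    alphas = 2 * np.arange(1, T + 1) - T - 1
    # Contract the time axis: sum_t alpha_t * frame_t.
    return np.tensordot(alphas, frames, axes=1)

def bidirectional_dynamic_images(frames):
    """Pool the sequence forward and backward, yielding the two dynamic
    images used to build DDI-style representations from a depth sequence."""
    frames = np.asarray(frames, dtype=np.float64)
    return approx_rank_pool(frames), approx_rank_pool(frames[::-1])
```

Each pooled result has the same spatial shape as one frame, so, as with the IDMM above, pretrained image ConvNets can be fine-tuned on it directly.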