Multi-task additive models with shared transfer functions based on dictionary learning
Additive models are a widely used class of regression models that
represent the relation between covariates and response variables as the sum of
low-dimensional transfer functions. Besides flexibility and accuracy, a key
benefit of these models is their interpretability: the transfer functions
provide visual means for inspecting the models and identifying domain-specific
relations between inputs and outputs. However, in large-scale problems
involving the prediction of many related tasks, learning additive models
independently results in a loss of model interpretability and can cause overfitting
when training data is scarce. We introduce a novel multi-task learning approach
which provides a corpus of accurate and interpretable additive models for a
large number of related forecasting tasks. Our key idea is to share transfer
functions across models in order to reduce the model complexity and ease the
exploration of the corpus. We establish a connection with sparse dictionary
learning and propose a new efficient fitting algorithm which alternates between
sparse coding and transfer function updates. The former step is solved via an
extension of Orthogonal Matching Pursuit, whose properties are analyzed using a
novel recovery condition which extends existing results in the literature. The
latter step is addressed using a traditional dictionary update rule.
Experiments on real-world data demonstrate that our approach compares favorably
to baseline methods while yielding an interpretable corpus of models, revealing
structure among the individual tasks and being more robust when training data
is scarce. Our framework therefore extends the well-known benefits of additive
models to common regression settings possibly involving thousands of tasks.
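The alternating scheme described above (sparse coding via an OMP-style greedy step, followed by a dictionary update) can be sketched in a few lines. This is an illustrative single-signal version using a standard OMP and a MOD-style update, not the paper's multi-task algorithm; all names and dimensions are assumptions.

```python
import numpy as np

def omp(D, y, k):
    """Greedy Orthogonal Matching Pursuit: select up to k atoms of D to approximate y."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the selected support by least squares.
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = y - D @ coef
    return coef

def dictionary_update(D, Y, X):
    """MOD-style update D = Y X^T (X X^T)^+ followed by atom renormalization."""
    D_new = Y @ X.T @ np.linalg.pinv(X @ X.T)
    norms = np.linalg.norm(D_new, axis=0)
    norms[norms == 0] = 1.0  # avoid dividing a dead atom by zero
    return D_new / norms

# Alternate sparse coding and dictionary updates on synthetic data:
# 50 "tasks", each a 20-dimensional signal, shared dictionary of 8 atoms.
rng = np.random.default_rng(0)
Y = rng.normal(size=(20, 50))
D = rng.normal(size=(20, 8))
D /= np.linalg.norm(D, axis=0)
for _ in range(10):
    X = np.column_stack([omp(D, Y[:, t], k=3) for t in range(Y.shape[1])])
    D = dictionary_update(D, Y, X)
```

Each task's model is thus forced to use at most a few shared atoms (here, k=3), which is the mechanism by which transfer functions are shared across tasks.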
Multimodal Multipart Learning for Action Recognition in Depth Videos
The articulated and complex nature of human actions makes the task of action
recognition difficult. One approach to handling this complexity is to divide it
into the kinetics of body parts and to analyze actions based on these partial
descriptors. We propose a joint sparse regression based learning method which
utilizes the structured sparsity to model each action as a combination of
multimodal features from a sparse set of body parts. To represent dynamics and
appearance of parts, we employ a heterogeneous set of depth and skeleton based
features. The proper structure of multimodal multipart features are formulated
into the learning framework via the proposed hierarchical mixed norm, to
regularize the structured features of each part and to apply sparsity between
them, in favor of a group feature selection. Our experimental results expose
the effectiveness of the proposed learning method in which it outperforms other
methods in all three tested datasets while saturating one of them by achieving
perfect accuracy
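The hierarchical mixed norm described above can be illustrated with a small sketch: an l2 norm over each modality's sub-block, an l2 combination of modality norms within a part, and an l1 sum across parts to encourage part-level sparsity. This is one plausible l1/l2/l2 instantiation for illustration, not necessarily the exact norm used in the paper; the index layout is an assumption.

```python
import numpy as np

def hierarchical_mixed_norm(w, parts):
    """Illustrative l1/l2/l2 mixed norm.
    `parts` maps a body-part name to a list of index arrays, one per modality
    (e.g. depth-based and skeleton-based features of that part)."""
    total = 0.0
    for modality_blocks in parts.values():
        # l2 norm of each modality's sub-block of weights.
        modality_norms = [np.linalg.norm(w[idx]) for idx in modality_blocks]
        # l2 across modalities within the part...
        total += np.linalg.norm(modality_norms)
    # ...and l1 (plain sum) across parts, promoting part-level sparsity.
    return total

# Toy weight vector: "arm" features are active, "leg" features are zero.
w = np.array([3.0, 4.0, 0.0, 0.0])
parts = {"arm": [np.array([0]), np.array([1])],
         "leg": [np.array([2]), np.array([3])]}
hierarchical_mixed_norm(w, parts)  # -> 5.0
```

Penalizing this quantity drives entire parts (here, "leg") to zero while still balancing the modalities inside each retained part, which is the group feature selection effect the abstract refers to.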
Structured Learning in Time-dependent Cox Models
Cox models with time-dependent coefficients and covariates are widely used in
survival analysis. In high-dimensional settings, sparse regularization
techniques are employed for variable selection, but existing methods for
time-dependent Cox models lack flexibility in enforcing specific sparsity
patterns (i.e., covariate structures). We propose a flexible framework for
variable selection in time-dependent Cox models, accommodating complex
selection rules. Our method can adapt to arbitrary grouping structures,
including interaction selection, temporal, spatial, tree, and directed acyclic
graph structures. It achieves accurate estimation with low false alarm rates.
We develop the sox package, implementing a network flow algorithm for
efficiently solving models with complex covariate structures. It offers a
user-friendly interface for specifying grouping structures and delivers fast
computation. Through examples, including a case study on identifying predictors
of time to all-cause death in atrial fibrillation patients, we demonstrate the
practical application of our method with specific selection rules.
Comment: 49 pages (with 19 pages of appendix), 9 tables, 3 figures.
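The grouped-sparsity idea behind such selection rules can be sketched as follows. This is not the sox package's API (sox is an R package); it is a hypothetical Python illustration of a group-lasso-style penalty in which groups encode rules such as "drop a covariate across all time intervals at once."

```python
import numpy as np

def group_penalty(beta, groups, weights=None):
    """Group-lasso-style penalty sum_g w_g * ||beta[G_g]||_2 over (possibly
    overlapping) index groups; driving a group's norm to zero removes all of
    that group's coefficients simultaneously."""
    if weights is None:
        weights = [1.0] * len(groups)
    return sum(w * np.linalg.norm(beta[np.asarray(g)])
               for g, w in zip(groups, weights))

# Time-dependent coefficients flattened into one vector: each group collects
# a covariate's coefficients across all time intervals.
beta = np.array([3.0, 4.0,   # covariate 1 over two intervals
                 0.0, 0.0])  # covariate 2 over two intervals
groups = [[0, 1], [2, 3]]
group_penalty(beta, groups)  # -> 5.0
```

More elaborate selection rules (temporal, tree, or DAG structures) correspond to different, typically overlapping, group definitions over the same coefficient vector.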
Searching for Prosociality in Qualitative Data: Comparing Manual, Closed-Vocabulary, and Open-Vocabulary Methods
Although most people present themselves as possessing prosocial traits, people differ in the extent to which they actually act prosocially in everyday life. Qualitative data that were not ostensibly collected to measure prosociality might contain information about prosocial dispositions that is not distorted by self-presentation concerns. This paper seeks to characterise charitable donors from qualitative data. We compared a manual approach of extracting predictors from participants' self-described personal strivings to two automated approaches: a summation of words predefined as prosocial and a support vector machine classifier. Although variables extracted by the support vector machine predicted donation behaviour well in the training sample (N = 984), virtually no variables from any method significantly predicted donations in a holdout sample (N = 496). Raters' attempts to predict donations to charity based on reading participants' personal strivings were also unsuccessful. However, raters' predictions were associated with past charitable involvement. In sum, predictors derived from personal strivings did not robustly explain variation in charitable behaviour, but personal strivings may nevertheless contain some information about trait prosociality. The sparseness of personal strivings data, rather than the irrelevance of open-ended text or individual differences in goal pursuit, likely explains their limited value in predicting prosocial behaviour. © 2020 European Association of Personality Psychology
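The closed-vocabulary approach mentioned above (summing words predefined as prosocial) is simple to sketch. The word list below is purely hypothetical; the paper's actual lexicon is not given here.

```python
import re
from collections import Counter

# Hypothetical closed vocabulary; illustrative only, not the study's word list.
PROSOCIAL_LEXICON = {"help", "donate", "volunteer", "support", "care", "give"}

def prosocial_score(text):
    """Closed-vocabulary score: how many tokens of `text` fall in the lexicon."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return sum(counts[w] for w in PROSOCIAL_LEXICON)

prosocial_score("I want to help others and donate to charity")  # -> 2
```

The open-vocabulary (support vector machine) alternative would instead learn weights over all observed words from labeled training data, which is what made it prone to overfitting the training sample in the study.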