06241 Abstracts Collection -- Human Motion - Understanding, Modeling, Capture and Animation. 13th Workshop
From 11.06.06 to 16.06.06, the Dagstuhl Seminar 06241 "Human Motion - Understanding, Modeling, Capture and Animation. 13th Workshop 'Theoretical Foundations of Computer Vision'" was held
in the International Conference and Research Center (IBFI),
Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Data-Driven Shape Analysis and Processing
Data-driven methods play an increasingly important role in discovering
geometric, structural, and semantic relationships between 3D shapes in
collections, and applying this analysis to support intelligent modeling,
editing, and visualization of geometric data. In contrast to traditional
approaches, a key feature of data-driven approaches is that they aggregate
information from a collection of shapes to improve the analysis and processing
of individual shapes. In addition, they are able to learn models that reason
about properties and relationships of shapes without relying on hard-coded
rules or explicitly programmed instructions. We provide an overview of the main
concepts and components of these techniques, and discuss their application to
shape classification, segmentation, matching, reconstruction, modeling and
exploration, as well as scene analysis and synthesis, through reviewing the
literature and relating the existing works with both qualitative and numerical
comparisons. We conclude our report with ideas that can inspire future research
in data-driven shape analysis and processing.

Comment: 10 pages, 19 figures
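A minimal toy sketch of the data-driven idea described above: instead of hard-coded rules, a new shape is labeled by aggregating information from a labeled collection via nearest-neighbor voting. The descriptors, labels, and dimensions here are hypothetical stand-ins, not anything from the survey:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "collection": each shape is summarized by a hypothetical global
# descriptor vector (e.g. a histogram of pairwise point distances).
n_labeled, n_dim = 20, 16
descriptors = rng.normal(size=(n_labeled, n_dim))
labels = rng.integers(0, 3, size=n_labeled)  # e.g. chair / table / lamp

def classify(query, k=3):
    """Label a new shape by voting among its k nearest neighbors in the
    collection -- the simplest data-driven alternative to explicit rules."""
    dists = np.linalg.norm(descriptors - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = np.bincount(labels[nearest], minlength=3)
    return int(np.argmax(votes))

pred = classify(rng.normal(size=n_dim))
```

Real systems in the surveyed literature replace these toy descriptors with learned features and far richer collection-level statistics, but the aggregation principle is the same.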
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.

Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Multi-Task Dynamical Systems
Time series datasets are often composed of a variety of sequences from the
same domain, but from different entities, such as individuals, products, or
organizations. We are interested in how time series models can be specialized
to individual sequences (capturing the specific characteristics) while still
retaining statistical power by sharing commonalities across the sequences. This
paper describes the multi-task dynamical system (MTDS), a general methodology
for extending multi-task learning (MTL) to time series models. Our approach
endows dynamical systems with a set of hierarchical latent variables which can
modulate all model parameters. To our knowledge, this is a novel development of
MTL, and applies to time series both with and without control inputs. We apply
the MTDS to motion-capture data of people walking in various styles using a
multi-task recurrent neural network (RNN), and to patient drug-response data
using a multi-task pharmacodynamic model.

Comment: 52 pages, 17 figures
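The core MTDS idea above, a per-sequence latent variable that modulates all model parameters, can be sketched as follows. The dimensions, the affine map from the latent to the parameter vector, and the vanilla RNN cell are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: hidden state H, input X, latent Z.
H, X, Z = 8, 3, 2

# A learned affine map from the per-sequence latent z to a flat RNN
# parameter vector: theta(z) = U @ z + b. U and b are shared across tasks.
n_params = H * H + H * X + H
U = rng.normal(scale=0.1, size=(n_params, Z))
b = rng.normal(scale=0.1, size=n_params)

def rnn_step(h, x, z):
    """One step of a vanilla RNN whose weights are generated from z."""
    theta = U @ z + b
    W_h = theta[:H * H].reshape(H, H)
    W_x = theta[H * H:H * H + H * X].reshape(H, X)
    bias = theta[H * H + H * X:]
    return np.tanh(W_h @ h + W_x @ x + bias)

# Two entities (e.g. two walking styles) get different latents, hence
# different effective RNN weights, while sharing U and b across sequences.
z_a, z_b = rng.normal(size=Z), rng.normal(size=Z)
h0, x0 = np.zeros(H), rng.normal(size=X)
h_a = rnn_step(h0, x0, z_a)
h_b = rnn_step(h0, x0, z_b)
```

The statistical sharing comes from the fact that only the low-dimensional latent z is specialized per sequence; the map (U, b) is fit jointly on all sequences.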
Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word Vector Specialization
Semantic specialization is the process of fine-tuning pre-trained
distributional word vectors using external lexical knowledge (e.g., WordNet) to
accentuate a particular semantic relation in the specialized vector space.
While post-processing specialization methods are applicable to arbitrary
distributional vectors, they are limited to updating only the vectors of words
occurring in external lexicons (i.e., seen words), leaving the vectors of all
other words unchanged. We propose a novel approach to specializing the full
distributional vocabulary. Our adversarial post-specialization method
propagates the external lexical knowledge to the full distributional space. We
exploit words seen in the resources as training examples for learning a global
specialization function. This function is learned by combining a standard
L2-distance loss with an adversarial loss: the adversarial component produces
more realistic output vectors. We show the effectiveness and robustness of the
proposed method across three languages and on three tasks: word similarity,
dialog state tracking, and lexical simplification. We report consistent
improvements over distributional word vectors and vectors specialized by other
state-of-the-art specialization frameworks. Finally, we also propose a
cross-lingual transfer method for zero-shot specialization which successfully
specializes a full target distributional space without any lexical knowledge in
the target language and without any bilingual data.

Comment: Accepted at EMNLP 201
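The combined objective described above, an L2-distance loss to the gold specialized vectors plus a generator-side adversarial term that makes outputs look like real specialized vectors, can be sketched as follows. The linear generator, logistic discriminator, and mixing weight are toy assumptions for illustration, not the paper's actual networks:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4  # toy embedding dimension

# Stand-ins for a "seen" word: its original distributional vector and its
# gold specialized vector (produced by lexicon-driven specialization).
x_orig = rng.normal(size=d)
x_gold = rng.normal(size=d)

# Hypothetical generator G: the global specialization function applied to
# the full vocabulary (a linear map here; the paper uses a deeper model).
G = np.eye(d) + 0.01 * rng.normal(size=(d, d))

w_disc = rng.normal(size=d)

def discriminator_score(v):
    """Toy discriminator: logistic score that v looks like a real
    specialized vector."""
    return 1.0 / (1.0 + np.exp(-v @ w_disc))

x_spec = G @ x_orig
l2_loss = np.sum((x_spec - x_gold) ** 2)
# Generator's adversarial term: push D(G(x)) toward 1 ("real").
adv_loss = -np.log(discriminator_score(x_spec))

lam = 0.5  # hypothetical mixing weight between the two terms
total_loss = l2_loss + lam * adv_loss
```

Once trained on seen words, G can be applied to every vector in the vocabulary, which is what lets the method specialize words absent from the external lexicon.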