74 research outputs found
Synthesis of variable dancing styles based on a compact spatiotemporal representation of dance
Dance, as a complex and expressive form of motion, conveys emotion, meaning and social idiosyncrasies, opening channels for non-verbal communication and promoting rich cross-modal interactions with music and the environment. Realistic dancing characters may therefore incorporate cross-modal information and the variability of dance forms through compact representations that describe the movement structure in terms of its spatial and temporal organization. In this paper, we propose a novel method for synthesizing beat-synchronous dancing motions based on a compact topological model of dance styles previously captured with a motion capture system. The model builds on Topological Gesture Analysis (TGA), which yields a discrete three-dimensional point-cloud representation of the dance by describing the spatiotemporal variability of its gestural trajectories as uniform spherical distributions organized according to classes of the musical meter. The synthesis methodology traces the topological representations, constrained by definable metrical and spatial parameters, back into complete dance instances whose variability is controlled by stochastic processes that consider both the TGA distributions and the kinematic constraints of the body morphology. To assess the relevance and flexibility of each parameter in feasibly reproducing the style of the captured dance, we correlated captured and synthesized trajectories of samba dancing sequences against the level of compression of the model, and report a subjective evaluation over a set of six tests. The results validate our approach, suggesting that a periodic dancing style, and its musical synchrony, can be feasibly reproduced from a suitably parametrized discrete spatiotemporal representation of the gestural motion trajectories, with a notable degree of compression.
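The abstract's core synthesis step can be sketched in a few lines: per musical-meter class, each joint is summarised by a spherical distribution (centre plus radius), and a pose is generated by sampling inside the sphere for the current beat class. This is a hypothetical illustration of the idea only; the function names, the single-joint model, and the uniform-in-sphere sampling choice are assumptions, not the authors' implementation.

```python
import math
import random

def sample_in_sphere(center, radius, rng=random):
    """Draw a point uniformly inside a sphere (radius scaled by cube root)."""
    x, y, z = (rng.gauss(0, 1) for _ in range(3))       # isotropic direction
    norm = math.sqrt(x * x + y * y + z * z) or 1.0
    r = radius * rng.random() ** (1.0 / 3.0)            # uniform in volume
    return tuple(c + r * v / norm for c, v in zip(center, (x, y, z)))

# One sphere per (beat class, joint): here a single hand joint on beat 1,
# with an illustrative centre and a small radius (compact model).
model = {(1, "hand_r"): ((0.3, 1.2, 0.1), 0.05)}
center, radius = model[(1, "hand_r")]
p = sample_in_sphere(center, radius)
print(p)
```

A full synthesizer would repeat this per joint and per beat, then reject or project samples that violate the body's kinematic constraints, which is where the stochastic variability described above is bounded.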
Rehabilitation Exergames: use of motion sensing and machine learning to quantify exercise performance in healthy volunteers
Background: Performing physiotherapy exercises in front of a physiotherapist yields qualitative assessment notes and immediate feedback. However, practising the exercises at home lacks feedback on how well or poorly patients are performing the prescribed tasks. The absence of proper feedback might result in patients doing the exercises incorrectly, which could worsen their condition. Objective: We propose the use of two machine learning algorithms, namely Dynamic Time Warping (DTW) and Hidden Markov Models (HMM), to quantitatively assess a patient's performance with respect to a reference. Methods: Movement data were recorded using a Kinect depth sensor, capable of detecting 25 joints in the human skeleton model. Sixteen participants were recruited to perform four different exercises: shoulder abduction, hip abduction, lunge, and sit-to-stand. Their performance was compared to that of a physiotherapist as a reference. Results: Both algorithms show a similar trend in assessing participants' performance, but their sensitivity differed: DTW was more sensitive to small changes, whereas HMM captured a general view of the performance and was less sensitive to details. Conclusions: The chosen algorithms demonstrated their capacity to objectively assess physical therapy performance. HMM may be more suitable in the early stages of a physiotherapy programme to capture and report general performance, whilst DTW could be used later on to focus on the details.
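DTW, one of the two algorithms named above, aligns two sequences that may differ in speed and returns a cumulative alignment cost. A minimal sketch follows, assuming 1-D joint-angle trajectories sampled at the same rate; the paper's 25-joint Kinect pipeline would apply something like this per joint or on stacked feature vectors, and this is not the authors' code.

```python
def dtw_distance(ref, test):
    """Cumulative DTW alignment cost between a reference and a test sequence."""
    n, m = len(ref), len(test)
    INF = float("inf")
    # cost[i][j] = best cost of aligning ref[:i] with test[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(ref[i - 1] - test[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch test
                                 cost[i][j - 1],      # stretch ref
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# A time-stretched copy of the reference still aligns at zero cost,
# which is why DTW suits exercises performed at variable speed.
reference = [0.0, 0.5, 1.0, 0.5, 0.0]            # e.g. shoulder-abduction angle
slow_copy = [0.0, 0.0, 0.5, 1.0, 1.0, 0.5, 0.0]
print(dtw_distance(reference, reference))  # 0.0
print(dtw_distance(reference, slow_copy))  # 0.0 (pure time warp)
```

The sensitivity the authors report is visible here: any pointwise deviation from the reference adds directly to the cost, whereas an HMM scores the sequence against learned state distributions and so smooths over small local errors.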
Using music and motion analysis to construct 3D animations and visualisations
This paper presents a study into music analysis, motion analysis, and the integration of music and motion to form creative, natural human motion in a virtual environment. Motion capture data are extracted to generate a motion library, which places the digital motion model at a fixed posture. The first step in this process is to configure the motion path curve for the database and, using a computational algorithm, calculate the probability that two motions are sequential. Every motion is then analysed for the next possible smooth movement to connect to, and an interpolation method is used to create the transitions between motions so that the digital motion models move fluently. Lastly, a searching algorithm sifts for possible successive motions from the motion path curve according to the music tempo. It was concluded that the higher the rescaling ratio of a transition, the lower the degree of natural motion.
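The interpolation step described above can be sketched as blending the last pose of one clip into the first pose of the next over a few in-between frames. This is a simplified, hypothetical illustration: production systems interpolate joint rotations (e.g. quaternion slerp) rather than plain linear interpolation on coordinates, and the function names here are not from the paper.

```python
def blend(pose_a, pose_b, t):
    """Linearly interpolate between two poses (flat lists of joint coordinates)."""
    return [a + t * (b - a) for a, b in zip(pose_a, pose_b)]

def transition(last_pose, next_pose, frames):
    """Generate `frames` in-between poses, excluding both endpoints."""
    return [blend(last_pose, next_pose, (i + 1) / (frames + 1))
            for i in range(frames)]

end_of_walk   = [0.0, 1.0, 0.0]   # toy 3-value pose
start_of_turn = [1.0, 1.0, 0.5]
for pose in transition(end_of_walk, start_of_turn, 3):
    print(pose)
```

The paper's conclusion maps onto this sketch: the more a transition is rescaled (stretched to cover a larger pose gap or retimed to hit a beat), the further the interpolated frames drift from any captured motion, hence the loss of naturalness.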
What does touch tell us about emotions in touchscreen-based gameplay?
© 2012 ACM. Nowadays, more and more people play games on touch-screen mobile phones. This raises an interesting question: does touch behaviour reflect the player's emotional state? If so, it would be a valuable evaluation indicator not only for game designers but also for real-time personalization of the game experience. Psychology studies on acted touch behaviour show the existence of discriminative affective profiles. In this paper, finger-stroke features during gameplay on an iPod were extracted and their discriminative power analysed. Based on touch behaviour, machine learning algorithms were used to build systems for automatically discriminating between four emotional states (Excited, Relaxed, Frustrated, Bored), two levels of arousal, and two levels of valence. The results were very interesting, reaching between 69% and 77% correct discrimination between the four emotional states; higher rates (~89%) were obtained for discriminating between two levels of arousal and between two levels of valence.
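The pipeline described above (stroke features in, emotion label out) can be illustrated with a toy nearest-centroid classifier. Everything here is an assumption for illustration: the paper does not publish its feature set or model, and the feature vector (stroke length, mean speed, mean pressure) and training values are invented.

```python
import math

def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    return tuple(sum(col) / len(rows) for col in zip(*rows))

def train(samples):
    """samples: {label: [feature vectors]} -> {label: centroid}."""
    return {label: centroid(rows) for label, rows in samples.items()}

def classify(model, x):
    """Return the label whose centroid is nearest to feature vector x."""
    return min(model, key=lambda lbl: math.dist(model[lbl], x))

# Invented training data: (stroke length, mean speed, mean pressure).
training = {
    "Excited": [(4.1, 9.0, 0.8), (3.9, 8.5, 0.9)],
    "Bored":   [(1.0, 2.0, 0.3), (1.2, 2.4, 0.2)],
}
model = train(training)
print(classify(model, (4.0, 8.8, 0.85)))  # Excited
```

The reported accuracies (69–77% over four classes, ~89% over two) came from stronger learners than this sketch, but the input/output shape of the discrimination task is the same.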
ZATLAB: recognizing gestures for artistic performance interaction
Most artistic performances rely on human gestures, ultimately resulting in an elaborate interaction between the performer and the audience. Humans, even without any formal background in music, dance or gesture analysis, can typically extract, almost unconsciously, a great amount of relevant information from a gesture. In fact, a gesture contains so much information that it is natural to ask: why not use it to further enhance a performance?
Gestures and expressive communication are intrinsically connected and, being intimately attached to our daily existence, both occupy a central position in today's technological society. However, the use of technology to understand gestures remains only vaguely explored: the field has moved beyond its first steps, but the road towards systems fully capable of analyzing gestures is still long and difficult (Volpe, 2005). This is probably because, while the recognition of gestures is a somewhat trivial task for humans, translating gestures into the virtual world through a digital encoding is a difficult and ill-defined task. It is necessary to bridge this gap, stimulating a constructive interaction between gestures and technology, culture and science, performance and communication, and thereby opening new and unexplored frontiers in the design of a novel generation of multimodal interactive systems.
This work proposes an interactive, real-time gesture recognition framework called the Zatlab System (ZtS). The framework is flexible and extensible, and is thus in permanent evolution, keeping up with the different technologies and algorithms that emerge at a fast pace nowadays. The basis of the proposed approach is to partition a temporal stream of captured movement into perceptually motivated descriptive features and transmit them for further processing by machine learning algorithms. The framework takes the view that perception primarily depends on previous knowledge or learning: just as humans do, the framework has to learn gestures and their main features so that it can later identify them. It is, however, designed to be flexible enough to allow learning gestures on the fly.
This dissertation also presents a qualitative and quantitative experimental validation of the framework. The qualitative analysis provides results concerning users' acceptance of the framework; the quantitative validation provides results on the gesture recognition algorithms. The use of machine learning algorithms in these tasks yields final results that compare with, or outperform, typical and state-of-the-art systems. In addition, two artistic implementations of the framework are presented, assessing its usability within the artistic performance domain.
Although a specific implementation of the proposed framework is presented in this dissertation and made available as open-source software, the proposed approach is flexible enough to be used in other scenarios, paving the way to applications that can benefit not only the performing arts domain but also, probably in the near future, other types of communication, such as the gestural sign language
for the hearing impaired.
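The Zatlab abstract's central step, partitioning a movement stream into perceptually motivated descriptors for machine learning, can be sketched as sliding-window features over joint positions. This is a hedged illustration under stated assumptions: the descriptor choice (per-window mean speed and path length), the window size, and all names are invented here and do not reproduce the ZtS feature set.

```python
import math

def speed(p, q, dt=1.0):
    """Instantaneous speed between two consecutive position samples."""
    return math.dist(p, q) / dt

def window_features(positions, size=4):
    """Yield (mean_speed, path_length) for non-overlapping windows."""
    for start in range(0, len(positions) - size + 1, size):
        w = positions[start:start + size]
        steps = [speed(w[i], w[i + 1]) for i in range(len(w) - 1)]
        yield (sum(steps) / len(steps), sum(steps))

# A 2-D hand trajectory: moving right for four frames, then holding still.
hand = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 0), (3, 0), (3, 0), (3, 0)]
for feats in window_features(hand):
    print(feats)  # (1.0, 3.0) then (0.0, 0.0)
```

Descriptors like these, rather than raw coordinates, are what a recognition model would be trained on, which is also what makes on-the-fly learning of new gestures practical: only the compact feature stream needs to be stored per example.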
- …