15 research outputs found

    Modeling human timing behavior

    In order to understand human motor timing, individuals are instructed to synchronize their movements with repetitive environmental events. Cognitive models account for the empirical findings obtained in such tasks. These models are usually formalized as systems of equations that receive variables as input and predict outputs based on the input, the mathematical expression, and the parameters [1]. Because experiments always involve variables that can neither be manipulated nor controlled, i.e., there is noise within and beyond the Central Nervous System (CNS), these models are often defined as parametric families of probability distributions. Schulze and Vorberg (2002) developed such a probabilistic cognitive model, called the Linear Phase Correction model (LPC). Our main goal is to provide a method of parameter estimation for the LPC built on multiple short asynchrony series that can be non-stationary, vary in size, and allow for serially correlated errors. Sociedade Portuguesa de Estatística (SPE)
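    The LPC model's asynchrony dynamics are commonly written as A[n+1] = (1 − α)·A[n] + T[n] + M[n+1] − M[n] − τ, with timer intervals T, motor delays M, correction gain α, and metronome period τ. A minimal simulation sketch of that update rule follows; the function name, parameter values, and noise settings are illustrative assumptions, not values from the paper.

```python
import numpy as np

def simulate_lpc(n_taps=500, alpha=0.5, mean_T=500.0, sd_T=10.0,
                 sd_M=5.0, tau=500.0, seed=0):
    """Simulate asynchronies from the Linear Phase Correction (LPC) model.

    A[n+1] = (1 - alpha) * A[n] + T[n] + M[n+1] - M[n] - tau
    where T are internal timer intervals, M are motor delays, and tau is
    the metronome period (all in milliseconds; values are illustrative).
    """
    rng = np.random.default_rng(seed)
    T = rng.normal(mean_T, sd_T, n_taps)   # internal timer intervals
    M = rng.normal(0.0, sd_M, n_taps + 1)  # motor delays
    A = np.zeros(n_taps + 1)               # asynchronies, A[0] = 0
    for n in range(n_taps):
        A[n + 1] = (1 - alpha) * A[n] + T[n] + M[n + 1] - M[n] - tau
    return A

asynchronies = simulate_lpc()
# With 0 < alpha < 2 the process is stationary and fluctuates around zero.
```

Because the motor-delay differences make the error term serially correlated (a moving-average component), naive estimators are biased on short series, which motivates the estimation method the abstract describes.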

    Parameter estimation of the linear phase correction model by mixed-effects models

    Master's dissertation in Science in Statistics. The control of human motor timing is captured by cognitive models that make assumptions about the underlying information processing mechanisms. A paradigm for its inquiry is the Sensorimotor Synchronization (SMS) task, in which an individual is required to synchronize the movements of an effector, like the finger, with the repetitively appearing onsets of an oscillating external event. The Linear Phase Correction model (LPC) is a cognitive model that captures the asynchrony dynamics between the finger taps and the event onsets. It assumes cognitive processes that are modeled as independent random variables (perceptual delays, motor delays, timer intervals). There exist methods that estimate the model parameters from the asynchronies recorded in SMS tasks. However, while many natural situations show only very short synchronization periods, the previous methods require long asynchrony sequences to allow for unbiased estimation. Depending on the task, long records may be hard to obtain experimentally. Moreover, in typical SMS tasks, records are taken repeatedly to reduce biases. Yet, by averaging parameter estimates from multiple observations, the existing methods do not exploit all available information most appropriately. Therefore, the present work presents a new approach to parameter estimation that integrates multiple asynchrony sequences. Based on simulations from the LPC model, we first demonstrate that existing parameter estimation methods are prone to bias when the synchronization periods become shorter. Second, we present an extended Linear Model (eLM) that integrates multiple sequences within a single model and estimates the model parameters of short sequences with a clear reduction of bias. Finally, by using Mixed-Effects Models (MEM), we show that parameters can also be retrieved robustly when there is between-sequence variability of their expected values.
Since such between-sequence variability is common in experimental and natural settings, we herewith propose a method that increases the applicability of the LPC model. The method reduces biases due, for example, to fatigue or attentional lapses, providing an experimental control that previous methods cannot. Fundação para a Ciência e Tecnologia (FCT) - Project UID/MAT/00013/201
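    The eLM/MEM specifications from the dissertation are not reproduced here, but the core idea of pooling many short sequences in one mixed-effects regression can be sketched with statsmodels. Note that the naive lag-1 regression slope below is a biased estimator of 1 − α, because LPC errors are serially correlated (one of the problems the dissertation addresses); the sketch only illustrates the pooling structure, and all names and parameter values are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

def simulate_sequence(n, alpha, tau=500.0):
    # One short asynchrony sequence from the LPC update rule (illustrative).
    T = rng.normal(tau, 10.0, n)
    M = rng.normal(0.0, 5.0, n + 1)
    A = np.zeros(n + 1)
    for i in range(n):
        A[i + 1] = (1 - alpha) * A[i] + T[i] + M[i + 1] - M[i] - tau
    return A

# Many short sequences whose correction gain alpha varies between sequences.
rows = []
for seq in range(60):
    alpha_seq = rng.normal(0.5, 0.05)
    A = simulate_sequence(30, alpha_seq)
    rows += [{"seq": seq, "A_prev": A[i], "A_next": A[i + 1]}
             for i in range(len(A) - 1)]
df = pd.DataFrame(rows)

# A random slope of A_prev per sequence captures between-sequence
# variability; the fixed-effect slope pools all sequences into one
# estimate related to (1 - alpha).  (Biased here: LPC errors have a
# moving-average component that plain regression ignores.)
model = smf.mixedlm("A_next ~ A_prev", df, groups=df["seq"],
                    re_formula="~A_prev")
fit = model.fit()
slope = fit.fe_params["A_prev"]
```

Pooling via one model, rather than averaging per-sequence estimates, is exactly the design choice the abstract argues for; correcting the serial-correlation bias is what the eLM adds on top.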

    A multimodal approach to interpersonal gait synchronization

    Doctoral thesis in Psychology (Specialty: Basic Psychology). When people walk side-by-side, they often synchronize their movements. First, we investigated in three experiments whether audiovisual signals from the walking partner are integrated according to a mechanism operating as a Maximum Likelihood Estimator (MLE). Sensory cues from a walking partner were virtually simulated. In Experiment 1, seven participants synchronized with human-sized Point Light Walkers and/or footstep sounds. Results revealed the highest performance when auditory cues were presented, regardless of the visual ones. This auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2, human-sized virtual mannequins were implemented and the audiovisual stimuli were rendered in real-time in order to guarantee cross-modal congruence, co-localization, and synchrony. All four participants synchronized best with audiovisual cues, and for three participants the results are consistent with the MLE model. Finally, Experiment 3 yielded performance decrements for three participants when the cues were temporally incongruent. These findings suggest that the integration of congruent audiovisual cues increases the intentional step synchronization of side-by-side walkers. In a fourth experiment, we tested whether synchronization is achieved by matching global body motion rather than single segments like the feet. Eight pairs of participants walked side-by-side in a large field. Results revealed that asynchronies between signals obtained from the principal components of co-variation of several body segments vary less than the asynchronies computed from individual body segments, suggesting a synchronization of the global body motions of the walkers. The overall findings are partially consistent with the information processing approach and the dynamical system approach.
The findings also highlight that virtual environment techniques, when used in contexts such as rehabilitation or sports, require a very high spatiotemporal alignment of the stimuli. This thesis was funded with a doctoral scholarship (bolsa de doutoramento: SFRH/BD/88396/2012) by the Fundação para a Ciência e Tecnologia.

    Bimodal information increases spontaneous interpersonal synchronization of goal directed upper limb movements

    When interacting with each other, people often spontaneously synchronize their movements, e.g. during pendulum swinging, chair rocking [5], walking [4][7], and when executing periodic forearm movements [3]. Although the spatiotemporal information that establishes the coupling, leading to synchronization, might be provided by several perceptual systems, the systematic study of the contribution of different sensory modalities has been widely neglected. Considering a) differences in sensory dominance on the spatial and temporal dimensions [5], b) different cue combination and integration strategies [1][2], and c) that sensory information might provide different aspects of the same event, synchronization should be moderated by the type of sensory modality. Here, 9 naïve participants placed a bottle periodically between two target zones, 40 times, in 12 conditions, while sitting in front of a confederate executing the same task. The participant could a) see and hear, b) see, or c) hear the confederate, or d) audiovisual information about the movements of the confederate was absent. The couple started in 3 different relative positions (i.e., in-phase, anti-phase, out of phase). A retro-reflective marker was attached to the top of the bottles, and bottle displacement was captured by a motion capture system. We analyzed the variability of the continuous relative phase, which reflects the degree of synchronization. Results indicate the emergence of spontaneous synchronization, an increase with bimodal information, and an influence of the initial phase relation on the particular synchronization pattern. The results have theoretical implications for studying cue combination in interpersonal coordination and are consistent with coupled oscillator models. Fundação Bial (Grant 77/12) and Fundação para a Ciência e Tecnologia - FCT: SFRH/BD/88396/2012; EXPL/MHC-PCN/0162/2013; FCOMP-01-0124-FEDER-022674 and PEst-C/CTM/U10264/2011; FCOMP-01-0124-FEDER-037281 and PEst-C/EEI/LA0014/2013.
This work was financed by FEDER grants through the Operational Competitiveness Program – COMPETE.
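    The abstract's synchronization measure, variability of the continuous relative phase, can be sketched numerically. One common pipeline (an assumption here; the abstract does not specify the exact computation) extracts instantaneous phases via the Hilbert transform and summarizes relative-phase variability with a circular statistic:

```python
import numpy as np
from scipy.signal import hilbert

def relative_phase_variability(x, y):
    """Circular variability of the continuous relative phase between two
    periodic movement signals (lower = stronger synchronization).

    Phases come from the analytic signal via the Hilbert transform; this
    is one common pipeline, not necessarily the one used in the study.
    """
    phase_x = np.angle(hilbert(x - x.mean()))
    phase_y = np.angle(hilbert(y - y.mean()))
    rel = phase_x - phase_y
    # Mean resultant length R in [0, 1]; variability = 1 - R.
    R = np.abs(np.mean(np.exp(1j * rel)))
    return 1.0 - R

# Two oscillations with a small constant lag -> variability near 0.
t = np.linspace(0, 10, 2000)
x = np.sin(2 * np.pi * 1.0 * t)
y = np.sin(2 * np.pi * 1.0 * t + 0.1)
v_sync = relative_phase_variability(x, y)
```

A stable relative phase (in-phase or anti-phase) yields values near 0, while drifting, unsynchronized movements yield values near 1, which is why lower variability indicates stronger coupling.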

    Audiovisual integration increases the intentional step synchronization of side-by-side walkers

    When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner with kinesthetic, cutaneous, visual, and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge for the CNS is to derive the best estimate based on this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1, seven participants were instructed to synchronize with human-sized Point Light Walkers and/or footstep sounds. Results revealed the highest synchronization performance with auditory and audiovisual cues. This was quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2, human-sized virtual mannequins were implemented. Also, audiovisual stimuli were rendered in real-time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues. For three of the four participants, results point toward their optimal integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking.
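    The MLE account of cue combination invoked above has a standard closed form: the bimodal estimate is a reliability-weighted average of the unimodal estimates, and its variance is lower than either unimodal variance. A minimal numerical sketch (the variance values are illustrative assumptions, not data from the experiments):

```python
def mle_combine(est_a, var_a, est_v, var_v):
    """Combine auditory and visual estimates as a Maximum Likelihood
    Estimator: weights are inversely proportional to the unimodal
    variances, and the combined variance is smaller than either one.
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    w_v = 1.0 - w_a
    est_av = w_a * est_a + w_v * est_v
    var_av = (var_a * var_v) / (var_a + var_v)
    return est_av, var_av

# Illustrative numbers: auditory timing is more reliable than visual timing.
est, var = mle_combine(est_a=10.0, var_a=4.0, est_v=30.0, var_v=16.0)
# est = 14.0 (pulled toward the reliable auditory cue), var = 3.2 < 4.0
```

The variance reduction of the combined estimate is the signature prediction tested in Experiment 2: bimodal synchronization should be less variable than the best unimodal condition.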

    On the growth rate of minor-closed classes of graphs

    "Vegeu el resum a l'inici del document del fitxer adjunt"

    Sensorimotor synchronization when walking side by side with a point light walker

    [Excerpt] Synchronization of periodic movements like side-by-side walking [7] is frequently modeled by coupled oscillators [5], and the coupling strength is defined quantitatively [3]. In contrast, in most studies on sensorimotor synchronization (SMS), simple movements like finger taps are synchronized with simple stimuli like metronomes [4]. While the latter paradigm simplifies matters and allows for the assessment of the relative weights of sensory modalities through systematic variation of the stimuli [1], it might lack ecological validity. Conversely, using more complex movements and stimuli might complicate the specification of mechanisms underlying coupling. We merged the positive aspects of both approaches to study the contribution of auditory and visual information to synchronization during side-by-side walking. As stimuli, we used Point Light Walkers (PLWs) and auralized step sounds; both were constructed from previously captured walking individuals [2][6]. PLWs were retro-projected on a screen and matched according to gender, hip height, and velocity. The participant walked for 7.20 m side by side with 1) a PLW, 2) step sounds, or 3) both displayed in temporal congruence. The instruction to participants was to synchronize with the available stimuli. [...]Acknowledgments: [Supported by Fundação Bial (Grant 77/12) and Fundação para a Ciência e Tecnologia - FCT: SFRH/BD/88396/2012; EXPL/MHC-PCN/0162/2013; FCOMP-01-0124-FEDER-022674 and PEst-C/CTM/U10264/2011; FCOMP-01-0124-FEDER-037281 and PEst-C/EEI/LA0014/2013. This work was financed by FEDER grants through the Operational Competitiveness Program – COMPETE]

    The contribution of sensory information on the unintentional synchronization of side-by-side walkers

    Fundação para a Ciência e a Tecnologia (FCT); FEDER grants through the Operational Competitiveness Program - COMPETE.

    Twelve-month observational study of children with cancer in 41 countries during the COVID-19 pandemic

    Introduction: Childhood cancer is a leading cause of death. It is unclear whether the COVID-19 pandemic has impacted childhood cancer mortality. In this study, we aimed to establish all-cause mortality rates for childhood cancers during the COVID-19 pandemic and determine the factors associated with mortality. Methods: Prospective cohort study in 109 institutions in 41 countries. Inclusion criteria: children <18 years who were newly diagnosed with or undergoing active treatment for acute lymphoblastic leukaemia, non-Hodgkin's lymphoma, Hodgkin lymphoma, retinoblastoma, Wilms tumour, glioma, osteosarcoma, Ewing sarcoma, rhabdomyosarcoma, medulloblastoma and neuroblastoma. Of 2327 cases, 2118 patients were included in the study. The primary outcome measure was all-cause mortality at 30 days, 90 days and 12 months. Results: All-cause mortality was 3.4% (n=71/2084) at 30-day follow-up, 5.7% (n=113/1969) at 90-day follow-up and 13.0% (n=206/1581) at 12-month follow-up. The median time from diagnosis to multidisciplinary team (MDT) plan was longest in low-income countries (7 days, IQR 3-11). Multivariable analysis revealed several factors associated with 12-month mortality, including low-income (OR 6.99 (95% CI 2.49 to 19.68); p<0.001), lower middle income (OR 3.32 (95% CI 1.96 to 5.61); p<0.001) and upper middle income (OR 3.49 (95% CI 2.02 to 6.03); p<0.001) country status, and chemotherapy (OR 0.55 (95% CI 0.36 to 0.86); p=0.008) and immunotherapy (OR 0.27 (95% CI 0.08 to 0.91); p=0.035) within 30 days from MDT plan. Multivariable analysis also revealed that laboratory-confirmed SARS-CoV-2 infection (OR 5.33 (95% CI 1.19 to 23.84); p=0.029) was associated with 30-day mortality. Conclusions: Children with cancer are more likely to die within 30 days if infected with SARS-CoV-2. However, timely treatment reduced the odds of death. This report provides crucial information to balance the benefits of providing anticancer therapy against the risks of SARS-CoV-2 infection in children with cancer.
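    The reported mortality rates can be reproduced directly from the stated numerators and denominators (a simple arithmetic check; the denominators shrink with follow-up window because fewer patients had reached each time point):

```python
# Reproduce the reported all-cause mortality rates from the raw counts.
followups = {
    "30-day": (71, 2084),
    "90-day": (113, 1969),
    "12-month": (206, 1581),
}
rates = {k: round(100 * d / n, 1) for k, (d, n) in followups.items()}
print(rates)  # {'30-day': 3.4, '90-day': 5.7, '12-month': 13.0}
```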

    Growth constants of minor-closed classes of graphs

    A minor-closed class of graphs is a set of graphs closed under taking minors.
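    For context on the title, the standard definition of the growth constant of a proper minor-closed class (a fact from the literature on graph minors, not recovered from this truncated abstract) is:

```latex
% For a proper minor-closed class $\mathcal{C}$ with $g_n$ labeled graphs
% on $n$ vertices, the growth constant is
\[
  \gamma(\mathcal{C}) \;=\; \limsup_{n \to \infty}
    \left( \frac{g_n}{n!} \right)^{1/n},
\]
% which is finite because $g_n \le c^{\,n}\, n!$ for some constant $c$
% depending only on the class.
```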