
    Autism spectrum disorder classification based on interpersonal neural synchrony: Can classification be improved by dyadic neural biomarkers using unsupervised graph representation learning?

    Research in machine learning for autism spectrum disorder (ASD) classification promises to improve clinical diagnoses. However, recent studies in clinical imaging have shown limited generalization of biomarkers across and beyond benchmark datasets. Despite increasing model complexity and sample size in neuroimaging, classification performance for ASD remains far from clinical applicability. This raises the question of how we can overcome these barriers to develop early biomarkers for ASD. One approach might be to rethink how we operationalize the theoretical basis of this disorder in machine learning models. Here we introduce unsupervised graph representations that explicitly map the neural mechanisms of a core aspect of ASD, deficits in dyadic social interaction, as assessed by dual-brain recordings (hyperscanning), and evaluate their predictive performance. The proposed method differs from existing approaches in that it is better suited to capturing social interaction deficits at the neural level and is applicable to young children and infants. First results from functional near-infrared spectroscopy data indicate potential predictive capacity of a task-agnostic, interpretable graph representation. This first effort to leverage interaction-related deficits at the neural level to classify ASD may stimulate new approaches and methods to enhance existing models towards developmental ASD biomarkers in the future. Comment: Accepted in Medical Image Computing and Computer Assisted Intervention - MICCAI 2022: The 5th International Workshop on Machine Learning in Clinical Neuroimaging
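    A minimal sketch of the kind of pipeline the abstract describes, under stated assumptions: inter-brain synchrony is approximated by cross-participant channel correlations, and the unsupervised graph representation by a spectral (graph Laplacian) embedding. The channel counts, threshold, and choice of embedding are illustrative assumptions, not the authors' implementation.

        # Illustrative sketch, not the authors' implementation: build a dyadic
        # "inter-brain" graph from two participants' fNIRS channel time series and
        # derive an unsupervised, task-agnostic embedding from the graph Laplacian.
        import numpy as np

        def interbrain_graph(signals_a, signals_b, threshold=0.3):
            """signals_a, signals_b: arrays of shape (n_channels, n_timepoints)."""
            n_a = signals_a.shape[0]
            corr = np.corrcoef(np.vstack([signals_a, signals_b]))
            adjacency = np.zeros_like(corr)
            # Keep only cross-brain edges whose absolute correlation exceeds the threshold.
            cross = np.abs(corr[:n_a, n_a:])
            adjacency[:n_a, n_a:] = np.where(cross > threshold, cross, 0.0)
            adjacency[n_a:, :n_a] = adjacency[:n_a, n_a:].T
            return adjacency

        def laplacian_embedding(adjacency, dim=8):
            """Unsupervised spectral embedding: low-order eigenvectors of the graph Laplacian."""
            laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
            _, eigvecs = np.linalg.eigh(laplacian)
            return eigvecs[:, 1:dim + 1]  # skip the trivial constant eigenvector

        # Synthetic example: two participants with 20 fNIRS channels each
        # (low threshold here only because the toy data are pure noise).
        rng = np.random.default_rng(0)
        a, b = rng.standard_normal((20, 500)), rng.standard_normal((20, 500))
        embedding = laplacian_embedding(interbrain_graph(a, b, threshold=0.1))
        print(embedding.shape)  # (40, 8): one row per channel, usable as classifier input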

    Electrical equipment of a heat pump station

    This work presents the power-supply calculation for heat pump station No. 2 in the city of Sayanogorsk. It also considers controlling the network pump with a frequency converter (variable-frequency drive) in order to reduce electrical energy consumption
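    A back-of-the-envelope sketch of why variable-frequency control of the network pump reduces energy consumption, using the standard pump affinity laws (shaft power scales roughly with the cube of speed). The rated power, speed fraction, and operating hours below are hypothetical values, not figures from the thesis.

        # Back-of-the-envelope sketch, not taken from the thesis: estimated energy saving
        # of a network pump under variable-frequency control, using the pump affinity laws
        # (flow ~ speed, head ~ speed^2, shaft power ~ speed^3).

        def pump_power_at_speed(rated_power_kw: float, speed_fraction: float) -> float:
            """Approximate shaft power when the pump runs at a fraction of rated speed."""
            return rated_power_kw * speed_fraction ** 3

        rated_power_kw = 250.0   # hypothetical rated pump power
        speed_fraction = 0.8     # e.g. the frequency converter runs the pump at 80 % speed
        hours_per_year = 6000    # hypothetical annual operating hours

        reduced_power_kw = pump_power_at_speed(rated_power_kw, speed_fraction)
        saving_kwh = (rated_power_kw - reduced_power_kw) * hours_per_year
        print(f"Power at 80% speed: {reduced_power_kw:.0f} kW")   # 128 kW
        print(f"Estimated annual saving: {saving_kwh:.0f} kWh")   # 732000 kWh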

    Open video data sharing in developmental and behavioural science

    Video recording is a widely used method for documenting infant and child behaviours in research and clinical practice. Video data have rarely been shared, however, owing to ethical concerns about confidentiality, even though the need for shared, large-scale datasets keeps growing. This demand is even more pressing when data-driven, computer-based approaches are involved, such as screening tools that complement clinical assessments. To share data while abiding by privacy protection rules, a critical question arises: do efforts at data de-identification reduce data utility? We addressed this question by showcasing Prechtl's general movements assessment (GMA), an established and globally practised video-based diagnostic tool in early infancy for detecting neurological deficits, such as cerebral palsy. To date, no shared expert-annotated large data repositories for infant movement analyses exist. Such datasets would massively benefit the training and recalibration of human assessors and the development of computer-based approaches. In the current study, sequences from a prospective longitudinal infant cohort with a total of 19,451 available general movements video snippets were randomly selected for human clinical reasoning and computer-based analysis. We demonstrated for the first time that pseudonymisation by face-blurring video recordings is a viable approach. The video redaction did not affect classification accuracy for either human assessors or computer vision methods, suggesting an adequate and easy-to-apply solution for sharing movement video data. We call for further exploration of efficient and privacy-rule-conforming approaches to de-identifying video data in scientific and clinical fields beyond movement assessments. Such approaches would enable sharing and merging stand-alone video datasets into large data pools to advance science and public health
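    A minimal illustration of frame-wise face blurring with OpenCV, assuming a Haar-cascade face detector and a Gaussian blur; this is not the redaction pipeline used in the study, and the file paths are hypothetical.

        # Illustrative sketch only, not the study's redaction pipeline: frame-wise face
        # blurring of a video with OpenCV's Haar cascade detector and a Gaussian blur.
        import cv2

        def blur_faces(input_path: str, output_path: str) -> None:
            detector = cv2.CascadeClassifier(
                cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
            cap = cv2.VideoCapture(input_path)
            fps = cap.get(cv2.CAP_PROP_FPS)
            width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
            height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
            writer = cv2.VideoWriter(output_path,
                                     cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
                    # Replace each detected face region with a heavily blurred copy.
                    frame[y:y + h, x:x + w] = cv2.GaussianBlur(
                        frame[y:y + h, x:x + w], (51, 51), 0)
                writer.write(frame)
            cap.release()
            writer.release()

        # blur_faces("infant_gma_clip.mp4", "infant_gma_clip_blurred.mp4")  # hypothetical paths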

    Dysfunctions in brain networks supporting empathy: An fMRI study in adults with autism spectrum disorders

    The present study aimed at identifying dysfunctions in brain networks that may underlie disturbed empathic behavior in autism spectrum disorders (ASD). During functional magnetic resonance imaging, subjects were asked to identify the emotional state observed in a facial stimulus (other-task) or to evaluate their own emotional response to it (self-task). Behaviorally, ASD subjects performed comparably to the control group on the other-task, but showed less emotionally congruent responses on the self-task. Activations in brain regions related to theory of mind were observed in both groups. Activations of the medial prefrontal cortex (MPFC) were located in dorsal subregions in ASD subjects and in ventral areas in control subjects. During the self-task, ASD subjects activated an additional network of frontal and inferior temporal areas. Frontal areas previously associated with the human mirror system were activated in both tasks in control subjects, while ASD subjects recruited these areas during the self-task only. Activations in the ventral MPFC may provide the basis for one's “emotional bond” with other persons’ emotions. Such atypical patterns of activation may underlie disturbed empathy in individuals with ASD. Subjects with ASD may use an atypical cognitive strategy to gain access to their own emotional state in response to other people's emotions

    Looking While Unhappy: A Mood-Congruent Attention Bias Toward Sad Adult Faces in Children

    A negative mood-congruent attention bias has been consistently observed, for example, in clinical studies of major depression. This bias is assumed to be dysfunctional in that it helps maintain a sad mood, whereas a potentially adaptive role has largely been neglected. Previous experiments using sad mood induction found a negative mood-congruent attention bias specifically in young individuals, explained by an adaptive need for information transfer in the service of mood regulation. In the present study we investigated this attentional bias in typically developing children (aged 6–12 years) after inducing happy and sad moods. Crucially, we manipulated the age (adult vs. child) of the faces in the displayed pairs of facial expressions depicting sadness, anger, fear and happiness. The results indicate that sad children indeed exhibited a mood-specific attention bias toward sad facial expressions. Additionally, this bias was more pronounced for adult faces. The results are discussed in terms of an information gain that should be stronger when looking at adult faces, owing to adults' greater life experience. These findings bear implications for both research methods and future interventions
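    A hypothetical illustration of how a dwell-time attention-bias index for a face pair could be computed; the paper does not specify this exact formula, and the numbers below are invented.

        # Hypothetical illustration (not the paper's exact measure): a dwell-time based
        # attention-bias index for a pair of faces shown side by side.

        def attention_bias_index(dwell_emotional_ms: float, dwell_other_ms: float) -> float:
            """Proportion of total looking time spent on the emotional face.

            0.5 means no bias; values above 0.5 indicate a bias toward that face.
            """
            total = dwell_emotional_ms + dwell_other_ms
            return dwell_emotional_ms / total if total > 0 else 0.5

        # Example: a sad child looks 1800 ms at a sad adult face and 1200 ms at the paired face.
        print(attention_bias_index(1800, 1200))  # 0.6 -> bias toward the sad face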

    Reproducibility in the absence of selective reporting: An illustration from large-scale brain asymmetry research

    Other funding: Max Planck Society (Germany). The problem of poor reproducibility of scientific findings has received much attention over recent years in a variety of fields, including psychology and neuroscience. The problem has been partly attributed to publication bias and unwanted practices such as p-hacking. Low statistical power in individual studies is also understood to be an important factor. In a recent multisite collaborative study, we mapped brain anatomical left-right asymmetries for regional measures of surface area and cortical thickness in 99 MRI datasets from around the world, totalling over 17,000 participants. In the present study, we revisited these hemispheric effects from the perspective of reproducibility. Within each dataset, we considered an effect to have been reproduced when it matched the meta-analytic effect from the 98 other datasets in terms of effect direction and significance threshold. In this sense, the results within each dataset were viewed as coming from separate studies in an "ideal publishing environment," that is, free from selective reporting and p-hacking. We found an average reproducibility rate of 63.2% (SD = 22.9%, min = 22.2%, max = 97.0%). As expected, reproducibility was higher for larger effects and in larger datasets. Reproducibility was not obviously related to the age of participants, scanner field strength, FreeSurfer software version, cortical regional measurement reliability, or regional size. These findings constitute an empirical illustration of reproducibility in the absence of publication bias or p-hacking, when assessing realistic biological effects in heterogeneous neuroscience data and with typically used sample sizes
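    A simplified sketch of the reproducibility criterion described above: for each dataset, the effect counts as reproduced if it matches the direction of the meta-analytic effect of the remaining datasets and passes the significance threshold. The inverse-variance weighting and the toy numbers are assumptions, not the paper's exact procedure.

        # Simplified sketch of the leave-one-out reproducibility criterion; the exact
        # meta-analytic weighting used in the paper is assumed, not reproduced.
        import numpy as np

        def reproducibility_rate(effects, variances, p_values, alpha=0.05):
            """effects, variances, p_values: 1-D sequences, one entry per dataset."""
            effects, variances, p_values = map(np.asarray, (effects, variances, p_values))
            reproduced = []
            for i in range(len(effects)):
                others = np.delete(np.arange(len(effects)), i)
                weights = 1.0 / variances[others]                    # inverse-variance weights
                meta_effect = np.sum(weights * effects[others]) / np.sum(weights)
                same_direction = np.sign(effects[i]) == np.sign(meta_effect)
                reproduced.append(same_direction and p_values[i] < alpha)
            return float(np.mean(reproduced))

        # Toy example with five hypothetical datasets measuring the same asymmetry effect.
        rate = reproducibility_rate(effects=[0.4, 0.3, -0.1, 0.5, 0.35],
                                    variances=[0.01, 0.02, 0.05, 0.01, 0.02],
                                    p_values=[0.001, 0.01, 0.60, 0.002, 0.03])
        print(rate)  # 0.8 -> four of five datasets reproduce the meta-analytic effect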

    Positive Facial Affect – An fMRI Study on the Involvement of Insula and Amygdala

    Imitation of facial expressions engages the putative human mirror neuron system as well as the insula and the amygdala as parts of the limbic system. The specific function of the latter two regions during emotional actions is still under debate. The current study investigated brain responses during imitation of positive compared with non-emotional facial expressions. Differences in brain activation of the amygdala and insula were additionally examined during observation and execution of facial expressions. Participants imitated, executed and observed happy and non-emotional facial expressions, as well as neutral faces. During imitation, higher right-hemispheric activation emerged in the happy compared with the non-emotional condition in the right anterior insula and the right amygdala, in addition to the pre-supplementary motor area, middle temporal gyrus and inferior frontal gyrus. Region-of-interest analyses revealed (i) that the right insula was more strongly recruited by imitation and execution than by observation of facial expressions, (ii) that the insula was significantly more strongly activated by happy than by non-emotional facial expressions during observation and imitation, and (iii) that the activation differences in the right amygdala between happy and non-emotional facial expressions were larger during imitation and execution than during observation alone. We suggest that the insula and the amygdala contribute specifically to the happy emotional connotation of the facial expressions, depending on the task. The pattern of insula activity might reflect increased bodily awareness during active execution compared with passive observation, and during visual processing of happy compared with non-emotional facial expressions. The amygdala activation specific to happy facial expressions during the motor tasks, but not in the observation condition, might reflect increased autonomic activity or feedback from the facial muscles to the amygdala

    Temporal Behavioral Parameters of On-Going Gaze Encounters in a Virtual Environment

    To navigate the social world, humans rely heavily on gaze for non-verbal communication, as it conveys information in a highly dynamic and complex, yet concise manner: for instance, humans effortlessly use gaze to direct and infer the attention of a possible interaction partner. However, many traditional paradigms in social gaze research rely on static ways of assessing gaze interaction, e.g., by using images or prerecorded videos as stimulus material. Emerging gaze-contingent paradigms, in which algorithmically controlled virtual characters respond flexibly to the gaze behavior of humans, provide high ecological validity. Ideally, these are based on models of human behavior that allow a precise, parameterized characterization of behavior, and they should cover variable interactive settings and different communicative states of the interacting agents. The present study provides a complete definition and empirical description of a behavioral parameter space of human gaze behavior in extended gaze encounters. To this end, we (i) modeled a shared 2D virtual environment on a computer screen in which a human could interact via gaze with an agent and with simultaneously presented objects to create instances of joint attention, and (ii) quantitatively determined the free model parameters (temporal and probabilistic) of behavior within this environment, providing a first complete, detailed description of the behavioral parameter space governing joint attention. This knowledge is essential for modeling interacting agents with a high degree of ecological validity, be it for cognitive studies or applications in human-robot interaction
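    A hypothetical sketch of the kind of parameterized, gaze-contingent agent such a behavioral parameter space could inform; the parameter names, values, and distributions are illustrative assumptions, not the estimates reported in the study.

        # Hypothetical sketch of a parameterized gaze-contingent agent; parameter names
        # and values are illustrative, not the ones estimated in the paper.
        import random
        from dataclasses import dataclass

        @dataclass
        class GazeAgentParams:
            p_respond_joint_attention: float = 0.7   # probability of following the partner's gaze to an object
            mean_fixation_ms: float = 600.0          # average fixation duration on the current target
            response_latency_ms: float = 350.0       # delay before reacting to the partner's gaze shift

        def simulate_response(params: GazeAgentParams, partner_looks_at_object: bool) -> dict:
            """One simulated decision of the virtual agent after the partner's gaze shift."""
            follows = partner_looks_at_object and random.random() < params.p_respond_joint_attention
            return {
                "follows_gaze": follows,
                "latency_ms": params.response_latency_ms if follows else None,
                "fixation_ms": random.expovariate(1.0 / params.mean_fixation_ms),
            }

        print(simulate_response(GazeAgentParams(), partner_looks_at_object=True))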

    The “Social Gaze Space”: A Taxonomy for Gaze-Based Communication in Triadic Interactions

    Humans rely substantially on non-verbal cues in their communication and interaction with others. The eyes represent a “simultaneous input-output device”: while we observe others and obtain information about their mental states (including feelings, thoughts, and intentions to act), our gaze simultaneously provides information about our own attention and inner experiences. This underlines its pivotal role in the coordination of communication. The communicative and coordinative capacities of gaze – and their phylogenetic and ontogenetic impact – become fully apparent in triadic interactions, constituted in their simplest form by two persons and an object. Technological advances have sparked renewed interest in social gaze and provide new methodological approaches. Here we introduce the ‘Social Gaze Space’ as a new conceptual framework for the systematic study of gaze behavior during social information processing. It covers all possible categorical states, namely ‘partner-oriented,’ ‘object-oriented,’ ‘introspective,’ ‘initiating joint attention,’ and ‘responding joint attention.’ Different combinations of these states explain several interpersonal phenomena. We argue that this taxonomy distinguishes the most relevant interactional states along their distinctive features, and we showcase its implications for prominent social gaze phenomena. The taxonomy also allows us to identify research desiderata that have so far been neglected. We argue for a systematic investigation of these phenomena and discuss related methodological issues
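    A minimal sketch encoding the five categorical states of the ‘Social Gaze Space’ as an enumeration, with a toy classification rule for a triadic configuration; the rule is an illustrative reading of the taxonomy, not the authors' formal definition.

        # Minimal sketch of the five categorical states of the "Social Gaze Space";
        # the classification rule is an illustrative reading of the taxonomy.
        from enum import Enum, auto

        class SocialGazeState(Enum):
            PARTNER_ORIENTED = auto()            # looking at the interaction partner
            OBJECT_ORIENTED = auto()             # looking at the object
            INTROSPECTIVE = auto()               # gaze withdrawn from partner and object
            INITIATING_JOINT_ATTENTION = auto()  # directing the partner's gaze to an object
            RESPONDING_JOINT_ATTENTION = auto()  # following the partner's gaze to an object

        def classify(my_target: str, partner_target: str, i_shifted_first: bool) -> SocialGazeState:
            """Toy classifier for a triadic (two persons, one object) gaze configuration."""
            if my_target == "partner":
                return SocialGazeState.PARTNER_ORIENTED
            if my_target == "object" and partner_target == "object":
                return (SocialGazeState.INITIATING_JOINT_ATTENTION if i_shifted_first
                        else SocialGazeState.RESPONDING_JOINT_ATTENTION)
            if my_target == "object":
                return SocialGazeState.OBJECT_ORIENTED
            return SocialGazeState.INTROSPECTIVE

        print(classify("object", "object", i_shifted_first=False))  # RESPONDING_JOINT_ATTENTION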