
    Thermally induced multi-stable composites for morphing aircraft applications

    This research focuses on the realisation of 'shape-adaptable' systems through unsymmetrical laminates. The residual stress field that is built into this type of laminate is used to obtain panels with two or more equilibrium states. Such systems provide a possible solution for the realisation of morphing structures because they make it possible to fulfil simultaneously the contradictory requirements of flexibility and stiffness.

    A new distributed protocol for consensus of discrete-time systems

    In this paper, a new distributed protocol is proposed to enforce consensus in a discrete-time network of scalar agents with an arbitrarily assignable convergence rate. Several simulations validate the performance and the improvements with respect to more standard protocols.
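    For context, the classical discrete-time consensus update that such protocols build on can be sketched as follows; the topology, step size, and initial states below are illustrative assumptions, and the paper's new protocol (with its assignable convergence rate) is not reproduced here.

```python
# Minimal sketch of the standard discrete-time consensus protocol for
# scalar agents (the classical baseline, not the paper's new protocol).
import numpy as np

# Adjacency matrix of a connected network of 4 scalar agents
# (illustrative topology, not taken from the paper).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # graph Laplacian
eps = 0.25                   # step size; must satisfy eps < 1/max_degree

x = np.array([1.0, -2.0, 0.5, 3.0])  # initial agent states
for k in range(50):
    # Each agent moves toward its neighbours:
    # x_i(k+1) = x_i(k) + eps * sum_j a_ij (x_j(k) - x_i(k))
    x = x - eps * (L @ x)

print(x)  # all entries converge to the average of the initial states (0.625)
```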

    Topology-induced containment for general linear systems on weakly connected digraphs

    The paper deals with topology-induced containment output feedback for ensuring multi-consensus of homogeneous linear systems evolving over a weakly connected communication digraph. Starting from the extension of a recent characterization of multi-consensus, a decentralized static feedback enforcing multi-consensus is designed based on a suitable network-induced decomposition; a neighborhood state observer is proposed to complete the design. The results are finally illustrated on a simple simulated example.
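    As a rough illustration of what a network-induced decomposition can look like, the sketch below computes the condensation of a weakly connected digraph into its strongly connected components; source components (those with no incoming edges) evolve autonomously and are natural candidates for the clusters that seed distinct consensus values. The graph and the use of condensation are assumptions for illustration only; the decomposition actually used in the paper may differ.

```python
# Hedged sketch: condensation of a weakly connected digraph into strongly
# connected components (SCCs), a common network-induced decomposition in
# multi-consensus analysis.
import networkx as nx

# Illustrative weakly connected digraph (edges chosen for the example only).
G = nx.DiGraph([(0, 1), (1, 0),   # SCC {0, 1}
                (2, 3), (3, 2),   # SCC {2, 3}
                (1, 4), (3, 4),   # both SCCs feed follower 4
                (4, 5)])          # follower chain

C = nx.condensation(G)            # DAG of strongly connected components
sources = [C.nodes[n]["members"] for n in C if C.in_degree(n) == 0]
print(sources)  # e.g. [{0, 1}, {2, 3}] -> candidate independent clusters
```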

    Exploring the neural basis of phonological representations from sounds and vision

    Speech is a multisensory signal that we can decipher from the voice and/or the lips. While the successive computational steps necessary to transform the auditory signal into meaningful language representations have been extensively explored, little is known about how the visual input of speech is processed in the brain, and how auditory and visual speech information are combined to converge onto a unified linguistic percept. In this study, we aim to identify brain regions that are involved in auditory (phonemes) and visual (visemes) phonology, and to explore whether some brain regions can be considered multisensory abstract phonological regions supporting both auditory and visual phonological representations. We rely on functional magnetic resonance imaging (fMRI) in healthy adults to classify brain activity patterns evoked by phonemes and visemes. Preliminary results suggest that a network of visual, motor, auditory and frontal regions is involved in viseme recognition. Interestingly, auditorily defined phonological regions (in the superior temporal gyrus, STG) seem to be involved in visual phonological representations as well. Moreover, the overlap between auditory and visual decoding in mid- and posterior STG and in motor cortex indicates that these regions could be involved in the integration of auditory and visual speech phonology.

    Exploring the neural basis of phonological representations from sounds and vision.

    INTRODUCTION: Speech is a multisensory signal that we can decipher from the voice and/or the lips. While the successive computational steps necessary to transform the auditory signal into abstract language representations have been extensively explored, little is known about how the visual input of speech is processed in the brain, and how auditory and visual speech information converge onto a unified linguistic percept. In this study, we focus on the minimal abstract units of language, i.e. the phonological level. We aim to identify brain regions that are involved in auditory phonology (phonemes) and visual phonology (visemes). In particular, we aim to explore whether some brain regions represent both auditory and visual phonological representations, potentially in an abstract fashion. METHOD: We rely on functional magnetic resonance imaging (fMRI) combined with searchlight multivariate pattern analyses (MVPA) in healthy adults to characterize brain regions that represent phonological information from vision and audition. More precisely, we classify brain activity patterns evoked by a limited set of consonant-vowel syllables, composed of 3 perceptually distant consonants and 3 perceptually distant vowels, presented either auditorily (speech) or visually (lipreading). RESULTS: Preliminary analyses suggest that a network of visual, auditory, motor and frontal regions is involved in viseme recognition. Interestingly, auditorily defined phonological regions (in the superior temporal gyrus, STG) seem to be involved in visual phonological representations as well. In line with previous literature, we are able to decode auditory phonemes in the classical speech perception network (auditory, motor and frontal areas). Moreover, the overlap between auditory and visual decoding in mid- and posterior STG and in motor cortex indicates that these regions could be involved in the integration of auditory and visual speech phonology. We will then perform cross-modal classification between auditory and visual phonological representations in these multisensory regions to evaluate whether they implement a shared abstract representation for auditory and visual phonology. In addition, our analytical approach will be further extended using individually defined regions of interest (namely auditory phonological regions in STG, and face- and word-selective areas in VOTC) from functional localizers that were acquired in all our participants.
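    To make the analysis logic concrete, here is a hedged sketch of the two decoding steps described above (within-modality cross-validated classification, then cross-modal generalisation) on synthetic data; in the actual study this would run per searchlight sphere on fMRI activity patterns, and the trial counts, feature counts, and classifier below are assumptions.

```python
# Hedged sketch of within-modality and cross-modal decoding on synthetic data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 60, 100             # per modality (illustrative)
y = np.repeat([0, 1, 2], n_trials // 3)  # 3 consonant classes

# Synthetic "activity patterns": class-dependent mean plus noise, with the
# class signal shared across modalities to mimic an abstract phonological code.
signal = rng.normal(size=(3, n_voxels))
X_aud = signal[y] + rng.normal(scale=2.0, size=(n_trials, n_voxels))
X_vis = signal[y] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

clf = LinearSVC()

# (1) Within-modality decoding, 5-fold cross-validation (chance = 1/3).
print("auditory CV accuracy:", cross_val_score(clf, X_aud, y, cv=5).mean())

# (2) Cross-modal decoding: train on auditory patterns, test on visual ones.
# Above-chance accuracy suggests a representation shared across modalities.
clf.fit(X_aud, y)
print("train-aud / test-vis accuracy:", clf.score(X_vis, y))
```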

    bidSPM: an SPM-centric BIDS app for flexible statistical analysis

    Introduction: Great strides have recently been made in standardizing the format of neuroimaging data with initiatives such as the Brain Imaging Data Structure (BIDS, Gorgolewski et al. (2016)) and pipelines like fmriprep (Esteban et al. (2019)). However, the statistical analysis phase of the typical neuroimaging process requires a significant amount of flexibility, which often leads to non-reproducible and heterogeneous scripts. Additionally, scientific publications often lack critical contextual information, making it hard to replicate the analyses from published studies. Even when analysis scripts are shared, they may lack transparency, making it difficult to understand and apply the same model to different datasets. To address these issues, the BIDS Statistical Model (https://bids-standard.github.io/statsmodels/) was recently developed to promote automated model-fitting pipelines (see for example fitlins: https://github.com/poldracklab/fitlins). However, the BIDS statistical model has not been integrated with SPM12. bidSPM is a BIDS app that fills this gap and makes it easier to leverage this new tool. Methods: The philosophy of bidSPM is to take standardized data and configuration files as input and to return standardized outputs, minimizing how much code researchers have to write. bidSPM uses the BIDS app CLI (Gorgolewski et al. (2017)) to provide a standardized way to run fMRI analyses of a BIDS dataset with SPM12 and several of its complementary toolboxes. Analyses can be done at the subject and group level, on the whole brain or in a region of interest. Additionally, the bidSPM pipeline can serve as a preparatory step for different kinds of analyses, whether task-free (resting-state) or task-based univariate and multivariate studies. To run a statistical analysis, bidSPM requires only three inputs: a valid raw BIDS dataset, its BIDS derivatives (preprocessed by fmriprep or by bidSPM itself), and a BIDS stats model JSON file. The BIDS stats model defines the input data, the variables and confound variables to include in the general linear model (GLM), and the contrasts to estimate, as well as several options for HRF convolution, model estimation, results to display… Having a single JavaScript Object Notation (JSON) file define one analysis allows researchers to easily create several models. bidSPM can then help choose the best model, as it can perform Bayesian model selection via the Model Assessment, Comparison and Selection (MACS) toolbox for SPM12 (Soch (2018)). This feature can be used to compare (1) different cognitive models for a given task and (2) different denoising strategies. This approach provides a principled way to choose a model for a given dataset without having to peek at the results, and thus may also help prevent procedural overfitting (Yarkoni and Westfall (2017)). Results: bidSPM outputs follow the BIDS derivatives conventions, and GLM results are stored as NIDM-Results (Maumet et al. (2016)), allowing researchers to upload them to Neurovault (https://neurovault.org, Gorgolewski et al. (2015)) in a couple of clicks. Additionally, bidSPM can easily provide 4D maps of a subject's GLM output (beta / t-maps) to allow further analysis using MVPA classification frameworks or RSA tools, making it a bridge connecting different frameworks. Conclusions: bidSPM provides researchers with a flexible way to run statistical analyses using SPM12 with a single JSON file and only a few lines of code, based on data formatted in BIDS. bidSPM can be run in MATLAB or Octave, is also packaged as a Docker image, and is available on GitHub (https://github.com/cpp-lln-lab/bidSPM) and Docker Hub (https://hub.docker.com/repository/docker/cpplab/bidSPM/). We hope that this tool will make it easier for the community to adopt practices that lead to more reproducible results by relying on standardised pipelines that are easily shareable.
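    To give a feel for the single JSON file that drives an analysis, the sketch below writes a minimal, schematic BIDS stats model; the task, condition names, contrast, and file name are made up, and the exact required fields should be checked against the BIDS Stats Models specification and the bidSPM documentation.

```python
# Schematic sketch of a minimal BIDS stats model file (field names follow the
# BIDS Stats Models draft; this is an abridged illustration, not a complete
# or validated model).
import json

model = {
    "Name": "example model",
    "BIDSModelVersion": "1.0.0",
    "Input": {"task": ["speech"]},    # which data to select (hypothetical task)
    "Nodes": [
        {
            "Level": "Run",           # run-level GLM
            "Name": "run_level",
            "Model": {
                "Type": "glm",
                # design matrix: one regressor per condition, plus a confound
                "X": ["trial_type.audio", "trial_type.visual", "rot_x"],
            },
            "Contrasts": [
                {
                    "Name": "audio_gt_visual",
                    "ConditionList": ["trial_type.audio", "trial_type.visual"],
                    "Weights": [1, -1],
                    "Test": "t",
                }
            ],
        }
    ],
}

with open("model-example_smdl.json", "w") as f:  # illustrative file name
    json.dump(model, f, indent=2)
```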
