24 research outputs found

    Combining Multi-Shell Diffusion with Conventional MRI Improves Molecular Diagnosis of Diffuse Gliomas with Deep Learning

    The WHO classification since 2016 confirms the importance of integrating molecular diagnosis into prognosis and treatment decisions for adult-type diffuse gliomas. This motivates the development of non-invasive diagnostic methods, in particular MRI, to predict the molecular subtypes of gliomas before surgery. At present, this development has focused on deep-learning (DL)-based predictive models, mainly with conventional MRI (cMRI), despite recent studies suggesting that multi-shell diffusion MRI (dMRI) offers information complementary to cMRI for molecular subtyping. The aim of this work is to evaluate the potential benefit of combining cMRI and multi-shell dMRI in DL-based models. A model implemented with deep residual neural networks was chosen as an illustrative example. Using a dataset of 146 patients with gliomas (grades 2 to 4), the model was trained and evaluated, with nested cross-validation, on pre-operative cMRI, multi-shell dMRI, and a combination of the two for the following classification tasks: (i) IDH mutation; (ii) 1p/19q codeletion; and (iii) three molecular subtypes according to WHO 2021. Results from a subset of 100 patients with lower-grade gliomas (grades 2 and 3 according to WHO 2016) demonstrated that combining cMRI and multi-shell dMRI gave the best performance in predicting IDH mutation and 1p/19q codeletion, achieving an accuracy of 75 ± 9% for IDH-mutation status, higher than using cMRI or multi-shell dMRI alone (both 70 ± 7%). Similar findings were observed for predicting 1p/19q-codeletion status, with the accuracy from combining cMRI and multi-shell dMRI (72 ± 4%) higher than from each modality used alone (cMRI: 65 ± 6%; multi-shell dMRI: 66 ± 9%). These findings held when all 146 patients were considered for predicting IDH status (combined: 81 ± 5% accuracy; cMRI: 74 ± 5%; multi-shell dMRI: 73 ± 6%) and for the diagnosis of the three molecular subtypes according to WHO 2021 (combined: 60 ± 5%; cMRI: 57 ± 8%; multi-shell dMRI: 56 ± 7%). Together, these findings suggest that combining cMRI and multi-shell dMRI offers higher accuracy than using each modality alone for predicting IDH and 1p/19q status and for diagnosing the three molecular subtypes with DL-based models.
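
    For orientation only, the sketch below shows one common way such a combined input could be arranged: co-registered cMRI sequences and multi-shell dMRI-derived maps stacked as channels of a small 3D residual classifier (early fusion). The channel counts, layer sizes, and fusion strategy are assumptions for illustration, not the authors' implementation.

        # Hypothetical sketch (not the authors' code): a small 3D residual classifier
        # that takes conventional MRI and multi-shell dMRI maps stacked as channels.
        import torch
        import torch.nn as nn

        class ResBlock3D(nn.Module):
            def __init__(self, channels):
                super().__init__()
                self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
                self.bn1 = nn.BatchNorm3d(channels)
                self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
                self.bn2 = nn.BatchNorm3d(channels)
                self.relu = nn.ReLU(inplace=True)

            def forward(self, x):
                out = self.relu(self.bn1(self.conv1(x)))
                out = self.bn2(self.conv2(out))
                return self.relu(out + x)  # identity shortcut

        class GliomaClassifier(nn.Module):
            def __init__(self, n_cmri=4, n_dmri=6, n_classes=2):
                # n_cmri / n_dmri are illustrative: e.g. T1, T1c, T2, FLAIR plus
                # a handful of diffusion-derived parameter maps.
                super().__init__()
                in_channels = n_cmri + n_dmri  # early fusion: stack modalities as channels
                self.stem = nn.Sequential(
                    nn.Conv3d(in_channels, 32, kernel_size=3, stride=2, padding=1),
                    nn.BatchNorm3d(32), nn.ReLU(inplace=True))
                self.blocks = nn.Sequential(ResBlock3D(32), ResBlock3D(32))
                self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                          nn.Linear(32, n_classes))

            def forward(self, x):
                return self.head(self.blocks(self.stem(x)))

        # Example: a batch of 2 co-registered volumes, 10 channels, 64^3 voxels.
        logits = GliomaClassifier()(torch.randn(2, 10, 64, 64, 64))  # shape (2, 2)

    Dropping either group of channels from the stacked input corresponds to the single-modality baselines reported in the abstract.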

    “What” versus “Where” in the audiovisual domain: An fMRI study.

    No full text
    Similar “what/where” functional segregations have been proposed for both visual and auditory cortical processing. In this fMRI study, we investigated whether the same segregation exists in the crossmodal domain, when visual and auditory stimuli have to be matched in order to perform either a recognition or a localization task. Recent neuroimaging research has highlighted the contribution of different heteromodal cortical regions during various forms of crossmodal binding. Interestingly, crossmodal effects during audiovisual speech and object recognition have been found in the superior temporal sulcus, while crossmodal effects during the execution of spatial tasks have been found over the intraparietal sulcus, suggesting an underlying “what/where” segregation. To directly compare the specific involvement of these two heteromodal regions, we scanned ten male right-handed subjects during the execution of two crossmodal matching tasks. Participants were simultaneously presented with a picture and an environmental sound, coming from either the same or the opposite hemifield and representing either the same or a different object. The two tasks required a manual YES/NO response about, respectively, the location or the semantic matching of the presented stimuli. Both group and individual-subject analyses were performed. Task-related differences in BOLD response were observed in the right intraparietal sulcus and in the left superior temporal sulcus, providing a direct confirmation of the “what/where” functional segregation in the crossmodal audiovisual domain.

    LMF ML Merger

    No full text
    This is an LMF Lexical Multi-level Merger web-service for the automatic merging of Lexical Entries, Syntactic Behaviours and Subcategorization Frames from two distinct LMF lexicons. The web-service takes as input two LMF lexicons, A and B, and a set of directives, and outputs one or more merged LMF lexicon(s) according to different merging scenarios. Further details can be found in: Riccardo Del Gratta, Francesca Frontini, Monica Monachini, Valeria Quochi, Francesco Rubino, Matteo Abrate & Angelica Lo Duca. 2012. L-LEME: an Automatic Lexical Merger based on the LMF Standard. In Proceedings of the Workshop on Language Resource Merging (co-located with LREC 2012), May 22, 2012, Istanbul, Turkey.

    LMF Merger

    No full text
    This is an LMF Lexical Merger web-service for the automatic merging of Lexical Entries from two distinct LMF lexicons. The web-service takes as input two LMF lexicons, A and B, and a set of directives, and outputs one or more merged LMF lexicon(s) according to different merging scenarios. Further details can be found in: Riccardo Del Gratta, Francesca Frontini, Monica Monachini, Valeria Quochi, Francesco Rubino, Matteo Abrate & Angelica Lo Duca. 2012. L-LEME: an Automatic Lexical Merger based on the LMF Standard. In Proceedings of the Workshop on Language Resource Merging (co-located with LREC 2012), May 22, 2012, Istanbul, Turkey.

    L-LEME: an Automatic Lexical Merger based on the LMF Standard

    No full text
    The present paper describes the LMF LExical MErger (L-LEME), an architecture for combining two lexicons in order to obtain new resource(s). L-LEME relies on standards, exploiting the benefits of the ISO Lexical Markup Framework (LMF) to ensure interoperability. L-LEME is meant to be dynamic and highly adaptable: users can configure it to meet their specific needs. The L-LEME architecture is composed of two main modules: the Mapper, which takes as input two lexicons A and B together with a set of user-defined rules and instructions guiding the mapping process (Directives D), outputs all matching entries, and calculates a cosine similarity score for each match; and the Builder, which takes as input the Mapper's results and a set of Directives D1 and produces a new LMF lexicon C. The Directives allow users to define their own building rules and different merging scenarios. L-LEME is applied to a concrete task within the PANACEA project, namely the merging of two Italian SubCategorization Frame (SCF) lexicons. The experiment is interesting in that A and B follow different philosophies: A was built by human introspection, while B was automatically extracted. Ultimately, L-LEME has interesting repercussions for many language technology applications.
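
    As a rough illustration of the Mapper stage described above, the sketch below pairs entries from two toy lexicons by lemma and scores them with a cosine similarity over bag-of-feature vectors, with a threshold standing in for a user Directive. The data layout, feature labels and threshold are assumptions for illustration; the actual L-LEME directives and LMF representation are not shown here.

        # Illustrative sketch of a Mapper-like matching step (not the L-LEME code).
        from collections import Counter
        from math import sqrt

        def cosine(a: Counter, b: Counter) -> float:
            """Cosine similarity between two sparse feature-count vectors."""
            dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
            norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        def map_entries(lexicon_a, lexicon_b, threshold=0.8):
            """Return candidate (entry_a, entry_b, score) matches above a directive-like threshold.

            Each lexicon is assumed to be a dict: lemma -> list of feature strings
            (e.g. part of speech, syntactic behaviours, SCF labels).
            """
            matches = []
            for lemma_a, feats_a in lexicon_a.items():
                for lemma_b, feats_b in lexicon_b.items():
                    if lemma_a != lemma_b:      # simple matching condition, directive-like
                        continue
                    score = cosine(Counter(feats_a), Counter(feats_b))
                    if score >= threshold:
                        matches.append((lemma_a, lemma_b, score))
            return matches

        # Toy example: two tiny "lexicons" with subcategorization-frame-like features.
        A = {"leggere": ["verb", "scf:NP_V_NP"], "correre": ["verb", "scf:NP_V"]}
        B = {"leggere": ["verb", "scf:NP_V_NP", "scf:NP_V"], "dormire": ["verb", "scf:NP_V"]}
        print(map_entries(A, B))  # [('leggere', 'leggere', 0.816...)]

    In the full architecture, a Builder-like component would then consume these scored pairs, together with its own directives, to assemble the merged lexicon C.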

    Lexical Merger

    No full text
    This document describes the experiments on the merging of lexical resources performed during the project and the development of two merging components for LMF lexicons.

    Intermodal sensory image generation: An fMRI analysis

    No full text
    Although both imagery and perception may be related to more than one sensory input, and information coming from different sensory channels is often integrated into a unique mental representation, most recent neuroimaging literature has focused on visual imagery. Contrasting results have been obtained concerning whether visual perception and visual imagery share the same mechanisms, in part due to assessment techniques and to interindividual variability in brain activation. In recent years, an increasing number of researchers have adopted novel neuroimaging techniques to investigate intermodal connections in mental imagery and have reported a high degree of interaction between mental imagery and other cognitive functions. In the present study, the specific nature of mental imagery was investigated by means of fMRI on a more extensive set of perceptual experiences (shapes, sounds, touches, odours, flavours, self-perceived movements, and internal sensations). Results show that the left middle-inferior temporal area is recruited by mental imagery for all the modalities investigated, not only the visual one, while parietal and prefrontal areas exhibit a more heterogeneous pattern of activation across modalities. The prominent left lateralisation observed for almost all conditions suggests that verbal cues affect the processes underlying the generation of images.