Learning About Others Dynamically Changes Behavior and the Brain
Humans are social beings. The ability to interact socially requires associating perceptual and social information about other people. While prior work has elucidated the cognitive and neural basis of general social knowledge, less is known about how person-specific information is learned and remembered. The goal of this dissertation was to explore how learning associations between visual and abstract information could influence conceptual representations of specific individuals. Across three studies people learned social and reward values associated with different faces. Chapter 2 examined how the learned values influenced explicit judgments by measuring behavioral face similarity spaces before and after learning. While pre-learning spaces were structured by the visual similarity of the faces, social values selectively determined the post-learning spatial organization, and generalized to expectations of behavior in a future social context. Chapter 3 investigated the neural correlates of the face-value associations. Using functional magnetic resonance imaging (fMRI), brain activity patterns were measured while participants viewed faces and performed a task unrelated to the values, once before and once after learning. A region in the left anterior temporal lobe (ATL) had activity patterns that were biased by the social values after learning, such that faces of more similar social values evoked more similar activity patterns, and the magnitude of these learning-induced changes was directly related to an individual’s learning performance as a function of value type. Additionally, activity pattern similarity in the left inferior parietal lobe (IPL) tracked the spatial organization of individual behavioral similarity spaces after learning. Chapter 4 assessed whether there were perceptual consequences of such behavioral and neural modulations and whether effects were domain-general. 
A categorical perception paradigm was used to test whether learned values implicitly influenced face discrimination. Preliminary evidence indicated that both social and reward values affected discrimination performance for face and flower stimuli; however, the effect of social values did not persist over a long-term delay and was susceptible to task-order effects. Together, this work indicates that learned associations between visual and social attributes of other people can warp behavioral and neural representations, and such changes have downstream consequences on face perception and social preferences.
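The Chapter 3 analysis, in which faces with more similar social values evoke more similar activity patterns, is a form of representational similarity analysis. A minimal sketch with synthetic data follows; all array sizes, variable names, and values here are illustrative assumptions, not taken from the study:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical setup: 8 face identities, each with a learned social value,
# and a (faces x voxels) activity-pattern matrix from one region of interest.
social_values = rng.uniform(0, 1, size=8)
patterns = rng.normal(size=(8, 50))

# Neural dissimilarity matrix: 1 - Pearson correlation between face patterns.
pattern_rdm = 1.0 - np.corrcoef(patterns)

# Model dissimilarity matrix: absolute difference in learned social value.
value_rdm = np.abs(social_values[:, None] - social_values[None, :])

# Compare the two matrices over their off-diagonal lower triangles.
tri = np.tril_indices(8, k=-1)
rho, p = spearmanr(pattern_rdm[tri], value_rdm[tri])
print(f"RSA correlation (rho): {rho:.3f}")
```

With real data, a positive rank correlation would indicate that value similarity is reflected in pattern similarity; with the random inputs above, the correlation hovers near zero.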
Reconstructing Representations of Dynamic Visual Objects in Early Visual Cortex
As raw sensory data are partial, our visual system extensively fills in missing details, creating enriched percepts based on incomplete bottom-up information. Despite evidence for internally generated representations at early stages of cortical processing, it is not known whether these representations include missing information of dynamically transforming objects. Long-range apparent motion (AM) provides a unique test case because objects in AM can undergo changes both in position and in features. Using fMRI and encoding methods, we found that the “intermediate” orientation of an apparently rotating grating, never presented in the retinal input but interpolated during AM, is reconstructed in population-level, feature-selective tuning responses in the region of early visual cortex (V1) that corresponds to the retinotopic location of the AM path. This neural representation is absent when AM inducers are presented simultaneously and when AM is visually imagined. Our results demonstrate dynamic filling-in in V1 for object features that are interpolated during kinetic transformations.
Radio-Pathomic Approaches in Pediatric Neurooncology: Opportunities and Challenges
With medical software platforms moving to cloud environments with scalable storage and computing, the translation of predictive artificial intelligence (AI) models to aid in clinical decision-making and facilitate personalized medicine for cancer patients is becoming a reality. Medical imaging, namely radiologic and histologic images, has immense analytical potential in neuro-oncology, and models utilizing integrated radiomic and pathomic data may yield a synergistic effect and provide a new modality for precision medicine. At the same time, the ability to harness multi-modal data is met with challenges in aggregating data across medical departments and institutions, as well as significant complexity in modeling the phenotypic and genotypic heterogeneity of pediatric brain tumors. In this paper, we review recent pathomic and integrated pathomic, radiomic, and genomic studies with clinical applications. We discuss current challenges limiting translational research on pediatric brain tumors and outline technical and analytical solutions. Overall, we propose that to empower the potential residing in radio-pathomics, systemic changes in cross-discipline data management and end-to-end software platforms to handle multi-modal data sets are needed, in addition to embracing modern AI-powered approaches. These changes can improve the performance of predictive models, and ultimately the ability to advance brain cancer treatments and patient outcomes through the development of such models.
Unsupervised Machine Learning Using K-Means Identifies Radiomic Subgroups of Pediatric Low-Grade Gliomas That Correlate With Key Molecular Markers
Introduction: Despite advancements in molecular and histopathologic characterization of pediatric low-grade gliomas (pLGGs), there remains significant phenotypic heterogeneity among tumors with similar categorizations. We hypothesized that an unsupervised machine learning approach based on radiomic features may reveal distinct pLGG imaging subtypes.
Methods: Multi-parametric MR images (T1 pre- and post-contrast, T2, and T2 FLAIR) from 157 patients with pLGGs were collected, and 881 quantitative radiomic features were extracted from the tumor regions. Clustering was performed using K-means after applying principal component analysis (PCA) for feature dimensionality reduction. Molecular and demographic data were obtained from PedCBioportal and compared between imaging subtypes.
Results: K-means identified three distinct imaging-based subtypes. Subtypes differed in mutational frequencies of BRAF (p < 0.05) as well as the gene expression of BRAF (p < 0.05). It was also found that age (p < 0.05), tumor location (p < 0.01), and tumor histology (p < 0.0001) differed significantly between the imaging subtypes.
Conclusion: In this exploratory work, it was found that clustering of pLGGs based on radiomic features identifies distinct, imaging-based subtypes that correlate with important molecular markers and demographic details. This finding supports the notion that incorporation of radiomic data could augment our ability to better characterize pLGGs.
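The PCA-then-K-means pipeline described in the Methods can be sketched with scikit-learn. The feature matrix below is synthetic, and the choice of 10 principal components is an illustrative assumption (the abstract does not state how many were retained); only the 157×881 input shape and the three clusters match the study:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Hypothetical stand-in for the study's feature matrix:
# 157 patients x 881 radiomic features (values here are random).
features = rng.normal(size=(157, 881))

# Standardize features, reduce dimensionality with PCA, then cluster.
scaled = StandardScaler().fit_transform(features)
components = PCA(n_components=10).fit_transform(scaled)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(components)

print(np.bincount(labels))  # patients per imaging subtype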
The Brain Tumor Segmentation (BraTS) Challenge 2023: Brain MR Image Synthesis for Tumor Segmentation (BraSyn)
Automated brain tumor segmentation methods have become well-established and
reached performance levels offering clear clinical utility. These methods
typically rely on four input magnetic resonance imaging (MRI) modalities:
T1-weighted images with and without contrast enhancement, T2-weighted images,
and FLAIR images. However, some sequences are often missing in clinical
practice due to time constraints or image artifacts, such as patient motion.
Consequently, the ability to substitute missing modalities and gain
segmentation performance is highly desirable and necessary for the broader
adoption of these algorithms in the clinical routine. In this work, we present
the establishment of the Brain MR Image Synthesis Benchmark (BraSyn) in
conjunction with the Medical Image Computing and Computer-Assisted Intervention
(MICCAI) 2023. The primary objective of this challenge is to evaluate image
synthesis methods that can realistically generate missing MRI modalities when
multiple available images are provided. The ultimate aim is to facilitate
automated brain tumor segmentation pipelines. The image dataset used in the
benchmark is diverse and multi-modal, created through collaboration with
various hospitals and research institutions.
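The abstract does not specify BraSyn's evaluation metrics, but synthesis fidelity against a held-out ground-truth modality is commonly quantified with measures such as peak signal-to-noise ratio (PSNR). A minimal, self-contained sketch with toy data (the arrays and noise level are illustrative assumptions, not challenge data):

```python
import numpy as np

def psnr(reference: np.ndarray, synthesized: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio between a reference and a synthesized image."""
    mse = np.mean((reference - synthesized) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(1)
# Toy stand-ins for a ground-truth and a synthesized MRI slice, intensities in [0, 1].
reference = rng.uniform(0, 1, size=(64, 64))
synthesized = np.clip(reference + rng.normal(scale=0.05, size=(64, 64)), 0, 1)

print(f"PSNR: {psnr(reference, synthesized):.2f} dB")
```

In a downstream-task setting like BraSyn's, the synthesized modality would additionally be judged by the segmentation performance it enables, not only by image fidelity.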
The Brain Tumor Segmentation (BraTS) Challenge 2023: Focus on Pediatrics (CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs)
Pediatric tumors of the central nervous system are the most common cause of
cancer-related death in children. The five-year survival rate for high-grade
gliomas in children is less than 20%. Due to their rarity, the diagnosis of
these entities is often delayed, their treatment is mainly based on historic
treatment concepts, and clinical trials require multi-institutional
collaborations. The MICCAI Brain Tumor Segmentation (BraTS) Challenge is a
landmark community benchmark event with a successful history of 12 years of
resource creation for the segmentation and analysis of adult glioma. Here we
present the CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge, which
represents the first BraTS challenge focused on pediatric brain tumors with
data acquired across multiple international consortia dedicated to pediatric
neuro-oncology and clinical trials. The BraTS-PEDs 2023 challenge focuses on
benchmarking the development of volumetric segmentation algorithms for
pediatric brain glioma through standardized quantitative performance evaluation
metrics utilized across the BraTS 2023 cluster of challenges. Models gaining
knowledge from the BraTS-PEDs multi-parametric structural MRI (mpMRI) training
data will be evaluated on separate validation and unseen test mpMRI data of
high-grade pediatric glioma. The CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023
challenge brings together clinicians and AI/imaging scientists to lead to
faster development of automated segmentation techniques that could benefit
clinical trials, and ultimately the care of children with brain tumors.
Learned social values modulate representations of faces in the Fusiform Face Area
Social value processing has been shown to recruit specific neural systems, yet how they are associated with person-specific information, such as facial identity, processed in separate regions remains to be established. The present study examined changes in neural representations in face-selective visual areas due to social value learning. Over four days, participants learned combinations of social (generosity) and reward (point) values orthogonally assigned to naturalistic face images. We found that after learning, activity similarity (measured with fMRI) in the fusiform face area evoked by viewing the faces was related to social value as well as a measure of future social preferences, but was not related to reward value. This shows how learned social values can influence representations in face-selective brain regions thought to primarily encode visual information, and provides a potential neural mechanism for the association of social and visual information relevant to propensities in future social behavior.
Radiomics for characterization of the glioma immune microenvironment
Increasing evidence suggests that besides mutational and molecular alterations, the immune component of the tumor microenvironment also substantially impacts tumor behavior and complicates treatment response, particularly to immunotherapies. Although the standard method for characterizing tumor immune profile is through performing integrated genomic analysis on tissue biopsies, the dynamic change in the immune composition of the tumor microenvironment makes this approach infeasible, especially for brain tumors. Radiomics is a rapidly growing field that uses advanced imaging techniques and computational algorithms to extract numerous quantitative features from medical images. Recent advances in machine learning methods are facilitating biological validation of radiomic signatures and allowing them to “mine” for a variety of significant correlates, including genetic, immunologic, and histologic data. Radiomics has the potential to be used as a non-invasive approach to predict the presence and density of immune cells within the microenvironment, as well as to assess the expression of immune-related genes and pathways. This information can be essential for patient stratification, informing treatment decisions, and predicting patients’ response to immunotherapies. This is particularly important for tumors with difficult surgical access such as gliomas. In this review, we provide an overview of the glioma microenvironment, describe novel approaches for clustering patients based on their tumor immune profile, and discuss the latest progress on utilization of radiomics for immune profiling of glioma based on current literature.