11 research outputs found
Reconstructing Representations of Dynamic Visual Objects in Early Visual Cortex
As raw sensory data are partial, our visual system extensively fills in missing details, creating enriched percepts based on incomplete bottom-up information. Despite evidence for internally generated representations at early stages of cortical processing, it is not known whether these representations include missing information about dynamically transforming objects. Long-range apparent motion (AM) provides a unique test case because objects in AM can undergo changes both in position and in features. Using fMRI and encoding methods, we found that the “intermediate” orientation of an apparently rotating grating, never presented in the retinal input but interpolated during AM, is reconstructed in population-level, feature-selective tuning responses in the region of early visual cortex (V1) that corresponds to the retinotopic location of the AM path. This neural representation is absent when AM inducers are presented simultaneously and when AM is visually imagined. Our results demonstrate dynamic filling-in in V1 for object features that are interpolated during kinetic transformations.
Radio-Pathomic Approaches in Pediatric Neurooncology: Opportunities and Challenges
With medical software platforms moving to cloud environments with scalable storage and computing, the translation of predictive artificial intelligence (AI) models to aid in clinical decision-making and facilitate personalized medicine for cancer patients is becoming a reality. Medical imaging, namely radiologic and histologic images, has immense analytical potential in neuro-oncology, and models utilizing integrated radiomic and pathomic data may yield a synergistic effect and provide a new modality for precision medicine. At the same time, the ability to harness multi-modal data is met with challenges in aggregating data across medical departments and institutions, as well as significant complexity in modeling the phenotypic and genotypic heterogeneity of pediatric brain tumors. In this paper, we review recent pathomic and integrated pathomic, radiomic, and genomic studies with clinical applications. We discuss current challenges limiting translational research on pediatric brain tumors and outline technical and analytical solutions. Overall, we propose that to empower the potential residing in radio-pathomics, systemic changes in cross-discipline data management and end-to-end software platforms to handle multi-modal data sets are needed, in addition to embracing modern AI-powered approaches. These changes can improve the performance of predictive models, and ultimately the ability to advance brain cancer treatments and patient outcomes through the development of such models.
Training and Comparison of nnU-Net and DeepMedic Methods for Autosegmentation of Pediatric Brain Tumors
Brain tumors are the most common solid tumors and the leading cause of
cancer-related death among children. Tumor segmentation is essential in
surgical and treatment planning, and response assessment and monitoring.
However, manual segmentation is time-consuming and has high inter-operator
variability, underscoring the need for more efficient methods. We compared two
deep learning-based 3D segmentation models, DeepMedic and nnU-Net, after
training with pediatric-specific multi-institutional brain tumor data based on
multi-parametric MRI scans. Multi-parametric preoperative MRI scans of
339 pediatric patients (n=293 internal and n=46 external cohorts) with a
variety of tumor subtypes, were preprocessed and manually segmented into four
tumor subregions, i.e., enhancing tumor (ET), non-enhancing tumor (NET), cystic
components (CC), and peritumoral edema (ED). After training, performance of the
two models on internal and external test sets was evaluated using Dice scores,
sensitivity, and Hausdorff distance with reference to ground truth manual
segmentations. Dice scores for the nnU-Net internal test set (mean +/- SD,
median in parentheses) were 0.90+/-0.07 (0.94) for whole tumor (WT),
0.77+/-0.29 for ET, 0.66+/-0.32 for NET, 0.71+/-0.33 for CC, and 0.71+/-0.40
for ED. For DeepMedic, the Dice scores were 0.82+/-0.16 for WT, 0.66+/-0.32
for ET, 0.48+/-0.27 for NET, 0.48+/-0.36 for CC, and 0.19+/-0.33 for ED.
Dice scores were
significantly higher for nnU-Net (p<=0.01). External validation of the trained
nnU-Net model on the multi-institutional BraTS-PEDs 2023 dataset revealed high
generalization capability in segmentation of whole tumor and tumor core with
Dice scores of 0.87+/-0.13 (0.91) and 0.83+/-0.18 (0.89), respectively.
The nnU-Net model trained on pediatric-specific data was superior to DeepMedic
for whole tumor and subregion segmentation of pediatric brain tumors.
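The evaluation above scores predicted masks against ground-truth segmentations with Dice overlap and sensitivity. A minimal sketch of these two metrics for binary masks (NumPy, with hypothetical toy masks; not the study's evaluation code) could look like:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient: 2|P∩T| / (|P|+|T|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / total

def sensitivity(pred, truth):
    """Fraction of ground-truth voxels recovered by the prediction."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    if truth.sum() == 0:
        return 1.0
    return np.logical_and(pred, truth).sum() / truth.sum()

# Hypothetical 4x4 "slices": truth covers the top-left 2x2 block;
# the prediction overlaps it on exactly two voxels.
truth = np.zeros((4, 4), dtype=int)
truth[0:2, 0:2] = 1
pred = np.zeros((4, 4), dtype=int)
pred[0, 0:2] = 1
pred[2, 2:4] = 1

print(dice_score(pred, truth))   # 2*2 / (4+4) = 0.5
print(sensitivity(pred, truth))  # 2 / 4 = 0.5
```

In practice these metrics would be computed per tumor subregion (ET, NET, CC, ED) and averaged over patients, as in the reported results.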
The Brain Tumor Segmentation (BraTS) Challenge 2023: Brain MR Image Synthesis for Tumor Segmentation (BraSyn)
Automated brain tumor segmentation methods have become well-established and
reached performance levels offering clear clinical utility. These methods
typically rely on four input magnetic resonance imaging (MRI) modalities:
T1-weighted images with and without contrast enhancement, T2-weighted images,
and FLAIR images. However, some sequences are often missing in clinical
practice due to time constraints or image artifacts, such as patient motion.
Consequently, the ability to substitute missing modalities and gain
segmentation performance is highly desirable and necessary for the broader
adoption of these algorithms in the clinical routine. In this work, we present
the establishment of the Brain MR Image Synthesis Benchmark (BraSyn) in
conjunction with the Medical Image Computing and Computer-Assisted Intervention
(MICCAI) 2023. The primary objective of this challenge is to evaluate image
synthesis methods that can realistically generate missing MRI modalities when
multiple available images are provided. The ultimate aim is to facilitate
automated brain tumor segmentation pipelines. The image dataset used in the
benchmark is diverse and multi-modal, created through collaboration with
various hospitals and research institutions.
The Brain Tumor Segmentation (BraTS) Challenge 2023: Focus on Pediatrics (CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs)
Pediatric tumors of the central nervous system are the most common cause of
cancer-related death in children. The five-year survival rate for high-grade
gliomas in children is less than 20%. Due to their rarity, the diagnosis of
these entities is often delayed, their treatment is mainly based on historic
treatment concepts, and clinical trials require multi-institutional
collaborations. The MICCAI Brain Tumor Segmentation (BraTS) Challenge is a
landmark community benchmark event with a successful history of 12 years of
resource creation for the segmentation and analysis of adult glioma. Here we
present the CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge, which
represents the first BraTS challenge focused on pediatric brain tumors with
data acquired across multiple international consortia dedicated to pediatric
neuro-oncology and clinical trials. The BraTS-PEDs 2023 challenge focuses on
benchmarking the development of volumetric segmentation algorithms for
pediatric brain glioma through standardized quantitative performance evaluation
metrics utilized across the BraTS 2023 cluster of challenges. Models gaining
knowledge from the BraTS-PEDs multi-parametric structural MRI (mpMRI) training
data will be evaluated on separate validation and unseen test mpMRI data of
high-grade pediatric glioma. The CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023
challenge brings together clinicians and AI/imaging scientists to lead to
faster development of automated segmentation techniques that could benefit
clinical trials, and ultimately the care of children with brain tumors.
Social Value Learning Shifts Conceptual Representations of Faces
Values drive our behavioral choices. Ample research has
explored the cognitive and neural underpinnings of value-
based computations related to decision-making. However,
behaviorally relevant values that we associate with real-world
objects are often not monetary. For instance, social values
associated with specific people are crucial for social
behaviors and interactions. Moreover, understanding and
attributing social values allows for proper evaluations of
potential interactions with others, and can lead to more
beneficial social behaviors and relationships. Learning social
values has been shown to recruit the same systems as reward
values; however, how they become associated with specific
people remains to be established. The present study examined
social value learning of other people using naturalistic face
images. We found that before learning, distances between the
faces in conceptual similarity spaces were organized
corresponding to their perceptual similarity. However, after
learning, faces were shifted in a manner that reflected
similarity of their associated social values (generosity).
Furthermore, distances were positively correlated with a post-
learning index of preference to interact with a person in a
future cooperative game. In other words, learned social values
of the faces seemed to influence their representations in
conceptual space, and such representational changes were
related to propensities in future behavior.
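The analysis described here, correlating pairwise distances between faces in a representational space with a behavioral preference index, can be sketched with NumPy/SciPy. The array shapes and random values below are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical: 8 faces embedded in a 5-D conceptual similarity space
# after social value learning (coordinates are illustrative only).
post_learning = rng.normal(size=(8, 5))

# Hypothetical pairwise behavioral index, e.g. dissimilarity in the
# preference to interact with each face in a future cooperative game.
behavior = rng.normal(size=(8 * 7) // 2)

rep_dist = pdist(post_learning)         # 28 pairwise distances between faces
rho, p = spearmanr(rep_dist, behavior)  # rank correlation with behavior
print(rho, p)
```

With real data, a positive and significant rho would indicate that faces that moved closer together in conceptual space were also treated more similarly in subsequent behavior.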
Towards Consistency in Pediatric Brain Tumor Measurements: Challenges, Solutions, and the Role of AI-Based Segmentation
MR imaging is central to the assessment of tumor burden and changes over time in neuro-oncology. Several response assessment guidelines have been set forth by the Response Assessment in Pediatric Neuro-Oncology (RAPNO) working groups for different tumor histologies; however, the visual delineation of tumor components using MRIs is not always straightforward, and complexities not currently addressed by these criteria can introduce inter- and intra-observer variability in manual assessments. Differentiation of non-enhancing tumor from peritumoral edema, mild enhancement from absence of enhancement, and various cystic components can be challenging, particularly given a lack of sufficient and uniform imaging protocols in clinical practice. Automated tumor segmentation with artificial intelligence (AI) may be able to provide more objective delineations, but relies on accurate and consistent training data created manually (ground truth). This paper reviews existing challenges and potential solutions to identifying and defining subregions of pediatric brain tumors (PBTs) that are not explicitly addressed by current guidelines. The goal is to assert the importance of defining and adopting criteria for addressing these challenges, as doing so will be critical to achieving standardized tumor measurements and reproducible response assessment in PBTs, ultimately leading to more precise outcome metrics and accurate comparisons among clinical studies.
iTMT v1.0: Automated temporalis muscle segmentation and thickness assessment with deep learning
Github Repo: https://github.com/AIM-KannLab/itm