    Visualization-Based Mapping of Language Function in the Brain

    Cortical language maps, obtained through intraoperative electrical stimulation studies, provide a rich source of information for research on language organization. Previous studies have shown interesting correlations between the distribution of essential language sites and such behavioral indicators as verbal IQ, and have provided suggestive evidence for regarding human language cortex as an organization of multiple distributed systems. Noninvasive studies using ECoG, PET, and functional MR lend support to this model; however, there are as yet no studies that integrate these two forms of information. In this paper we describe a method for mapping the stimulation data onto a 3-D MRI-based neuroanatomic model of the individual patient. The mapping is done by comparing an intraoperative photograph of the exposed cortical surface with a computer-based MR visualization of the surface, interactively indicating corresponding stimulation sites, and recording the 3-D MR machine coordinates of the indicated sites. Repeatability studies were performed to validate the accuracy of the mapping technique. Six observers (a neurosurgeon, a radiologist, and four computer scientists) independently mapped 218 stimulation sites from 12 patients. The mean distance of a mapping from the mean location of each site was 2.07 mm, with a standard deviation of 1.5 mm, or within 5.07 mm with 95% confidence. Since the surgical sites are accurate only to within approximately 1 cm, these results show that the visualization-based approach is accurate within the limits of the stimulation maps. When incorporated within the kind of information system envisioned by the Human Brain Project, this anatomically based method will not only provide a key link between noninvasive and invasive approaches to understanding language organization, but will also provide the basis for studying the relationship between language function and anatomical variability.
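    The repeatability figures above (mean 2.07 mm, SD 1.5 mm, 95% bound 5.07 mm) follow from pooling each observer's distance to the per-site mean location; the 95% bound is the mean plus two standard deviations under a normal approximation. A minimal sketch of that computation (function name and data layout are assumptions, not from the paper):

```python
import numpy as np

def mapping_repeatability(points):
    """points: (n_observers, 3) array of mapped MR coordinates (mm)
    for one stimulation site. Returns each observer's Euclidean
    distance from the site's mean (consensus) location."""
    centroid = points.mean(axis=0)
    return np.linalg.norm(points - centroid, axis=1)

# Pooling across all sites (hypothetical `sites` list of arrays):
#   dists = np.concatenate([mapping_repeatability(p) for p in sites])
#   mean, sd = dists.mean(), dists.std(ddof=1)
#   bound_95 = mean + 2 * sd   # e.g. 2.07 + 2 * 1.5 = 5.07 mm
```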

    Vessel tractography using an intensity based tensor model with branch detection

    In this paper, we present a tubular structure segmentation method that utilizes a second-order tensor constructed from directional intensity measurements, inspired by diffusion tensor image (DTI) modeling. The constructed anisotropic tensor, which is fit inside a vessel, drives the segmentation analogously to a tractography approach in DTI. Our model is initialized at a single seed point and is capable of capturing whole vessel trees by an automatic branch detection algorithm developed in the same framework. The centerline of the vessel as well as its thickness is extracted. Performance results within the Rotterdam Coronary Artery Algorithm Evaluation framework are provided for comparison with existing techniques. A 96.4% average overlap with ground truth delineated by experts is obtained, in addition to other measures reported in the paper. Moreover, we demonstrate further quantitative results over synthetic vascular datasets, and we provide quantitative experiments for branch detection on patient Computed Tomography Angiography (CTA) volumes, as well as qualitative evaluations on the same CTA datasets, from visual scores by an expert cardiologist.
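    Centerline overlap scores of the kind reported above are typically computed by matching points on the reference centerline to the extracted one within a tolerance. A simplified sketch of such a score is below; note this is an illustrative measure, not the exact overlap definition used by the Rotterdam evaluation framework:

```python
import numpy as np

def centerline_overlap(extracted, reference, tol=1.0):
    """Fraction of reference centerline points that lie within `tol`
    (mm) of some point on the extracted centerline.

    extracted, reference: (n, 3) arrays of 3-D points."""
    # Pairwise distances: reference points (rows) vs extracted points (cols).
    d = np.linalg.norm(reference[:, None, :] - extracted[None, :, :], axis=2)
    # A reference point is "covered" if its nearest extracted point is close.
    return float((d.min(axis=1) <= tol).mean())
```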

    Deep Interactive Region Segmentation and Captioning

    With recent innovations in dense image captioning, it is now possible to describe every object in a scene with a caption, where objects are delimited by bounding boxes. However, interpreting such output is not trivial because many of the bounding boxes overlap. Furthermore, in current captioning frameworks, the user cannot apply personal preferences to exclude regions that are not of interest. In this paper, we propose a novel hybrid deep learning architecture for interactive region segmentation and captioning in which the user can specify an arbitrary region of the image to be processed. To this end, a dedicated Fully Convolutional Network (FCN), named Lyncean FCN (LFCN), is trained on our specially constructed training data to isolate the User Intention Region (UIR) as the output of an efficient segmentation. In parallel, a dense image captioning model is utilized to provide a wide variety of captions for that region. The UIR is then explained with the caption of the best-matching bounding box. To the best of our knowledge, this is the first work to provide such a comprehensive output. Our experiments show the superiority of the proposed approach over state-of-the-art interactive segmentation methods on several well-known datasets. In addition, replacing the bounding boxes with the result of the interactive segmentation leads to a better understanding of the dense image captioning output, as well as improved accuracy for object detection in terms of Intersection over Union (IoU). (17 pages, 9 figures)
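    The IoU criterion mentioned above is the standard overlap ratio between two regions: intersection area divided by union area. For axis-aligned bounding boxes it reduces to a few lines (this sketch is a generic implementation, not code from the paper):

```python
def box_iou(a, b):
    """Intersection over Union for axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle (may be empty).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus the shared intersection.
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0
```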