92 research outputs found

    Curved reformat of the paediatric brain MRI into a ‘flat-earth map’ — standardised method for demonstrating cortical surface atrophy resulting from hypoxic–ischaemic encephalopathy

    Hypoxic–ischaemic encephalopathy is optimally imaged with brain MRI in the neonatal period. However, neuroimaging is often also performed later in childhood (e.g., when parents seek compensation in cases of alleged birth asphyxia). We describe a standardised technique for creating two curved reconstructions of the cortical surface to show the characteristic surface changes of hypoxic–ischaemic encephalopathy in children imaged after the neonatal period. The technique was applied to 10 cases of hypoxic–ischaemic encephalopathy and to age-matched healthy children to assess the visibility of the characteristic features of hypoxic–ischaemic encephalopathy. In the abnormal brains, fissural or sulcal widening was seen in all cases and ulegyria was identifiable in 7/10. These images could be used as a visual aid for communicating MRI findings to clinicians and other interested parties.
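
    A minimal sketch of the general idea behind such a curved, 'flat-earth' reformat, assuming a numpy brain volume and a simple spherical sampling surface; the function, the centre and radius values, and the sampling depths are illustrative assumptions, not the standardised protocol described in the paper.

        import numpy as np
        from scipy.ndimage import map_coordinates  # trilinear interpolation into the volume

        def spherical_reformat(volume, centre, radius, n_lat=256, n_lon=512):
            """Sample an MRI volume on a sphere of the given radius around `centre`
            and unroll the samples into a 2D latitude/longitude ('flat-earth') map."""
            lat = np.linspace(-np.pi / 2, np.pi / 2, n_lat)   # latitude angles
            lon = np.linspace(-np.pi, np.pi, n_lon)           # longitude angles
            lat, lon = np.meshgrid(lat, lon, indexing="ij")
            # Convert spherical coordinates to voxel coordinates (z, y, x).
            z = centre[0] + radius * np.sin(lat)
            y = centre[1] + radius * np.cos(lat) * np.sin(lon)
            x = centre[2] + radius * np.cos(lat) * np.cos(lon)
            coords = np.stack([z, y, x])                      # shape (3, n_lat, n_lon)
            return map_coordinates(volume, coords, order=1, mode="nearest")

        # Two maps at different depths below the cortical surface could be produced by
        # calling the function with two radii, e.g. radius and radius - 4 (in voxels).
        # flat_map = spherical_reformat(t1_volume, centre=(90, 108, 90), radius=70)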

    Visual analytics of magnetic resonance images for supporting detection of cortical lesions and preoperative location of anatomical landmarks

    Advisors: Shin-Ting Wu, Clarissa Lin Yasuda. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação; CNPq grant 165777/2014-1. As the quality of structural neuroimages improves, a more accurate investigation of neuroanatomy becomes feasible. However, computational tools that aid in the enhancement and exploration of these neuroimages remain extremely useful for the identification of subtle cortical lesions and for the preoperative localization of anatomical landmarks. Curvilinear reformatting and windowing are two examples. Curvilinear reformatting allows a volume reconstructed from a 3D image to be resliced along curved surfaces, while windowing adjusts the brightness and contrast of neuroimages. In the current state of the art, curvilinear reformatting is applied in diagnosis to provide better visualization of the complex folding patterns of the cerebral cortex, and windowing is present in many medical visualization applications because it allows rendering parameters to be tuned interactively to the viewer's subjective perception. This work was driven by the hypothesis that there is a correspondence between neuroanatomy and the signal patterns of T1-weighted magnetic resonance images, and that knowledge of this correspondence could improve the existing tools. We show how this correspondence can be used to characterize the anatomical structures that make it possible to design an algorithm for locating the neighbourhood of the meninges and for computing statistical measures, such as the mean and standard deviation, of grey and white matter. Because curvilinear reformatting can then be performed while preserving the meninges, the technique becomes useful for the preoperative localization of anatomical landmarks, such as superficial veins, so that eloquent areas can be preserved during surgery. A new form of windowing based on this neuroanatomy–signal correspondence can enhance the interface between grey matter and white matter and thereby reveal subtle lesions. Visual analytics using both tools together proved helpful in supporting the localization of cortical lesions and in revealing, preoperatively, the intraoperative fiducial landmarks, such as superficial veins, on contrast-enhanced anatomical images.
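
    As an illustration of the windowing operation discussed above, the following minimal sketch linearly maps raw intensities to display values from a window centre and width; the parameter values in the usage comment are arbitrary examples, not those derived in the thesis.

        import numpy as np

        def apply_window(image, centre, width, out_min=0.0, out_max=255.0):
            """Linearly map intensities inside [centre - width/2, centre + width/2]
            to the display range; values outside the window are clipped."""
            lo = centre - width / 2.0
            hi = centre + width / 2.0
            scaled = np.clip((image - lo) / (hi - lo), 0.0, 1.0)  # 0..1 inside the window
            return out_min + scaled * (out_max - out_min)

        # A narrow window centred near the grey-white transition exaggerates the
        # grey matter-white matter interface, which is the idea behind the
        # contrast enhancement described in the abstract.
        # display = apply_window(t1_volume, centre=gm_mean, width=4 * gm_std)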

    Survey of visualization and analysis tools

    A large number of commercially available visualization and analysis tools exist for the researcher. Some of the strengths and limitations of these tools, from the viewpoint of the earth sciences discipline, are discussed. Visualization and analysis tools fall into one of two categories: those that are designed for a specific purpose and are not extensible, and those that are generic visual programming tools and are extensible. Most of the extensible packages examined incorporate a data-flow paradigm.
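
    A minimal sketch of the data-flow paradigm mentioned above, in which processing stages are chained so that the output of each stage feeds the next; the stage names in the usage comment are hypothetical and only illustrate the style of such extensible packages.

        from typing import Callable

        def pipeline(*stages: Callable):
            """Compose processing stages into a single data-flow pipeline:
            the output of each stage becomes the input of the next."""
            def run(data):
                for stage in stages:
                    data = stage(data)
                return data
            return run

        # Hypothetical stages: read a gridded field, subset a region, contour it.
        # visualize = pipeline(read_grid, select_region, draw_contours)
        # visualize("temperature.nc")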

    ESMValTool (v1.0) – a community diagnostic and performance metrics tool for routine evaluation of Earth system models in CMIP

    A community diagnostics and performance metrics tool for the evaluation of Earth system models (ESMs) has been developed that allows for routine comparison of single or multiple models, either against predecessor versions or against observations. The priority of the effort so far has been to target specific scientific themes focusing on selected essential climate variables (ECVs), a range of known systematic biases common to ESMs, such as coupled tropical climate variability, monsoons, Southern Ocean processes, continental dry biases, and soil hydrology–climate interactions, as well as atmospheric CO2 budgets, tropospheric and stratospheric ozone, and tropospheric aerosols. The tool is being developed in such a way that additional analyses can easily be added. A set of standard namelists for each scientific topic reproduces specific sets of diagnostics or performance metrics that have demonstrated their importance in ESM evaluation in the peer-reviewed literature. The Earth System Model Evaluation Tool (ESMValTool) is a community effort, open to both users and developers, that encourages the open exchange of diagnostic source code and evaluation results from the Coupled Model Intercomparison Project (CMIP) ensemble. This will facilitate and improve ESM evaluation beyond the current state of the art and aims to support such activities within CMIP and at individual modelling centres. Ultimately, we envisage running the ESMValTool alongside the Earth System Grid Federation (ESGF) as part of a more routine evaluation of CMIP model simulations, while utilizing observations available in standard formats (obs4MIPs) or provided by the user.
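
    A minimal, purely illustrative sketch of the kind of performance metric such a tool computes, assuming model and observational fields already regridded onto a common latitude-longitude grid; this is not ESMValTool code, and the variable names are hypothetical.

        import numpy as np

        def area_weighted_rmse(model, obs, lat):
            """Root-mean-square error between a model field and observations,
            weighted by cos(latitude) as a proxy for grid-cell area."""
            weights = np.cos(np.deg2rad(lat))[:, np.newaxis]   # shape (nlat, 1)
            weights = np.broadcast_to(weights, model.shape)
            sq_err = weights * (model - obs) ** 2
            return np.sqrt(sq_err.sum() / weights.sum())

        # Hypothetical usage with a CMIP model field and an obs4MIPs reference:
        # score = area_weighted_rmse(model_tas, obs_tas, lat)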

    Rock Art Pilot Project Main Report

    A report on the results of a pilot project to investigate the current state of research, conservation, management and presentation of prehistoric rock art in England, commissioned by English Heritage from the Archaeology Group, School of Conservation Sciences, Bournemouth University, and the Institute of Archaeology, University College London.

    Granite: A scientific database model and implementation

    The principal goal of this research was to develop a formal, comprehensive model for representing highly complex scientific data. An effective model should provide a conceptually uniform way to represent data, and it should serve as a framework for the implementation of an efficient and easy-to-use software environment that implements the model. The dissertation work presented here describes such a model and its contributions to the field of scientific databases. In particular, the Granite model encompasses a wide variety of datatypes used across many disciplines of science and engineering today. It is unique in that it defines dataset geometry and topology as separate conceptual components of a scientific dataset. We provide a novel classification of geometries and topologies that has important practical implications for a scientific database implementation. The Granite model also offers integrated support for multiresolution and adaptive-resolution data. Many of these ideas have been addressed by others, but no one has tried to bring them all together in a single comprehensive model. The datasource portion of the Granite model offers several further contributions. In addition to providing a convenient conceptual view of rectilinear data, it also supports multisource data: data can be taken from various sources and combined into a unified view. The rod storage model is an abstraction for file storage that has proven to be an effective platform upon which to develop efficient access to storage. Our spatial prefetching technique is built upon the rod storage model; it demonstrates very significant improvements in access to scientific datasets and also allows machines to access data that is far too large to fit in main memory. These improvements bring the extremely large datasets now being generated in many scientific fields into the realm of tractability for the ordinary researcher. We validated the feasibility and viability of the model by implementing a significant portion of it in the Granite system. Extensive performance evaluations of the implementation indicate that the features of the model can be provided in a user-friendly manner with an efficiency that is competitive with more ad hoc systems and more specialized, application-specific solutions.
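
    A minimal sketch of the idea of treating geometry and topology as separate conceptual components of a dataset, as the Granite model does; the class and field names are hypothetical illustrations, not the actual Granite API.

        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class Geometry:
            """Where the sample points live in physical space."""
            points: np.ndarray        # shape (n_points, n_dims)

        @dataclass
        class Topology:
            """How sample points are connected, independent of where they are."""
            cells: np.ndarray         # shape (n_cells, points_per_cell), indices into Geometry.points

        @dataclass
        class Dataset:
            """A scientific dataset = geometry + topology + the data attributes themselves."""
            geometry: Geometry
            topology: Topology
            attributes: dict          # attribute name -> per-point value array

        # A rectilinear grid and a curvilinear grid could share the same Topology while
        # differing only in Geometry, which is one practical payoff of the separation.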

    Abstracts to Be Presented at the 2015 Supercomputing Conference

    Compilation of abstracts to be presented at the 2015 Supercomputing Conference.

    Visualization Techniques in Space and Atmospheric Sciences

    Research programs that investigate the Earth as a system and the origin of the universe will generate unprecedented volumes of data, which will in turn require analysis and interpretation to yield meaningful scientific insight. Providing a widely distributed research community with the ability to access, manipulate, analyze, and visualize these complex, multidimensional data sets depends on a wide range of computer science and technology topics. Data storage and compression, database management, computational methods and algorithms, artificial intelligence, telecommunications, and high-resolution display are just a few of the topics addressed. A unifying theme throughout the papers with regard to advanced data handling and visualization is the need for interactivity, speed, user-friendliness, and extensibility.

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use magnetic resonance images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed. This causes voxels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial-volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is investigated by performing a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
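
    A minimal sketch of the expectation-maximization idea underlying the segmentation step, fitting a two-class Gaussian mixture (grey vs. white matter) to voxel intensities; the explicit partial-volume correction that is the dissertation's contribution is not shown, and the initialisation is a hypothetical example.

        import numpy as np

        def em_two_class(intensities, mu, sigma, prior, n_iter=50):
            """Fit a two-component 1D Gaussian mixture to voxel intensities with EM.
            mu, sigma and prior are length-2 arrays of initial means, std devs and weights."""
            x = intensities.reshape(-1, 1)                       # (n_voxels, 1)
            for _ in range(n_iter):
                # E-step: posterior probability of each class for every voxel.
                lik = prior * np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
                resp = lik / lik.sum(axis=1, keepdims=True)
                # M-step: re-estimate the parameters from the soft assignments.
                n_k = resp.sum(axis=0)
                mu = (resp * x).sum(axis=0) / n_k
                sigma = np.sqrt((resp * (x - mu) ** 2).sum(axis=0) / n_k)
                prior = n_k / len(x)
            return mu, sigma, prior, resp

        # In neonatal T1 images grey matter appears brighter than white matter (reversed
        # contrast), so the initial means would be ordered accordingly, e.g.:
        # mu0 = np.array([white_matter_guess, grey_matter_guess])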

    The ERTS-1 investigation (ER-600): A compendium of analysis results of the utility of ERTS-1 data for land resources management

    The results of the ERTS-1 investigations conducted by the Earth Observations Division at the NASA Lyndon B. Johnson Space Center are summarized in this report, which is an overview of documents detailing the individual investigations. Conventional image interpretation and computer-aided classification procedures were the two basic techniques used in analyzing the data for detecting, identifying, locating, and measuring surface features related to earth resources. Data from the ERTS-1 multispectral scanner system were useful for all applications studied, which included agriculture, coastal and estuarine analysis, forestry, range, land use and urban land use, and signature extension. Percentage classification accuracies are cited for the conventional and computer-aided techniques.
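
    A minimal sketch of one family of computer-aided classification procedures used with multispectral scanner data, a per-pixel minimum-distance-to-mean classifier; the choice of classifier and the array names are illustrative assumptions, not necessarily the procedures used in the ERTS-1 investigations.

        import numpy as np

        def minimum_distance_classifier(bands, class_means):
            """Assign each pixel to the class whose mean spectral signature is closest
            in Euclidean distance. `bands` has shape (rows, cols, n_bands);
            `class_means` has shape (n_classes, n_bands)."""
            pixels = bands.reshape(-1, bands.shape[-1])                      # (n_pixels, n_bands)
            # Distance from every pixel to every class mean.
            d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            return labels.reshape(bands.shape[:2])

        # Hypothetical usage with four MSS bands and class means from training fields:
        # land_cover = minimum_distance_classifier(mss_bands, class_means)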