
    The INCF Digital Atlasing Program: Report on Digital Atlasing Standards in the Rodent Brain

    The goal of the INCF Digital Atlasing Program is to provide the vision and direction necessary to make the rapidly growing collection of multidimensional data of the rodent brain (images, gene expression, etc.) widely accessible and usable by the international research community. The Digital Brain Atlasing Standards Task Force was formed in May 2008 to investigate the state of rodent brain digital atlasing and to formulate standards, guidelines, and policy recommendations.

    Our first objective has been the preparation of a detailed document that sets out the vision and a specific description of the infrastructure, systems, and methods capable of serving the scientific goals of the community, as well as the practical issues involved in achieving them. This report builds on the 1st INCF Workshop on Mouse and Rat Brain Digital Atlasing Systems (Boline et al., 2007, _Nature Precedings_, doi:10.1038/npre.2007.1046.1) and includes a more detailed analysis of both the current and desired states of digital atlasing, along with specific recommendations for achieving these goals.

    Seafloor characterization using airborne hyperspectral co-registration procedures independent from attitude and positioning sensors

    Remote-sensing technology and data-storage capabilities have advanced over the last decade to the point of commercial multi-sensor data collection. There is a constant need to characterize, quantify, and monitor coastal areas for habitat research and coastal management. In this paper, we present work on seafloor characterization that uses hyperspectral imagery (HSI). HSI data allow the operator to extend seafloor characterization from multibeam backscatter towards land, creating a seamless ocean-to-land characterization of the littoral zone.
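
    As a concrete illustration of co-registration that does not rely on attitude or positioning sensors, the sketch below aligns one image to a reference purely from shared image content, using feature matching and a RANSAC-estimated homography. This is a generic computer-vision approach, not the paper's actual procedure, and the function and variable names are invented.

```python
# A minimal sketch, assuming 8-bit single-channel inputs: co-register one
# hyperspectral band to a reference image using only shared image content,
# with no attitude or positioning telemetry. Generic OpenCV technique, not
# the paper's procedure; all names here are invented for illustration.
import cv2
import numpy as np

def coregister(hsi_band: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Warp an HSI band onto the reference image's pixel grid."""
    orb = cv2.ORB_create(nfeatures=5000)
    kp1, des1 = orb.detectAndCompute(hsi_band, None)
    kp2, des2 = orb.detectAndCompute(reference, None)

    # Brute-force Hamming matching suits ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    good = matches[:200]  # keep the strongest correspondences

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects outlier matches while estimating the homography.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(hsi_band, H, (w, h))
```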

    The Digital Anatomist Information System and Its Use in the Generation and Delivery of Web-Based Anatomy Atlases

    Advances in network and imaging technology, coupled with the availability of 3-D datasets such as the Visible Human, provide a unique opportunity for developing information systems in anatomy that can deliver relevant knowledge directly to the clinician, researcher or educator. A software framework is described for developing such a system within a distributed architecture that includes spatial and symbolic anatomy information resources, Web and custom servers, and authoring and end-user client programs. The authoring tools have been used to create 3-D atlases of the brain, knee and thorax that are used both locally and throughout the world. During the one-and-a-half-year period from June 1995 to January 1997, the on-line atlases were accessed by over 33,000 sites from 94 countries, with an average of over 4,000 "hits" per day, and 25,000 hits per day during peak exam periods. The atlases have been linked to by over 500 sites and have received at least six unsolicited awards from outside rating institutions. The flexibility of the software framework has allowed the information system to evolve with advances in technology and representation methods. Possible new features include knowledge-based image retrieval and tutoring, dynamic generation of 3-D scenes, and eventually, real-time virtual-reality navigation through the body. Such features, when coupled with other on-line biomedical information resources, should lead to interesting new ways of managing and accessing structural information in medicine.

    The global hydrology education resource

    This article is a selective overview of contemporary teaching resources currently available globally to university hydrology educators, with an emphasis on web-based resources. Major governmental and scientific organizations relevant to the promotion of hydrology teaching are briefly introduced. Selected online teaching materials are then overviewed, i.e., PowerPoint presentations, course materials, and multimedia. A range of websites offering free basic hydrology modelling software is mentioned, together with some sources of data files that could be used for teaching. Websites offering a considerable range of general hydrology links are also noted, as are websites providing international and national data sets which might be incorporated into teaching exercises. Finally, reference material for different modes of hydrology teaching is discussed, including laboratory and field exercises.

    The Joint COntrols Project Framework

    The Framework is one of the subprojects of the Joint COntrols Project (JCOP), a collaboration between the four LHC experiments and CERN. By sharing development, it will reduce the overall effort required to build and maintain the experiment control systems. As such, the main aim of the Framework is to deliver a common set of software components, tools, and guidelines that the four LHC experiments can use to build their control systems. Although commercial components are used wherever possible, further added value is obtained by customisation for HEP-specific applications. The supervisory layer of the Framework is based on the SCADA tool PVSS, which was selected after a detailed evaluation. It is integrated with the front-end layer via both OPC (OLE for Process Control), an industrial standard, and the CERN-developed DIM (Distributed Information Management System) protocol. Several components are already in production, used by running fixed-target experiments at CERN as well as for the LHC experiment test beams. The paper gives an overview of the key concepts behind the project as well as the state of current development and future plans.
    Comment: Paper from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003; 4 pages, PDF. PSN THGT00
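
    To make the supervisory-to-front-end data exchange concrete, the sketch below mimics the named-service publish/subscribe pattern that a protocol like DIM provides. DIM itself is a C/C++ library with a network name server and its own API; this in-process Python illustration does not reproduce that API, and all service names are invented.

```python
# A minimal sketch of the named-service publish/subscribe pattern that DIM
# implements between front-end publishers and supervisory subscribers. DIM
# itself is a C/C++ library with a network name server and its own API;
# this in-process illustration does not reproduce that API.
from collections import defaultdict
from typing import Any, Callable

class NameServer:
    """Routes updates from named services to their subscribers."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, service: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[service].append(handler)

    def publish(self, service: str, value: Any) -> None:
        for handler in self._subscribers[service]:
            handler(value)

# The front-end layer publishes a reading; the supervisory layer reacts.
ns = NameServer()
ns.subscribe("HV/channel01/voltage", lambda v: print(f"SCADA update: {v} V"))
ns.publish("HV/channel01/voltage", 1450.0)
```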

    Applying MDE tools to defining domain specific languages for model management

    In model-driven engineering (MDE), modeling languages play a central role. They range from the most generic languages, such as UML, to more specialized ones called domain-specific modeling languages (DSMLs). These languages are used to create and manage models and must accompany them throughout their life cycle and evolution. In this paper we propose a domain-specific language for model management, developed with techniques and tools from the MDE paradigm, to facilitate the user's task.
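
    For flavor, the sketch below shows what an embedded model-management language can feel like: a fluent interface for chaining model life-cycle operations. The paper's DSML is defined with MDE techniques rather than in Python, and every name and operation in this sketch is invented.

```python
# A minimal sketch, in Python rather than MDE tooling, of an embedded
# model-management DSL: a fluent interface for chaining model life-cycle
# operations. Every name and operation here is invented; the paper's DSML
# is defined with model-driven techniques and is not reproduced here.
class ModelPipeline:
    """Chains load/transform/save steps over a model artifact."""

    def __init__(self) -> None:
        self._model: dict | None = None

    def load(self, path: str) -> "ModelPipeline":
        print(f"loading model from {path}")
        self._model = {"source": path, "applied_rules": []}
        return self

    def transform(self, rule: str) -> "ModelPipeline":
        print(f"applying transformation rule: {rule}")
        self._model["applied_rules"].append(rule)
        return self

    def save(self, path: str) -> "ModelPipeline":
        print(f"saving model to {path}")
        return self

# A model-management "script" reads as a sequence of domain operations.
ModelPipeline().load("orders.uml").transform("uml2relational").save("orders.xmi")
```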

    The ENIGMA Stroke Recovery Working Group: Big data neuroimaging to study brain–behavior relationships after stroke

    The goal of the Enhancing Neuroimaging Genetics through Meta-Analysis (ENIGMA) Stroke Recovery working group is to understand brain–behavior relationships using well-powered meta- and mega-analytic approaches. ENIGMA Stroke Recovery has data from over 2,100 stroke patients collected across 39 research studies and 10 countries, comprising the largest multisite retrospective stroke data collaboration to date. This article outlines the efforts taken by the ENIGMA Stroke Recovery working group to develop neuroinformatics protocols and methods to manage multisite stroke brain magnetic resonance imaging (MRI), behavioral, and demographic data. Specifically, the processes for scalable data intake and preprocessing, multisite data harmonization, and large-scale stroke lesion analysis are described, and challenges unique to this type of big-data collaboration in stroke research are discussed. Finally, future directions and limitations, as well as recommendations for improved data harmonization through prospective data collection and data management, are provided.
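
    To illustrate the harmonization step, the sketch below applies a simplified per-site location-and-scale adjustment in the spirit of methods such as ComBat. The working group's actual pipeline is more sophisticated; the data values and column names here are invented.

```python
# A minimal sketch of per-site location-and-scale harmonization: each
# site's feature distribution is standardized and rescaled to the pooled
# mean and variance, in the spirit of methods such as ComBat. The working
# group's real pipeline is more sophisticated; data and names are invented.
import pandas as pd

def harmonize(df: pd.DataFrame, feature: str, site_col: str = "site") -> pd.Series:
    pooled_mean = df[feature].mean()
    pooled_std = df[feature].std()

    def adjust(group: pd.Series) -> pd.Series:
        # Standardize within site, then rescale to the pooled distribution.
        return (group - group.mean()) / group.std() * pooled_std + pooled_mean

    return df.groupby(site_col)[feature].transform(adjust)

df = pd.DataFrame({
    "site": ["A", "A", "A", "B", "B", "B"],
    "lesion_volume": [10.0, 12.0, 11.0, 30.0, 34.0, 32.0],
})
df["lesion_volume_harmonized"] = harmonize(df, "lesion_volume")
print(df)
```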

    A methodological proposal and tool support for the HL7 standards compliance in the development of health information systems

    Health information systems are increasingly complex, and their development presents a challenge for software development companies that aim to offer high-quality, maintainable, and interoperable products. HL7 (Health Level 7) International, a non-profit organization, defines and maintains standards related to health information systems. However, the modelling languages proposed by HL7 are far removed from the standard languages widely known to software engineers. Along these lines, NDT is a software development methodology, supported by a tool called NDT-Suite, that is based on the one hand on the model-driven engineering paradigm and on the other on UML, a widely recognized standard language. This paper proposes an extension of the NDT methodology called MoDHE (Model Driven Health Engineering) to offer software engineers a methodology capable of modelling health information systems that conform to HL7 using UML domain models.
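
    As a small illustration of the gap between domain models and HL7 artifacts, the sketch below serializes a UML-style domain object into an HL7 v2 PID (patient identification) segment. MoDHE itself works at the UML model level rather than generating messages; the class, its fields, and the identifier authority used here are illustrative assumptions.

```python
# A minimal sketch of the model-to-HL7 mapping problem: a UML-style domain
# object serialized into an HL7 v2 PID (patient identification) segment.
# MoDHE itself works at the UML model level; this class, its fields, and
# the identifier authority "HOSP" are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: str
    family_name: str
    given_name: str
    birth_date: str  # YYYYMMDD, as HL7 v2 expects
    sex: str         # administrative sex code, e.g. "M" or "F"

    def to_pid_segment(self) -> str:
        # PID fields are pipe-delimited; name components are separated by "^".
        fields = [
            "PID", "1", "",                               # segment ID, set ID, PID-2
            f"{self.patient_id}^^^HOSP^MR", "",           # PID-3 identifier list, PID-4
            f"{self.family_name}^{self.given_name}", "",  # PID-5 name, PID-6
            self.birth_date, self.sex,                    # PID-7 birth date, PID-8 sex
        ]
        return "|".join(fields)

print(Patient("12345", "Doe", "John", "19700101", "M").to_pid_segment())
# -> PID|1||12345^^^HOSP^MR||Doe^John||19700101|M
```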