
    Towards structured sharing of raw and derived neuroimaging data across existing resources

    Data sharing efforts increasingly contribute to the acceleration of scientific discovery. Neuroimaging data is accumulating in distributed domain-specific databases, and there is currently no integrated access mechanism, nor an accepted format, for the critically important meta-data that is necessary for making use of the combined, available neuroimaging data. In this manuscript, we present work from the Derived Data Working Group, an open-access group sponsored by the Biomedical Informatics Research Network (BIRN) and the International Neuroinformatics Coordinating Facility (INCF) focused on practical tools for distributed access to neuroimaging data. The working group develops models and tools facilitating the structured interchange of neuroimaging meta-data and is making progress towards a unified set of tools for such data and meta-data exchange. We report on the key components required for integrated access to raw and derived neuroimaging data, as well as associated meta-data and provenance, across neuroimaging resources. The components include (1) a structured terminology that provides semantic context to data, (2) a formal data model for neuroimaging with robust tracking of data provenance, (3) a web service-based application programming interface (API) that provides a consistent mechanism to access and query the data model, and (4) a provenance library that can be used for the extraction of provenance data by image analysts and imaging software developers. We believe that the framework and set of tools outlined in this manuscript have great potential for solving many of the issues the neuroimaging community faces when sharing raw and derived neuroimaging data across the various existing database systems for the purpose of accelerating scientific discovery.
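    The role of component (2), a data model with robust provenance tracking, can be illustrated with a minimal sketch. The class names, field layout, and file identifiers below are invented for the example and are not the working group's actual data model:

    ```python
    from dataclasses import dataclass, field

    # Hypothetical record describing how one derived image was produced.
    @dataclass
    class ProvenanceRecord:
        output_uri: str                              # identifier of the derived output
        inputs: list = field(default_factory=list)   # URIs of its source data
        software: str = ""                           # tool that produced the output
        version: str = ""

    # A registry mapping outputs to their provenance, so the full
    # derivation chain of any result can be walked backwards.
    class ProvenanceStore:
        def __init__(self):
            self._records = {}

        def add(self, record: ProvenanceRecord):
            self._records[record.output_uri] = record

        def lineage(self, uri):
            """Return the chain of ancestor URIs for a derived output."""
            seen = []
            frontier = [uri]
            while frontier:
                current = frontier.pop()
                rec = self._records.get(current)
                if rec:
                    for parent in rec.inputs:
                        seen.append(parent)
                        frontier.append(parent)
            return seen

    store = ProvenanceStore()
    store.add(ProvenanceRecord("stats/map.nii", ["deriv/smoothed.nii"], "FSL", "6.0"))
    store.add(ProvenanceRecord("deriv/smoothed.nii", ["raw/bold.nii"], "SPM", "12"))
    print(store.lineage("stats/map.nii"))  # ['deriv/smoothed.nii', 'raw/bold.nii']
    ```

    A query service like component (3) would expose this kind of lineage walk behind a web API, so a consumer can trace any shared result back to its raw acquisitions.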

    Engineering Agile Big-Data Systems

    To be effective, data-intensive systems require extensive ongoing customisation to reflect changing user requirements, organisational policies, and the structure and interpretation of the data they hold. Manual customisation is expensive, time-consuming, and error-prone. In large complex systems, the value of the data can be such that exhaustive testing is necessary before any new feature can be added to the existing design. In most cases, the precise details of requirements, policies and data will change during the lifetime of the system, forcing a choice between expensive modification and continued operation with an inefficient design. Engineering Agile Big-Data Systems outlines an approach to dealing with these problems in software and data engineering, describing a methodology for aligning these processes throughout product lifecycles. It discusses tools which can be used to achieve these goals, and, in a number of case studies, shows how the tools and methodology have been used to improve a variety of academic and business systems.


    1st INCF Workshop on Sustainability of Neuroscience Databases

    The goal of the workshop was to discuss issues related to the sustainability of neuroscience databases, identify problems and propose solutions, and formulate recommendations to the INCF. The report summarizes the discussions of invited participants from the neuroinformatics community as well as from other disciplines where sustainability issues have already been approached. The recommendations for the INCF involve rating, ranking, and supporting database sustainability.

    Batch and Streaming Data Ingestion towards Creating Holistic Health Records

    The healthcare sector has been moving toward Electronic Health Record (EHR) systems that produce enormous amounts of healthcare data, due to the increased emphasis on getting the appropriate information to the right person, wherever they are, at any time. This highlights the need for a holistic approach to ingest, exploit, and manage these huge amounts of data for achieving better health management and promotion in general. This manuscript proposes such an approach, providing a mechanism allowing all health ecosystem entities to obtain actionable knowledge from heterogeneous data in a multimodal way. The mechanism includes diverse techniques for automatically ingesting healthcare-related information from heterogeneous sources that produce batch/streaming data, and for managing, fusing, and aggregating this data into new data structures (i.e., Holistic Health Records (HHRs)). The latter enable the aggregation of data coming from different sources, such as Internet of Medical Things (IoMT) devices and online/offline platforms. To effectively construct the HHRs, the mechanism develops various data management techniques covering the overall data path, from data acquisition and cleaning to data integration, modelling, and interpretation. The mechanism has been evaluated in different healthcare scenarios, ranging from hospital-retrieved data to patient platforms, combined with data obtained from IoMT devices, and has produced useful insights towards its successful and wide adoption in this domain. In order to implement a paradigm shift away from heterogeneous and independent data sources, limited data exploitation, and siloed health records, the mechanism has combined multidisciplinary technologies toward HHRs. Doi: 10.28991/ESJ-2023-07-02-03
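    As a rough illustration of fusing batch and streaming sources into a per-patient record, the sketch below merges a batch EHR extract with streamed IoMT readings. The field names and HHR layout are invented for the example and are not the paper's actual schema:

    ```python
    from collections import defaultdict

    # Hypothetical Holistic Health Record: one dict per patient that
    # fuses batch EHR rows with streamed IoMT device readings.
    def build_hhrs(batch_rows, iomt_stream):
        hhrs = defaultdict(lambda: {"ehr": {}, "readings": []})
        # Batch path: ingest a full EHR extract in one pass.
        for row in batch_rows:
            hhrs[row["patient_id"]]["ehr"].update(row["fields"])
        # Streaming path: append device readings as they arrive.
        for event in iomt_stream:
            hhrs[event["patient_id"]]["readings"].append(
                (event["metric"], event["value"])
            )
        return dict(hhrs)

    batch = [{"patient_id": "p1", "fields": {"age": 54, "dx": "T2D"}}]
    stream = [
        {"patient_id": "p1", "metric": "heart_rate", "value": 72},
        {"patient_id": "p1", "metric": "glucose", "value": 6.1},
    ]
    hhr = build_hhrs(batch, stream)
    print(hhr["p1"]["ehr"]["dx"], len(hhr["p1"]["readings"]))  # T2D 2
    ```

    In a production pipeline the streaming path would consume from a message broker rather than a list, but the fusion step, keying every source on a shared patient identity, is the same.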

    Improving root cause analysis through the integration of PLM systems with cross supply chain maintenance data

    The purpose of this paper is to demonstrate a system architecture for integrating Product Lifecycle Management (PLM) systems with cross supply chain maintenance information to support root-cause analysis. By integrating product data from PLM systems with warranty claims, vehicle diagnostics and technical publications, engineers were able to improve root-cause analysis and close the information gaps. Data collection was achieved via in-depth semi-structured interviews and workshops with experts from the automotive sector. Unified Modelling Language (UML) diagrams were used to design the proposed system architecture. A user scenario is also presented to demonstrate the functionality of the system.

    Tools for software project data collection and integration

    Nowadays social media has a big impact on society and on the software development process. Every day, more and more people communicate through social media, discussing their lives and even their work processes. Unlike in the past century, it is much easier to integrate teams even when oceans separate them. Tools such as JIRA, TFS and Bugzilla were created for that purpose: integrating teams and making life easier for everyone taking part in any cycle of the software development process. This Master's Thesis aims to integrate social media with issue trackers and to analyse the relationships between them. In this thesis, unified models for social media and issue trackers were designed, using reverse engineering to design and unify the data models. After creating the unified models, adapters were written to extract data from social media and issue trackers for analysis. We conducted an example analysis on the data obtained by merging issue-tracking and social media channels. We found, among other interesting facts, that contributors to open-source software projects tend to communicate via IRC and email lists, and that 76% of the users who are active on IRC are also active in issue-tracking systems.
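    The reported 76% overlap is, at its core, a set intersection over user identities drawn from the two merged data sources. A sketch with made-up usernames (not the thesis's actual data):

    ```python
    # Hypothetical user sets; the real ones came from IRC logs
    # and issue-tracker accounts matched via the unified model.
    irc_users = {"alice", "bob", "carol", "dave"}
    tracker_users = {"alice", "bob", "dave", "erin"}

    # Share of IRC users who are also active in the issue tracker.
    overlap = irc_users & tracker_users
    pct = 100 * len(overlap) / len(irc_users)
    print(f"{pct:.0f}% of IRC users are active in the tracker")  # 75%
    ```

    The hard part in practice is not the intersection but identity resolution, matching an IRC nick to a tracker account, which is what the unified data model enables.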
