
    Cloud-based data management system for automatic real-time data acquisition from large-scale laying-hen farms

    Management of poultry farms in China mostly relies on manual labor. Because much of the valuable production-process data is recorded incompletely or only on paper, data retrieval, processing, and analysis are very difficult. This study proposes an integrated cloud-based data management system (CDMS) that uses asynchronous data transmission, a distributed file system, and wireless network technology for information collection, management, and sharing in large-scale egg production. The cloud-based platform can provide information technology infrastructure for different farms, and the CDMS can allocate computing resources and storage space on demand. Real-time data acquisition software was developed that allows farm management staff to submit reports through a website or smartphone, digitizing production data. The use of asynchronous transfer in the system avoids potential data loss during transmission between the farms and the remote cloud data center. All valid historical data from the poultry farms can be stored in the remote cloud data center, eliminating the need for large server clusters on the farms. Users with proper identification can access the system's online data portal through a browser or an app from anywhere in the world.
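The abstract's key reliability claim is that asynchronous transfer prevents data loss between farms and the cloud. A minimal sketch of that idea in Python, assuming a store-and-forward buffer with retry-until-acknowledged semantics (the names and protocol here are illustrative, not the CDMS implementation):

```python
import queue
import time

class AsyncUploader:
    """Buffers farm reports locally and retries until the cloud
    acknowledges receipt, so a network outage cannot lose data.
    Hypothetical sketch; the paper's actual protocol is not specified."""

    def __init__(self, send_fn, retry_delay=0.01):
        self.send_fn = send_fn        # callable returning True on acknowledgment
        self.retry_delay = retry_delay
        self.buffer = queue.Queue()   # stand-in for a persistent on-disk queue
        self.sent = []

    def submit(self, report):
        self.buffer.put(report)       # returns immediately (asynchronous)

    def drain(self):
        while not self.buffer.empty():
            report = self.buffer.get()
            while not self.send_fn(report):   # retry until acknowledged
                time.sleep(self.retry_delay)
            self.sent.append(report)

# Simulate a flaky link: the first two attempts fail, the third succeeds.
attempts = {"n": 0}
def flaky_send(report):
    attempts["n"] += 1
    return attempts["n"] > 2

up = AsyncUploader(flaky_send)
up.submit({"farm": "A", "eggs": 1200})
up.drain()
print(up.sent)   # the report survives the transient failures
```

Because `submit` only enqueues, a farm-side client can keep accepting reports during an outage and flush them once connectivity returns.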

    Meeting the Challenges of Exploration Systems: Health Management Technologies for Aerospace Systems With Emphasis on Propulsion

    The constraints of future Exploration Missions will require unique Integrated System Health Management (ISHM) capabilities throughout the mission. An ambitious launch schedule, human-rating requirements, long quiescent periods, limited human access for repair or replacement, and long communication delays all require an ISHM system that can span distinct yet interdependent vehicle subsystems, anticipate failure states, provide autonomous remediation, and support the Exploration Mission from beginning to end. NASA Glenn Research Center has developed and applied health management system technologies to aerospace propulsion systems for almost two decades. Lessons learned from past activities help define the approach to proper ISHM development: sensor selection identifies the sensor sets required for accurate health assessment; data qualification and validation ensures the integrity of measurement data from sensor to data system; fault detection and isolation uses measurements in a component/subsystem context to detect faults and identify their point of origin; information fusion and diagnostic decision criteria align data from similar and disparate sources in time and use those data to perform higher-level system diagnosis; and verification and validation uses data, real or simulated, to expose the diagnostic system to faults that may only manifest themselves in actual implementation, as well as faults that are detectable via hardware testing. This presentation describes a framework for developing health management systems and highlights the health management research activities performed by the Controls and Dynamics Branch at the NASA Glenn Research Center. It illustrates how those activities contribute to the development of solutions for Integrated System Health Management.
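The fault detection and isolation step described above can be illustrated with a simple residual check: compare each sensor measurement against a model estimate and flag sensors whose residual exceeds a threshold. This is a minimal sketch, not GRC's actual diagnostic algorithm; the sensor names and values are hypothetical:

```python
def detect_fault(measurements, model_estimate, threshold):
    """Flag sensors whose residual (measurement minus model estimate)
    exceeds a threshold -- a toy illustration of fault detection and
    isolation, with isolation reduced to naming the offending sensor."""
    faults = {}
    for name, value in measurements.items():
        residual = abs(value - model_estimate[name])
        if residual > threshold:
            faults[name] = residual
    return faults

# Hypothetical propulsion-system readings vs. model estimates.
readings  = {"chamber_pressure": 102.0, "turbine_speed": 5210.0}
estimates = {"chamber_pressure":  98.5, "turbine_speed": 5600.0}
print(detect_fault(readings, estimates, threshold=50.0))
# {'turbine_speed': 390.0}
```

Real ISHM systems replace the fixed threshold with qualified, validated data and model-based residual generation, but the residual-comparison core is the same.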

    Space Data Integrator (SDI) and Space Program Integrated Data and Estimated Risk (SPIDER): Proof-of-Concept Software Solution for Integrating Launch and Reentry Vehicles into the National Airspace System (NAS)

    The Space Data Integrator (SDI) Project is the initial step toward satisfying the Federal Aviation Administration (FAA) strategic initiative to integrate commercial space launch and reentry vehicles into the National Airspace System. The project addresses the need for greater situational awareness and monitoring, and increased response capability during non-nominal and catastrophic incidents during space operations. The initial phase of this project leverages current FAA systems and provides an initial demonstration of capability in which state data from a commercial reentry vehicle are ingested into the FAA Traffic Flow Management System and displayed on Traffic Situation Displays. Space vehicle data will be received at the William J. Hughes Technical Center and transmitted to the Event Management Center at the Air Traffic Control System Command Center. The second phase, called the Space Program Integrated Data and Estimated Risk (SPIDER) system, will build upon the initial SDI Demo phase and deliver a Proof-of-Concept system that provides added capability and situational awareness displays similar to the systems currently used at the major federal and commercial ranges.

    Medical Data Architecture Platform and Recommended Requirements for a Medical Data System for Exploration Missions

    The Medical Data Architecture (MDA) project supports the Exploration Medical Capability (ExMC) element in reducing the risk of adverse health outcomes and decrements in performance due to limitations of in-flight medical capabilities on human exploration missions. To mitigate this risk, the ExMC MDA project addresses the technical limitations identified in ExMC Gap Med 07: we do not have the capability to comprehensively process medically-relevant information to support medical operations during exploration missions. This gap identifies that current in-flight medical data management relies on a combination of data collection and distribution methods that are minimally integrated with on-board medical devices and systems. Furthermore, there are a variety of data sources and methods of data collection. For an exploration mission, the seamless management of such data will enable a more medically autonomous crew than the current paradigm of medical data management on the International Space Station allows. ExMC has recognized that in order to make informed decisions about a medical data architecture framework, current methods for medical data management must not only be understood, but an architecture must also be identified that provides the crew with actionable insight into medical conditions. This medical data architecture will provide the functionality needed to execute a self-contained medical system that approaches crew health care delivery without assistance from ground support. Hence, the products derived from the third MDA prototype development will directly inform exploration medical system requirements for Level of Care IV in Gateway missions. In fiscal year 2019, the MDA project developed Test Bed 3, the third iteration in a series of prototypes, which featured integrations with cognition tool data, ultrasound image analytics, and core Flight Software (cFS).
Maintaining a layered architecture design, the framework implemented a plug-in, modular approach for integrating these external data sources. An early version of the MDA Test Bed 3 software was deployed and operated in a simulated analog environment as part of the Next Space Technologies for Exploration Partnerships (NextSTEP) Gateway tests of multiple habitat prototypes. In addition, the MDA team participated in the Gateway Test and Verification Demonstration, where the MDA cFS applications were integrated with Gateway-in-a-Box software to send and receive medically relevant data over a simulated vehicle network. This software demonstration was given to ExMC and Gateway Program stakeholders at the NASA Johnson Space Center Integrated Power, Avionics and Software (iPAS) facility. The integrated prototypes also served as a vehicle to provide Level 5 requirements for the Crew Health and Performance Habitat Data System for Gateway Missions (Medical Level of Care IV). In the upcoming fiscal year, the MDA project will continue to provide systems engineering and vertical prototypes to refine requirements for medical Level of Care IV and inform requirements for Level of Care V.
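The plug-in, modular integration approach described for Test Bed 3 can be sketched as a small registry that routes each external data source's raw messages to its registered parser. The class and source names below are illustrative assumptions, not the actual MDA interfaces:

```python
class MedicalDataBus:
    """Toy sketch of a plug-in, layered integration pattern: external
    data sources register a parser, and the core routes raw messages to
    the matching plug-in without knowing source-specific formats."""

    def __init__(self):
        self.plugins = {}

    def register(self, source, parser):
        """Attach a parser plug-in for one data source."""
        self.plugins[source] = parser

    def ingest(self, source, raw):
        """Route a raw message to its source's plug-in."""
        if source not in self.plugins:
            raise ValueError(f"no plug-in registered for {source!r}")
        return self.plugins[source](raw)

bus = MedicalDataBus()
# Hypothetical plug-ins mirroring the Test Bed 3 integrations.
bus.register("ultrasound", lambda raw: {"type": "image", "bytes": len(raw)})
bus.register("cognition",  lambda raw: {"type": "score", "value": float(raw)})

print(bus.ingest("cognition", "0.87"))   # {'type': 'score', 'value': 0.87}
```

The payoff of this pattern is that adding a new device means writing one parser and one `register` call; the core layer stays unchanged.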

    Rationalized development of a campus-wide cell line dataset for implementation in the biobank LIMS system at Bioresource center Ghent

    The Bioresource center Ghent is the central hospital-integrated biobank of Ghent University Hospital. Our mission is to facilitate translational biomedical research by collecting, storing, and providing high-quality biospecimens to researchers. Several of our biobank partners store large numbers of cell lines. As cell lines are highly important both in basic research and in preclinical screening phases, good annotation, authentication, and quality of these cell lines are pivotal in translational biomedical science. A Biobank Information Management System (BIMS) was implemented as the sample and data management system for human bodily material. The samples are annotated using defined datasets based on the BRISQ (Biospecimen Reporting for Improved Study Quality) and Minimum Information About Biobank data Sharing (MIABIS) guidelines, completed with SPREC (Standard PREanalytical Coding) information. However, the defined dataset for human bodily material is not well suited to capturing cell line-specific data. Therefore, we set out to develop a rationalized cell line dataset. By comparing the datasets of different online cell banks (human, animal, and stem cell), we established an extended cell line dataset of 156 data fields, which was further analyzed until a smaller dataset, the survey dataset of 54 data fields, was obtained. The survey dataset was distributed throughout our campus to all cell line users to rationalize the fields of the dataset and their potential use. Analysis of the survey data revealed only small differences in data field preferences between human, animal, and stem cell lines. Hence, one essential dataset of 33 data fields was compiled for human, animal, and stem cell lines. The essential dataset was prepared for implementation in our BIMS. Good Clinical Data Management Practices formed the basis of our decisions in the implementation phase.
Known standards, reference lists, and ontologies (such as ICD-10-CM, animal taxonomy, cell line ontology...) were considered. The semantics of the data fields were clearly defined, enhancing the data quality of the stored cell lines. The result is an essential cell line dataset with defined data fields, usable by multiple cell line users.

    The XENON1T Data Distribution and Processing Scheme

    The XENON experiment searches for non-baryonic particle dark matter in the universe. The detector is a dual-phase time projection chamber (TPC) filled with 3200 kg of ultra-pure liquid xenon, operated at the Laboratori Nazionali del Gran Sasso (LNGS) in Italy. We present a full overview of the computing scheme for data distribution and job management in XENON1T. The software package Rucio, developed by the ATLAS collaboration, facilitates data handling on Open Science Grid (OSG) and European Grid Infrastructure (EGI) storage systems. A tape copy at the Center for High Performance Computing (PDC) is managed by the Tivoli Storage Manager (TSM). Data reduction and Monte Carlo production are handled by CI Connect, which is integrated into the OSG network. The job submission system connects resources at the EGI, OSG, SDSC's Comet, and campus HPC resources for distributed computing. The success of the XENON1T computing scheme is also the starting point for its successor experiment, XENONnT, which starts taking data in autumn 2019. (8 pages, 2 figures, CHEP 2018 proceedings.)
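The core bookkeeping problem that Rucio automates for XENON1T, tracking which files have replicas on which grid storage endpoints and which fall below the desired replica count, can be sketched generically. This is a toy illustration under invented names, not the Rucio client API:

```python
class ReplicaCatalog:
    """Toy bookkeeping of file replicas across storage endpoints,
    illustrating the kind of accounting a data-distribution system
    performs. Endpoint and file names are hypothetical."""

    def __init__(self, min_copies=2):
        self.min_copies = min_copies
        self.replicas = {}   # filename -> set of endpoints holding a copy

    def add(self, filename, endpoint):
        """Record that a replica of filename exists at endpoint."""
        self.replicas.setdefault(filename, set()).add(endpoint)

    def under_replicated(self):
        """List files with fewer than min_copies replicas -- candidates
        for a new transfer to another endpoint."""
        return [f for f, eps in self.replicas.items()
                if len(eps) < self.min_copies]

cat = ReplicaCatalog(min_copies=2)
cat.add("raw_190401.zip", "OSG:Chicago")
cat.add("raw_190401.zip", "EGI:NIKHEF")
cat.add("raw_190402.zip", "OSG:Chicago")
print(cat.under_replicated())   # ['raw_190402.zip']
```

In a production system such as Rucio, replication rules drive automated transfers to repair exactly this kind of deficit.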

    Proceedings of the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications

    The proceedings of the National Space Science Data Center Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, held July 23 through 25, 1991 at the NASA/Goddard Space Flight Center, are presented. The program includes a keynote address, invited technical papers, and selected technical presentations, providing a broad forum for the discussion of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.

    Building the University--Community Partnership in Disaster Management

    The Center for Defense Integrated Data (CDID) and the Coastal Hazards Center of Excellence (CHC) at Jackson State University have developed the Disaster Response Intelligent System (DRIS) to ensure interoperable communication, rapid data processing for safe and timely evacuations, scenario analysis, and decision support during disaster events. With the increasing occurrence of both natural and man-made disasters, theoretical underpinnings have emerged to address not only how communities respond to disasters but also how they plan for them. Currently, there is a need to expand upon the existing paradigm in the highly specialized, practitioner-driven field of emergency response and disaster management. Perhaps no institutions are better positioned to guide such expansion than institutions of higher education. The DRIS application has, as an extension, an education model wherein the system is installed at universities with disaster management programs or related curricula. Such installation builds on a university's capacity to foster a multi-disciplinary approach to emergency response and disaster management by incorporating academic areas such as urban planning, computer science, environmental science, social science, geography, and various disciplines of engineering, among others.

    Testing Enabling Technologies for Safe UAS Urban Operations

    A set of more than 100 flight operations was conducted at NASA Langley Research Center using small UAS (sUAS) to demonstrate, test, and evaluate a set of technologies and an overarching air-ground system concept aimed at enabling safe operations. The research vehicle was tracked continuously during nominal traversal of planned flight paths while operating autonomously over moderately populated land. For selected flights, off-nominal risks were introduced, including vehicle-to-vehicle (V2V) encounters. Three contingency maneuvers that provide safe responses were demonstrated. These maneuvers made use of an integrated air/ground platform and two on-board autonomous capabilities. Flight data were monitored and recorded with multiple ground systems and forwarded in real time to a UAS traffic management (UTM) server for airspace coordination and supervision.

    Integrated testing and verification system for research flight software design document

    The NASA Langley Research Center is developing the MUST (Multipurpose User-oriented Software Technology) program to cut the cost of producing research flight software through a system of software support tools. The HAL/S language is the primary subject of the design. Boeing Computer Services Company (BCS) has designed an integrated verification and testing capability as part of MUST. Documentation, verification, and test options are provided, with special attention to real-time, multiprocessing issues. The needs of the entire software production cycle have been considered, with effective management and reduced lifecycle costs as foremost goals. The design includes capabilities for static detection of data-flow anomalies involving communicating concurrent processes. Some types of ill-formed process synchronization and deadlock are also detected statically.
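Static deadlock detection of the kind mentioned above is commonly framed as cycle detection in a wait-for graph of concurrent processes. The sketch below uses depth-first search to find such a cycle; it illustrates the general technique, not the BCS design, and the process names are hypothetical:

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph (process -> processes it waits
    on). A cycle indicates potential deadlock. Classic DFS coloring:
    WHITE = unvisited, GRAY = on the current path, BLACK = done."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:       # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in wait_for)

# Chain of waits with no cycle vs. two processes waiting on each other.
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": []}))   # False
print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))             # True
```

A static analyzer would build the wait-for graph from synchronization statements in the source rather than from runtime state, which is what lets it flag the deadlock before the software ever runs.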