
    The Hierarchic treatment of marine ecological information from spatial networks of benthic platforms

    Measuring biodiversity simultaneously in different locations, at different temporal scales, and over wide spatial scales is of strategic importance for improving our understanding of how marine ecosystems function and for conserving their biodiversity. Monitoring networks of cabled observatories, along with other docked autonomous systems (e.g., Remotely Operated Vehicles [ROVs], Autonomous Underwater Vehicles [AUVs], and crawlers), are being conceived and established at spatial scales capable of tracking energy fluxes across benthic and pelagic compartments, as well as across geographic ecotones. At the same time, optoacoustic imaging is undergoing an unprecedented expansion in marine ecological monitoring, enabling the acquisition of new biological and environmental data at appropriate spatiotemporal scales. At this stage, one of the main obstacles to the effective application of these technologies is the processing, storage, and treatment of the complex ecological information acquired. Here, we provide a conceptual overview of the technological developments in the multiparametric generation, storage, and automated hierarchic treatment of the biological and environmental information required to capture the spatiotemporal complexity of a marine ecosystem. In doing so, we present a pipeline of ecological data acquisition and processing, broken into discrete steps amenable to automation. We also give an example of computing population biomass, community richness, and biodiversity (as indicators of ecosystem functionality) from data collected by an Internet Operated Vehicle (a mobile crawler). Finally, we discuss the software requirements for such automated data processing at the level of cyber-infrastructures, covering sensor calibration and control, data banking, and ingestion into large data portals. Peer reviewed. Postprint (published version).
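
    The indicator computation mentioned in this abstract lends itself to a compact illustration. The sketch below is a minimal, hypothetical example of how richness, biomass, and Shannon diversity might be derived once image annotations have been reduced to per-species counts; the function, species names, and biomass values are illustrative assumptions, not taken from the paper.

```python
import math
from collections import Counter

def community_indicators(annotations, biomass_per_individual):
    """Richness, total biomass, and Shannon diversity (H') from a
    flat list of species annotations (one entry per individual
    observed across the images of a transect)."""
    counts = Counter(annotations)               # individuals per species
    total = sum(counts.values())
    richness = len(counts)                      # number of species seen
    biomass = sum(n * biomass_per_individual.get(sp, 0.0)
                  for sp, n in counts.items())  # grams, summed over species
    shannon = -sum((n / total) * math.log(n / total)
                   for n in counts.values()) if total else 0.0
    return {"richness": richness, "biomass_g": biomass, "shannon_H": shannon}

# Hypothetical annotations pooled from one crawler transect.
obs = ["G. acuta", "G. acuta", "P. borealis", "S. droebachiensis", "P. borealis"]
masses = {"G. acuta": 12.5, "P. borealis": 8.0, "S. droebachiensis": 35.0}
print(community_indicators(obs, masses))
```

    In the pipeline the authors describe, a step like this would sit downstream of automated image annotation and upstream of data banking and portal ingestion.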

    From the oceans to the cloud: Opportunities and challenges for data, models, computation and workflows.

    © The Author(s), 2019. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Vance, T. C., Wengren, M., Burger, E., Hernandez, D., Kearns, T., Medina-Lopez, E., Merati, N., O'Brien, K., O'Neil, J., Potemra, J. T., Signell, R. P., & Wilcox, K. From the oceans to the cloud: Opportunities and challenges for data, models, computation and workflows. Frontiers in Marine Science, 6(211), (2019), doi:10.3389/fmars.2019.00211. Advances in ocean observations and models mean increasing flows of data. Integrating observations between disciplines over spatial scales from regional to global presents challenges. Running ocean models and managing the results is computationally demanding. The rise of cloud computing presents an opportunity to rethink traditional approaches: developing shared data processing workflows that use common, adaptable software to handle data ingest and storage, together with an associated framework to manage and execute downstream modeling. Working in the cloud presents its own challenges: migration of legacy technologies and processes, cloud-to-cloud interoperability, and the translation of legislative and bureaucratic requirements for “on-premises” systems to the cloud. To respond to the scientific and societal needs of a fit-for-purpose ocean observing system, and to maximize the benefits of more integrated observing, research on utilizing cloud infrastructures for sharing data and models is underway. Cloud platforms and the services/APIs they provide offer new ways for scientists to observe and predict the ocean’s state. High-performance mass storage of observational data, coupled with on-demand computing to run model simulations in close proximity to the data, tools to manage workflows, and a framework to share and collaborate, enable a more flexible and adaptable observation and prediction computing architecture. Model outputs are stored in the cloud, and researchers either download subsets for their area of interest or feed them into their own simulations without leaving the cloud. Expanded storage and computing capabilities make it easier to create, analyze, and distribute products derived from long-term datasets. In this paper, we provide an introduction to cloud computing, describe current uses of the cloud for management and analysis of observational data and model results, and describe workflows for running models and streaming observational data. We discuss topics that must be considered when moving to the cloud: costs, security, and organizational limitations on cloud use. Future uses of the cloud via computational sandboxes, and the practicalities and considerations of using the cloud to archive data, are explored. We also consider the ways in which the human elements of ocean observations are changing, with the rise of a generation of researchers whose observations are likely to be made remotely rather than hands-on, and how their expectations and needs drive research towards the cloud. In conclusion, visions of a future where cloud computing is ubiquitous are discussed. This is PMEL contribution 4873.
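
    The "subset without leaving the cloud" pattern described above can be sketched with current open tooling. The example below is a hedged illustration, not the paper's implementation: the bucket path, variable name, and coordinate names are assumptions, and it presumes xarray, zarr, dask, and s3fs are installed.

```python
import xarray as xr

# Open a (hypothetical) cloud-hosted model archive lazily:
# only metadata is read here, no data moves yet.
ds = xr.open_zarr("s3://example-ocean-models/global-sst.zarr",
                  storage_options={"anon": True})

# Select only the region and period of interest; the object store
# serves just the chunks that intersect this window.
subset = ds["sst"].sel(lat=slice(30, 50), lon=slice(-80, -40),
                       time=slice("2018-01-01", "2018-12-31"))

climatology = subset.mean(dim="time").compute()  # chunk reads happen here
```

    Run on a virtual machine in the same region as the bucket, the reads never leave the provider's network, which is the proximity advantage of on-demand computing next to the data that the authors highlight.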

    The Application of Cloud Computing to the Creation of Image Mosaics and Management of Their Provenance

    We have used the Montage image mosaic engine to investigate the cost and performance of processing images on the Amazon EC2 cloud. We present a detailed comparison of the performance of Montage on the cloud and on the Abe high-performance cluster at the National Center for Supercomputing Applications (NCSA). Because Montage generates many intermediate products, we have also used it to understand the science requirements that higher-level products impose on provenance management technologies. We describe experiments with one such technology, the "Provenance Aware Service Oriented Architecture" (PASOA). Comment: 15 pages, 3 figures
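
    The per-step records that PASOA-style systems maintain can be illustrated with a small wrapper. This is a hedged sketch of the general idea, not PASOA's actual API: run_step, the record layout, and the log file name are all invented for illustration, and a real system would use a standard provenance model.

```python
import hashlib
import json
import subprocess
import time

def checksum(path):
    """SHA-256 of a file, tying each product to its exact inputs.
    Reads the whole file into memory, which is fine for a sketch."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def run_step(cmd, inputs, outputs, log="provenance.jsonl"):
    """Run one pipeline command and append a provenance record."""
    start = time.time()
    subprocess.run(cmd, check=True)  # raise if the step fails
    record = {
        "command": cmd,
        "inputs": {p: checksum(p) for p in inputs},
        "outputs": {p: checksum(p) for p in outputs},
        "started": start,
        "duration_s": round(time.time() - start, 3),
    }
    with open(log, "a") as f:
        f.write(json.dumps(record) + "\n")
```

    Each mosaicking step (reprojection, background matching, co-addition) would be routed through such a wrapper, so every intermediate product carries a traceable history.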

    The Role of Provenance Management in Accelerating the Rate of Astronomical Research

    The availability of vast quantities of data through electronic archives has transformed astronomical research. It has also enabled the creation of new products, models and simulations, often from distributed input data and models, that are themselves made electronically available. These products will only provide maximal long-term value to astronomers when accompanied by records of their provenance; that is, records of the data and processes used in their creation. We use the creation of image mosaics with the Montage grid-enabled mosaic engine to emphasize the necessity of provenance management and to understand the science requirements that higher-level products impose on provenance management technologies. We describe experiments with one technology, the "Provenance Aware Service Oriented Architecture" (PASOA), that stores provenance information at each step in the computation of a mosaic. The results inform the technical specifications of provenance management systems, including the need for extensible systems built on common standards. Finally, we describe examples of provenance management technology emerging from the fields of geophysics and oceanography that have applicability to astronomy applications. Comment: 8 pages, 1 figure; Proceedings of Science, 201
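
    One practical payoff of step-level provenance is lineage reconstruction. Continuing the illustrative JSONL record format from the previous sketch (again, not PASOA's actual interface), the commands that contributed to a final product can be recovered by walking output-to-input links backwards:

```python
import json

def lineage(product, log="provenance.jsonl"):
    """Return, oldest first, the commands that (transitively)
    produced `product`, per an append-only provenance log."""
    with open(log) as f:
        records = [json.loads(line) for line in f]
    wanted, chain = {product}, []
    for rec in reversed(records):         # newest step first
        if wanted & set(rec["outputs"]):  # this step made something we need
            chain.append(rec["command"])
            wanted |= set(rec["inputs"])  # now trace its inputs too
    return list(reversed(chain))
```

    Extensibility and common standards, which the abstract identifies as requirements, matter precisely so that records like these stay queryable across tools and over decades.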

    NDSF technical operations via telecommunications

    In 2015, the Woods Hole Oceanographic Institution (WHOI) commissioned an external study of how modern telecommunications and telepresence technologies might reduce the manpower required for National Deep Submergence Facility operations. That study has been completed, and the final report is attached as Appendix A. Funding was provided by the Nereus Legacy Fund at the Woods Hole Oceanographic Institution.

    Jetstream: A self-provisioned, scalable science and engineering cloud environment

    The paper describes the motivation behind Jetstream, its functions, hardware configuration, software environment, user interface, design, use cases, relationships with other projects such as Wrangler and iPlant, and challenges in implementation. Funded by the National Science Foundation Award #ACI-144560

    The value of research data to the nation

    Executive Director's report
    Ross Wilkinson, ANDS

    How can Australia address the challenge of living in bushfire-prone city fringes? How can Australia most effectively farm and preserve our precious soil? How can Australia understand the Great Barrier Reef? No single discipline can answer these questions; addressing these challenges requires data from a range of sources and disciplines. Research data that is well organised and available allows research to make substantial contributions vital to Australia's future. For example, by drawing upon data that can be used by soil scientists, geneticists, plant scientists, climate analysts, and others, it is possible to conduct the multidisciplinary investigations necessary to tackle truly difficult and important challenges. The data might be provided by a Terrestrial Ecosystem Research Network OzFlux tower, insect observations recorded by a citizen through the Atlas of Living Australia, genetic sequencing of insects through a Bioplatforms Australia facility, weather observations by the Bureau of Meteorology, or historical data generated by CSIRO over many decades. Each will provide a part of the jigsaw, but the pieces must be able to be put together. This requires careful collection and organisation, which together deliver enormous value to the country.

    However, nationally significant problems are often tackled by international cooperation, so Australia's data assets enable Australian researchers to work with the best in the world, solving problems of both national and international significance. Australia's data assets and research data infrastructure provide Australian researchers with an excellent platform for international collaboration.

    Australia has world-leading research data infrastructure: our ability to store, compute, discover, explore, analyse and publish is the best in the world. The ability to capture data through a wide range of capabilities, from the Australian Synchrotron to Integrated Marine Observing System [IMOS: imos.org.au] ocean gliders, the combination of national storage and computation through the RDSI, NCI and Pawsey initiatives, the ability to publish and discover data through ANDS, and the ability to analyse and explore data through Nectar, together with state and local eResearch capabilities, highlights just some of the resources that Australian researchers are able to access. Importantly, their international partners are able to work with them using many of these resources.

    As well, Australian research organisations are assembling many resources to support their research. These include policies, procedures, practical infrastructure and, very importantly, people! The eResearch teams and the data librarians are always keen to help.

    This issue of Share highlights how the data resources of Australia are providing a very substantial national benefit, and how that benefit is being realised.

    Towards Interoperable Research Infrastructures for Environmental and Earth Sciences

    This open access book summarises the latest developments on data management in the EU H2020 ENVRIplus project, which brought together more than 20 environmental and Earth science research infrastructures into a single community. It provides readers with a systematic overview of the common challenges faced by research infrastructures and of how a ‘reference model guided’ engineering approach can be used to achieve greater interoperability among such infrastructures in the environmental and Earth sciences. The 20 contributions in this book are structured in 5 parts on the design, development, deployment, operation and use of research infrastructures. Part one provides an overview of the state of the art of research infrastructures and relevant e-Infrastructure technologies, part two discusses the reference model guided engineering approach, part three presents the software and tools developed for common data management challenges, part four demonstrates the software via several use cases, and part five discusses sustainability and future directions.
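
    The 'reference model guided' idea can be suggested in miniature: infrastructures keep their internal diversity but adapt to one shared contract, so cross-infrastructure tooling is written once. The sketch below is a loose illustration only; the subsystem names are simplified assumptions and do not reproduce the ENVRI Reference Model itself.

```python
from abc import ABC, abstractmethod

class ResearchInfrastructure(ABC):
    """Shared contract that each participating infrastructure adapts
    its internal systems to, whatever those look like inside."""

    @abstractmethod
    def acquire(self, instrument_id: str) -> bytes:
        """Pull raw observations from an instrument or archive."""

    @abstractmethod
    def curate(self, raw: bytes) -> dict:
        """Attach metadata and quality flags in an agreed schema."""

    @abstractmethod
    def publish(self, record: dict) -> str:
        """Register the record and return a persistent identifier."""

def harvest(infrastructures, instrument_id):
    """One harvester works across every compliant infrastructure."""
    return [ri.publish(ri.curate(ri.acquire(instrument_id)))
            for ri in infrastructures]
```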