
    GeoNEX: A Cloud Gateway for Near Real-time Processing of Geostationary Satellite Products

    The emergence of a new generation of geostationary satellite sensors provides land and atmosphere monitoring capabilities similar to MODIS and VIIRS, but with far greater temporal resolution (5-15 minutes). However, processing such large-volume, highly dynamic datasets requires computing capabilities that (1) better support data access and knowledge discovery for scientists; (2) provide resources to enable real-time processing for emergency response (wildfire, smoke, dust, etc.); and (3) provide reliable and scalable services for the broader user community. This paper presents an implementation of GeoNEX (Geostationary NASA-NOAA Earth Exchange) services that integrates scientific algorithms with Amazon Web Services (AWS) to provide near real-time monitoring (~5 minute latency) in a hybrid cloud-computing environment. It offers a user-friendly, manageable, and extendable interface and benefits from the scalability provided by AWS. Four use cases illustrate how to (1) search and access geostationary data; (2) configure computing infrastructure to enable near real-time processing; (3) disseminate research results, visualizations, and animations to concurrent users; and (4) use a Jupyter Notebook-like interface for data exploration and rapid prototyping. As an example of (3), the Wildfire Automated Biomass Burning Algorithm (WF_ABBA) was implemented on GOES-16 and -17 data to produce an active fire map every 5 minutes over the conterminous US. Details of the implementation strategies, architectures, and challenges of the use cases are discussed.
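
    Since the services are built on AWS, use case (1) can be illustrated with nothing more than anonymous S3 access: NOAA publishes GOES-16 imagery in the public `noaa-goes16` Open Data bucket. The bucket, product prefix, and date below are assumptions chosen for illustration; this is not the GeoNEX search interface itself.

    ```python
    # Minimal sketch of use case (1): finding GOES-16 ABI granules on AWS.
    # Assumes the public NOAA Open Data bucket "noaa-goes16"; this shows
    # plain anonymous S3 access, not the GeoNEX API.
    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    # ABI Level-2 full-disk cloud/moisture imagery; prefixes follow the
    # <product>/<year>/<day-of-year>/<hour>/ layout.
    prefix = "ABI-L2-CMIPF/2021/180/12/"
    resp = s3.list_objects_v2(Bucket="noaa-goes16", Prefix=prefix, MaxKeys=10)

    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])
    ```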

    Observations of the Hubble Deep Field South with the Infrared Space Observatory - II. Associations and star formation rates

    We present results from a deep mid-IR survey of the Hubble Deep Field South (HDF-S) region performed at 7 and 15 um with the CAM instrument on board ISO. We found reliable optical/near-IR associations for 32 of the 35 sources detected in this field by Oliver et al. (2002, Paper I): eight of them were identified as stars, one is definitely an AGN, a second seems likely to be an AGN too, while the remaining 22 appear to be normal spiral or starburst galaxies. Using model spectral energy distributions (SEDs) of similar galaxies, we compare methods for estimating the star formation rates (SFRs) in these objects, finding that an estimator based on integrated (3-1000 um) IR luminosity reproduces the model SFRs best. Applying this estimator to model fits to the SEDs of our 22 spiral and starburst galaxies, we find that they are forming stars at rates of ~1-100 M_sol/yr, with a median value of ~40 M_sol/yr, assuming an Einstein-de Sitter universe with a Hubble constant of 50 km/s/Mpc, and star formation taking place according to a Salpeter (1955) IMF across the mass range 0.1-100 M_sol. We split the redshift range 0.0<z<0.6 into two equal-volume bins to compute raw estimates of the star formation rate density contributed by these sources, assuming the same cosmology and IMF as above and computing errors based on estimated uncertainties in the SFRs of individual galaxies. We compare these results with other estimates of the SFR density made with the same assumptions, showing them to be consistent with the results of Flores et al. (1999) from their ISO survey of the CFRS 1415+52 field. However, the relatively small volume of our survey means that our SFR density estimates suffer from a large sampling variance, implying that our results, by themselves, do not place tight constraints on the global mean SFR density. Comment: Accepted for MNRAS. 23 pages, 10 figures (Figs. 4 & 6 included here as low-resolution JPEGs), LaTeX, uses mn, epsfig. Further information and full-resolution versions of Figs 4 & 6 available at http://astro.ic.ac.uk/hdfs (v2: full author list added).
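
    Since the preferred estimator is based on integrated IR luminosity, a rough numerical illustration is possible with the widely used Kennicutt (1998) calibration for a Salpeter IMF. The coefficient below is defined for the 8-1000 um luminosity, not the paper's 3-1000 um band, so it is a stand-in assumption rather than the paper's own fit.

    ```python
    # Hedged sketch: SFR from integrated IR luminosity via the
    # Kennicutt (1998) calibration, SFR = 4.5e-44 * L_IR [erg/s].
    # The paper's estimator uses the 3-1000um band; this 8-1000um
    # coefficient is an assumption, not the paper's calibration.
    L_SUN_ERG_S = 3.846e33  # solar luminosity in erg/s

    def sfr_from_lir(l_ir_lsun: float) -> float:
        """Star formation rate in M_sol/yr from L_IR in L_sol."""
        return 4.5e-44 * l_ir_lsun * L_SUN_ERG_S

    # A starburst with L_IR ~ 2.3e11 L_sol forms stars at ~40 M_sol/yr,
    # comparable to the median reported for the 22 galaxies above.
    print(f"{sfr_from_lir(2.3e11):.1f} M_sol/yr")
    ```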

    Reliable Communication in a Dynamic Network in the Presence of Byzantine Faults

    We consider the following problem: two nodes want to communicate reliably in a dynamic multihop network in which some nodes have been compromised and may behave in a totally arbitrary and unpredictable way. Such nodes are called Byzantine. We consider both the case where cryptography is available and the case where it is not. We prove the necessary and sufficient condition (that is, the weakest possible condition) to ensure reliable communication in this context. Our proof is constructive, as we provide Byzantine-resilient algorithms for reliable communication that are optimal with respect to our impossibility results. In a second part, we investigate the impact of our conditions in three case studies: participants interacting in a conference, robots moving on a grid, and agents in the subway. Our simulations indicate a clear benefit of using our algorithms for reliable communication in these contexts.
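
    One classical ingredient of Byzantine-resilient communication in multihop networks, in the cryptography-free case, is to route copies of a message over node-disjoint paths and let the receiver take a majority vote: with at most k Byzantine nodes, 2k+1 disjoint paths suffice. The sketch below illustrates only that voting step; the paper's conditions and algorithms for dynamic networks are more involved.

    ```python
    # Hedged sketch of majority voting over node-disjoint paths, a
    # classical technique for tolerating Byzantine relays; it is not
    # the paper's algorithm for dynamic networks.
    from collections import Counter

    def accept(copies: list[bytes], k: int) -> bytes | None:
        """Accept a message relayed identically on at least k+1 of the
        node-disjoint paths, tolerating up to k Byzantine relays."""
        value, count = Counter(copies).most_common(1)[0]
        return value if count >= k + 1 else None

    # One faulty relay (k=1) corrupts its copy; 2k+1 = 3 paths suffice.
    print(accept([b"attack", b"retreat", b"attack"], k=1))  # b'attack'
    ```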

    Asymmetric Distributed Trust

    Quorum systems are a key abstraction in distributed fault-tolerant computing for capturing trust assumptions. They can be found at the core of many algorithms for implementing reliable broadcast, shared memory, consensus, and other problems. This paper introduces asymmetric Byzantine quorum systems that model subjective trust: every process is free to choose which combinations of other processes it trusts and which ones it considers faulty. Asymmetric quorum systems strictly generalize standard Byzantine quorum systems, which have only one global trust assumption for all processes. This work also presents protocols that implement shared-memory and broadcast abstractions with processes prone to Byzantine faults and asymmetric trust. The model and protocols pave the way for realizing more elaborate algorithms with asymmetric trust.
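
    To make the contrast concrete, the toy sketch below places a standard threshold Byzantine quorum system (every quorum has ceil((n+f+1)/2) members) next to a subjective one in which a single process restricts its trust to a smaller group. The specific quorum choices and the pairwise-intersection check are illustrative assumptions, not the paper's consistency conditions.

    ```python
    # Toy contrast between one global Byzantine quorum system and
    # per-process (asymmetric) quorums. The quorum choices here are
    # illustrative, not the paper's consistency conditions.
    import math
    from itertools import combinations

    N, F = 4, 1
    PROCS = range(N)
    Q_SIZE = math.ceil((N + F + 1) / 2)  # threshold quorum size: 3 of 4

    # Standard model: one quorum collection shared by everyone.
    global_quorums = {frozenset(q) for q in combinations(PROCS, Q_SIZE)}

    # Asymmetric trust: process 0 distrusts process 3 and only accepts
    # the quorum {0, 1, 2}; the others keep the global assumption.
    subjective = {p: global_quorums for p in PROCS}
    subjective[0] = {frozenset({0, 1, 2})}

    def quorums_intersect(qs_a, qs_b):
        """Pairwise quorum intersection, a necessary condition for two
        processes' subjective quorums to yield consistent decisions."""
        return all(qa & qb for qa in qs_a for qb in qs_b)

    print(quorums_intersect(subjective[0], subjective[1]))  # True
    ```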

    The CDF Data Handling System

    The Collider Detector at Fermilab (CDF) records proton-antiproton collisions at a center-of-mass energy of 2.0 TeV at the Tevatron collider. A new collider run, Run II, of the Tevatron started in April 2001. Increased luminosity will result in about 1 PB of data recorded on tape in the next two years. Currently the CDF experiment has about 260 TB of data stored on tape, including raw and reconstructed data and their derivatives. Data storage and retrieval are managed by the CDF Data Handling (DH) system. This system has been designed to accommodate the increased demands of the Run II environment and has proven robust in providing a reliable flow of data from the detector to the end user. This paper gives an overview of the CDF Run II Data Handling system, which has evolved significantly over the course of this year, and outlines the future direction of the system. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 7 pages, LaTeX, 4 EPS figures, PSN THKT00.

    Cloud Storage and Bioinformatics in a private cloud deployment: Lessons for Data Intensive research

    This paper describes service portability for a private cloud deployment, including a detailed case study of Cloud Storage and bioinformatics services developed as part of the Cloud Computing Adoption Framework (CCAF). Our Cloud Storage design and deployment is based on Storage Area Network (SAN) technologies; we describe its functionality, technical implementation, architecture, and user support. Experiments for data services (backup automation, data recovery, and data migration) were performed, and the results confirm that backup automation completes swiftly and is reliable for data-intensive research. The data recovery results show that execution time is proportional to the quantity of recovered data, but that the failure rate increases exponentially. The data migration results likewise show execution time proportional to the disk volume of migrated data, with the failure rate again increasing exponentially. In addition, the benefits of CCAF are illustrated with several bioinformatics examples, including tumour modelling, brain imaging, insulin molecules, and simulations for medical training. The Cloud Storage solution described here offers cost reduction, time savings, and user friendliness.
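
    The reported recovery trends (time linear in data volume, failure rate growing exponentially) can be summarized with a two-parameter fit of each form. The sample points below are hypothetical placeholders; only the functional forms come from the abstract.

    ```python
    # Illustrative fit of the reported data-recovery trends. All numbers
    # here are hypothetical; only the linear-time / exponential-failure
    # functional forms come from the paper.
    import numpy as np

    volumes = np.array([10.0, 50.0, 100.0, 200.0])  # GB recovered (assumed)
    times = np.array([4.0, 21.0, 40.0, 82.0])       # minutes (assumed)
    failures = np.array([0.2, 0.5, 1.4, 9.0])       # percent (assumed)

    # time ~ a * volume  (linear)
    a = np.polyfit(volumes, times, 1)[0]

    # failure ~ c * exp(b * volume)  (fit as a line in log space)
    b, log_c = np.polyfit(volumes, np.log(failures), 1)

    print(f"time ~ {a:.2f} min/GB * volume")
    print(f"failure ~ {np.exp(log_c):.2f}% * exp({b:.3f} * volume)")
    ```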