
    GeoNEX: A Cloud Gateway for Near Real-time Processing of Geostationary Satellite Products

    The emergence of a new generation of geostationary satellite sensors provides land and atmosphere monitoring capabilities similar to MODIS and VIIRS with far greater temporal resolution (5-15 minutes). However, processing such large-volume, highly dynamic datasets requires computing capabilities that (1) better support data access and knowledge discovery for scientists; (2) provide resources to enable real-time processing for emergency response (wildfire, smoke, dust, etc.); and (3) provide reliable and scalable services for the broader user community. This paper presents an implementation of GeoNEX (Geostationary NASA-NOAA Earth Exchange) services that integrate scientific algorithms with Amazon Web Services (AWS) to provide near real-time monitoring (~5 minute latency) capability in a hybrid cloud-computing environment. It offers a user-friendly, manageable, and extendable interface and benefits from the scalability provided by AWS. Four use cases are presented to illustrate how to (1) search and access geostationary data; (2) configure computing infrastructure to enable near real-time processing; (3) disseminate and utilize research results, visualizations, and animations to concurrent users; and (4) use a Jupyter Notebook-like interface for data exploration and rapid prototyping. As an example of (3), the Wildfire Automated Biomass Burning Algorithm (WF_ABBA) was implemented on GOES-16 and -17 data to produce an active fire map every 5 minutes over the conterminous US. Details of the implementation strategies, architectures, and challenges of the use cases are discussed.
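
    A minimal sketch of use case (1), searching and accessing geostationary data: the snippet below lists GOES-16 granules from the public noaa-goes16 bucket on the AWS Open Data registry using anonymous S3 access. The bucket name, product prefix, and directory layout are assumptions for illustration; this is not the GeoNEX API itself.

        # Sketch: list GOES-16 ABI granules from the public AWS bucket (anonymous access).
        # Bucket name and prefix layout are assumptions, not part of the GeoNEX paper.
        import boto3
        from botocore import UNSIGNED
        from botocore.config import Config

        # Anonymous client -- the archive is assumed to be publicly readable without credentials.
        s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

        # Hypothetical prefix: ABI Level-2 Fire/Hot Spot product (CONUS), year/day-of-year/hour.
        prefix = "ABI-L2-FDCC/2020/001/00/"
        response = s3.list_objects_v2(Bucket="noaa-goes16", Prefix=prefix)
        for obj in response.get("Contents", []):
            print(obj["Key"], obj["Size"])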

    Monolayer Excitonic Laser

    Recently, two-dimensional (2D) materials have opened a new paradigm for fundamental physics explorations and device applications. Unlike gapless graphene, monolayer transition metal dichalcogenide (TMDC) offers new optical functionalities for next-generation ultra-compact electronic and opto-electronic devices. When TMDC crystals are thinned down to monolayers, they undergo an indirect-to-direct bandgap transition, making them outstanding 2D semiconductors. A unique electron valley degree of freedom, strong light-matter interactions, and excitonic effects have been observed. Enhancement of spontaneous emission has been reported in TMDC monolayers integrated with photonic crystal and distributed Bragg reflector microcavities. However, coherent light emission from a 2D monolayer TMDC has not been demonstrated, mainly because an atomic membrane provides a limited material gain volume and lacks optical mode confinement. Here, we report the first realization of a 2D excitonic laser by embedding monolayer tungsten disulfide (WS2) in a microdisk resonator. Using a whispering gallery mode (WGM) resonator with a high quality factor and optical confinement, we observed bright excitonic lasing at visible wavelengths. The Si3N4/WS2/HSQ sandwich configuration provides strong feedback and mode overlap with the monolayer gain. This demonstration of a 2D excitonic laser marks a major step towards 2D on-chip optoelectronics for high-performance optical communication and computing applications. Comment: 15 pages, 4 figures

    Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond

    In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones. Comment: 44 pages. 1 of USQCD whitepapers

    ATLAS Data Challenge 1

    In 2002 the ATLAS experiment started a series of Data Challenges (DC) whose goals are to validate the Computing Model, the complete software suite, and the data model, and to ensure the correctness of the technical choices to be made. A major feature of the first Data Challenge (DC1) was the preparation and deployment of the software required for the production of large event samples for the High Level Trigger (HLT) and physics communities, and the production of those samples as a world-wide distributed activity. The first phase of DC1 was run during summer 2002 and involved 39 institutes in 18 countries. More than 10 million physics events and 30 million single-particle events were fully simulated. Over a period of about 40 calendar days, 71000 CPU-days were used, producing 30 Tbytes of data in about 35000 partitions. In the second phase the next processing step was performed with the participation of 56 institutes in 21 countries (~ 4000 processors used in parallel). The basic elements of the ATLAS Monte Carlo production system are described. We also present how the software suite was validated and how the participating sites were certified. These productions were already partly performed using different flavours of Grid middleware at ~ 20 sites. Comment: 10 pages; 3 figures; CHEP03 Conference, San Diego; Reference MOCT00

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand on the 2025 timescale is at least two orders of magnitude greater than what is currently available, and in some cases more. 2) The growth rate of data produced by simulations is overwhelming the current ability, of both facilities and researchers, to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems. Comment: 77 pages, 13 figures; draft report, subject to further revision