
    Joint Astrophysics Nascent Universe Satellite: utilizing GRBs as high-redshift probes

    The Joint Astrophysics Nascent Universe Satellite (JANUS) is a multiwavelength cosmology mission designed to address fundamental questions about the cosmic dawn. It has three primary science objectives: (1) measure the massive star formation rate over 5 ≤ z ≤ 12 by discovering and observing high-z gamma-ray bursts (GRBs) and their afterglows, (2) enable detailed studies of the history of reionization and metal enrichment in the early Universe, and (3) map the growth of the first supermassive black holes by discovering and observing the brightest quasars at z ≥ 6. A rapidly slewing spacecraft and three science instruments – the X-ray Coded Aperture Telescope (XCAT), the Near InfraRed Telescope (NIRT), and the GAmma-ray Transient Experiment for Students (GATES) – make up the JANUS observatory and are responsible for realizing the three primary science objectives. The XCAT (0.5–20 keV) is a wide-field-of-view instrument responsible for detecting and localizing ∼60 z ≥ 5 GRBs, including ∼8 z ≥ 8 GRBs, during a 2-year mission. The NIRT (0.7–1.7 µm) refines the GRB positions and provides rapid (≤ 30 min) redshift information to the astronomical community. Concurrently, the NIRT performs a 20,000 deg² survey of the extragalactic sky, discovering and localizing ∼300 z ≥ 6 quasars, including ∼50 at z ≥ 7, over a two-year period. The GATES provides high-energy (15 keV–1.0 MeV) spectroscopy as well as 60–500 keV polarimetry of bright GRBs. Here we outline the JANUS instrumentation and the mission science motivations

    Structure and hydration of polyvinylpyrrolidone-hydrogen peroxide

    The structure of the commercially important polyvinylpyrrolidone-hydrogen peroxide complex can be understood by reference to the co-crystal structure of a hydrogen peroxide complex and its mixed hydrates of a two-monomer unit model compound, bisVP·2H2O2. The mixed hydrates involve selective water substitution into one of the two independent hydrogen peroxide binding sites

    A Review of Commercial and Research Cluster Management Software

    In the past decade there has been a dramatic shift from mainframe or ‘host-centric’ computing to a distributed ‘client-server’ approach. In the next few years this trend is likely to continue with further shifts towards ‘network-centric’ computing becoming apparent. All these trends were set in motion by the invention of the mass-reproducible microprocessor by Ted Hoff of Intel some twenty-odd years ago. The present generation of RISC microprocessors is now more than a match for mainframes in terms of cost and performance. The long-foreseen day when collections of RISC microprocessors assembled together as a parallel computer could outperform the vector supercomputers has finally arrived. Such high-performance parallel computers incorporate proprietary interconnection networks allowing low-latency, high-bandwidth inter-processor communications. However, for certain types of applications such interconnect optimization is unnecessary and conventional LAN technology is sufficient. This has led to the realization that clusters of high-performance workstations can be realistically used for a variety of applications, either to replace mainframes, vector supercomputers and parallel computers or to better manage already installed collections of workstations. Whilst it is clear that ‘cluster computers’ have limitations, many institutions and companies are exploring this option. Software to manage such clusters is at an early stage of development and this report reviews the current state of the art. Cluster computing is a rapidly maturing technology that seems certain to play an important part in the ‘network-centric’ computing future


    Cliffbot Maestro

    Cliffbot Maestro permits teleoperation of remote rovers for field testing in extreme environments. The application user interface provides two sets of tools for operations: stereo image browsing and command generation

    Spatial Query for Planetary Data

    Science investigators need to quickly and effectively assess past observations of specific locations on a planetary surface. This innovation involves a location-based search technology that was adapted and applied to planetary science data to support a spatial query capability for mission operations software. High-performance location-based searching requires the use of spatial data structures for database organization. Spatial data structures are designed to organize datasets based on their coordinates in a way that is optimized for location-based retrieval. The particular spatial data structure that was adapted for planetary data search is the R+ tree
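The pruning idea behind such a spatial index can be shown with a small sketch. The fragment below is a hypothetical, simplified bounding-rectangle tree in Python (the class and function names are illustrative, not from the mission software); the actual R+ tree additionally partitions space into non-overlapping regions, but the core retrieval step is the same: skip every subtree whose bounding rectangle does not intersect the query region.

```python
class Rect:
    """Axis-aligned bounding rectangle in surface coordinates."""
    def __init__(self, xmin, ymin, xmax, ymax):
        self.xmin, self.ymin, self.xmax, self.ymax = xmin, ymin, xmax, ymax

    def intersects(self, other):
        # Two rectangles overlap iff they overlap on both axes.
        return (self.xmin <= other.xmax and other.xmin <= self.xmax and
                self.ymin <= other.ymax and other.ymin <= self.ymax)

class Leaf:
    """Leaf entry: one observation footprint plus its record id."""
    def __init__(self, rect, record):
        self.rect, self.record = rect, record

class Node:
    """Internal node: a rectangle covering all of its children."""
    def __init__(self, rect, children):
        self.rect, self.children = rect, children

def search(node, query):
    """Return all records whose footprints intersect the query rectangle."""
    if not node.rect.intersects(query):
        return []          # prune this whole subtree
    if isinstance(node, Leaf):
        return [node.record]
    hits = []
    for child in node.children:
        hits.extend(search(child, query))
    return hits

# Two observation footprints under one root:
root = Node(Rect(0, 0, 6, 6), [Leaf(Rect(0, 0, 2, 2), "obs-A"),
                               Leaf(Rect(5, 5, 6, 6), "obs-B")])
search(root, Rect(1, 1, 3, 3))   # → ["obs-A"]
```

With real datasets the tree is many levels deep, so each non-intersecting internal rectangle prunes a large fraction of the records without touching them.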

    Targeting and Localization for Mars Rover Operations

    A design and a partially developed application framework were presented for improving localization and targeting for surface spacecraft. The program has value for the Mars Science Laboratory mission, and has been delivered to support the Mars Exploration Rovers as part of the latest version of the Maestro science planning tool. It also has applications for future missions involving either surface-based or low-altitude atmospheric robotic vehicles. The targeting and localization solutions solve the problem of how to integrate localization estimate updates into operational planning tools, operational data product generalizations, and flight software by adding expanded flexibility to flight software, the operations data product pipeline, and operations planning tools based on coordinate frame updates during a planning cycle
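The coordinate-frame update at the heart of this approach can be sketched in a few lines. The example below is a hypothetical 2-D simplification in Python (the actual tools work with full 3-D site and rover frames): a target designated relative to the rover keeps its rover-frame coordinates, and its site-frame position is simply recomputed whenever the localization estimate of the rover's pose is revised during a planning cycle.

```python
import math

def rover_to_site(pose, point):
    """Transform a point from the rover frame to the site frame.
    pose = (x, y, heading) of the rover in the site frame, heading in radians."""
    x, y, th = pose
    px, py = point
    return (x + px * math.cos(th) - py * math.sin(th),
            y + px * math.sin(th) + py * math.cos(th))

# A target designated 2 m directly ahead of the rover:
target_rover = (2.0, 0.0)

old_pose = (10.0, 5.0, 0.0)               # initial localization estimate
new_pose = (10.5, 5.2, math.pi / 2)       # revised estimate after an update

rover_to_site(old_pose, target_rover)     # (12.0, 5.0)
rover_to_site(new_pose, target_rover)     # ≈ (10.5, 7.2)
```

The planning data products store `target_rover` unchanged; only the pose estimate moves, so a frame update mid-cycle never invalidates the designated target itself.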

    Phosphine-alkene ligand-mediated alkyl-alkyl and alkyl-halide elimination processes from palladium(II)

    N-Diphenylphosphino-7-aza-benzobicyclo[2.2.1]hept-2-ene (2) behaves as a chelating phosphine–alkene ligand for Pd0 and PdII, promoting direct alkyl–alkyl and indirect alkyl–halide reductive elimination reactions due to the stabilisation of the resulting bis(phosphine–alkene)Pd0 complex

    Educational Assessment of Medical Student Rotation in Emergency Ultrasound

    Background: Medical student ultrasound education is sparse. In 2002, we began the first medical student rotation in emergency ultrasound. Objective: To evaluate whether medical students can learn and retain sonographic skills during a two- or four-week elective. Methods: We gave students an exam on the first and last days of the rotation. Six months later, students took the exam a third time. A control group was used for comparison. Results: Over a 19-month period, we enrolled 45 students (25 on the two-week and 20 on the four-week elective). The four-week post-test score was significantly better than the two-week post-test score (81% vs. 72%, p=0.003). On the six-month exam, the four-week score remained significantly better than the two-week score (77% vs. 69%, p=0.008). The control group did not improve significantly. Conclusion: Medical students can learn bedside ultrasound interpretation with clinical integration and retain the knowledge six months later