
    Revisiting Event Horizon Finders

    Event horizons are the defining physical features of black hole spacetimes, and are of considerable interest in studying black hole dynamics. Here, we reconsider three techniques to localise event horizons in numerical spacetimes: integrating geodesics, integrating a surface, and integrating a level-set of surfaces over a volume. We implement the first two techniques and find straightforward integration of geodesics backward in time to be the most robust. We find that the exponential rate of approach of a null surface towards the event horizon of a spinning black hole equals the surface gravity of the black hole. In head-on mergers we are able to track the quasi-normal ringing of the merged black hole through seven oscillations, covering a dynamic range of about 10^5. Both at late times (when the final black hole has settled down) and at early times (before the merger), the apparent horizon is found to be an excellent approximation of the event horizon. In the head-on binary black hole merger, only some of the future null generators of the horizon are found to start from past null infinity; the others approach the event horizons of the individual black holes at times far before the merger. Comment: 30 pages, 15 figures, revision
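
    As a concrete illustration of the backward-geodesic technique, the sketch below integrates outgoing radial null rays backward in time for a Schwarzschild black hole in ingoing Eddington-Finkelstein coordinates, where dr/dv = (1 - 2M/r)/2; integrated backward in v, every such ray converges exponentially onto r = 2M at a rate equal to the surface gravity κ = 1/(4M). This is a minimal toy in a known spacetime, not the paper's implementation for numerical spacetimes.

```python
import numpy as np
from scipy.integrate import solve_ivp

M = 1.0                          # black hole mass (geometric units, G = c = 1)
kappa = 1.0 / (4.0 * M)          # Schwarzschild surface gravity

# Outgoing radial null geodesics in ingoing Eddington-Finkelstein
# coordinates: dr/dv = (1 - 2M/r)/2.
def rhs(v, r):
    return 0.5 * (1.0 - 2.0 * M / r)

# Integrate backward in v from a ray that starts outside the horizon;
# it locks exponentially onto the event horizon at r = 2M.
sol = solve_ivp(rhs, (0.0, -60.0 * M), [2.5 * M],
                dense_output=True, rtol=1e-10, atol=1e-12)

v = np.linspace(0.0, -60.0 * M, 200)
delta = sol.sol(v)[0] - 2.0 * M       # coordinate distance from the horizon

# At late (backward) times, log(delta) is linear in v with slope kappa.
rate = np.polyfit(v[-100:], np.log(delta[-100:]), 1)[0]
print(f"measured rate = {rate:.4f}, surface gravity = {kappa:.4f}")
```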

    Full linear multistep methods as root-finders

    Root-finders based on full linear multistep methods (LMMs) use previous function values, derivatives and root estimates to iteratively find a root of a nonlinear function. As ODE solvers, full LMMs are typically not zero-stable. However, used as root-finders, the interpolation points are convergent, so such stability issues are circumvented. A general analysis is provided based on inverse polynomial interpolation, which is used to prove a fundamental barrier on the convergence rate of any LMM-based method. We show, using numerical examples, that full LMM-based methods perform excellently. Finally, we also provide a robust implementation based on Brent's method that is guaranteed to converge. Comment: 20 pages, 1 figure
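
    To give a flavor of the approach, here is a hedged sketch of one representative multistep iteration: inverse cubic Hermite interpolation through the last two iterates, using function values, derivatives, and root estimates, evaluated at y = 0 to produce the next estimate. This is an illustration in the spirit of LMM-based root-finding, not the paper's exact scheme; for this two-point variant the standard convergence order is 1 + sqrt(3) ≈ 2.73.

```python
# Inverse cubic Hermite interpolation: fit x as a cubic in y through
# (y0, x0) and (y1, x1) with slopes dx/dy = 1/f'(x), then evaluate at
# y = 0.  A sketch of one LMM-style root-finding step, not the paper's
# exact method.
def inverse_hermite_step(x0, y0, d0, x1, y1, d1):
    # Newton divided differences on the doubled nodes [y0, y0, y1, y1]
    f01 = d0                                 # dx/dy at y0
    f12 = (x1 - x0) / (y1 - y0)
    f23 = d1                                 # dx/dy at y1
    f012 = (f12 - f01) / (y1 - y0)
    f123 = (f23 - f12) / (y1 - y0)
    f0123 = (f123 - f012) / (y1 - y0)
    t0, t1 = -y0, -y1                        # Newton form evaluated at y = 0
    return x0 + f01 * t0 + f012 * t0 * t0 + f0123 * t0 * t0 * t1

f = lambda x: x**3 - 2.0                     # root at 2**(1/3)
fp = lambda x: 3.0 * x**2
x0, x1 = 1.0, 2.0
for _ in range(5):
    if f(x1) == 0.0 or f(x1) == f(x0):       # converged to machine precision
        break
    x0, x1 = x1, inverse_hermite_step(x0, f(x0), 1.0 / fp(x0),
                                      x1, f(x1), 1.0 / fp(x1))
    print(f"x = {x1:.15f}  error = {abs(x1 - 2.0**(1/3)):.3e}")
```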

    An excision scheme for black holes in constrained evolution formulations: spherically symmetric case

    Excision techniques are used to deal with black holes in numerical simulations of the Einstein equations; they consist of removing a topological sphere containing the physical singularity from the numerical domain and applying appropriate boundary conditions at the excised surface instead. In this work we present recent developments of this technique for constrained formulations of the Einstein equations and for spherically symmetric spacetimes. We present a new set of boundary conditions to apply to the elliptic system in the fully-constrained formalism of Bonazzola et al. (2004), at an arbitrary coordinate sphere inside the apparent horizon. Analytical properties of this system of boundary conditions are studied and, under some assumptions, an exponential convergence toward the stationary solution is exhibited for the vacuum spacetime. This is verified in numerical examples, together with the applicability to the accretion of a scalar field onto a Schwarzschild black hole. We also present the successful use of the excision technique in the collapse of a neutron star to a black hole, when excision is switched on during the simulation, after the formation of the apparent horizon. This allows for the accretion of the matter remaining outside the excision surface and for the stable long-term evolution of the newly formed black hole. Comment: 14 pages, 9 figures. New section added and changes included according to the published article
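
    As a toy illustration of the excision idea (not the paper's boundary conditions, which apply to the full constrained elliptic system), the sketch below solves a spherically symmetric flat Laplace equation for a conformal-factor-like variable ψ on a domain from which the singularity has been excised, with a Dirichlet condition on an excision sphere inside the horizon. For Schwarzschild in isotropic coordinates the exact solution ψ = 1 + M/(2r) is available for comparison; the grid parameters are arbitrary.

```python
import numpy as np

M = 1.0
r_exc, r_out, N = 0.4 * M, 20.0 * M, 400   # excision inside the horizon r = M/2

# Finite-difference solve of psi'' + (2/r) psi' = 0 on [r_exc, r_out]:
# the singularity at r = 0 is simply not part of the domain.
r = np.linspace(r_exc, r_out, N)
h = r[1] - r[0]
A = np.zeros((N, N))
b = np.zeros(N)
A[0, 0], b[0] = 1.0, 1.0 + M / (2.0 * r_exc)      # excision-surface condition
A[-1, -1], b[-1] = 1.0, 1.0 + M / (2.0 * r_out)   # outer boundary condition
for i in range(1, N - 1):
    A[i, i - 1] = 1.0 / h**2 - 1.0 / (h * r[i])
    A[i, i] = -2.0 / h**2
    A[i, i + 1] = 1.0 / h**2 + 1.0 / (h * r[i])
psi = np.linalg.solve(A, b)

print(f"max error vs analytic: {np.max(np.abs(psi - (1.0 + M / (2.0 * r)))):.2e}")
```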

    Designing Integrated Conflict Management Systems: Guidelines for Practitioners and Decision Makers in Organizations

    A committee of the ADR (alternative dispute resolution) in the Workplace Initiative of the Society of Professionals in Dispute Resolution (SPIDR) prepared this document for employers, managers, labor representatives, employees, civil and human rights organizations, and others who interact with organizations. In this document we explain why organizations should consider developing integrated conflict management systems to prevent and resolve conflict, and we provide practical guidelines for designing and implementing such systems. The principles identified in this document can also be used to manage external conflict with customers, clients, and the public. Indeed, we recommend that organizations focus simultaneously on preventing and managing both internal and external conflict. SPIDR recognizes that an integrated conflict management system will work only if designed with input from users and decision makers at all levels of the organization. Each system must be tailored to fit the organization's needs, circumstances, and culture. In developing these systems, experimentation is both necessary and healthy. We hope that this document will provide guidance, encourage experimentation, and contribute to the evolving understanding of how best to design and implement these systems.

    Direct Detection of Planets Orbiting Large Angular Diameter Stars: Sensitivity of an Internally Occulting Space-based Coronagraph

    High-contrast imaging observations of large angular diameter stars enable complementary science questions to be addressed compared to the baseline goals of proposed missions like the Terrestrial Planet Finder-Coronagraph, New Worlds Observer, and others. Such targets, however, present a practical problem in that the finite stellar size results in unwanted starlight reaching the detector, which degrades contrast. In this paper, we quantify the sensitivity, in terms of contrast, of an internally occulting, space-based coronagraph as a function of stellar angular diameter, from unresolved dwarfs to the largest evolved stars. Our calculations show that an assortment of band-limited image masks can accommodate a diverse set of observations to help maximize mission scientific return. We discuss two applications based on the results: the spectro-photometric study of planets already discovered with the radial velocity technique to orbit evolved stars, which we elucidate with the example of Pollux b, and the direct detection of planets orbiting our closest neighbor, α Centauri, whose primary component is on the main sequence but subtends an appreciable angle on the sky. It is recommended that similar trade studies be performed with other promising internal, external, and hybrid occulter designs for comparison, as there is relevance to a host of interesting topics in planetary science and related fields.
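
    For a rough feel of the scaling involved (a toy model only; the paper's sensitivities come from full diffraction calculations), note that near the optical axis the intensity leakage of a band-limited mask of order n grows as (θ/θ_mask)^n, so averaging over a uniform stellar disk of angular radius R gives a contrast floor of about 2/(n+2) · (R/θ_mask)^n. The sketch below evaluates this for an assumed mask scale; θ_mask and the stellar diameters are hypothetical.

```python
# Toy contrast floor from finite stellar size behind a band-limited mask
# of order n: average the near-axis leakage (theta/theta_mask)**n over a
# uniform stellar disk of angular radius R.
def contrast_floor(R_mas, theta_mask_mas, n):
    return 2.0 / (n + 2.0) * (R_mas / theta_mask_mas) ** n

theta_mask = 50.0                           # mas, assumed mask scale
for diam in (1.0, 10.0, 30.0):              # stellar angular diameters in mas
    for n in (2, 4):
        c = contrast_floor(diam / 2.0, theta_mask, n)
        print(f"diameter {diam:5.1f} mas, order {n}: leakage ~ {c:.1e}")
```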

    PeakNet: Bragg peak finding in X-ray crystallography experiments with U-Net

    Serial crystallography at X-ray free electron laser (XFEL) sources has recently made tremendous progress in achieving high data rates. While this development offers the potential to enable novel scientific investigations, such as imaging molecular events at logarithmic timescales, it also poses challenges for real-time data analysis, which requires some degree of data reduction so that only those features or images pertaining to the science are saved to disk. If data reduction is not effective, it could directly result in a substantial increase in facility budgetary requirements, or even hinder the use of ultra-high repetition imaging techniques by making data analysis unwieldy. A further challenge is providing users with real-time feedback derived from that analysis. In the context of serial crystallography, the initial and critical step in real-time data analysis is finding X-ray Bragg peaks in diffraction images. To tackle this challenge, we present PeakNet, a Bragg peak finder that utilizes neural networks and runs about four times faster than the Psocake peak finder, while delivering significantly better indexing rates and a comparable number of indexed events. We formulated peak finding as a semantic segmentation problem, implemented as a classical U-Net architecture. A key advantage of PeakNet is its ability to scale linearly with data volume, making it well-suited for real-time serial crystallography data analysis at high data rates.
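
    For readers unfamiliar with the setup, a minimal U-Net-style segmentation network of the kind described might look like the PyTorch sketch below; the depth, channel counts, and two-class output are illustrative, not PeakNet's actual configuration.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Peak finding as segmentation: per-pixel peak/background scores."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bott = block(32, 64)
        self.up2, self.dec2 = nn.ConvTranspose2d(64, 32, 2, stride=2), block(64, 32)
        self.up1, self.dec1 = nn.ConvTranspose2d(32, 16, 2, stride=2), block(32, 16)
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                    # full-resolution features
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bott(self.pool(e2))         # 1/4 resolution bottleneck
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                 # per-pixel class scores

image = torch.randn(1, 1, 128, 128)          # fake single-channel diffraction image
peak_mask = TinyUNet()(image).argmax(dim=1)  # per-pixel peak/background labels
```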

    Dark matter haloes determine the masses of supermassive black holes

    The energy and momentum deposited by the radiation from accretion onto the supermassive black holes (BHs) that reside at the centres of virtually all galaxies can halt or even reverse gas inflow, providing a natural mechanism for supermassive BHs to regulate their growth and to couple their properties to those of their host galaxies. However, it remains unclear whether this self-regulation occurs on the scale at which the BH is gravitationally dominant, on that of the stellar bulge, of the galaxy, or of the entire dark matter halo. To answer this question, we use self-consistent simulations of the co-evolution of the BH and galaxy populations that reproduce the observed correlations between the masses of the BHs and the properties of their host galaxies. We first confirm unambiguously that the BHs regulate their growth: the amount of energy that the BHs inject into their surroundings remains unchanged when the fraction of the accreted rest-mass energy that is injected is varied by four orders of magnitude. The BHs simply adjust their masses so as to inject the same amount of energy. We then use simulations with artificially reduced star formation rates to demonstrate explicitly that the BH mass is not set by the stellar mass. Instead, we find that it is determined by the mass of the dark matter halo, with a secondary dependence on the halo concentration, of the form that would be expected if the halo binding energy were the fundamental property that controls the mass of the BH. We predict that the logarithmic slope of the relation between dark matter halo mass and black hole mass is 1.55+/-0.05 and that the scatter around the mean relation in part reflects the scatter in the halo concentration-mass relation. Comment: MNRAS accepted. 6 pages, 3 figures. v2: Minor changes in response to referee comments
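
    A back-of-the-envelope check of the binding-energy interpretation (a sketch, assuming the BH mass simply tracks the halo binding energy): with the virial radius scaling as R_vir ∝ M_h^{1/3} at fixed redshift,

```latex
% If M_BH tracks the halo binding energy E_bind ~ G M_h^2 / R_vir * f(c),
% with R_vir \propto M_h^{1/3} at fixed redshift:
\[
  M_{\rm BH} \propto E_{\rm bind} \propto M_h^{5/3}\, f(c)
  \quad\Longrightarrow\quad
  \frac{\mathrm{d}\log M_{\rm BH}}{\mathrm{d}\log M_h}
    = \frac{5}{3} + \frac{\mathrm{d}\log f(c)}{\mathrm{d}\log M_h},
\]
% where f(c) encodes the dependence on the halo concentration c.  Since c
% decreases slowly with halo mass, the effective slope falls below
% 5/3 \approx 1.67, consistent with the reported 1.55 +/- 0.05.
```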

    A Deep Neural Network for Pixel-Level Electromagnetic Particle Identification in the MicroBooNE Liquid Argon Time Projection Chamber

    We have developed a convolutional neural network (CNN) that can make a pixel-level prediction of objects in image data recorded by a liquid argon time projection chamber (LArTPC) for the first time. We describe the network design, training techniques, and software tools developed to train this network. The goal of this work is to develop a complete deep-neural-network-based data reconstruction chain for the MicroBooNE detector. We show the first demonstration of the network's validity on real LArTPC data using MicroBooNE collection plane images. The demonstration is performed on stopping-muon and $\nu_\mu$ charged-current neutral-pion data samples.
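
    As a schematic of what pixel-level prediction means in practice (the layer sizes and three-class choice are illustrative assumptions, not the MicroBooNE network), a fully convolutional network maps a wire-versus-time image to per-pixel class probabilities:

```python
import torch
import torch.nn as nn

# Fully convolutional sketch: every pixel of a LArTPC image receives a
# probability for each of three hypothetical classes
# (background / track-like / shower-like).
fcn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 1),                 # 3 per-pixel class scores
)

image = torch.randn(1, 1, 512, 512)      # fake collection-plane image
probs = fcn(image).softmax(dim=1)        # per-pixel class probabilities
labels = probs.argmax(dim=1)             # pixel-level particle ID map
```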

    Reconstruction for Liquid Argon TPC Neutrino Detectors Using Parallel Architectures

    Neutrinos are particles that interact rarely, so identifying them requires large detectors that produce large volumes of data. Processing this data with the available computing power is becoming more difficult as the detectors increase in size to reach their physics goals. In liquid argon time projection chambers (TPCs), the charged particles from neutrino interactions produce ionization electrons that drift in an electric field towards a series of collection wires, and the signal on the wires is used to reconstruct the interaction. The MicroBooNE detector currently collecting data at Fermilab has 8000 wires, and planned future experiments like DUNE will have 100 times more, which means that the time required to reconstruct an event will scale accordingly. Modernization of liquid argon TPC reconstruction code, including vectorization, parallelization, and code portability to GPUs, will help to mitigate these challenges. The liquid argon TPC hit finding algorithm within the LArSoft framework, used across multiple experiments, has been vectorized and parallelized, speeding up the algorithm by roughly a factor of ten in a standalone version on Intel architectures. This new version has been incorporated back into LArSoft so that it can be used generally. These methods will also be applied to other low-level reconstruction algorithms for the wire signals, such as deconvolution. The applications and performance of this modernized liquid argon TPC wire reconstruction will be presented.
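
    To illustrate what vectorizing the hit finder buys (a toy stand-in for the candidate-pulse stage, not the LArSoft algorithm itself), the sketch below applies the threshold and local-maximum tests to all wires at once with numpy array operations instead of looping wire by wire and tick by tick; the array sizes and threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_wires, n_ticks = 800, 3200             # illustrative sizes
waveforms = rng.normal(0.0, 1.0, (n_wires, n_ticks))

# Inject one fake Gaussian pulse per wire.
t = np.arange(n_ticks)
centers = rng.integers(100, n_ticks - 100, n_wires)
waveforms += 10.0 * np.exp(-0.5 * ((t - centers[:, None]) / 4.0) ** 2)

# Vectorized candidate search: above threshold AND a local maximum,
# evaluated for every wire simultaneously.
threshold = 5.0
interior = waveforms[:, 1:-1]
peaks = ((interior > threshold)
         & (interior >= waveforms[:, :-2])
         & (interior > waveforms[:, 2:]))
wire_idx, tick_idx = np.nonzero(peaks)
print(f"found {len(wire_idx)} hit candidates on {n_wires} wires")
```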