
    BL Lac Objects in the Synchrotron Proton Blazar Model

    We calculate the spectral energy distribution (SED) of electromagnetic radiation and the spectrum of high energy neutrinos from BL Lac objects in the context of the Synchrotron Proton Blazar Model. In this model, the high energy hump of the SED is due to accelerated protons, while most of the low energy hump is due to synchrotron radiation by co-accelerated electrons. To accelerate protons to sufficiently high energies to produce the high energy hump, rather high magnetic fields are required. Assuming reasonable emission region volumes and Doppler factors, we then find that in low-frequency peaked BL Lacs (LBLs), which have higher luminosities than high-frequency peaked BL Lacs (HBLs), there is a significant contribution to the high frequency hump of the SED from pion photoproduction and subsequent cascading, including synchrotron radiation by muons. In contrast, in HBLs we find that the high frequency hump of the SED is dominated by proton synchrotron radiation. We are able to model the SED of typical LBLs and HBLs, and to model the famous 1997 flare of Markarian 501. We also calculate the expected neutrino output of typical BL Lac objects, and estimate the diffuse neutrino intensity due to all BL Lacs. Because pion photoproduction is inefficient in HBLs, as protons lose energy predominantly by synchrotron radiation, the contribution of LBLs dominates the diffuse neutrino intensity. We suggest that nearby LBLs may well be observable with future high-sensitivity TeV gamma-ray telescopes.
    Comment: 33 pages, 20 figures. Astropart. Phys., accepted
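    A rough back-of-the-envelope check of the proton-synchrotron picture can be scripted. The sketch below is not from the paper; it applies the standard synchrotron characteristic-frequency formula with illustrative values for the magnetic field B, the proton Lorentz factor gamma_p, and the Doppler factor delta:

```python
# Minimal sketch (illustrative values, not the paper's parameters): estimate
# the observed proton-synchrotron peak energy for a blazar emission region.
import math

e   = 4.803e-10   # electron charge [esu]
m_p = 1.673e-24   # proton mass [g]
c   = 2.998e10    # speed of light [cm/s]
h   = 6.626e-27   # Planck constant [erg s]

def proton_sync_peak_eV(B_gauss, gamma_p, delta=10.0):
    """Characteristic synchrotron photon energy of a proton, Doppler boosted."""
    nu_c = 3.0 * e * B_gauss * gamma_p**2 / (4.0 * math.pi * m_p * c)  # [Hz]
    E_erg = h * nu_c * delta       # boost to the observer frame
    return E_erg / 1.602e-12       # erg -> eV

# e.g. B = 30 G, gamma_p = 1e9 (E_p ~ 1e18 eV), delta = 10 -> peak in the GeV range
print(f"{proton_sync_peak_eV(30.0, 1e9):.2e} eV")
```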

    Neutrino Background Flux from Sources of Ultrahigh-Energy Cosmic-Ray Nuclei

    Motivated by Pierre Auger Observatory results favoring a heavy nuclear composition for ultrahigh-energy (UHE) cosmic rays, we investigate implications for the cumulative neutrino background. The requirement that nuclei not be photodisintegrated constrains their interactions in sources, therefore limiting neutrino production via photomeson interactions. Assuming a $dN_{\rm CR}/dE_{\rm CR} \propto E_{\rm CR}^{-2}$ injection spectrum and photodisintegration via the giant dipole resonance, the background flux of neutrinos is lower than $E_\nu^2 \Phi_\nu \sim 10^{-9}\,{\rm GeV}\,{\rm cm}^{-2}\,{\rm s}^{-1}\,{\rm sr}^{-1}$ if UHE nuclei ubiquitously survive in their sources. This is smaller than the analogous Waxman-Bahcall flux for UHE protons by about one order of magnitude, and is below the projected IceCube sensitivity. If IceCube detects a neutrino background, it could be due to other sources, e.g., hadronuclear interactions of lower-energy cosmic rays; if it does not, this supports our strong restrictions on the properties of sources of UHE nuclei.
    Comment: 7 pages, 3 figures
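    For orientation, the quoted bound can be compared numerically with the Waxman-Bahcall level. In the sketch below, the WB normalization (~2e-8 GeV cm^-2 s^-1 sr^-1) is an assumed, commonly quoted value, not a number from this paper:

```python
# Compare the nuclei-survival neutrino bound from the abstract with an
# assumed Waxman-Bahcall normalization for UHE protons.
E2_phi_nuclei = 1e-9   # GeV cm^-2 s^-1 sr^-1 (from the abstract)
E2_phi_WB     = 2e-8   # GeV cm^-2 s^-1 sr^-1 (assumed WB level)

print(f"nuclei bound / WB bound = {E2_phi_nuclei / E2_phi_WB:.2f}")

# For an E^-2 spectrum, E^2 * phi is flat, so the differential flux is:
def phi(E_GeV, E2_phi):
    return E2_phi / E_GeV**2   # [GeV^-1 cm^-2 s^-1 sr^-1]

for E in (1e6, 1e8):  # 1 PeV and 100 PeV
    print(f"phi({E:.0e} GeV) = {phi(E, E2_phi_nuclei):.2e} GeV^-1 cm^-2 s^-1 sr^-1")
```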

    Advanced process design for re-contouring using a time-domain dynamic material removal simulation

    The repair of components often requires the removal of excess weld material; this removal process is referred to as re-contouring. Re-contouring processes have to be designed individually for each case of damage to fulfil the high quality requirements. Therefore, a prognosis of the machined surface topography is crucial. The material removal simulation introduced in this paper allows the prediction of process stability and surface topography for 5-axis ball end milling, including dynamic effects. Different process strategies for re-contouring of Ti-6Al-4V welds are examined. It is shown that selecting suitable process parameters can lead to high surface quality while maintaining productivity. © 2019 The Author(s)
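    As an illustration of the time-domain idea (a toy, not the authors' simulation), the sketch below removes material from a heightmap with a ball-end tool whose centre is perturbed by a one-degree-of-freedom free-vibration model; all parameters are invented for the example, and one feed step per time step dt is a deliberate simplification:

```python
# Toy heightmap ("dexel"-style) material removal for ball-end milling in the
# time domain. A 1-DOF oscillator stands in for tool dynamics; in a real
# simulation the vibration would be driven by the cutting forces.
import numpy as np

nx, ny, dx = 200, 50, 0.01          # grid: 2 mm x 0.5 mm, 10 um spacing [mm]
z = np.zeros((nx, ny))              # workpiece surface height [mm]
R, ap, f_step = 0.5, 0.1, 0.02      # tool radius, depth of cut, feed/step [mm]

# 1-DOF tool dynamics: m x'' + c x' + k x = 0 (free vibration, linear, so the
# mm scale of x is consistent)
m_t, c_t, k_t, dt = 1e-3, 0.5, 4e3, 1e-5   # kg, Ns/m, N/m, s
x_dyn, v_dyn = 1e-3, 0.0                   # initial tool deflection [mm]

for step in range(int(nx * dx / f_step)):
    # explicit Euler update of the tool vibration
    a = (-c_t * v_dyn - k_t * x_dyn) / m_t
    v_dyn += a * dt
    x_dyn += v_dyn * dt

    cx = step * f_step                       # tool centre along feed [mm]
    cz = -ap + R + x_dyn                     # centre height incl. vibration
    xs = (np.arange(nx) * dx - cx)[:, None]
    ys = (np.arange(ny) * dx - ny * dx / 2)[None, :]
    r2 = xs**2 + ys**2
    inside = r2 < R**2
    tool = np.where(inside, cz - np.sqrt(np.maximum(R**2 - r2, 0)), np.inf)
    z = np.minimum(z, tool)                  # Boolean removal: keep lower surface

print("peak-to-valley of machined surface:", z.max() - z.min(), "mm")
```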

    Reduced Order Modeling for Parameterized Time-Dependent PDEs using Spatially and Memory Aware Deep Learning

    We present a novel reduced order model (ROM) approach for parameterized time-dependent PDEs based on modern deep learning. The ROM is suitable for multi-query problems and is nonintrusive. It is divided into two distinct stages: a nonlinear dimensionality reduction stage that handles the spatially distributed degrees of freedom based on convolutional autoencoders, and a parameterized time-stepping stage based on memory-aware neural networks (NNs), specifically causal convolutional and long short-term memory NNs. Strategies to ensure generalization and stability are discussed. The methodology is tested on the heat equation, advection equation, and the incompressible Navier-Stokes equations, to show the variety of problems the ROM can handle.
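    A minimal sketch of such a two-stage ROM (PyTorch, with assumed layer sizes and placeholder data, not the authors' code): a convolutional autoencoder compresses each solution snapshot to a latent vector, and an LSTM advances the latent state in time conditioned on the PDE parameter mu:

```python
import torch
import torch.nn as nn

latent_dim, param_dim = 16, 1

class ConvAE(nn.Module):
    """Stage 1: nonlinear dimensionality reduction of the spatial field."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(                                  # 1 x 64 x 64 input
            nn.Conv2d(1, 8, 4, stride=2, padding=1), nn.ReLU(),   # -> 8 x 32 x 32
            nn.Conv2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),  # -> 16 x 16 x 16
            nn.Flatten(), nn.Linear(16 * 16 * 16, latent_dim))
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 16 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (16, 16, 16)),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1))
    def forward(self, u):
        return self.dec(self.enc(u))

class LatentStepper(nn.Module):
    """Stage 2: advance latent state z_t -> z_{t+1} given parameter mu."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim + param_dim, 64, batch_first=True)
        self.head = nn.Linear(64, latent_dim)
    def forward(self, z_seq, mu):
        mu_seq = mu.unsqueeze(1).expand(-1, z_seq.size(1), -1)
        h, _ = self.lstm(torch.cat([z_seq, mu_seq], dim=-1))
        return self.head(h)                 # predicted z_{t+1} for every t

# Online stage: encode the initial condition, roll forward, decode.
ae, stepper = ConvAE(), LatentStepper()
u0 = torch.randn(1, 1, 64, 64)              # initial snapshot (placeholder data)
mu = torch.tensor([[0.5]])                  # PDE parameter, e.g. viscosity
z = ae.enc(u0).unsqueeze(1)                 # (batch, time=1, latent)
for _ in range(10):                         # 10 autoregressive steps
    z_next = stepper(z, mu)[:, -1:]
    z = torch.cat([z, z_next], dim=1)
u_pred = ae.dec(z[:, -1])                   # decode final state
```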

    Markov chain generative adversarial neural networks for solving Bayesian inverse problems in physics applications

    In the context of solving inverse problems for physics applications within a Bayesian framework, we present a new approach, the Markov Chain Generative Adversarial Neural Network (MCGAN), to alleviate the computational costs associated with solving the Bayesian inference problem. GANs provide a very suitable framework to aid in the solution of Bayesian inference problems, as they are designed to generate samples from complicated high-dimensional distributions. By training a GAN to sample from a low-dimensional latent space and then embedding it in a Markov Chain Monte Carlo method, we can sample from the posterior highly efficiently, replacing both the high-dimensional prior and the expensive forward map. This comes at the cost of a potentially expensive offline stage in which training data must be simulated or gathered and the GAN has to be trained. We prove that the proposed methodology converges to the true posterior in the Wasserstein-1 distance and that sampling from the latent space is equivalent to sampling in the high-dimensional space in a weak sense. The method is showcased in two test cases where we perform both state and parameter estimation simultaneously, and it is compared with two conventional approaches, polynomial chaos expansion and ensemble Kalman filter, and a deep learning-based approach, deep Bayesian inversion. The method is shown to be more accurate than alternative approaches while also being computationally faster in multiple test cases, including the important engineering setting of detecting leaks in pipelines.
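    The core loop is ordinary random-walk Metropolis run in the GAN's latent space. In the sketch below (an illustration of the idea, not the authors' implementation), the generator G and forward map F are trivial placeholders standing in for the trained GAN and the PDE solver:

```python
# Random-walk Metropolis in a GAN latent space: the chain explores the
# low-dimensional latent prior; samples are mapped through G to the
# physical parameter space.
import numpy as np

rng = np.random.default_rng(0)
latent_dim, sigma_noise, step = 4, 0.1, 0.2

def G(z):            # placeholder generator (stands in for a trained GAN)
    return np.tanh(z)

def F(theta):        # placeholder forward map (stands in for the PDE solver)
    return theta.sum(keepdims=True)

y_obs = np.array([1.0])                      # observed data

def log_post(z):
    # standard-normal latent prior + Gaussian likelihood on the data misfit
    resid = y_obs - F(G(z))
    return -0.5 * z @ z - 0.5 * (resid @ resid) / sigma_noise**2

z, lp, samples = np.zeros(latent_dim), None, []
lp = log_post(z)
for _ in range(5000):
    z_prop = z + step * rng.standard_normal(latent_dim)
    lp_prop = log_post(z_prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        z, lp = z_prop, lp_prop
    samples.append(G(z))                       # posterior samples of theta

print("posterior mean of theta:", np.mean(samples, axis=0))
```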

    Do CBCT scans alter surgical treatment plans? Comparison of preoperative surgical diagnosis using panoramic versus cone-beam CT images

    Cone beam CT and/or panoramic images are often required for a successful diagnosis in oral and maxillofacial surgery. The aim of this study was to evaluate if 3D diagnostic imaging information had a significant impact on the decision process in six different classes of surgical indications. Material and methods: Records of all patients who had undergone both panoramic X-ray and CBCT imaging due to surgical indications between January 2008 and December 2012 were examined retrospectively. In February 2013, all surgically relevant diagnoses of both conventional panoramic radiographs and CBCT scans were retrieved from the patients' charts. It was recorded whether (1) 3D imaging presented additional surgically relevant information and (2) the final decision of surgical therapy had been based on 2D or 3D imaging. Results: A total of 253 consecutive patients with both panoramic radiographs and CBCT analysis were eligible for the study. 3D imaging provided significantly more surgically relevant information in cases of implant dentistry, maxillary sinus diagnosis and in oral and maxillofacial traumatology. However, surgical strategies had not been influenced to any significant extent by 3D imaging. Conclusion: Within the limitations of this study, it may be concluded that CBCT imaging results in significantly more surgically relevant information in implant dentistry, maxillary sinus diagnosis and in cases of oral and maxillofacial trauma. However, 3D imaging information did not significantly alter the surgical plan that was based on 2D panoramic radiography. Further studies are necessary to define indications for CBCT in detail.

    Photon-Photon Entanglement with a Single Trapped Atom

    An experiment is performed where a single rubidium atom trapped within a high-finesse optical cavity emits two independently triggered entangled photons. The entanglement is mediated by the atom and is characterized both by a Bell inequality violation of S=2.5 and by full quantum-state tomography, resulting in a fidelity exceeding F=90%. The combination of cavity-QED and trapped atom techniques makes our protocol inherently deterministic - an essential step for the generation of scalable entanglement between the nodes of a distributed quantum network.
    Comment: 5 pages, 4 figures
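    As a worked check (illustrative, not from the paper): in the CHSH form of the Bell inequality, local hidden-variable models obey S <= 2, while quantum mechanics allows up to 2*sqrt(2) ~ 2.83, so the measured S=2.5 violates the classical bound. The snippet below evaluates S for the ideal Bell state at the standard optimal analyzer angles:

```python
# CHSH parameter for an ideal maximally entangled photon pair. For the
# singlet state, the polarization correlation at analyzer angles (a, b)
# is E(a, b) = -cos(2*(a - b)).
import numpy as np

E = lambda a, b: -np.cos(2 * (a - b))

a1, a2 = 0.0, np.pi / 4               # Alice's two analyzer settings
b1, b2 = np.pi / 8, 3 * np.pi / 8     # Bob's two analyzer settings

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(f"ideal S = {S:.3f} (quantum max 2*sqrt(2) ~ 2.828; classical bound 2)")
```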