    Flares from the Tidal Disruption of Stars by Massive Black Holes

    Tidal disruption flares are differentiated into two classes: those which are sub-Eddington and those which radiate near the Eddington limit. Flares from black holes above ~2 x 10^7 M_\odot will generally not radiate above the Eddington limit. For a Schwarzschild black hole, the maximum bolometric luminosity of a tidal disruption is ~L_Edd(5 x 10^7 M_\odot), substantially below the Eddington luminosities of the most massive disrupting black holes (~2 x 10^8 M_\odot). Bolometric corrections to the spectra of the brightest flares are found to be large (~7.5 mag). Nevertheless, the brightest flares are likely to have absolute magnitudes in excess of -19 in V and -21 in U (in the absence of reddening). Because the spectra are so blue, K-corrections may actually brighten the flares in optical bands. If such flares are as frequent as believed, they may soon be detected in low or high redshift supernova searches. The He II ionizing radiation produced in the flares may dominate that produced by all other sources in the centers of quiescent galaxies, creating a steady-state, highly ionized, fossil nebula with an extent of ~1 kpc which may be observable in recombination lines.
    Comment: 21 pages with 4 figures, AAS Latex, ApJ Submitted
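
    The Eddington limit that separates the two flare classes follows from balancing radiation pressure against gravity. A minimal sketch of that scaling, assuming the standard ionized-hydrogen Eddington formula in cgs units (the paper's actual flare calculation is not reproduced here):

```python
import math

# Sketch (not the paper's flare model): the Eddington luminosity
# L_Edd = 4*pi*G*M*m_p*c / sigma_T for an ionized-hydrogen atmosphere,
# evaluated at the masses quoted in the abstract. Constants in cgs.
G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10          # speed of light [cm s^-1]
m_p = 1.673e-24       # proton mass [g]
sigma_T = 6.652e-25   # Thomson cross-section [cm^2]
M_sun = 1.989e33      # solar mass [g]

def eddington_luminosity(m_bh_solar: float) -> float:
    """Eddington luminosity [erg/s] for a black hole of m_bh_solar solar masses."""
    return 4 * math.pi * G * (m_bh_solar * M_sun) * m_p * c / sigma_T

# The quoted ceiling on flare luminosity, ~L_Edd(5e7 M_sun) ...
print(f"L_Edd(5e7 M_sun) ~ {eddington_luminosity(5e7):.2e} erg/s")
# ... versus the Eddington limit of the most massive disrupting holes (~2e8 M_sun).
print(f"L_Edd(2e8 M_sun) ~ {eddington_luminosity(2e8):.2e} erg/s")
```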

    The Temperature Dependence of Solar Neutrino Fluxes

    By comparing neutrino fluxes and central temperatures calculated from 1000 detailed numerical solar models, we derive improved scaling laws which show how each of the neutrino fluxes depends upon the central temperature (flux \propto T^m); we also estimate uncertainties for the temperature exponents. With the aid of a one-zone model of the sun, we derive expressions for the temperature exponents of the neutrino fluxes. For the most important neutrino fluxes, the exponents calculated with the one-zone model agree to within 20% or better with the exponents extracted from the detailed numerical models. The one-zone model provides a physical understanding of the temperature dependence of the neutrino fluxes. For the pp neutrino flux, the one-zone model explains the (initially surprising) dependence of the flux upon a negative power of the temperature and suggests a new functional dependence. This new function makes explicit the strong anti-correlation between the ^7Be and pp neutrino fluxes. The one-zone model also predicts successfully the average linear relations between neutrino fluxes, but cannot predict the appreciable scatter in a \Delta\phi_i/\phi_i versus \Delta\phi_j/\phi_j diagram.
    Comment: Repaired http URL path for postscript file. 24 pages (RevTeX) + 5 figures (postscript), uuencoded gz-compressed tar file including text+figures. Postscript file also available at http://www.sns.ias.edu/~jnb/preprints.html Accepted for publication in Physical Review D
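
    The scaling laws quoted here (flux \propto T^m) amount to a linear regression of log flux against log central temperature across the ensemble of models. A hedged sketch of that fitting step, using synthetic stand-in data with an illustrative exponent rather than the 1000 detailed solar models themselves:

```python
import numpy as np

# Sketch: extract the exponent m in flux ∝ T^m as the slope of
# log(flux) vs. log(T_c) over an ensemble of model realizations.
rng = np.random.default_rng(0)

# Synthetic stand-in data: central temperatures with ~1% scatter and a flux
# built with a known, illustrative exponent (steep, as for the 8B flux).
T_c = 15.6e6 * (1 + 0.01 * rng.standard_normal(1000))   # central temperature [K]
true_m = 24.0
flux = (T_c / 15.6e6) ** true_m * (1 + 0.02 * rng.standard_normal(1000))

# The slope of the log-log relation is the temperature exponent m.
m_fit, _ = np.polyfit(np.log(T_c), np.log(flux), 1)
print(f"fitted exponent m ~ {m_fit:.1f}  (true value {true_m})")
```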

    Strong Clustering in the Low Redshift Lyman-α Forest

    The two-point correlation function, \xi, of the Lyman-alpha forest is found to be large, \xi = 1.8^{+1.6}_{-1.2} at the > 90% confidence level, on the scale of 250-500 km/s for a sample of absorbers (0 < z < 1.3) assembled from HST Key Project observations. This correlation function is stronger than at high redshift (z > 1.7), where \xi \approx 0.2 for velocity separations > 250 km/s.
    Comment: 20 pages; Latex with 3 figures and 5 tables; Submitted to ApJ
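
    Measuring \xi on a 250-500 km/s scale amounts to comparing the number of observed absorber pairs at those separations against the count expected from unclustered positions. A simplified sketch, assuming a single sightline and a plain DD/RR - 1 estimator (not the paper's actual analysis of the HST Key Project sample):

```python
import numpy as np

def xi_pair_counts(v_data, v_random, v_min=250.0, v_max=500.0):
    """Two-point correlation estimate xi = DD/RR - 1 in one velocity bin.

    v_data, v_random: absorber velocities along a sightline [km/s];
    counts pairs whose separation falls in [v_min, v_max).
    """
    def pair_count(v):
        dv = np.abs(v[:, None] - v[None, :])
        in_bin = (dv >= v_min) & (dv < v_max)
        return np.triu(in_bin, k=1).sum()  # count each pair once

    dd = pair_count(np.asarray(v_data, dtype=float))
    rr = pair_count(np.asarray(v_random, dtype=float))
    # Rescale the random pair count to the data sample size before comparing.
    n_d, n_r = len(v_data), len(v_random)
    rr_scaled = rr * (n_d * (n_d - 1)) / (n_r * (n_r - 1))
    return dd / rr_scaled - 1.0

# Hypothetical usage with unclustered toy data (so xi should come out near 0);
# real absorber velocities would replace v_abs.
rng = np.random.default_rng(1)
v_abs = np.sort(rng.uniform(0, 3e4, 40))
v_rand = np.sort(rng.uniform(0, 3e4, 4000))
print(f"xi(250-500 km/s) ~ {xi_pair_counts(v_abs, v_rand):.2f}")
```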

    Constraints on the Gamma-ray Burst Luminosity Function from PVO and BATSE

    We examine the width of the gamma-ray burst luminosity function through the distribution of GRB peak fluxes as detected by the Pioneer Venus Orbiter (PVO) and the Burst and Transient Source Experiment (BATSE). The strength of the analysis is greatly enhanced by using a merged catalog of peak fluxes from both instruments with good cross-calibration of their sensitivities. The range of peak fluxes is increased by approximately a factor of 20 relative to the BATSE catalog. Thus, more sensitive investigations of the \log N - \log P distribution are possible. We place constraints on the width of the luminosity function of gamma-ray bursts brighter than the BATSE completeness limit by comparing the intensity distribution in the merged catalog with those produced by a variety of spatial density and luminosity functions. For the models examined, 90% of the detectable bursts have peak luminosities within a range of 10, indicating that the peak luminosities of gamma-ray bursts span a markedly narrower range of values than many of their other measurable properties. We also discuss the slopes of a power-law luminosity function for which the observed width lies at the upper end of the constrained range; this is important in determining the power-law slopes for which luminosity-duration correlations could be important.
    Comment: 10 pages latex + 2 uuencoded figures; ApJL accepted
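
    The \log N - \log P comparison reduces to the cumulative number of bursts brighter than each peak flux. A minimal sketch, with a hypothetical flux catalog drawn from a homogeneous Euclidean population standing in for the merged PVO/BATSE data:

```python
import numpy as np

def log_n_log_p(peak_fluxes):
    """Cumulative brightness distribution: N(>P) versus P, both in log10."""
    P = np.sort(np.asarray(peak_fluxes))[::-1]   # brightest first
    N = np.arange(1, len(P) + 1)                 # N(>P) at each P
    return np.log10(P), np.log10(N)

# Hypothetical catalog from a homogeneous Euclidean population, for which
# N(>P) ∝ P^(-3/2); a classical Pareto with tail index 3/2 reproduces this.
rng = np.random.default_rng(2)
fluxes = rng.pareto(1.5, 2000) + 1.0
logP, logN = log_n_log_p(fluxes)

# A slope fit on the bright end should recover roughly -3/2.
slope, _ = np.polyfit(logP[:200], logN[:200], 1)
print(f"bright-end slope ~ {slope:.2f} (Euclidean expectation -1.5)")
```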

    Finding renewal in the midst of disaster: The case of the Deepwater Horizon oil spill

    In 2010, the United States experienced the worst environmental disaster in its history. An explosion on a BP oil rig in the Gulf of Mexico triggered the crisis, and the United States Coast Guard and BP were charged with crisis communication during the response. This essay provides an unprecedented examination and analysis of the communication experiences of public information officers who worked in the unified command center in Houma, Louisiana during the Deepwater Horizon oil spill response. The authors use the discourse of renewal theory to understand the communication practices and choices of the public information officers. Then, using the renewal framework, the authors present three implications for improving crisis communication research and practice.

    The Width of the Gamma-ray Burst Luminosity Function

    We examine the width of the gamma-ray burst (GRB) luminosity function through the distribution of GRB peak count rates, C_{\rm peak}, as detected by BATSE (\cite{batse:93}). In the context of galactic corona spatial distribution models, we attempt to place constraints on the characteristic width of the luminosity function by comparing the observed intensity distribution with those produced by a range of density and luminosity functions. We find that the intrinsic width of the luminosity function cannot be very well restricted. However, the distribution of intrinsic luminosities of detected bursts can be limited: we find that most observed bursts have luminosities within a range of one to two decades, but a significant population of undetected, less luminous bursts cannot be excluded. These findings demonstrate that the assumption that GRBs are standard candles is sufficient but not necessary to explain the observed intensity distribution. We show that the main reason for the relatively poor constraints is that the bright-end part of the GRB flux distribution is not yet sampled by BATSE; better sampling in the future may lead to significantly stronger constraints on the width of the luminosity function.
    Comment: 10 pages of uuencoded compressed postscript, including 2 figures. Princeton University Observatory preprint POP-575. Accepted by Astrophysical Journal, July 20, 199
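
    The gap between a broad intrinsic luminosity function and a narrow detected-luminosity distribution is easy to reproduce in a toy Monte Carlo. A sketch under simplified assumptions (homogeneous Euclidean space, a log-uniform luminosity function, a hard flux threshold), not the paper's galactic-corona models:

```python
import numpy as np

# Toy Monte Carlo: a broad intrinsic luminosity function can still yield
# detected bursts spanning only ~1 decade in luminosity, because a
# flux-limited survey preferentially picks up the luminous tail.
rng = np.random.default_rng(3)
n = 200_000

# Intrinsic luminosities: log-uniform over 4 decades (broad by construction).
L = 10 ** rng.uniform(0.0, 4.0, n)
# Distances: uniform number density inside a Euclidean sphere.
r = rng.uniform(0.0, 1.0, n) ** (1.0 / 3.0)
flux = L / r**2

detected = flux > np.quantile(flux, 0.99)   # hard threshold keeping ~1% of bursts
logL_det = np.log10(L[detected])
lo, hi = np.percentile(logL_det, [5, 95])
print(f"90% of detected bursts span {hi - lo:.1f} decades in L (intrinsic: 4.0)")
```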

    Actively Preventing Negative Transfer

    Transfer learning is a common technique used in a wide variety of deep learning applications. Transfer learning methods are typically used to exploit a source domain, where labeled data is abundant, to make inferences in a target domain, where labeled data is scarce. In the digital age, improving a model’s ability to generalize knowledge gained from the massive amount of data available online to new contexts is crucial. Most new contexts of interest, like radiological scans, have very few labels, an obstacle that can be overcome with improved transfer learning methods. A basic transfer learning technique involves resetting the weights and biases associated with the last few layers of a deep learning model that has been trained on the source domain, and then re-training the model on the target domain. This is a very widely used technique, but it can often result in a phenomenon known as negative transfer. Negative transfer occurs when the knowledge gained in the source domain proves to be harmful when transferring to the target domain. In order to prevent this phenomenon, our team is developing a systematic method for determining which weights and biases should be reset when transferring knowledge. The basic idea is that if the source and target domains are similar, then most of the model’s knowledge gained in the source domain will be transferred to the target domain; if the source and target domains are different, the model will forget the knowledge that would be harmful when learning the target domain.
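
    A minimal PyTorch sketch of the baseline technique described above: load source-domain weights, re-initialize the last layers, and freeze the rest before target-domain fine-tuning. The choice of layer4 and the 2-class head are illustrative stand-ins; deciding which layers to reset is exactly what the proposed method would systematize:

```python
import torch.nn as nn
from torchvision import models

# Baseline transfer learning: source-domain (ImageNet) weights, with the
# last layers reset before re-training on the target domain.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Replace the classifier head for a hypothetical 2-class target task.
model.fc = nn.Linear(model.fc.in_features, 2)

# Re-initialize the final residual block, discarding its source-domain knowledge.
for module in model.layer4.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(module.weight)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# Freeze everything that keeps its source-domain weights; only the reset
# layers (and the new head) are updated during target-domain fine-tuning.
for name, param in model.named_parameters():
    if not (name.startswith("layer4") or name.startswith("fc")):
        param.requires_grad = False
```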

    Investigating Dataset Distinctiveness

    Just as a human might struggle to interpret another human’s handwriting, a computer vision program might fail when asked to perform one task in two different domains. To be more specific, visualize a self-driving car as a human driver who had only ever driven on clear, sunny days, during daylight hours. This driver – the self-driving car – would inevitably face a significant challenge when asked to drive in violent rain or nighttime fog, putting the safety of its passengers in danger. An extensive understanding of the data we use to teach computer vision models – such as those that will be driving our cars in the years to come – is absolutely necessary as these complex systems find their way into everyday human life. This study works to develop a rigorous definition of the style of a dataset – analogous to the quantitative difference between cursive lettering and print lettering – with respect to the image data used in the field of computer vision. We accomplished this by asking a machine learning model to predict which commonly used dataset a particular image belongs to, based on detailed features of the images. If the model performed well when classifying images by dataset, that dataset was considered distinct. We then developed a linear relationship between this distinctiveness metric and a model’s ability to learn from one dataset and test on another, so as to better understand how a computer vision system will perform in a given context before it is trained.
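
    The distinctiveness measurement reduces to a supervised classifier whose labels are dataset identities. A hedged sketch with scikit-learn, using hypothetical Gaussian feature vectors in place of real image features (in practice these would come from, e.g., a CNN embedding):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Sketch: "distinctiveness" of a dataset = how well a classifier can tell
# its images apart from another dataset's, given per-image feature vectors.
rng = np.random.default_rng(4)

# Hypothetical stand-in features for two datasets (real use: CNN embeddings).
feats_a = rng.normal(loc=0.0, scale=1.0, size=(500, 64))
feats_b = rng.normal(loc=0.5, scale=1.0, size=(500, 64))
X = np.vstack([feats_a, feats_b])
y = np.array([0] * 500 + [1] * 500)   # label = which dataset an image came from

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# High held-out accuracy means the datasets are stylistically distinct; this
# score is the metric regressed against cross-dataset transfer performance.
print(f"distinctiveness (held-out accuracy): {clf.score(X_te, y_te):.2f}")
```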