
    NEW SEISMIC SOURCE ZONE MODEL FOR PORTUGAL AND AZORES

    The development of seismogenic source models is one of the first steps in seismic hazard assessment. In seismic hazard terminology, seismic source zones (SSZ) are polygons (or volumes) that delineate areas with homogeneous characteristics of seismicity. The importance of using knowledge on geology, seismicity and tectonics in the definition of source zones has been recognized for a long time [1]. However, the definition of SSZ tends to be subjective and controversial. Using SSZ based on broad geology, by spreading the seismicity clusters throughout the areal extent of a zone, provides a way to account for possible long-term non-stationary seismicity behavior [2,3]. This approach effectively increases seismicity rates in regions with no significant historical or instrumental seismicity, while decreasing seismicity rates in regions that display higher rates of seismicity. In contrast, the use of SSZ based on concentrations of seismicity or spatial smoothing results in stationary behavior [4]. In the FP7 Project SHARE (Seismic Hazard Harmonization in Europe), seismic hazard will be assessed with a logic tree approach that allows for three types of branches for seismicity models: a) smoothed seismicity, b) SSZ, c) SSZ and faults. In this context, a large-scale zonation model for use in the smoothed seismicity branch, and a new consensus SSZ model for Portugal and Azores have been developed. The new models were achieved with the participation of regional experts by combining and adapting existing models and incorporating new regional knowledge of the earthquake potential. The main criteria used for delineating the SSZ include distribution of seismicity, broad geological architecture, crustal characteristics (oceanic versus continental, tectonically active versus stable, etc.), historical catalogue completeness, and the characteristics of active or potentially-active faults. 
This model will be integrated into an Iberian model of SSZ to be used in the Project SHARE seismic hazard assessment.
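The logic-tree combination of the three seismicity-model branch types can be sketched in a few lines. The branch names, weights, and exceedance rates below are illustrative assumptions for the sketch, not values from the SHARE project:

```python
# Hypothetical sketch of combining hazard estimates from the three
# SHARE seismicity-model branch types via logic-tree weights.
# All weights and rates are illustrative, not project values.
from dataclasses import dataclass

@dataclass
class Branch:
    name: str
    weight: float           # logic-tree weight; weights sum to 1
    exceedance_rate: float  # annual rate of exceeding a ground-motion level

def combined_rate(branches):
    """Weighted mean exceedance rate over the logic-tree branches."""
    total_weight = sum(b.weight for b in branches)
    assert abs(total_weight - 1.0) < 1e-9, "weights must sum to 1"
    return sum(b.weight * b.exceedance_rate for b in branches)

branches = [
    Branch("smoothed_seismicity", 0.3, 0.0020),
    Branch("source_zones",        0.4, 0.0015),
    Branch("zones_plus_faults",   0.3, 0.0025),
]
print(combined_rate(branches))  # 0.3*0.0020 + 0.4*0.0015 + 0.3*0.0025
```

The point of the logic tree is exactly this weighted pooling: epistemic uncertainty about which seismicity model is correct is carried through to the hazard estimate rather than resolved by picking one branch.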

    WD + MS systems as the progenitor of SNe Ia

We show the initial and final parameter space for SNe Ia in the $(\log P^{\rm i}, M_{2}^{\rm i})$ plane and find that the positions of some famous recurrent novae, as well as a supersoft X-ray source (SSS), RX J0513.9-6951, are well explained by our model. The model can also explain the space velocity and mass of Tycho G, which is now suggested to be the companion star of Tycho's supernova. Our study indicates that the SSS V Sge might be a potential progenitor of supernovae like SN 2002ic if the delayed dynamical-instability model of Han & Podsiadlowski (2006) is appropriate. Following the work of Meng, Chen & Han (2009), we find that the SD model (WD + MS) with an optically thick wind can explain the birth rate of supernovae like SN 2006X and reproduce the distribution of the color excess of SNe Ia. The model also predicts that at least 75% of all SNe Ia may show a polarization signal in their spectra. Comment: 6 pages, 2 figures, accepted for publication in Astrophysics & Space Science (Proceedings of the 4th Meeting on Hot Subdwarf Stars and Related Objects, edited by Zhanwen Han, Simon Jeffery & Philipp Podsiadlowski).

    Incorporating Descriptive Metadata into Seismic Source Zone Models for Seismic Hazard Assessment: A case study of the Azores-West Iberian region

In probabilistic seismic-hazard analysis (PSHA), seismic source zone (SSZ) models are widely used to account for the contribution to the hazard from earthquakes not directly correlated with geological structures. Notwithstanding the impact of SSZ models in PSHA, the theoretical framework underlying SSZ models and the criteria used to delineate the SSZs are seldom explicitly stated and suitably documented. In this paper, we propose a methodological framework to develop and document SSZ models, which includes (1) an assessment of the appropriate scale and degree of stationarity, (2) an assessment of seismicity catalog completeness-related issues, and (3) an evaluation and credibility ranking of physical criteria used to delineate the boundaries of the SSZs. We also emphasize the need for SSZ models to be supported by a comprehensive set of metadata documenting both the unique characteristics of each SSZ and the criteria used to delineate its boundaries. This procedure ensures that the uncertainties in the model can be properly addressed in the PSHA and that the model can be easily updated whenever new data are available. The proposed methodology is illustrated using the SSZ model developed for the Azores–West Iberian region in the context of the Seismic Hazard Harmonization in Europe project (project SHARE) and some of the most relevant SSZs are discussed in detail.
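As a rough sketch of the kind of machine-readable metadata the paper argues for, one might attach a stationarity flag, catalog-completeness information, and credibility-ranked boundary criteria to each zone. All field names and example values here are our own illustrative assumptions, not the paper's schema:

```python
# Illustrative (not the paper's actual) metadata schema for one SSZ,
# mirroring the three proposed elements: scale/stationarity assessment,
# catalog completeness, and credibility-ranked boundary criteria.
from dataclasses import dataclass, field

@dataclass
class BoundaryCriterion:
    description: str
    credibility: int  # assumed ranking, e.g. 1 (low) to 3 (high)

@dataclass
class SourceZone:
    zone_id: str
    stationarity_assumed: bool
    completeness: dict                     # magnitude threshold -> year of completeness
    criteria: list = field(default_factory=list)

zone = SourceZone(
    zone_id="AZ-01",                       # hypothetical identifier
    stationarity_assumed=True,
    completeness={4.0: 1960, 6.0: 1755},   # illustrative values only
    criteria=[
        BoundaryCriterion("change in crustal type (oceanic vs. continental)", 3),
        BoundaryCriterion("edge of an instrumental seismicity cluster", 2),
    ],
)
```

Storing the delineation criteria alongside the polygon, rather than only in a report, is what makes the update-when-new-data-arrive workflow the paper describes practical.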

    Relic Neutrino Absorption Spectroscopy

Resonant annihilation of extremely high-energy cosmic neutrinos on big-bang relic anti-neutrinos (and vice versa) into Z-bosons leads to sizable absorption dips in the neutrino flux to be observed at Earth. The high-energy edges of these dips are fixed, via the resonance energies, by the neutrino masses alone. Their depths are determined by the cosmic neutrino background density, by the cosmological parameters determining the expansion rate of the universe, and by the large-redshift history of the cosmic neutrino sources. We investigate the possibility of determining the existence of the cosmic neutrino background within the next decade from a measurement of these absorption dips in the neutrino flux. As a by-product, we study the prospects to infer the absolute neutrino mass scale. We find that, with the presently planned neutrino detectors (ANITA, Auger, EUSO, OWL, RICE, and SalSA) operating in the relevant energy regime above 10^{21} eV, relic neutrino absorption spectroscopy becomes a realistic possibility. It requires, however, the existence of extremely powerful neutrino sources, which should be opaque to nucleons and high-energy photons to evade present constraints. Furthermore, the neutrino mass spectrum must be quasi-degenerate to optimize the dip, which implies m_{nu} >~ 0.1 eV for the lightest neutrino. With a second generation of neutrino detectors, these demanding requirements can be relaxed considerably. Comment: 19 pages, 26 figures, REVTeX.
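The location of a dip's high-energy edge follows from the standard Z-resonance condition, E_res = M_Z^2 / (2 m_nu), for annihilation on a relic neutrino of mass m_nu. The short check below, a sketch of our own, confirms that the quasi-degenerate case m_nu ~ 0.1 eV places the edge above the 10^{21} eV regime quoted for the detectors:

```python
# Z-resonance energy for annihilation of an ultra-high-energy neutrino
# on a relic neutrino at rest: E_res = M_Z**2 / (2 * m_nu).
M_Z_EV = 91.1876e9  # Z-boson mass in eV (PDG value, ~91.19 GeV)

def resonance_energy(m_nu_ev):
    """Resonance energy in eV for relic-neutrino mass m_nu (in eV)."""
    return M_Z_EV**2 / (2.0 * m_nu_ev)

print(f"{resonance_energy(0.1):.2e}")  # prints 4.16e+22
```

For m_nu = 0.1 eV this gives about 4 x 10^{22} eV, consistent with the statement that the relevant energy regime lies above 10^{21} eV; lighter neutrinos push the edge to still higher energies.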

    Search for direct production of charginos and neutralinos in events with three leptons and missing transverse momentum in √s = 7 TeV pp collisions with the ATLAS detector

A search for the direct production of charginos and neutralinos in final states with three electrons or muons and missing transverse momentum is presented. The analysis is based on 4.7 fb−1 of proton–proton collision data delivered by the Large Hadron Collider and recorded with the ATLAS detector. Observations are consistent with Standard Model expectations in three signal regions that are either depleted or enriched in Z-boson decays. Upper limits at 95% confidence level are set in R-parity conserving phenomenological minimal supersymmetric models and in simplified models, significantly extending previous results.

    Jet size dependence of single jet suppression in lead-lead collisions at sqrt(s(NN)) = 2.76 TeV with the ATLAS detector at the LHC

Measurements of inclusive jet suppression in heavy ion collisions at the LHC provide direct sensitivity to the physics of jet quenching. In a sample of lead-lead collisions at sqrt(s_NN) = 2.76 TeV corresponding to an integrated luminosity of approximately 7 inverse microbarns, ATLAS has measured jets with a calorimeter over the pseudorapidity interval |eta| < 2.1 and over the transverse momentum range 38 < pT < 210 GeV. Jets were reconstructed using the anti-kt algorithm with values for the distance parameter that determines the nominal jet radius of R = 0.2, 0.3, 0.4 and 0.5. The centrality dependence of the jet yield is characterized by the jet "central-to-peripheral ratio," Rcp. Jet production is found to be suppressed by approximately a factor of two in the 10% most central collisions relative to peripheral collisions. Rcp varies smoothly with centrality as characterized by the number of participating nucleons. The observed suppression is only weakly dependent on jet radius and transverse momentum. These results provide the first direct measurement of inclusive jet suppression in heavy ion collisions and complement previous measurements of dijet transverse energy imbalance at the LHC. Comment: 15 pages plus author list (30 pages total), 8 figures, 2 tables, submitted to Physics Letters B. All figures including auxiliary figures are available at http://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/HION-2011-02
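In its usual definition, the central-to-peripheral ratio Rcp is the per-event jet yield in a central class divided by that in a peripheral class, each normalised by the mean number of binary nucleon-nucleon collisions in that class. The numbers in this sketch are illustrative only, chosen to reproduce the quoted factor-of-two suppression, not measured values:

```python
# Sketch of the Rcp observable: binary-collision-scaled jet yields,
# central class over peripheral class. All numbers are illustrative.
def rcp(yield_central, ncoll_central, yield_peripheral, ncoll_peripheral):
    """Central-to-peripheral ratio of N_coll-normalised per-event yields."""
    central = yield_central / ncoll_central
    peripheral = yield_peripheral / ncoll_peripheral
    return central / peripheral

# Hypothetical values giving the ~factor-of-two suppression quoted
# for the 10% most central collisions relative to peripheral ones:
print(rcp(yield_central=800.0, ncoll_central=1600.0,
          yield_peripheral=25.0, ncoll_peripheral=25.0))  # prints 0.5
```

Rcp = 1 would mean jet production simply scales with the number of binary collisions; Rcp ~ 0.5 in central events is the quenching signal.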

    Technical summary

Human interference with the climate system is occurring. Climate change poses risks for human and natural systems. The assessment of impacts, adaptation, and vulnerability in the Working Group II contribution to the IPCC's Fifth Assessment Report (WGII AR5) evaluates how patterns of risks and potential benefits are shifting due to climate change and how risks can be reduced through mitigation and adaptation. It recognizes that risks of climate change will vary across regions and populations, through space and time, dependent on myriad factors including the extent of mitigation and adaptation.

    Prognostic Value of N-terminal B-type Natriuretic Peptide in Patients with Acute Myocardial Infarction: A Multicenter Study

Background: Several models have been developed to help the clinician in risk stratification for Acute Coronary Syndrome (ACS), such as the TIMI and GRACE risk scores. However, there is conflicting evidence for the prognostic value of NT-proBNP in acute myocardial infarction (AMI). Objective: (1) To explore the association of NT-proBNP with 30-day clinical outcome in AMI patients. (2) To compare the prognostic value of NT-proBNP with the TIMI and GRACE risk scores in AMI patients. Methods: We conducted a multicenter, prospective observational study recruiting patients presenting with AMI between 29 October 2015 and 14 January 2017, involving 1 cardiology referral centre and 4 non-cardiology hospitals. NT-proBNP level (Alere Triage®, US) was measured within 24 hours from the diagnosis of AMI. Patients were followed up for 1 month. Results: A total of 186 patients were recruited, 143 from the tertiary cardiology centre and 43 from non-cardiology hospitals. Mean age was 54.7±10.0 years, 87.6% were male and 64% had STEMI. The NT-proBNP level ranged from 60 to 16700 pg/ml, with a median of 714 pg/ml. Using the 75th centile as the cutoff, Kaplan-Meier survival analysis showed that 30-day cardiac-related mortality was significantly higher for patients with an NT-proBNP level of ≥1600 pg/ml (6.4% vs. 0.7%, p=0.02). Cox regression analysis showed that an NT-proBNP level of ≥1600 pg/ml was an independent predictor of 30-day cardiac-related mortality, regardless of TIMI risk score, GRACE score, LV ejection fraction and study hospital (HR 9.274, p=0.054, 95% CI 0.965–89.161). Readmission for heart failure at 30 days was also higher for patients with an NT-proBNP level of ≥1600 pg/ml (HR 9.308, p=0.053, 95% CI 0.969–89.492). NT-proBNP level was not associated with all-cause mortality or with the risk of readmission for ACS, arrhythmia or stroke (p>0.05).
By adding 50 points to the GRACE risk score when the NT-proBNP level was ≥1600 pg/ml, a combined GRACE-NT-proBNP score of more than 200 appeared to be a better independent predictor of 30-day cardiac-related mortality (HR 28.28, p=0.004, 95% CI 2.94–272.1). ROC analysis showed that this new score had 75% sensitivity and 91.2% specificity in predicting 30-day cardiac-related mortality (AUC 0.791, p=0.046). Conclusions: NT-proBNP is a useful point-of-care risk stratification biomarker in AMI. It can be combined with the current risk score model for better risk stratification in AMI patients.
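The combined score described in the conclusion reduces to simple arithmetic: add a fixed 50-point bonus for NT-proBNP at or above the 1600 pg/ml cutoff, then flag patients above the 200-point threshold. A minimal sketch, with function names of our own choosing rather than anything from the study:

```python
# Sketch of the GRACE + NT-proBNP combined score from the abstract:
# +50 points when NT-proBNP >= 1600 pg/ml; combined score > 200
# flags high 30-day cardiac-related mortality risk.
NT_PROBNP_CUTOFF = 1600.0  # pg/ml (75th centile in this cohort)
BONUS_POINTS = 50
HIGH_RISK_THRESHOLD = 200

def combined_score(grace_score, nt_probnp_pg_ml):
    """GRACE score plus the NT-proBNP bonus described in the abstract."""
    bonus = BONUS_POINTS if nt_probnp_pg_ml >= NT_PROBNP_CUTOFF else 0
    return grace_score + bonus

def high_risk(grace_score, nt_probnp_pg_ml):
    """True if the combined score exceeds the high-risk threshold."""
    return combined_score(grace_score, nt_probnp_pg_ml) > HIGH_RISK_THRESHOLD

print(combined_score(170, 2000))  # prints 220
print(high_risk(170, 2000))       # prints True
print(high_risk(170, 900))        # prints False
```

Note that the cutoff and threshold come from this single 186-patient cohort; the abstract presents them as a candidate refinement of existing risk scores, not a validated clinical rule.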