Near real-time regional moment tensor estimation using Italian broadband stations
In 2002, the Istituto Nazionale di Geofisica e Vulcanologia (INGV) in Rome began installing a high-quality
regional broadband network throughout Italy. To date, the network consists of 125 stations equipped with 40 s
natural-period instruments. The dense station coverage allows for the implementation of real-time regional moment tensor (MT)
estimation procedures such as that proposed by Dreger and Helmberger (1993).
The automatic MT algorithm uses real-time broadband waveforms continuously telemetered to INGV and is triggered for events
with magnitude greater than Ml 3.5, the lowest value for which we have found it possible to obtain reliable MT determinations
in the frequency band used in the inversion. The automatic solution is available within about 3-5 minutes after the earthquake
location. Each solution is assigned a quality factor that depends on the number of stations used in the inversion and on the
goodness of fit between synthetic and observed data. The MT is published on the web after revision by a seismologist.
Efforts are also made to evaluate MT solutions for earthquakes that have occurred in Italy and neighboring regions in recent years. The
results are compared to those obtained from other moment tensor methods, and good agreement is consistently found
between the newly determined solutions and those from other methods.
Fast and accurate moment tensor solutions are an important ingredient when attempting to estimate the recorded ground
shaking. In Italy, earthquakes in the magnitude range 3.5 - 5 are very common; the availability of their focal mechanisms
allows the mapping of the principal stress field axes, leading to a better understanding of the ongoing tectonics.
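The procedure described above, following Dreger and Helmberger (1993), amounts to a linear least-squares inversion: observed waveforms d are modeled as d = G m, where G contains point-source Green's functions and m holds the independent moment tensor elements, with goodness of fit scored by variance reduction. A minimal toy sketch in Python (G, m, and d are synthetic stand-ins, not INGV data):

```python
import numpy as np

# Toy moment tensor (MT) inversion as linear least squares, d = G m.
# G and m_true are synthetic stand-ins for the Green's functions and the
# true MT elements; real implementations build G from an Earth model.
rng = np.random.default_rng(0)
n_samples, n_mt = 200, 6                       # waveform samples, MT elements
G = rng.standard_normal((n_samples, n_mt))     # stand-in Green's functions
m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.0, 0.1])
d = G @ m_true                                 # noise-free "observed" data

m_est, *_ = np.linalg.lstsq(G, d, rcond=None)  # least-squares MT estimate
synthetic = G @ m_est
# Goodness of fit as variance reduction between observed and synthetic data
vr = 1.0 - np.sum((d - synthetic) ** 2) / np.sum(d ** 2)
```

In the noise-free toy case the inversion recovers m_true exactly and the variance reduction approaches 1; real data add noise, station weighting, and a frequency-band filter.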
2-D or not 2-D, that is the question: A Northern California test
Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications for earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. has provided the lowest-variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks with sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. The complicated tectonics of the northern California region, coupled with high-quality broadband seismic data, provide an ideal "apples-to-apples" test of 1-D and 2-D path assumptions on direct waves and their coda. Using the same station and event distribution, we compared 1-D and 2-D path corrections and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda. We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than ~0.7 Hz; however, for higher frequencies (0.7 ≤ f ≤ 8.0 Hz), the 2-D approach resulted in inter-station scatter that was generally 10-30% smaller. For complex regions where data are plentiful, a 2-D approach can significantly improve upon the simple 1-D assumption.
In regions where only a 1-D coda correction is available, it is still preferable to 2-D direct-wave-based measures.
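The comparisons above reduce to a simple statistic: the fractional reduction in residual amplitude variance of one path-correction scheme relative to another. A minimal sketch (the residual arrays below are invented for illustration):

```python
import numpy as np

def variance_reduction(resid_ref, resid_new):
    """Fractional reduction in residual amplitude variance of a new
    path-correction scheme relative to a reference scheme."""
    return 1.0 - np.var(resid_new) / np.var(resid_ref)

# Invented residuals for illustration: the "2-D" residuals have 1/4 the
# variance of the "1-D" ones, i.e. a 75% variance reduction.
resid_1d = np.array([-2.0, 2.0, -2.0, 2.0])
resid_2d = np.array([-1.0, 1.0, -1.0, 1.0])
vr = variance_reduction(resid_1d, resid_2d)
```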
Studies of modern Italian dog populations reveal multiple patterns for domestic breed evolution
Through thousands of years of breeding and strong human selection, the dog (Canis lupus familiaris) exists today within hundreds of closed populations throughout the world, each with defined phenotypes. A single geographic region with broad diversity in dog breeds presents an interesting opportunity to observe potential mechanisms of breed formation. Italy claims 14 internationally recognized dog breeds, with numerous additional local varieties. To determine the relationships among Italian dog populations, we integrated genetic data from 263 dogs representing 23 closed dog populations from Italy, seven Apennine gray wolves, and an established dataset of 161 globally recognized dog breeds, applying multiple genetic methods to characterize the modes by which breeds are formed within a single geographic region. Considered together, the five genetic analyses reveal a series of development events that mirror historical modes of breed formation, but with variations unique to the codevelopment of early dog and human populations. Using 142,840 genome-wide SNPs and a dataset of 1,609 canines, representing 182 breeds and 16 wild canids, we identified breed development routes for the Italian breeds that included divergence from common populations for a specific purpose, admixture of regional stock with that from other regions, and isolated selection of local stock with specific attributes.
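As one illustration of the kind of SNP-based comparison such studies rely on (a generic sketch, not the authors' actual pipeline), pairwise allele-sharing distances can be computed directly from a genotype matrix coded as 0/1/2 copies of the alternate allele:

```python
import numpy as np

# Generic sketch: pairwise identity-by-state distances from a genotype
# matrix (rows = dogs, columns = SNPs, values = 0/1/2 copies of the
# alternate allele). The genotypes below are invented.
geno = np.array([
    [0, 1, 2, 0],   # dog A
    [0, 1, 2, 2],   # dog B
    [2, 1, 0, 0],   # dog C
])
n_dogs, n_snps = geno.shape
# Mean per-SNP allele difference, scaled to [0, 1]
dist = np.abs(geno[:, None, :] - geno[None, :, :]).sum(axis=2) / (2 * n_snps)
```

Distance matrices like this feed the clustering and tree-building steps commonly used to group populations.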
Regional Analysis of Lg Attenuation: Comparison of 1D Methods in Northern California and Application to the Yellow Sea / Korean Peninsula
The measurement of regional attenuation, Q^-1, can produce method-dependent results. The discrepancies among methods are due to differing parameterizations (e.g., geometrical spreading rates), employed datasets (e.g., choice of path lengths and sources), and the methodologies themselves (e.g., measurement in the frequency or time domain). We apply the coda normalization (CN), two-station (TS), reverse two-station (RTS), source-pair/receiver-pair (SPRP), and the new coda-source normalization (CS) methods to measure Q of the regional phase Lg (Q_Lg) and its power-law dependence on frequency, of the form Q_0 f^η, with controlled parameterization in the well-studied region of northern California using a high-quality dataset from the Berkeley Digital Seismic Network. We test the sensitivity of each method to changes in geometrical spreading, Lg frequency bandwidth, the distance range of the data, and the Lg measurement window. For a given method, there are significant differences in the power-law parameters Q_0 and η due to perturbations in the parameterization when evaluated using a conservative pairwise comparison. The CN method is affected most by changes in the distance range, most probably because of its fixed coda measurement window. Since the CS method is best used to calculate the total path attenuation, it is very sensitive to the geometrical spreading assumption. The TS method is most sensitive to the frequency bandwidth, which may be due to its incomplete extraction of the site term. The RTS method is insensitive to the parameterization choice, whereas the SPRP method, as implemented here in the time domain for a single path, has large error in the power-law model parameters, and its η is strongly affected by changes in the method parameterization. When presenting results for a given method, it is best to calculate Q_0 f^η for multiple parameterizations using some a priori distribution.
We also investigate the difference in power-law Q calculated among the methods by considering only an approximately homogeneous subset of our data. All methods return similar power-law parameters, though the 95% confidence region is large. We adapt the CS method to calculate Q_Lg tomography in northern California. Preliminary results show that, by correcting for the source, tomography with the CS method may produce better-resolved attenuation structure.
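The power-law model Q(f) = Q_0 f^η is linear in log-log space, so its two parameters can be recovered by ordinary linear regression. A minimal sketch with invented values:

```python
import numpy as np

# Fit Q(f) = Q0 * f**eta by linear regression in log-log space.
# The frequencies and "measured" Q values below are invented.
freqs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # Hz
Q0_true, eta_true = 150.0, 0.6
Q = Q0_true * freqs ** eta_true                   # synthetic measurements

# log Q = eta * log f + log Q0  ->  slope = eta, intercept = log Q0
eta_est, log_Q0 = np.polyfit(np.log(freqs), np.log(Q), 1)
Q0_est = np.exp(log_Q0)
```

With real, noisy measurements the same regression yields the parameter estimates along with their confidence intervals.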
Wnt4 and LAP2α as Pacemakers of Thymic Epithelial Senescence
Age-associated thymic involution has considerable physiological impact by inhibiting de novo T-cell selection. This impaired T-cell production leads to weakened immune responses. Yet the molecular mechanisms of thymic stromal adipose involution are not clear. Age-related alterations also occur in the murine thymus, providing an excellent model system. In the present work, structural and molecular changes of the murine thymic stroma were investigated during aging. We show that thymic epithelial senescence correlates with significant destruction of the epithelial network, followed by adipose involution. We also show, in purified thymic epithelial cells, the age-related down-regulation of Wnt4 (and subsequently FoxN1) and the prominent increase in LAP2α expression. These senescence-related changes of gene expression are strikingly similar to those observed during mesenchymal to pre-adipocyte differentiation of fibroblast cells, suggesting a similar molecular background in epithelial cells. For molecular-level proof of principle, stable LAP2α- and Wnt4-overexpressing thymic epithelial cell lines were established. LAP2α overexpression provoked a surge of PPARγ expression, a transcription factor expressed in pre-adipocytes. In contrast, additional Wnt4 decreased the mRNA level of ADRP, a target gene of PPARγ. Murine embryonic thymic lobes were also transfected with LAP2α- or Wnt4-encoding lentiviral vectors. As expected, LAP2α overexpression increased, while additional Wnt4 secretion suppressed, PPARγ expression. Based on these pioneering experiments we propose that decreased Wnt activity and increased LAP2α expression provide the molecular basis of thymic senescence. We suggest that these molecular changes trigger thymic epithelial senescence accompanied by adipose involution.
This process may occur either directly, where the epithelium trans-differentiates into pre-adipocytes, or indirectly, where epithelial-to-mesenchymal transition (EMT) occurs first, followed by pre-adipocyte differentiation. The latter scenario fits better with the literature and is supported by the observed histological and molecular-level changes.
The Controversy Surrounding The Man Who Would Be Queen: A Case History of the Politics of Science, Identity, and Sex in the Internet Age
In 2003, psychology professor and sex researcher J. Michael Bailey published a book entitled The Man Who Would Be Queen: The Science of Gender-Bending and Transsexualism. The book’s portrayal of male-to-female (MTF) transsexualism, based on a theory developed by sexologist Ray Blanchard, outraged some transgender activists. They believed the book to be typical of much of the biomedical literature on transsexuality—oppressive in both tone and claims, insulting to their senses of self, and damaging to their public identities. Some saw the book as especially dangerous because it claimed to be based on rigorous science, was published by an imprint of the National Academy of Sciences, and argued that MTF sex changes are motivated primarily by erotic interests and not by the problem of having the gender identity common to one sex in the body of the other. Dissatisfied with the option of merely criticizing the book, a small number of transwomen (particularly Lynn Conway, Andrea James, and Deirdre McCloskey) worked to try to ruin Bailey. Using published and unpublished sources as well as original interviews, this essay traces the history of the backlash against Bailey and his book. It also provides a thorough exegesis of the book’s treatment of transsexuality and includes a comprehensive investigation of the merit of the charges made against Bailey that he had behaved unethically, immorally, and illegally in the production of his book. The essay closes with an epilogue that explores what has happened since 2003 to the central ideas and major players in the controversy.
High Impact Of Human Leukocyte Antigen Matching On Overall Survival And Transplant Related Mortality In Allogeneic Hematopoietic Stem Cell Transplantation For CLL: Long-Term Study From The EBMT Registry
Initial Steps of Thermal Decomposition of Dihydroxylammonium 5,5′-bistetrazole-1,1′-diolate Crystals from Quantum Mechanics
Dihydroxylammonium 5,5′-bistetrazole-1,1′-diolate (TKX-50) is a recently synthesized energetic material (EM) with highly promising performance, including high energy content, high density, low sensitivity, and low toxicity. TKX-50 forms an ionic crystal in which the unit cell contains two bistetrazole dianions {[c-((NO)N3C)-c-(CN3(NO))], formal charge of −2} and four hydroxylammonium (NH3OH)+ cations (formal charge of +1). We report here quantum mechanics (QM)-based reaction studies to determine the atomistic reaction mechanisms for the initial decompositions of this system. First we carried out molecular dynamics simulations on the periodic TKX-50 crystal using forces from density-functional-based tight-binding calculations (DFTB-MD), which find that the chemistry is initiated by proton transfer from the cation to the dianion. Continuous heating of this periodic system eventually leads to dissociation of the protonated or diprotonated bistetrazole to release N2 and N2O. To refine the mechanisms observed in the periodic DFTB-MD, we carried out finite-cluster quantum mechanics studies (B3LYP) of the unimolecular decomposition of the bistetrazole. We find that for the bistetrazole dianion, the reaction barrier for release of N2 is 45.1 kcal/mol, while that for release of N2O is 72.2 kcal/mol. However, transferring one proton to the bistetrazole dianion decreases the reaction barriers to 37.2 kcal/mol for N2 release and 59.5 kcal/mol for N2O release. Thus, we predict that the initial decompositions in TKX-50 lead to N2 release, which in turn provides the energy to drive further decompositions. On the basis of this mechanism, we suggest changes to make the system less sensitive while retaining the large energy release. This may help improve the synthesis strategy of developing high-nitrogen explosives with further improved performance.
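The reported barriers imply strongly N2-favored kinetics. As a back-of-the-envelope check (assuming first-order Arrhenius kinetics with a common pre-exponential factor and an arbitrary temperature of 1000 K, neither of which comes from the paper):

```python
import math

# Relative first-order rates implied by the barriers for the
# monoprotonated bistetrazole (37.2 kcal/mol for N2 release vs.
# 59.5 kcal/mol for N2O release). A common Arrhenius pre-exponential
# factor is assumed, which cancels in the ratio; the temperature of
# 1000 K is an assumption, not a value from the paper.
R = 1.987204e-3               # gas constant, kcal/(mol K)
T = 1000.0                    # assumed temperature, K
Ea_N2, Ea_N2O = 37.2, 59.5    # barriers, kcal/mol

ratio = math.exp(-(Ea_N2 - Ea_N2O) / (R * T))   # k_N2 / k_N2O
```

Even at this high temperature the N2 channel is faster by more than four orders of magnitude, consistent with the prediction that decomposition begins with N2 release.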