
    Seismic Data Strong Noise Attenuation Based on Diffusion Model and Principal Component Analysis

    Seismic data noise suppression is an important part of seismic exploration data processing, and the quality of noise elimination directly affects all subsequent processing of the data. To address this problem, many authors have proposed methods based on rank reduction, sparse transformation, domain transformation, and deep learning. However, such methods are often not ideal when faced with strong noise. Therefore, we propose to use diffusion model theory for noise removal. The Bayesian equation is used to reverse the noise-addition process, and the denoising work is divided into multiple steps to deal effectively with high-noise situations. Furthermore, we propose to evaluate the noise level of blind Gaussian seismic data using principal component analysis, in order to determine the number of denoising steps for the seismic data. We train the model on synthetic data and validate it on field data through transfer learning. Experiments show that the proposed method can identify most of the noise with little signal leakage. This has positive significance for high-precision seismic exploration and future seismic data signal processing research. Comment: 10 pages, 13 figures. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
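
    The PCA-based noise-level step described above can be illustrated with a toy estimator: for blind Gaussian noise, the smallest eigenvalues of the covariance matrix of overlapping data patches approximate the noise variance, which could then be mapped to a number of reverse-diffusion steps. A minimal sketch, assuming a 2-D section; the patch size, stride, and eigenvalue fraction are illustrative choices, not the paper's settings:

```python
import numpy as np

def estimate_noise_sigma(data, patch=8, step=4, keep=0.25):
    """Estimate the Gaussian noise level of a 2-D section from the
    smallest principal components of overlapping patches.
    A minimal sketch; the paper's exact estimator may differ."""
    h, w = data.shape
    rows = [data[i:i + patch, j:j + patch].ravel()
            for i in range(0, h - patch + 1, step)
            for j in range(0, w - patch + 1, step)]
    X = np.asarray(rows, dtype=float)
    # Eigenvalues (ascending) of the patch covariance matrix
    eig = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    # The smallest eigenvalues are dominated by the noise variance
    n_keep = max(1, int(len(eig) * keep))
    return float(np.sqrt(np.mean(eig[:n_keep])))

# Demo: smooth synthetic section plus sigma = 0.5 Gaussian noise
rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 128)
noisy = np.outer(np.sin(x), np.cos(x)) + rng.normal(0.0, 0.5, (128, 128))
sigma = estimate_noise_sigma(noisy)
```

Because a smooth signal concentrates in a few leading components, the trailing eigenvalues recover the noise floor even without a clean reference.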

    Seasonal Characteristics of Black Carbon Aerosol and its Potential Source Regions in Baoji, China

    Continuous measurements of black carbon (BC) aerosol were made at a midsized urban site in Baoji, China, in 2015. The daily average mass concentrations varied from 0.6 to 11.5 µg m⁻³, with an annual mean value of 2.9 ± 1.7 µg m⁻³. The monthly variation indicated that the largest loading of BC occurred in January and the smallest in June. The mass concentrations exhibited strong seasonality, with the highest occurring in winter and the lowest in summer. The large BC loadings in winter were attributed to the increased use of fuel for domestic heating and to stagnant meteorological conditions, whereas the low levels in summer were related to the increase in precipitation. BC values exhibited similar bimodal diurnal patterns during the four seasons, with peaks occurring in the morning and evening rush hours and an afternoon trough, which was associated with local anthropogenic activities and meteorological conditions. A potential source contribution function model indicated that the effects of regional transport mostly occurred in spring and winter. The most likely regional sources of BC in Baoji were southern Shaanxi province, northwestern Hubei province, and northern Chongqing during spring, whereas the northeastern Sichuan Basin was the most important source region during winter.

    Real-time Monitoring for the Next Core-Collapse Supernova in JUNO

    A core-collapse supernova (CCSN) is one of the most energetic astrophysical events in the Universe. The early and prompt detection of neutrinos before (pre-SN) and during the SN burst is a unique opportunity to realize multi-messenger observation of CCSN events. In this work, we describe the monitoring concept and present the sensitivity of the system to pre-SN and SN neutrinos at the Jiangmen Underground Neutrino Observatory (JUNO), a 20 kton liquid scintillator detector under construction in South China. The real-time monitoring system is designed with both prompt monitors on the electronic board and online monitors at the data acquisition stage, in order to ensure both the alert speed and the alert coverage of progenitor stars. Assuming a false alert rate of 1 per year, this monitoring system is sensitive to pre-SN neutrinos up to a distance of about 1.6 (0.9) kpc and to SN neutrinos up to about 370 (360) kpc for a progenitor mass of 30 M⊙ in the case of normal (inverted) mass ordering. The pointing ability for the CCSN is evaluated using the accumulated event anisotropy of the inverse beta decay interactions from pre-SN or SN neutrinos, which, along with the early alert, can play important roles in the follow-up multi-messenger observations of the next Galactic or nearby extragalactic CCSN. Comment: 24 pages, 9 figures.
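
    The pointing idea, estimating a direction from accumulated event anisotropy, can be sketched in a toy form: average the unit positron-to-neutron-capture displacement vectors of many IBD events and normalize. The event model below (isotropic displacements with a 10% forward bias along +z) is purely illustrative and is not JUNO's detector response:

```python
import numpy as np

def point_to_supernova(displacements):
    """Toy direction estimate from accumulated IBD anisotropy:
    normalize each displacement to a unit vector, average them,
    and renormalize the mean. A sketch, not the JUNO algorithm."""
    u = displacements / np.linalg.norm(displacements, axis=1, keepdims=True)
    m = u.mean(axis=0)
    return m / np.linalg.norm(m)

# Toy sample: nearly isotropic displacements with a weak forward
# bias (10% of a unit step) along the assumed true direction +z
rng = np.random.default_rng(7)
disp = rng.normal(size=(20000, 3)) + 0.1 * np.array([0.0, 0.0, 1.0])
direction = point_to_supernova(disp)
```

Even a weak per-event bias accumulates into a usable direction once thousands of events are summed, which is why the method improves with event statistics.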

    Detection of the Diffuse Supernova Neutrino Background with JUNO

    As an underground multi-purpose neutrino detector with 20 kton of liquid scintillator, the Jiangmen Underground Neutrino Observatory (JUNO) is competitive with and complementary to the water-Cherenkov detectors in the search for the diffuse supernova neutrino background (DSNB). Typical supernova models predict 2-4 events per year within the optimal observation window in the JUNO detector. The dominant background is the neutral-current (NC) interaction of atmospheric neutrinos with 12C nuclei, which surpasses the DSNB by more than one order of magnitude. We evaluated the systematic uncertainty of the NC background from the spread of a variety of data-driven models and further developed a method to determine the NC background to within 15% with in situ measurements after ten years of running. In addition, the NC-like backgrounds can be effectively suppressed by the intrinsic pulse-shape discrimination (PSD) capabilities of liquid scintillators. In this talk, I will present in detail the improvements in NC background uncertainty evaluation, the PSD discriminator development, and finally the potential DSNB sensitivity of JUNO.
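
    A common form of the pulse-shape discrimination mentioned above is a tail-to-total charge ratio: NC-like recoil events carry a larger slow scintillation component, so more of their light arrives late. A minimal sketch; the decay times, component weights, and tail window below are illustrative assumptions, not JUNO's calibrated values:

```python
import numpy as np

def tail_to_total(t, pulse, tail_start=60.0):
    """Tail-to-total charge ratio, a common PSD discriminant:
    fraction of the pulse integral arriving after tail_start (ns).
    Assumes uniform time sampling, so plain sums suffice."""
    total = pulse.sum()
    tail = pulse[t >= tail_start].sum()
    return tail / total

t = np.arange(0.0, 500.0, 0.5)  # ns, uniform sampling
# Toy pulses: an IBD-like pulse dominated by the fast scintillation
# component, and an NC-like pulse with a larger slow component
fast_pulse = np.exp(-t / 30.0)
slow_pulse = 0.6 * np.exp(-t / 30.0) + 0.4 * np.exp(-t / 160.0)
r_fast = tail_to_total(t, fast_pulse)
r_slow = tail_to_total(t, slow_pulse)
```

Cutting on this ratio separates the two populations; in practice the discriminant is tuned on calibration data rather than fixed analytically.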

    Potential of Core-Collapse Supernova Neutrino Detection at JUNO

    JUNO is an underground neutrino observatory under construction in Jiangmen, China. It uses 20 kton of liquid scintillator as its target, enabling it to detect supernova burst neutrinos with large statistics from the next galactic core-collapse supernova (CCSN) and also pre-supernova neutrinos from nearby CCSN progenitors. All flavors of supernova burst neutrinos can be detected by JUNO via several interaction channels, including inverse beta decay, elastic scattering on electrons and protons, interactions on 12C nuclei, etc. This gives JUNO the possibility of reconstructing the energy spectra of supernova burst neutrinos of all flavors. Real-time monitoring systems based on FPGA and DAQ are under development in JUNO, allowing prompt alerts and trigger-less data acquisition of CCSN events. The alert performances of both monitoring systems have been thoroughly studied using simulations. Moreover, once a CCSN is tagged, the system can provide fast characterizations, such as directionality and the light curve.

    Omecamtiv mecarbil in chronic heart failure with reduced ejection fraction, GALACTIC‐HF: baseline characteristics and comparison with contemporary clinical trials

    Aims: The safety and efficacy of the novel selective cardiac myosin activator, omecamtiv mecarbil, in patients with heart failure with reduced ejection fraction (HFrEF) is tested in the Global Approach to Lowering Adverse Cardiac outcomes Through Improving Contractility in Heart Failure (GALACTIC‐HF) trial. Here we describe the baseline characteristics of participants in GALACTIC‐HF and how these compare with other contemporary trials. Methods and Results: Adults with established HFrEF, New York Heart Association (NYHA) functional class ≥ II, EF ≤ 35%, elevated natriuretic peptides, and either current hospitalization for HF or a history of hospitalization/emergency department visit for HF within a year were randomized to either placebo or omecamtiv mecarbil (pharmacokinetic‐guided dosing: 25, 37.5 or 50 mg bid). 8256 patients [male (79%), non‐white (22%), mean age 65 years] were enrolled with a mean EF of 27%, ischemic etiology in 54%, NYHA II in 53% and III/IV in 47%, and a median NT‐proBNP of 1971 pg/mL. HF therapies at baseline were among the most effectively employed in contemporary HF trials. GALACTIC‐HF randomized patients representative of recent HF registries and trials, with substantial numbers of patients also having characteristics understudied in previous trials, including more from North America (n = 1386), enrolled as inpatients (n = 2084), with systolic blood pressure < 100 mmHg (n = 1127), with estimated glomerular filtration rate < 30 mL/min/1.73 m2 (n = 528), and treated with sacubitril‐valsartan at baseline (n = 1594). Conclusions: GALACTIC‐HF enrolled a well‐treated, high‐risk population from both inpatient and outpatient settings, which will provide a definitive evaluation of the efficacy and safety of this novel therapy, as well as informing its potential future implementation.

    Impedance Response of Electrochemical Interfaces: Part IV─Low-Frequency Inductive Loop for a Single-Electron Reaction

    The low-frequency inductive loop is usually attributed to relaxation of adsorbed intermediates of multistep reactions in electrocatalysis and corrosion. Herein, we report a low-frequency inductive loop for a single-electron reaction when the electrode potential (E_M), the equilibrium potential (E_eq), and the potential of zero charge (E_pzc) are different, namely, under nonequilibrium conditions. Interestingly enough, although both reactions involve only one electron, the metal deposition reaction (M⁺ + e⁻ ↔ M) and the redox couple reaction (Fe(CN)₆³⁻ + e⁻ ↔ Fe(CN)₆⁴⁻) show different impedance shapes. The low-frequency inductive loop is observed only for the M⁺ + e⁻ ↔ M reaction in the oxidation direction because its faradaic current has a negative phase angle due to double layer effects. Moreover, we find that the low-frequency inductive loop occurs only when the polarization curve has no diffusion-limiting features.

    Two-level incremental checkpoint recovery scheme for reducing system total overheads.

    Long-running applications are often subject to failures, which can lead to unacceptable system overheads. Checkpoint technology is used to reduce the losses incurred in the event of a failure. In the two-level checkpoint recovery scheme used for long-running tasks, the system must periodically transfer a huge memory context to remote stable storage. The overheads of setting checkpoints and the re-computation time therefore become a critical issue that directly impacts the system's total overheads. Motivated by these concerns, this paper presents a new model that introduces i-checkpoints into the existing two-level checkpoint recovery scheme, so that the more probable failures are handled at smaller cost and faster speed. The proposed scheme is independent of the specific failure distribution type and can be applied to different failure distributions. We analyze the two-level incremental and two-level checkpoint recovery schemes under both the Weibull distribution and the exponential distribution, which fit actual failure distributions best. The comparison results show that the total checkpoint-setting overhead, the total re-computation time, and the system's total overheads in the two-level incremental checkpoint recovery scheme are all significantly smaller than those in the two-level checkpoint recovery scheme. Finally, limitations of our study are discussed, and open questions and possible future work are given.
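
    The intuition behind adding i-checkpoints can be captured in a first-order cost model: incremental checkpoints are cheap to write but shrink the mean rollback distance after a failure. A toy sketch, assuming exponential failures with mean rate 1/mtbf; the paper's analytic model (including the Weibull case) is more detailed, and all parameter values below are illustrative:

```python
def expected_overhead(T, mtbf, interval, c_full, n_inc=0, c_inc=0.0):
    """First-order expected total overhead for a run of length T:
    checkpoint-writing cost plus expected re-computation time.
    n_inc incremental checkpoints (cost c_inc each) are placed
    between consecutive full checkpoints (cost c_full)."""
    n_intervals = T / interval
    write_cost = n_intervals * (c_full + n_inc * c_inc)
    # Recovery points (full or incremental) are interval/(n_inc+1)
    # apart, so a failure loses half that gap on average
    gap = interval / (n_inc + 1)
    recompute = (T / mtbf) * gap / 2.0
    return write_cost + recompute

# Plain two-level scheme vs. two-level incremental scheme
plain = expected_overhead(T=1000, mtbf=50, interval=10, c_full=0.5)
incremental = expected_overhead(T=1000, mtbf=50, interval=10,
                                c_full=0.5, n_inc=4, c_inc=0.05)
```

With these (made-up) parameters the incremental variant wins because the extra write cost per interval is far smaller than the re-computation it saves, which mirrors the paper's qualitative conclusion.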

    Notation.


    The two-level incremental checkpoint model.

