
    Proposal of the Readout Electronics for the WCDA in LHAASO Experiment

    The LHAASO (Large High Altitude Air Shower Observatory) experiment is proposed for a very-high-energy gamma-ray source survey, in which the WCDA (Water Cherenkov Detector Array) is one of the major components. In the WCDA, a total of 3600 PMTs are placed under water in four ponds, each with a size of 150 m x 150 m. Precise time and charge measurement is required for the PMT signals over a large amplitude range, from a single P.E. (photoelectron) to 4000 P.E. To meet these demanding measurement requirements in so many front-end nodes scattered over a large area, special techniques are developed, such as multiple-gain readout; hybrid transmission of clocks, commands, and data; precise clock phase alignment; and new trigger electronics. We present the readout electronics architecture for the WCDA and several prototype modules, which are now under test in the laboratory. Comment: 8 pages, 8 figures.
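    A rough feel for why a multiple-gain readout helps can be obtained from the dynamic range alone; the numbers below are illustrative back-of-the-envelope figures, and the gain ratio is an assumption, not a value from the paper.

```python
import math

# Required dynamic range: 1 P.E. up to 4000 P.E.
full_range_db = 20 * math.log10(4000 / 1)          # ~72 dB
print(f"single-channel dynamic range needed: {full_range_db:.1f} dB")

# Covering 72 dB with good resolution at the single-P.E. level in one channel
# is demanding, so a high-gain and a low-gain path can each cover part of it.
assumed_gain_ratio = 40                              # hypothetical high/low gain ratio
high_gain_range_db = 20 * math.log10(assumed_gain_ratio)
low_gain_range_db = 20 * math.log10(4000 / assumed_gain_ratio)
print(f"high-gain channel covers ~{high_gain_range_db:.0f} dB, "
      f"low-gain channel covers ~{low_gain_range_db:.0f} dB")
```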

    Deep Bayesian Multi-Target Learning for Recommender Systems

    With the increasing variety of services that e-commerce platforms provide, the criteria for evaluating their success also become increasingly multi-target. This work introduces a multi-target optimization framework with Bayesian modeling of the target events, called Deep Bayesian Multi-Target Learning (DBMTL). In this framework, target events are modeled as forming a Bayesian network, in which directed links are parameterized by hidden layers and learned from training samples; the structure of the Bayesian network is determined by model selection. We applied the framework to Taobao live-streaming recommendation to simultaneously optimize (and strike a balance among) targets including click-through rate, user stay time in the live room, purchasing behavior, and interactions. Significant improvement has been observed for the proposed method over other MTL frameworks and the non-MTL model. Our practice shows that with an integrated causality structure, the learning of one target can effectively benefit from the other targets, creating significant synergy effects that improve all targets. The neural network construction guided by DBMTL fits the general probabilistic model connecting features and multiple targets, making weaker assumptions than the other methods discussed in this paper. This theoretical generality brings practical generalization power over various target distributions, including sparse targets and continuous-valued ones. Comment: 7 pages; Deep Learning, Probabilistic Machine Learning, Recommender System, Multi-task Learning.
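    A minimal sketch of the core idea, two target heads connected by a learned directed link, is given below; the feature sizes, target names, and layer shapes are assumptions for illustration, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class TinyDBMTLSketch(nn.Module):
    """Two-target sketch: a directed link click -> stay_time,
    with the link itself parameterized by a small hidden layer."""

    def __init__(self, n_features: int = 16, hidden: int = 32):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.click_head = nn.Linear(hidden, 1)                 # models P(click | x)
        self.link = nn.Sequential(nn.Linear(hidden + 1, hidden), nn.ReLU())
        self.stay_head = nn.Linear(hidden, 1)                  # models E[stay | x, click]

    def forward(self, x: torch.Tensor):
        h = self.shared(x)
        p_click = torch.sigmoid(self.click_head(h))
        stay = self.stay_head(self.link(torch.cat([h, p_click], dim=-1)))
        return p_click, stay

# usage: p, s = TinyDBMTLSketch()(torch.randn(4, 16))
```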

    Hybrid metric-Palatini brane system

    It is known that the metric and Palatini formalisms of gravity theories have their own interesting features but also suffer from different drawbacks. Recently, a novel gravity theory called hybrid metric-Palatini gravity was put forward to cure or improve their individual deficiencies. The action of this theory is a hybrid combination of the usual Einstein-Hilbert action and an $f(\mathcal{R})$ term constructed in the Palatini formalism. Interestingly, it seems that the existence of a light, long-range scalar field in this gravity may modify the cosmological and galactic dynamics without conflicting with laboratory and Solar System tests. In this paper we focus on the tensor perturbation of thick branes in this novel gravity theory. We consider two models as examples, namely, thick branes constructed by a background scalar field and by pure gravity. The thick branes in both models have no inner structure; however, the graviton zero mode in the first model develops inner structure when the parameter in this model exceeds its critical value. We find that the effective four-dimensional gravity can be reproduced on the brane for both models. Moreover, the stability of both brane systems against the tensor perturbation can also be ensured. Comment: 7 pages, 4 figures.
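    For orientation, the four-dimensional hybrid metric-Palatini action commonly quoted in the literature has the schematic form below; the brane setup in the paper is the higher-dimensional analogue, and the exact conventions may differ.

```latex
% Schematic hybrid metric-Palatini action (4D form from the literature):
% R is the metric Ricci scalar, \mathcal{R} the Palatini curvature built from
% an independent connection, and S_m the matter action.
S = \frac{1}{2\kappa^2}\int d^4x\,\sqrt{-g}\,\Bigl[R + f(\mathcal{R})\Bigr]
    + S_m(g_{\mu\nu},\psi)
```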

    Testing dark energy models with $H(z)$ data

    $Om(z)$ is a diagnostic approach to distinguish dark energy models; however, few works discuss what the distinguishing criterion should be. In this paper, we first smooth the latest observational $H(z)$ data using a model-independent method, Gaussian processes, and then reconstruct $Om(z)$ and its first-order derivative $\mathcal{L}^{(1)}_m$. Such reconstructions not only provide distinguishing criteria but can also be used to assess the viability of models. We study some popular models, such as $\Lambda$CDM, the generalized Chaplygin gas (GCG) model, the Chevallier-Polarski-Linder (CPL) parametrization, and the Jassal-Bagla-Padmanabhan (JBP) parametrization. We plot the trajectories of $Om(z)$ and $\mathcal{L}^{(1)}_m$ with $1\sigma$ confidence level for these models and compare them to the reconstruction from the $H(z)$ data set. The result indicates that the $H(z)$ data do not favor the CPL and JBP models at the $1\sigma$ confidence level. Strangely, in the high-redshift range, the reconstructed $\mathcal{L}^{(1)}_m$ tends to deviate from the theoretical value, which suggests that these models are in tension with the high-redshift $H(z)$ data. This result supports the conclusions of Sahni et al. \citep{sahni2014model} and Ding et al. \citep{ding2015there} that $\Lambda$CDM may not be the best description of our universe.
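    For reference, the $Om(z)$ diagnostic of Sahni et al. is usually defined as below; for flat $\Lambda$CDM it stays constant at $\Omega_{m0}$, so any redshift dependence of the reconstructed curve signals a departure from $\Lambda$CDM ($\mathcal{L}^{(1)}_m$ is its first-order derivative, as stated in the abstract).

```latex
% Om(z) diagnostic, with E(z) = H(z)/H_0:
Om(z) \equiv \frac{E^{2}(z) - 1}{(1+z)^{3} - 1},
\qquad
Om(z)\big|_{\Lambda\mathrm{CDM}} = \Omega_{m0} .
```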

    Observational constraint on the varying speed of light theory

    The varying speed of light (VSL) theory is controversial: it succeeds in explaining some cosmological problems, but it is rejected by mainstream physics because it would shake the foundations of physics. In the present paper, we test whether the speed of light varies, using observational data from type Ia supernovae, baryon acoustic oscillations, observational $H(z)$ data, and the cosmic microwave background (CMB). We select the common form $c(t) = c_0 a^n(t)$, with contributions from dark energy and matter, where $c_0$ is the current value of the speed of light and $n$ is a constant, and consequently construct a varying-speed-of-light dark energy model (VSLDE). The combined observational data give the constraint $n = -0.0033 \pm 0.0045$ at the 68.3% confidence level, which indicates that the speed of light may be a constant with high significance. By reconstructing the time-variable $c(t)$, we find that the speed of light shows almost no variation for redshift $z < 10^{-1}$. High-$z$ observations are more sensitive to the VSLDE model, but even there the variation of the speed of light is only of order $10^{-2}$. We also introduce the geometrical diagnostic $Om(z)$ to show the difference between the VSLDE and $\Lambda$CDM models. The result shows that the current data can hardly distinguish them. All the results show that the observational data favor a constant speed of light.
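    A quick numerical check of the quoted orders of magnitude, under the stated parametrization $c(t) = c_0 a^n(t)$ with $a = 1/(1+z)$ and the best-fit $n$ from the abstract; the code itself is an illustrative sketch, not from the paper.

```python
# Fractional change of the speed of light, c(z)/c0 = (1+z)^(-n),
# for the best-fit n quoted in the abstract.
n = -0.0033

for z in (0.1, 1.0, 1090.0):     # low redshift, intermediate, CMB-era (illustrative)
    ratio = (1.0 + z) ** (-n)
    print(f"z = {z:7.1f}: |c/c0 - 1| ~ {abs(ratio - 1.0):.2e}")

# Output is ~3e-4 at z = 0.1 (essentially no variation) and ~2e-2 near the CMB,
# consistent with the 'order of 10^-2' statement in the abstract.
```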

    Application of Real-time Digitization Technique in Beam Measurement for Accelerators

    Beam measurement is very important for accelerators. With the development of analog-to-digital conversion techniques, digital beam measurement has become a research hotspot. The IQ (in-phase and quadrature-phase) analysis based method is an important beam measurement approach, whose principle is presented and discussed in this paper. The State Key Laboratory of Particle Detection and Electronics at the University of Science and Technology of China has devoted efforts to the research of digital beam measurement based on high-speed, high-resolution analog-to-digital conversion, and a series of beam measurement instruments were designed for the China Spallation Neutron Source (CSNS), the Shanghai Synchrotron Radiation Facility (SSRF), and the Accelerator Driven Sub-critical system (ADS).
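    The principle behind IQ analysis can be sketched in a few lines: mix the digitized pickup signal with a cosine and a sine at the processing frequency, average over an integer number of periods, and read amplitude and phase from the resulting I and Q values. The sampling rate, frequency, and toy signal below are made-up illustration values, not parameters from the paper.

```python
import numpy as np

fs = 500e6            # assumed sampling rate (Hz)
f0 = 50e6             # assumed processing frequency (Hz)
n = np.arange(1000)   # an integer number of f0 periods fits in this window

# Toy pickup signal with some amplitude, phase, and noise.
signal = 1.3 * np.cos(2 * np.pi * f0 * n / fs + 0.4) + 0.01 * np.random.randn(n.size)

# Digital IQ demodulation: mix down and average.
i_val = 2.0 * np.mean(signal * np.cos(2 * np.pi * f0 * n / fs))
q_val = -2.0 * np.mean(signal * np.sin(2 * np.pi * f0 * n / fs))

amplitude = np.hypot(i_val, q_val)     # ~1.3
phase = np.arctan2(q_val, i_val)       # ~0.4 rad
print(amplitude, phase)
```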

    A New All-Digital Background Calibration Technique for Time-Interleaved ADC Using First Order Approximation FIR Filters

    This paper describes a new all-digital technique for calibrating the mismatches in time-interleaved analog-to-digital converters (TIADCs), aimed at reducing circuit area. The proposed technique uses a first-order approximation of the gain and sample-time mismatches and employs first-order approximation FIR filter banks to calibrate the sampled signal, which do not need a large number of FIR taps. In the case of a two-channel 12-bit TIADC, the proposed technique improves the SINAD of simulated data from 45 dB to 69 dB and the SINAD of measured data from 47 dB to 53 dB, while the number of FIR taps is only 30. In the case of slight mismatches, 24-bit FIR coefficients are sufficient to correct 12-bit signals, which makes the technique easy to implement in hardware. In addition, the technique is not limited by the number of sub-ADC channels and can be computed in parallel in hardware; these features make it versatile and capable of real-time calibration.
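    The first-order idea can be illustrated as follows: a sample taken with gain error g and timing error dt is approximately (1+g)[x(t) + dt*x'(t)], so dividing out the gain and subtracting an estimate of dt*x'(t) from a differentiating filter recovers x(t) to first order. The differentiator stand-in and the mismatch values below are assumptions for illustration, not the filters designed in the paper.

```python
import numpy as np

fs, f0, N = 1.0, 0.11, 4096
t = np.arange(N) / fs
x_ideal = np.sin(2 * np.pi * f0 * t)

# Two-channel TIADC: odd samples come from a sub-ADC with gain and timing mismatch.
g, dt = 0.02, 0.05 / fs                      # illustrative mismatch values
y = x_ideal.copy()
y[1::2] = (1 + g) * np.sin(2 * np.pi * f0 * (t[1::2] + dt))

# First-order correction: remove the gain, then subtract dt * (estimated derivative).
y_gain = y.copy()
y_gain[1::2] /= (1 + g)
deriv = np.gradient(y_gain, 1 / fs)          # stand-in for a short FIR differentiator
y_corr = y_gain.copy()
y_corr[1::2] -= dt * deriv[1::2]

print("rms error before:", np.sqrt(np.mean((y - x_ideal) ** 2)))
print("rms error after :", np.sqrt(np.mean((y_corr - x_ideal) ** 2)))
```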

    A new method of waveform digitization based on time-interleaved A/D conversion

    Time-interleaved analog-to-digital conversion (TIADC), based on parallelism, is an effective way to meet the requirements of ultra-fast waveform digitizers beyond Gsps. Different methods to correct the mismatch errors among the analog-to-digital conversion channels have been developed previously. To overcome the speed limitation in hardware design and to implement the mismatch correction algorithm in real time, this paper proposes a fully parallel correction algorithm. A 12-bit 1-Gsps waveform digitizer with an ENOB of around 10.5 bits from 5 MHz to 200 MHz is implemented based on the real-time correction algorithm. Comment: 11 pages, 15 figures.
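    One common way to make such a correction fully parallel, sketched below, is a polyphase decomposition: each output phase of the full-rate correction FIR is computed in its own branch, so every branch only has to run at the sub-ADC rate. This is a generic sketch with a made-up filter, not the specific algorithm of the paper.

```python
import numpy as np

def polyphase_parallel_filter(y, h, m_channels):
    """Apply a full-rate correction FIR h to the interleaved stream y, but
    compute each output phase in its own branch (each branch works at the
    sub-ADC rate, so the branches can run in parallel in hardware)."""
    z = np.zeros_like(y)
    for m in range(m_channels):
        idx = np.arange(m, len(y), m_channels)        # output samples of phase m
        for k, hk in enumerate(h):
            src = idx - k
            valid = src >= 0
            z[idx[valid]] += hk * y[src[valid]]
    return z

# Toy usage: a 4-channel TIADC stream and a made-up 7-tap correction filter.
y = np.random.randn(1024)
h = np.array([-0.01, 0.03, -0.1, 1.0, -0.1, 0.03, -0.01])
z = polyphase_parallel_filter(y, h, m_channels=4)
```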

    One-Shot Coherence Dilution

    Manipulation and quantification of quantum resources are fundamental problems in quantum physics. In the asymptotic limit, coherence distillation and dilution have been proposed by manipulating infinitely many identical copies of states. In the non-asymptotic setting, finite data-size effects emerge, and the practically relevant problem of coherence manipulation using finite resources has been left open. This letter establishes the one-shot theory of coherence dilution, which involves converting maximally coherent states into an arbitrary quantum state using maximally incoherent operations, dephasing-covariant incoherent operations, incoherent operations, or strictly incoherent operations. We introduce several coherence monotones with concrete operational interpretations that estimate the one-shot coherence cost, the minimum amount of maximally coherent states needed for faithful coherence dilution. Furthermore, we derive the asymptotic coherence dilution results with maximally incoherent operations, incoherent operations, and strictly incoherent operations as special cases. Our results can be applied in the analysis of quantum information processing tasks that exploit coherence as a resource, such as quantum key distribution and random number generation. Comment: 12 pages, 1 figure, comments are welcome.
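    As a rough guide to the objects involved (the precise definitions and error measure used in the letter may differ), the maximally coherent state and a schematic one-shot coherence cost read:

```latex
% Maximally coherent state of rank M in the incoherent basis {|i>}:
|\Psi_M\rangle = \frac{1}{\sqrt{M}}\sum_{i=1}^{M} |i\rangle .

% Schematic one-shot coherence cost of a state \rho under a class of free
% operations \mathcal{O}, up to error \varepsilon:
C^{(1)}_{\mathcal{O},\varepsilon}(\rho)
  = \min\Bigl\{\log_2 M \;:\; \Lambda \in \mathcal{O},\;
      F\bigl(\Lambda(\Psi_M),\rho\bigr) \ge 1-\varepsilon \Bigr\}.
```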

    Matching the Quasi Meson Distribution Amplitude in RI/MOM scheme

    The $x$-dependence of the light-cone distribution amplitude (LCDA) can be directly calculated from a quasi distribution amplitude (DA) in lattice QCD within the framework of large-momentum effective theory (LaMET). In this paper, we study the one-loop renormalization of the quasi-DA in the regularization-independent momentum subtraction (RI/MOM) scheme. The renormalization factor for the quasi parton distribution function can be used to renormalize the quasi-DA, provided that they are implemented on the lattice and in perturbation theory in the same manner. We derive the one-loop matching coefficient that matches the quasi-DA in the RI/MOM scheme onto the LCDA in the $\overline{\rm MS}$ scheme. Our result provides the crucial step to extract LCDAs from lattice matrix elements of quasi-DAs. Comment: 7 pages.
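    Schematically, the LaMET matching relation between the quasi-DA and the LCDA takes the factorized form below; the detailed kernel and power corrections are what the paper computes, and this is only the generic structure.

```latex
% Generic LaMET factorization: the RI/MOM quasi-DA at large meson momentum P^z
% matches onto the MS-bar LCDA through a perturbative kernel C, up to power
% corrections suppressed by the large momentum:
\tilde{\phi}(x, P^z, \mu_R)
  = \int_0^1 dy\, C\!\left(x, y, \mu, \mu_R, P^z\right)\, \phi(y, \mu)
    + \mathcal{O}\!\left(\frac{\Lambda_{\rm QCD}^2}{(P^z)^2}, \frac{m^2}{(P^z)^2}\right)
```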