27 research outputs found

    Optimization of multi-gigabit transceivers for high speed data communication links in HEP Experiments

    The data acquisition (DAQ) architecture in High Energy Physics (HEP) experiments consists of data transport from the front-end electronics (FEE) of the online detectors to the readout units (RU), which perform online processing of the data, and then to data storage for offline analysis. With major upgrades of the Large Hadron Collider (LHC) experiments at CERN, data transmission rates in the DAQ systems are expected to reach a few TB/s within the next few years. These high rates are normally accompanied by increased high-frequency losses, which distort the detected signal and degrade signal integrity. To address this, we have developed an optimization technique for the multi-gigabit transceivers (MGTs) and implemented it on the state-of-the-art 20 nm Arria 10 FPGA manufactured by Intel. The setup has been validated for three available high-speed data transmission protocols, namely GBT, TTC-PON and 10 Gbps Ethernet. The improvement in signal integrity is gauged by two metrics, the Bit Error Rate (BER) and the eye diagram. It is observed that the technique improves signal integrity and reduces the BER. The test results and the improvements in the signal-integrity metrics for different link speeds are presented and discussed.
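
    For context on the BER metric quoted above, a minimal sketch (not the authors' test setup): BER is simply the fraction of received bits that differ from the transmitted bits, and a link is usually qualified by observing enough error-free bits to bound the BER at a target confidence level. The function names and the simulated error probability below are assumptions for illustration only.

```python
import numpy as np

def bit_error_rate(tx_bits: np.ndarray, rx_bits: np.ndarray) -> float:
    """Fraction of received bits that differ from the transmitted bits."""
    assert tx_bits.shape == rx_bits.shape
    return float(np.count_nonzero(tx_bits != rx_bits)) / tx_bits.size

def bits_for_ber_bound(ber_target: float, confidence: float = 0.95) -> float:
    """Number of bits that must be observed error-free to claim BER < ber_target
    at the given confidence (standard zero-error bound: N = -ln(1 - CL) / BER)."""
    return -np.log(1.0 - confidence) / ber_target

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tx = rng.integers(0, 2, size=1_000_000)
    rx = tx.copy()
    flip = rng.random(tx.size) < 1e-4      # emulate a noisy link with ~1e-4 bit-flip probability
    rx[flip] ^= 1
    print("measured BER:", bit_error_rate(tx, rx))
    print("bits needed for BER < 1e-12 at 95% CL:", bits_for_ber_bound(1e-12))
```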

    Diabetes Prediction: A Study of Various Classification based Data Mining Techniques

    Data mining is an integral part of the KDD (Knowledge Discovery in Databases) process. It deals with discovering unknown patterns and knowledge hidden in data. Classification is a pivotal data mining technique with a very wide range of applications. Nowadays diabetes has become a major disease affecting people across the globe. It is a medical condition that disrupts the metabolism and raises the blood sugar level in the body, and it is a major concern for medical practitioners and people at large. An early diagnosis is the starting point for living well with diabetes. Classification analysis of diabetes data is part of this diagnostic process and can help distinguish diabetic from non-diabetic patients. In this paper, classification algorithms are applied to the Pima Indian Diabetes Database collected from the UCI Machine Learning Repository. The classification algorithms Naïve Bayes, Logistic Regression, Decision Tree, Random Forest, Support Vector Classifier and XGBoost are analyzed and compared based on the accuracy delivered by the models.
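
    A minimal sketch of the kind of comparison described (not the authors' exact pipeline), assuming the Pima dataset is available as a local CSV with an "Outcome" label column; the file name, split ratio, and hyperparameters are assumptions.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier  # requires the xgboost package

# Assumed local copy of the Pima Indians Diabetes dataset (8 features + 'Outcome' label).
df = pd.read_csv("pima_indians_diabetes.csv")
X, y = df.drop(columns="Outcome"), df["Outcome"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)

models = {
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "SVC": make_pipeline(StandardScaler(), SVC()),
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=42),
}

# Train each classifier and compare held-out accuracy, as in the paper's comparison.
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name:20s} accuracy = {acc:.3f}")
```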

    Quantum-centric Supercomputing for Materials Science: A Perspective on Challenges and Future Directions

    Computational models are an essential tool for the design, characterization, and discovery of novel materials. Hard computational tasks in materials science stretch the limits of existing high-performance supercomputing centers, consuming much of their simulation, analysis, and data resources. Quantum computing, on the other hand, is an emerging technology with the potential to accelerate many of the computational tasks needed for materials science. In order to do that, quantum technology must interact with conventional high-performance computing in several ways: approximate results validation, identification of hard problems, and synergies in quantum-centric supercomputing. In this paper, we provide a perspective on how quantum-centric supercomputing can help address critical computational problems in materials science, the challenges to be faced in order to solve representative use cases, and newly suggested directions.

    Search for eccentric black hole coalescences during the third observing run of LIGO and Virgo

    Despite the growing number of confident binary black hole coalescences observed through gravitational waves so far, the astrophysical origin of these binaries remains uncertain. Orbital eccentricity is one of the clearest tracers of binary formation channels. Identifying binary eccentricity, however, remains challenging due to the limited availability of gravitational waveforms that include effects of eccentricity. Here, we present observational results for a waveform-independent search sensitive to eccentric black hole coalescences, covering the third observing run (O3) of the LIGO and Virgo detectors. We identified no new high-significance candidates beyond those that were already identified with searches focusing on quasi-circular binaries. We determine the sensitivity of our search to high-mass (total mass M > 70 M⊙) binaries covering eccentricities up to 0.3 at 15 Hz orbital frequency, and use this to compare model predictions to search results. Assuming all detections are indeed quasi-circular, for our fiducial population model, we place an upper limit of 0.33 Gpc⁻³ yr⁻¹ on the merger rate density of high-mass binaries with eccentricities 0 < e ≤ 0.3, at the 90% confidence level.
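
    For context on how such an upper limit arises, a rough sketch of the generic zero-detection Poisson bound (not necessarily the exact method of the paper): if a search with sensitive time-volume ⟨VT⟩ records no candidates, the 90% upper limit on the rate density is R90 = -ln(0.1)/⟨VT⟩ ≈ 2.303/⟨VT⟩. The ⟨VT⟩ value below is invented purely to illustrate the arithmetic.

```python
import math

def rate_upper_limit(vt_gpc3_yr: float, confidence: float = 0.90) -> float:
    """Poisson upper limit on the merger rate density (Gpc^-3 yr^-1) given
    zero detections and a sensitive time-volume <VT> in Gpc^3 yr."""
    return -math.log(1.0 - confidence) / vt_gpc3_yr

# Illustrative number only: a <VT> of ~7 Gpc^3 yr would give roughly the
# 0.33 Gpc^-3 yr^-1 quoted in the abstract (2.303 / 7 ≈ 0.33).
print(rate_upper_limit(7.0))
```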

    Multi-multifractality and dynamic scaling in stochastic porous lattice

    In this article, we extend the idea of the stochastic dyadic Cantor set to the weighted planar stochastic lattice, which leads to a stochastic porous lattice. The process starts with an initiator, which we choose to be a square of unit area for convenience. We then define a generator that picks the initiator, or later one of the existing blocks, preferentially with respect to area and divides it either horizontally or vertically into two rectangles, one of which is removed with probability q = 1 - p. We find that the number of remaining blocks and their total mass vary with time as t^{p} and t^{-q}, respectively. An analytical solution shows that the dynamics of this process is governed by infinitely many hidden conserved quantities, each of which is a multifractal measure with a porous structure, as it contains missing blocks of various sizes. The support on which these measures are distributed is fractal with fractal dimension 2p, provided 0 < p < 1. We find that if the remaining blocks are characterized by their respective areas, then the corresponding block-size distribution function obeys dynamic scaling.
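
    A minimal Monte Carlo sketch of the generator rules stated above (pick a block with probability proportional to its area, split it horizontally or vertically, and with probability q = 1 - p discard one of the two pieces). The uniformly random cut position is an assumption, since the abstract does not specify how the cut point is chosen; the function name is hypothetical.

```python
import random

def porous_lattice(p: float, steps: int, seed: int = 0):
    """Blocks are (width, height) rectangles; the initiator is the unit square."""
    rng = random.Random(seed)
    blocks = [(1.0, 1.0)]
    for _ in range(steps):
        # Pick a block preferentially, i.e. with probability proportional to its area.
        areas = [w * h for w, h in blocks]
        idx = rng.choices(range(len(blocks)), weights=areas, k=1)[0]
        w, h = blocks.pop(idx)
        # Cut horizontally or vertically at a uniformly random position (assumed).
        x = rng.random()
        if rng.random() < 0.5:
            pieces = [(x * w, h), ((1.0 - x) * w, h)]      # vertical cut
        else:
            pieces = [(w, x * h), (w, (1.0 - x) * h)]      # horizontal cut
        # With probability q = 1 - p, remove one of the two new pieces at random.
        if rng.random() < 1.0 - p:
            pieces.pop(rng.randrange(2))
        blocks.extend(pieces)
    return blocks

blocks = porous_lattice(p=0.7, steps=20000)
print("blocks remaining:", len(blocks))
print("total mass (area):", sum(w * h for w, h in blocks))
```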

    Regional convergence of growth, inequality and poverty in India--An empirical study

    The paper examines whether there is regional convergence of per capita consumption, inequality and poverty across states in India. Using panel unit root tests that are robust to cross-sectional dependence, we find that the inequality and poverty indicators converge at both rural and urban levels. Further, per capita consumption converges at the urban level but not at the rural level. Based on factor analysis, we find two groups of states for the rural sector, viz., low-growth and high-growth states, within each of which per capita consumption converges. We also attempt to identify the responsible entities, central or state governments or both, in cases where convergence is not achieved.
    Keywords: Cross co-integration; Cross-sectional dependence; Panel unit root tests; Common factor; Conditional convergence; Regional disparities
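
    A rough sketch of the kind of convergence test described, using a simple univariate ADF check on each state's gap from the national average rather than the panel unit root tests with cross-sectional dependence used in the paper; the CSV layout and column names are assumptions.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Assumed layout: rows = years, columns = states, values = log per-capita consumption.
panel = pd.read_csv("state_consumption.csv", index_col="year")
national = panel.mean(axis=1)

# Convergence check per state: is the gap to the national average stationary?
# A stationary gap (ADF rejects a unit root) is evidence that the state converges.
for state in panel.columns:
    gap = panel[state] - national
    stat, pvalue, *_ = adfuller(gap.dropna(), autolag="AIC")
    verdict = "converges" if pvalue < 0.05 else "no evidence of convergence"
    print(f"{state:20s} ADF p-value = {pvalue:.3f} -> {verdict}")
```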