    Commissioning Perspectives for the ATLAS Pixel Detector

    The ATLAS Pixel Detector, the innermost sub-detector of the ATLAS experiment at the Large Hadron Collider, CERN, is an 80 million channel silicon pixel tracking detector designed for high-precision charged particle tracking and secondary vertex reconstruction. It was installed in the ATLAS experiment, and commissioning for the first proton-proton collision data taking in 2008 has begun. Due to the complex layout and limited accessibility, quality assurance measurements were performed continuously during production and assembly to ensure that no problematic components were integrated. The assembly of the detector at CERN and the related quality assurance measurement results, including comparisons to previous production measurements, will be presented. In order to verify that the integrated detector, its data acquisition readout chain, the ancillary services and cooling system, as well as the detector control and data acquisition software perform together as expected, approximately 8% of the detector system was progressively assembled as close to the final layout as possible. This so-called System Test laboratory setup was operated for several months under experiment-like environmental conditions. The interplay between different detector components was studied with a focus on the performance and tunability of the optical data transmission system. Operation and optical tuning procedures were developed and qualified for the upcoming commissioning. The front-end electronics preamplifier threshold tuning and noise performance were studied, and the noise occupancy of the detector at low sensor bias voltages was investigated. Data taking with cosmic muons was performed to test the data acquisition and trigger system as well as the offline reconstruction and analysis software. The data quality was verified with an extended version of the pixel online monitoring package, which was originally implemented for the ATLAS Combined Testbeam. The detector raw data of the Combined Testbeam and of the System Test cosmic run was converted for offline data analysis with the Pixel bytestream converter, which was continuously extended and adapted to the needs of the offline analysis software.
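
    The abstract does not spell out the tuning procedure, but the standard technique for pixel front-ends is a threshold scan: test charges are injected repeatedly at increasing values, the per-pixel hit efficiency traces out an S-curve, and an error-function fit yields the threshold (midpoint) and the noise (width). The following minimal sketch illustrates such a fit; the simulated data, charge values, and function names are illustrative assumptions, not taken from the ATLAS Pixel software.

    # Illustrative S-curve fit for a pixel threshold scan (hypothetical data,
    # not the actual ATLAS Pixel tuning code).
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def s_curve(q, threshold, noise):
        """Hit probability vs. injected charge q: an error function whose
        midpoint is the threshold and whose width is the noise."""
        return 0.5 * (1.0 + erf((q - threshold) / (np.sqrt(2.0) * noise)))

    # Simulated scan: 50 injections per charge point for one pixel.
    rng = np.random.default_rng(0)
    charges = np.linspace(2000.0, 6000.0, 41)            # injected charge [e-]
    true_thr, true_noise, n_inj = 4000.0, 180.0, 50
    efficiency = rng.binomial(n_inj, s_curve(charges, true_thr, true_noise)) / n_inj

    # Fit the S-curve to extract threshold and noise for this pixel.
    popt, _ = curve_fit(s_curve, charges, efficiency, p0=[3500.0, 100.0])
    print(f"threshold = {popt[0]:.0f} e-, noise = {popt[1]:.0f} e-")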

    Production accompanying testing of the ATLAS Pixel module

    The ATLAS Pixel detector, the innermost sub-detector of the ATLAS experiment at the LHC, CERN, can only be meaningfully tested in its entirety for the first time after its installation in 2006. Because of the poor accessibility of the Pixel detector (probably once per year) and the tight schedule, the replacement of damaged modules after integration, as well as during operation, will be a difficult and costly undertaking. Therefore, and to ensure that no defective parts are used in subsequent production steps, each production step must be accompanied by testing: the components are tested before assembly, and their operability is verified afterwards. About 300 of the roughly 2000 semiconductor hybrid pixel detector modules in total will be built at the Universität Dortmund. A production test setup has therefore been built and validated before the start of serial production. These tests comprise the characterization and inspection of the module components and of the module itself under different environmental conditions and diverse operating parameters. Once a module is assembled, its operability is tested with a radioactive source, and its long-term stability is ensured by a burn-in. A full electrical characterization is the basis for module selection and sorting for the ATLAS Pixel detector. Additionally, the charge collection behavior of irradiated and non-irradiated modules has been investigated in the H8 beamline with 180 GeV pions.

    A parodontális sebészet áttekintő története = An overview of the history of periodontal surgery

    Diseases of the periodontium existed even before the earliest recorded literature. The first mention occurs in the Papyrus Ebers, which writes of "preparations" that serve to "strengthen the gums". This document was created in the 1500s BC in the region of Egypt and Mesopotamia.

    Biomolecular interactions control the shape of stains from drying droplets of complex fluids

    When a sessile droplet of a complex fluid dries, a stain forms on the solid surface. The structure and pattern of the stain can be used to detect the presence of a specific chemical compound in the sessile droplet. In the present work, we investigate which parameters of the stain, or of its formation, can be used to characterize the specific interaction between an aqueous dispersion of beads and its receptor immobilized on the surface. We use the biotin-streptavidin system as an experimental model. Clear dissimilarities were observed in the drying sequences on streptavidin-coated substrates between droplets of aqueous solutions containing biotin-coated beads and those containing streptavidin-coated beads. Fluorescent beads are used in order to visualize the fluid flow field. We show differences in the distribution of the particles on the surface depending on the biomolecular interactions between the beads and the solid surface. A mechanistic model is proposed to explain the different patterns obtained during drying. The model describes how the beads are left behind the receding wetting line, rather than pulled towards the drop center, if the biological binding force is comparable to the surface tension force at the receding wetting line. Other forces, such as viscous drag, van der Waals forces, and solid-solid friction, are found to be negligible. Simple microfluidics experiments are performed to further illustrate the difference in behavior when adhesion or friction is present between the bead and the substrate due to the biological force. The results of the model are in agreement with the experimental observations, providing insight and design capabilities. A better understanding of the effects of the droplet-surface interaction on the drying mechanism is a crucial first step before the identification of drying patterns can be applied to areas such as immunology and biomarker detection.
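
    As a rough worked example of the force balance the model invokes (the numbers below are generic order-of-magnitude assumptions, not values reported in the paper): the receding contact line pulls on a bead of radius r with a capillary force of order 2*pi*r*gamma, which competes with the total rupture force of the biotin-streptavidin bonds anchoring the bead, roughly N bonds at about 160 pN each.

    # Order-of-magnitude force balance at the receding wetting line.
    # All numbers are generic literature-scale assumptions, not values
    # reported in the paper.
    import math

    gamma = 0.072        # surface tension of water [N/m]
    r = 0.5e-6           # bead radius [m] (assumed 1 um diameter bead)
    f_bond = 160e-12     # single biotin-streptavidin rupture force [N] (~160 pN)

    f_capillary = 2.0 * math.pi * r * gamma   # capillary pull on the bead [N]
    n_bonds = f_capillary / f_bond            # bonds needed to resist the pull

    print(f"capillary force ~ {f_capillary:.2e} N")
    print(f"bonds needed    ~ {n_bonds:.0f}")

    On these assumed numbers, a single bond cannot hold a micron-sized bead against the contact line, but a bead anchored by on the order of a thousand bonds can, consistent with the picture that the collective biological binding force can compete with the capillary pull.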

    A comparative study of anomaly detection methods for gross error detection problems.

    The chemical industry requires highly accurate and reliable measurements to ensure smooth operation and effective monitoring of processing facilities. However, measured data inevitably contains errors from various sources. Traditionally, in flow systems, data reconciliation through mass balancing is applied to reduce error by estimating balanced flows. However, this approach can only handle random errors. For non-random errors (called gross errors, GEs), which are caused by measurement bias, instrument failures, or process leaks, among others, this approach returns incorrect results. In recent years, many gross error detection (GED) methods have been proposed by the research community. It is recognised that the basic principle of GED is a special case of the detection of outliers (or anomalies) in data analytics. With the development of Machine Learning (ML) research, patterns in the data can be discovered to provide effective detection of anomalous instances. In this paper, we present a comprehensive study of the application of ML-based Anomaly Detection Methods (ADMs) to the GED context on a number of synthetic datasets and compare the results with several established GED approaches. We also perform data transformation on the measurement data and compare the associated results to the original results, as well as investigate the effect of training size on detection performance. The One-class Support Vector Machine outperformed the other ADMs and five selected statistical tests for GED on Accuracy, F1 Score, and Overall Power, while the Interquartile Range (IQR) method obtained the best selectivity outcome among the top six ADMs and the five statistical tests. The results indicate that ADMs can potentially be applied to GED problems.
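
    A minimal sketch of the two detectors the abstract highlights, applied to a synthetic one-dimensional measurement stream; the data generation, contamination rate, and hyperparameters are assumptions for illustration, not the paper's experimental setup.

    # Sketch: one-class SVM and IQR rule as anomaly detectors for gross errors.
    # Synthetic data and hyperparameters are illustrative assumptions only.
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(42)
    clean = rng.normal(loc=100.0, scale=2.0, size=(500, 1))  # normal flow readings
    gross = rng.normal(loc=115.0, scale=2.0, size=(25, 1))   # biased (gross-error) readings
    data = np.vstack([clean, gross])

    # One-class SVM: fit on (mostly) clean training data; -1 marks outliers.
    ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(clean)
    svm_flags = ocsvm.predict(data) == -1

    # IQR rule: flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    iqr_flags = (data.ravel() < q1 - 1.5 * iqr) | (data.ravel() > q3 + 1.5 * iqr)

    print(f"OCSVM flagged {svm_flags.sum()} points, IQR flagged {iqr_flags.sum()} points")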

    A weighted ensemble of regression methods for gross error identification problem.

    In this study, we propose a new ensemble method to predict the magnitude of gross errors (GEs) in measurement data obtained from the hydrocarbon and stream processing industries. The proposed model consists of an ensemble of regressors (EoR) obtained by training different regression algorithms on the training data of measurements and their associated GEs. The predictions of the regressors are aggregated using a weighted combining method to obtain the final GE magnitude prediction. In order to search for the optimal combining weights, we modelled the search as an optimisation problem, minimising the difference between the GE predictions and the corresponding ground truths. We used a Genetic Algorithm (GA) to search for the optimal weight associated with each regressor. The experiments were conducted on synthetic measurement data generated from four popular systems from the literature. We first compared the performance of the proposed ensemble when using GA and Particle Swarm Optimisation (PSO), both nature-inspired optimisation algorithms, to search for the combining weights, showing that the ensemble performs better with GA. We then compared the performance of the proposed ensemble to those of two well-known weighted ensemble methods (Least Square and BEM) and two ensemble methods for regression problems (Random Forest and Gradient Boosting). The experimental results showed that although the proposed ensemble requires more computational time for training than the benchmark algorithms, it performed better than them on all experimental datasets.
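
    A compact sketch of the core idea: combining weights for an ensemble of regressors found by a genetic algorithm that minimises the validation error of the weighted prediction. The regressor choices, GA settings, and toy data are assumptions for illustration, not the paper's configuration.

    # Sketch: weighted ensemble of regressors with GA-optimised combining weights.
    # Regressors, GA settings, and data are illustrative assumptions only.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(400, 4))
    y = X[:, 0] ** 2 + np.sin(X[:, 1]) + rng.normal(0, 0.1, size=400)  # toy GE magnitudes
    X_tr, X_val, y_tr, y_val = X[:300], X[300:], y[:300], y[300:]

    # Train a heterogeneous ensemble of regressors.
    models = [LinearRegression(), DecisionTreeRegressor(max_depth=6),
              KNeighborsRegressor(n_neighbors=7)]
    preds = np.column_stack([m.fit(X_tr, y_tr).predict(X_val) for m in models])

    def fitness(w):
        """Negative MSE of the weighted combination (weights normalised to sum to 1)."""
        w = np.abs(w) / (np.abs(w).sum() + 1e-12)
        return -np.mean((preds @ w - y_val) ** 2)

    # A tiny GA: truncation selection, blend crossover, Gaussian mutation.
    pop = rng.uniform(0, 1, size=(40, len(models)))
    for gen in range(100):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-20:]]               # keep the best half
        mates = parents[rng.integers(0, 20, size=20)]
        children = 0.5 * (parents + mates)                    # blend crossover
        children += rng.normal(0, 0.05, size=children.shape)  # mutation
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(w) for w in pop])]
    best = np.abs(best) / np.abs(best).sum()
    print("combining weights:", np.round(best, 3))
    print("ensemble MSE:", -fitness(best))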

    Performance of Particle Tracking Using a Quantum Graph Neural Network

    The Large Hadron Collider (LHC) at the European Organisation for Nuclear Research (CERN) will be upgraded to further increase the instantaneous rate of particle collisions (luminosity), becoming the High Luminosity LHC. This increase in luminosity will yield many more detector hits (higher occupancy), and the resulting measurements will pose a challenge to track reconstruction algorithms, which are responsible for determining particle trajectories from those hits. This work explores the possibility of converting a novel Graph Neural Network model, which has proven itself on the track reconstruction task, into a Hybrid Graph Neural Network in order to benefit from the exponentially growing Hilbert space. Several Parametrized Quantum Circuits (PQC) are tested, and their performance is compared against the classical approach. We show that the hybrid model can perform similarly to the classical approach. We also present a future road map to further increase the performance of the current hybrid model.
    Comment: 6 pages, 11 figures, Basarim 2020 conference paper; updated trackml reference
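
    The abstract gives no circuit details, but the generic pattern for such a hybrid layer is: encode classical features as qubit rotations, apply a trainable parametrized circuit, and read out expectation values that feed the remaining classical network. A minimal illustrative PQC in PennyLane is sketched below; the qubit count, encoding, and gate layout are assumptions, not the circuits studied in the paper.

    # Sketch of a parametrized quantum circuit (PQC) layer in PennyLane.
    # Qubit count, encoding, and gate layout are illustrative assumptions,
    # not the circuits studied in the paper.
    import numpy as np
    import pennylane as qml

    n_qubits = 4
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def pqc_layer(features, weights):
        # Angle-encode classical node features onto the qubits.
        for i in range(n_qubits):
            qml.RY(features[i], wires=i)
        # Trainable rotations followed by an entangling CNOT ring.
        for i in range(n_qubits):
            qml.Rot(*weights[i], wires=i)
        for i in range(n_qubits):
            qml.CNOT(wires=[i, (i + 1) % n_qubits])
        # Expectation values become the layer's classical outputs.
        return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

    features = np.random.uniform(0, np.pi, n_qubits)          # stand-in node features
    weights = np.random.uniform(0, 2 * np.pi, (n_qubits, 3))  # trainable parameters
    print(pqc_layer(features, weights))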