66 research outputs found

    Experiments on bright field and dark field high energy electron imaging with thick target material

    Full text link
    Using a high energy electron beam for the imaging of high density matter with both high spatial-temporal and areal density resolution under extreme states of temperature and pressure is one of the critical challenges in high energy density physics. When a charged particle beam passes through an opaque target, the beam is scattered with a distribution that depends on the thickness of the material. By collecting the scattered beam either near or off axis, so-called bright field or dark field images can be obtained. Here we report on an electron radiography experiment using 45 MeV electrons from an S-band photo-injector, where scattered electrons, after interacting with a sample, are collected and imaged by a quadrupole imaging system. We achieved a spatial resolution of a few micrometers (about 4 micrometers) and a thickness resolution of about 10 micrometers for a silicon target of 300-600 micrometer thickness. With the addition of dark field images, captured by selecting electrons with large scattering angles, we show that more useful information can be obtained for determining external details such as outlines, boundaries and defects. Comment: 7 pages, 7 figures
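    For orientation, the thickness dependence of the scattering exploited above is commonly estimated with the Highland approximation for the RMS multiple-scattering angle; this is a standard textbook formula, not one taken from the paper.

```latex
% RMS plane-projected multiple-scattering angle (Highland approximation)
% x  : thickness of material traversed
% X0 : radiation length of the target material
% p, \beta c : momentum and velocity of the incident electron; z : charge number
\theta_0 \simeq \frac{13.6\,\mathrm{MeV}}{\beta c\, p}\, z\,
    \sqrt{\frac{x}{X_0}} \left[ 1 + 0.038 \ln\!\frac{x}{X_0} \right]
```

    Thicker regions therefore scatter electrons into wider angles, which is what the near-axis (bright field) and off-axis (dark field) collection separates.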

    An embedding technique to determine ττ backgrounds in proton-proton collision data

    Get PDF
    An embedding technique is presented to estimate standard model ττ backgrounds from data with minimal simulation input. In the data, the muons are removed from reconstructed μμ events and replaced with simulated tau leptons with the same kinematic properties. In this way, a set of hybrid events is obtained that does not rely on simulation except for the decay of the tau leptons. The challenges in describing the underlying event or the production of associated jets in the simulation are avoided. The technique described in this paper was developed for CMS. Its validation and the inherent uncertainties are also discussed. The demonstration of the performance of the technique is based on a sample of proton-proton collisions collected by CMS in 2017 at √s = 13 TeV corresponding to an integrated luminosity of 41.5 fb⁻¹. Peer reviewed.
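    As a rough illustration of the embedding procedure described above, the sketch below removes the muons from a recorded μμ event and overlays simulated tau decays with the same kinematics. The event structure and helper names are hypothetical and do not correspond to the actual CMS software.

```python
# Conceptual sketch of the embedding idea: keep the recorded event (underlying
# event, jets), drop the two muons, and insert simulated taus with identical
# kinematics. All classes and helpers here are illustrative placeholders.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Particle:
    pt: float
    eta: float
    phi: float
    pdg_id: int

@dataclass
class Event:
    particles: List[Particle] = field(default_factory=list)

def simulate_tau_decay(pt: float, eta: float, phi: float) -> List[Particle]:
    """Stand-in for a tau decay simulation seeded with the muon kinematics."""
    # A real workflow would run a decay generator plus detector simulation;
    # here we simply return a tau carrying the original kinematics.
    return [Particle(pt, eta, phi, pdg_id=15)]

def embed_taus(data_event: Event) -> Event:
    muons = [p for p in data_event.particles if abs(p.pdg_id) == 13]
    rest  = [p for p in data_event.particles if abs(p.pdg_id) != 13]
    hybrid = Event(particles=list(rest))
    for mu in muons[:2]:                      # replace the selected muon pair
        hybrid.particles.extend(simulate_tau_decay(mu.pt, mu.eta, mu.phi))
    return hybrid
```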

    Performance of missing transverse momentum reconstruction in proton-proton collisions at √s = 13 TeV using the CMS detector

    Get PDF
    The performance of missing transverse momentum (p⃗_T^miss) reconstruction algorithms for the CMS experiment is presented, using proton-proton collisions at a center-of-mass energy of 13 TeV, collected at the CERN LHC in 2016. The data sample corresponds to an integrated luminosity of 35.9 fb⁻¹. The results include measurements of the scale and resolution of p⃗_T^miss, and detailed studies of events identified with anomalous p⃗_T^miss. The performance is presented of a p⃗_T^miss reconstruction algorithm that mitigates the effects of multiple proton-proton interactions, using the "pileup per particle identification" method. The performance is shown of an algorithm used to estimate the compatibility of the reconstructed p⃗_T^miss with the hypothesis that it originates from resolution effects. Peer reviewed.
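    For background, the missing transverse momentum is the negative vector sum of the transverse momenta of all reconstructed particles. The toy snippet below only illustrates that definition; it is not CMS software.

```python
# p_T^miss = - sum_i p_T,i  (vector sum in the transverse plane)
import math

def met(candidates):
    """Return (magnitude, phi) of the missing transverse momentum
    from a list of (pt, phi) candidates."""
    px = -sum(pt * math.cos(phi) for pt, phi in candidates)
    py = -sum(pt * math.sin(phi) for pt, phi in candidates)
    return math.hypot(px, py), math.atan2(py, px)

# Toy example: three reconstructed candidates with (pt in GeV, phi in rad)
print(met([(45.0, 0.1), (30.0, 2.9), (12.0, -1.4)]))
```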

    PRESC²: Efficient self-reconfiguration of cache strategies for elastic caching platforms

    No full text
    Elastic caching platforms (ECPs) play an important role in accelerating the performance of Web applications. Several cache strategies have been proposed for ECPs to manage data access and distribution while maintaining service availability. In our earlier research, we demonstrated that there is no "one-fits-all" strategy for heterogeneous scenarios and that the selection of the optimal strategy depends on workload patterns, cluster size and the number of concurrent users. In this paper, we present a new reconfiguration framework named PRESC². It applies machine learning approaches to determine an optimal cache strategy and supports online optimization of the performance model through trace-driven simulation or semi-supervised classification. We also propose a robust cache-entry synchronization algorithm and a new optimization mechanism to further lower the adaptation costs. In our experiments, we find that PRESC² improves the elasticity of ECPs and brings large performance gains compared with static configurations. © 2013 Springer-Verlag Wien.
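    As a hedged sketch of what the semi-supervised selection step could look like, labeled runs (workload features mapped to the best-performing strategy) and unlabeled production observations can feed a semi-supervised classifier. The feature layout and the use of scikit-learn's LabelPropagation are assumptions for illustration, not the actual PRESC² model.

```python
# Illustrative semi-supervised strategy selection (assumed setup).
# Features per sample: [requests/s, cluster size, concurrent users];
# labels: index of the best cache strategy, -1 where it is unknown.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

X = np.array([
    [1200,  4,  300],   # labeled runs, e.g. from trace-driven simulation
    [8000, 16, 2500],
    [ 500,  2,  100],
    [7600, 12, 2300],   # unlabeled production observations
    [1100,  4,  280],
])
y = np.array([0, 1, 0, -1, -1])   # -1 marks unlabeled samples

model = LabelPropagation(kernel="knn", n_neighbors=2).fit(X, y)
print(model.transduction_)                 # inferred strategy for every sample
print(model.predict([[6000, 10, 2000]]))   # strategy for a new workload point
```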

    VM image update notification mechanism based on pub/sub paradigm in cloud

    No full text
    A virtual machine image encapsulates the whole software stack, including the operating system, middleware, user applications and other software products. A failure in any layer of the software stack is treated as an image failure. A virtual machine image with potential failures can, however, be converted into a template and spread widely by means of template replication; this paper refers to this phenomenon as "image failure propagation". Patching is a widely adopted way to resolve software failures, but virtual machine image patches are difficult to deliver to end users in a cloud computing environment because of its openness and multi-tenancy. This paper describes the image failure propagation model for the first time and proposes a notification mechanism based on the pub/sub computing paradigm to address the patch delivery problem.
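    A minimal sketch of the pub/sub idea, assuming each base image publishes patch notices to a topic that derived templates and running instances subscribe to. The broker, topic names and message fields are hypothetical, not the paper's implementation.

```python
# Toy in-memory pub/sub broker: a patched base image publishes to its topic,
# and every template or VM derived from that image gets notified.
from collections import defaultdict
from typing import Callable, Dict, List

class Broker:
    def __init__(self) -> None:
        self._subs: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subs[topic]:
            handler(message)

broker = Broker()

# A template derived from base image "base-web-v1" subscribes to its patch topic.
broker.subscribe("image/base-web-v1/patches",
                 lambda msg: print("template-42 applies", msg["patch_id"]))

# The image maintainer publishes a patch notice once a failure is fixed.
broker.publish("image/base-web-v1/patches", {"patch_id": "patch-001"})
```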

    Robust and efficient quantum private comparison of equality with collective detection over collective-noise channels

    No full text
    We present a protocol for quantum private comparison of equality (QPCE) with the help of a semi-honest third party (TP). Instead of employing entanglement, we use single photons to achieve the comparison in this protocol. By utilizing a collective eavesdropping detection strategy, our protocol has the advantages of higher qubit efficiency and lower implementation cost. In addition to this protocol, we further introduce three robust versions which are immune to collective dephasing noise, collective-rotation noise and all types of unitary collective noise, respectively. Finally, we show that our protocols are secure against attacks from both outside eavesdroppers and inside participants by using theorems on quantum operation discrimination. © 2013 Science China Press and Springer-Verlag Berlin Heidelberg.
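    For context, immunity to collective dephasing noise is usually obtained with a decoherence-free encoding of logical qubits; the block below shows the standard construction, which is not necessarily the exact encoding used in this paper.

```latex
% Collective dephasing acts identically on every qubit:
%   U_{dp}|0\rangle = |0\rangle, \quad U_{dp}|1\rangle = e^{i\varphi}|1\rangle
% Encode one logical qubit into two physical qubits:
%   |0_L\rangle = |0\rangle|1\rangle, \quad |1_L\rangle = |1\rangle|0\rangle
% Both logical states then acquire only the same global phase:
(U_{dp}\otimes U_{dp})\,|0_L\rangle = e^{i\varphi}\,|0_L\rangle, \qquad
(U_{dp}\otimes U_{dp})\,|1_L\rangle = e^{i\varphi}\,|1_L\rangle
```

    Any superposition of the logical states is therefore preserved up to an irrelevant global phase, which is what makes such encodings robust against this noise type.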

    Workload-aware anomaly detection for web applications

    No full text
    The failure of Web applications often affects a large population of customers and leads to severe economic loss. Anomaly detection is essential for improving the reliability of Web applications. Current approaches model correlations among metrics and detect anomalies when the correlations are broken. However, dynamic workloads cause the metric correlations to change over time, and modeling the various metric correlations is difficult in complex Web applications. This paper addresses these problems and proposes an online anomaly detection approach for Web applications. We present an incremental clustering algorithm for training workload patterns online, and employ the local outlier factor (LOF) within the recognized workload pattern to detect anomalies. In addition, we locate the anomalous metrics with Student's t-test. We evaluated our approach on a testbed running the TPC-W industry-standard benchmark. The experimental results show that our approach is able to (1) capture workload fluctuations accurately, (2) detect typical faults effectively and (3) outperform two contemporary approaches in accuracy. © 2013 Elsevier Inc.
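    The sketch below illustrates the two steps described above with assumed metric names and synthetic data: flag anomalous samples with the local outlier factor inside one recognized workload pattern, then use Student's t-test to point at the metrics that deviate. It is not the paper's implementation.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Rows: monitoring samples of one workload pattern; columns: metrics
# (cpu_util, resp_time, queue_len) -- names assumed for illustration.
normal = rng.normal(loc=[50.0, 120.0, 10.0], scale=[5.0, 10.0, 2.0], size=(200, 3))
faulty = rng.normal(loc=[50.0, 300.0, 10.0], scale=[5.0, 10.0, 2.0], size=(10, 3))
samples = np.vstack([normal, faulty])

# Step 1: LOF labels outliers as -1 within this workload pattern.
labels = LocalOutlierFactor(n_neighbors=20).fit_predict(samples)
anomalous, reference = samples[labels == -1], samples[labels == 1]

# Step 2: a per-metric t-test localizes which metric behaves abnormally.
for i, name in enumerate(["cpu_util", "resp_time", "queue_len"]):
    t, p = ttest_ind(anomalous[:, i], reference[:, i], equal_var=False)
    if p < 0.01:
        print(f"metric {name} looks anomalous (p={p:.2e})")
```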