
    Experiments on bright field and dark field high energy electron imaging with thick target material

    Using a high energy electron beam to image high density matter with both high spatio-temporal and areal density resolution under extreme states of temperature and pressure is one of the critical challenges in high energy density physics. When a charged particle beam passes through an opaque target, it is scattered with a distribution that depends on the thickness of the material. By collecting the scattered beam either near or off axis, so-called bright field or dark field images can be obtained. Here we report on an electron radiography experiment using 45 MeV electrons from an S-band photo-injector, in which electrons scattered by a sample are collected and imaged by a quadrupole imaging system. We achieved a spatial resolution of a few micrometers (about 4 micrometers) and a thickness resolution of about 10 micrometers for a silicon target 300-600 micrometers thick. With the addition of dark field images, captured by selecting electrons with large scattering angles, we show that more information about external details such as outlines, boundaries and defects can be obtained. (Comment: 7 pages, 7 figures)
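    The bright-field/dark-field distinction described above amounts to an angular selection on the scattered electrons. A minimal sketch of that selection (illustrative only, not the experiment's analysis code; the 10 mrad aperture cutoff is a hypothetical value):

```python
# Illustrative sketch: split scattered electrons into bright-field
# (small-angle) and dark-field (large-angle) populations.
# The cutoff below is a hypothetical aperture, chosen for illustration.
CUTOFF_MRAD = 10.0

def classify(scatter_angles_mrad):
    """Split a list of scattering angles (mrad) into bright- and dark-field sets."""
    bright = [a for a in scatter_angles_mrad if a <= CUTOFF_MRAD]
    dark = [a for a in scatter_angles_mrad if a > CUTOFF_MRAD]
    return bright, dark

bright, dark = classify([2.0, 5.5, 12.0, 30.0])
print(len(bright), len(dark))  # 2 2
```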

    Effect of Fluorosis on liver cells of VC deficient and wild type mice

    ABSTRACT For decades, mice and other rodents have been used for studies of oxidative stress and related effects such as those of fluoride. Rodents normally synthesize their own vitamin C (VC) owing to a key enzyme in ascorbic acid synthesis, L-gulono-γ-lactone oxidase (Gulo), whereas humans cannot synthesize VC because most of the GULO gene has been deleted. The spontaneous fracture (sfx) mouse recently emerged as a model for the study of VC deficiency. We investigated the effect of fluoride on liver cells from wild-type Balb/c and sfx mice. We found that SOD, GPx and CAT activities were reduced in both wild-type and sfx mice; however, the reduction in sfx cells was greater than that in Balb/c cells. In addition, while MDA increased in both cell types, the increase in sfx cells was greater than that in Balb/c cells. The gene networks of Sod, Gpx and Cat in the livers of humans and mice also differ. Our study suggests that the reaction to fluoride in vitamin C deficient mice might differ from that in wild-type mice.

    An embedding technique to determine ττ backgrounds in proton-proton collision data

    An embedding technique is presented to estimate standard model ττ backgrounds from data with minimal simulation input. In the data, the muons are removed from reconstructed μμ events and replaced with simulated tau leptons with the same kinematic properties. In this way, a set of hybrid events is obtained that does not rely on simulation except for the decay of the tau leptons. The challenges in describing the underlying event or the production of associated jets in the simulation are avoided. The technique described in this paper was developed for CMS. Its validation and the inherent uncertainties are also discussed. The demonstration of the performance of the technique is based on a sample of proton-proton collisions collected by CMS in 2017 at √s = 13 TeV, corresponding to an integrated luminosity of 41.5 fb⁻¹.
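    The core substitution step of the embedding idea can be sketched as follows (a simplified stand-in, not CMS software; `Particle` and the event layout are hypothetical, and real embedding operates on full four-vectors and detector records):

```python
from dataclasses import dataclass, replace

# Simplified stand-in for a reconstructed particle; not a CMS data structure.
@dataclass
class Particle:
    pdg_id: int   # PDG code: +-13 = muon, +-15 = tau
    pt: float
    eta: float
    phi: float

def embed_taus(event_particles):
    """Replace each muon (|pdg_id| == 13) with a tau lepton (|pdg_id| == 15)
    carrying the same kinematic properties, leaving all else untouched."""
    out = []
    for p in event_particles:
        if abs(p.pdg_id) == 13:
            sign = 1 if p.pdg_id > 0 else -1
            out.append(replace(p, pdg_id=sign * 15))  # same pt/eta/phi
        else:
            out.append(p)
    return out
```

    The hybrid event keeps the recorded underlying event and associated jets; only the embedded tau decays come from simulation.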

    Performance of missing transverse momentum reconstruction in proton-proton collisions at root s=13 TeV using the CMS detector

    The performance of missing transverse momentum (p_T^miss) reconstruction algorithms for the CMS experiment is presented, using proton-proton collisions at a center-of-mass energy of 13 TeV, collected at the CERN LHC in 2016. The data sample corresponds to an integrated luminosity of 35.9 fb⁻¹. The results include measurements of the scale and resolution of p_T^miss, and detailed studies of events identified with anomalous p_T^miss. The performance is presented of a p_T^miss reconstruction algorithm that mitigates the effects of multiple proton-proton interactions, using the "pileup per particle identification" method. The performance is shown of an algorithm used to estimate the compatibility of the reconstructed p_T^miss with the hypothesis that it originates from resolution effects.
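    The quantity p_T^miss is defined as the negative vector sum of the transverse momenta of all reconstructed particles in the event. A minimal sketch of that definition (illustrative values, not CMS reconstruction code):

```python
import math

# p_T^miss: negative vector sum of transverse momenta of all particles.
def missing_pt(particles):
    """particles: iterable of (pt, phi) pairs. Returns (met, met_phi)."""
    px = -sum(pt * math.cos(phi) for pt, phi in particles)
    py = -sum(pt * math.sin(phi) for pt, phi in particles)
    return math.hypot(px, py), math.atan2(py, px)

# Two back-to-back particles with unequal pt leave a 20 GeV imbalance.
met, met_phi = missing_pt([(50.0, 0.0), (30.0, math.pi)])
print(round(met, 1))  # 20.0
```

    A perfectly balanced event gives p_T^miss near zero; genuine invisible particles (or resolution effects) make it nonzero.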

    Mining both frequent and rare episodes in multiple data streams

    In this paper, we describe a method for mining both frequent and rare episodes in multiple data streams. The main issues are episode mining and the processing of relationships between data streams. We therefore present a mining algorithm together with two dedicated handling mechanisms. We propose the concept of alternative support for discovering frequent and rare episodes, and define the semantic similarity of event sequences for analyzing the relationships between data streams. The algorithm extracts basic episode information from each data stream and stores it in episode sets. It then analyzes the relationships between episode sets, merges similar sets, and mines episode rules from the merged sets using alternative support and confidence. Our experiments show that the algorithm successfully processes multiple data streams and mines both frequent and rare episodes. These results may lead to a feasible solution for frequent and rare episode mining in multiple data streams. © 2013 IEEE
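    The idea of screening episodes by support against two thresholds can be sketched as follows (a heavily simplified illustration; the paper's "alternative support" and episode sets are richer, and the thresholds here are hypothetical):

```python
from collections import Counter

# Illustrative screening: an episode is "frequent" if its support reaches
# freq_thr, "rare" if it occurs but no more than rare_thr times.
# Thresholds are hypothetical, chosen for the example only.
def screen_episodes(episode_stream, freq_thr=3, rare_thr=1):
    counts = Counter(episode_stream)
    frequent = {e for e, c in counts.items() if c >= freq_thr}
    rare = {e for e, c in counts.items() if 0 < c <= rare_thr}
    return frequent, rare

freq, rare = screen_episodes(["AB", "AB", "AB", "CD", "AB", "EF"])
print(sorted(freq), sorted(rare))  # ['AB'] ['CD', 'EF']
```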

    PRESC<sup>2</sup>: Efficient self-reconfiguration of cache strategies for elastic caching platforms

    Elastic caching platforms (ECPs) play an important role in accelerating the performance of Web applications. Several cache strategies have been proposed for ECPs to manage data access and distribution while maintaining service availability. In our earlier research, we demonstrated that there is no "one-fits-all" strategy for heterogeneous scenarios: the optimal strategy depends on workload patterns, cluster size and the number of concurrent users. In this paper, we present a new reconfiguration framework named PRESC2. It applies machine learning approaches to determine an optimal cache strategy and supports online optimization of the performance model through trace-driven simulation or semi-supervised classification. We also propose a robust cache-entry synchronization algorithm and a new optimization mechanism to further lower the adaptation costs. In our experiments, PRESC2 improves the elasticity of ECPs and brings substantial performance gains compared with static configurations. © 2013 Springer-Verlag Wien
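    The strategy-selection step can be sketched as a classifier over workload features. The sketch below uses a 1-nearest-neighbour rule over labelled past traces; the features (read ratio, cluster size, concurrent users), labels, and example data are all hypothetical stand-ins for the paper's learned performance model:

```python
import math

# Hypothetical labelled traces: (read_ratio, cluster_size, users) -> strategy.
history = [
    ((0.9, 4, 100), "replication"),
    ((0.5, 16, 5000), "partitioning"),
    ((0.7, 8, 1000), "hybrid"),
]

def pick_strategy(features):
    """Select the cache strategy of the most similar past workload (1-NN)."""
    return min(history, key=lambda h: math.dist(h[0], features))[1]

print(pick_strategy((0.85, 6, 200)))  # replication
```

    A real model would normalize features and use a stronger learner; the point is only that strategy choice becomes a supervised prediction from workload characteristics.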

    Hippo: An enhancement of pipeline-aware in-memory caching for HDFS

    In the age of big data, distributed computing frameworks tend to coexist and collaborate in pipelines under a single scheduler. While a variety of techniques for reducing I/O latency have been proposed, few target whole-pipeline performance. This paper proposes memory management logic called 'Hippo', which targets distributed systems and in particular 'pipelined' applications that may span differing big data frameworks. Though individual frameworks may have internal memory management primitives, Hippo provides a generic framework that works agnostic of these high-level operations. To increase the hit ratio of the in-memory cache, this paper discusses the granularity of caching and how Hippo leverages the job dependency graph to make memory retention and pre-fetching decisions. Our evaluations demonstrate that job dependency information is essential to improving cache performance, and that a global cache policy maker, in most cases, significantly outperforms explicit caching by users.
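    The retention decision driven by a job dependency graph can be sketched as follows (an illustrative toy, not Hippo's implementation; the graph shape, job names, and dataset names are hypothetical):

```python
# Hypothetical job dependency information: which datasets each job reads.
deps = {
    "etl": ["raw"],
    "train": ["features"],
    "report": ["features", "metrics"],
}

def datasets_to_retain(pending_jobs):
    """Retain every cached dataset still read by a not-yet-run job;
    everything else is a candidate for eviction."""
    needed = set()
    for job in pending_jobs:
        needed.update(deps.get(job, []))
    return needed

print(sorted(datasets_to_retain(["train", "report"])))  # ['features', 'metrics']
```

    Pre-fetching follows the same graph in the other direction: datasets read by soon-to-run jobs are loaded before those jobs start.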

    Workload-aware anomaly detection for web applications

    The failure of Web applications often affects a large population of customers and leads to severe economic loss. Anomaly detection is essential for improving the reliability of Web applications. Current approaches model correlations among metrics and detect anomalies when the correlations are broken. However, dynamic workloads cause the metric correlations to change over time. Moreover, modeling the various metric correlations is difficult in complex Web applications. This paper addresses these problems and proposes an online anomaly detection approach for Web applications. We present an incremental clustering algorithm for training workload patterns online, and employ the local outlier factor (LOF) within the recognized workload pattern to detect anomalies. In addition, we locate the anomalous metrics with Student's t-test. We evaluated our approach on a testbed running the TPC-W industry-standard benchmark. The experimental results show that our approach is able to (1) capture workload fluctuations accurately, (2) detect typical faults effectively, and (3) outperform two contemporary approaches in accuracy. © 2013 Elsevier Inc
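    The two-stage idea, matching the current workload to a learned pattern and then testing metrics within that pattern, can be sketched as follows. This is a simplified illustration: a distance-to-centroid z-score test stands in for the paper's local outlier factor, and all pattern centroids, baselines, and numbers are hypothetical:

```python
import math

# Hypothetical workload-pattern centroids (requests/s).
patterns = {"low": (100.0,), "high": (5000.0,)}

def detect(workload, metric, baselines, thr=3.0):
    """Stage 1: assign the nearest workload pattern.
    Stage 2: flag the metric as anomalous if it deviates from that
    pattern's baseline by more than thr standard deviations.
    baselines: pattern -> (mean, std) of the metric under that pattern."""
    pattern = min(patterns, key=lambda p: math.dist(patterns[p], (workload,)))
    mean, std = baselines[pattern]
    return pattern, abs(metric - mean) / std > thr

baselines = {"low": (50.0, 5.0), "high": (400.0, 40.0)}
print(detect(4800.0, 900.0, baselines))  # ('high', True)
```

    Conditioning the test on the workload pattern is what keeps a heavy-but-normal load from being flagged as a fault.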