44 research outputs found

    Pseudorabies Virus Induces Viability Changes and Oxidative Stress in Swine Testis Cell-Line

    In this study, we evaluated the association between pseudorabies virus (PRV)-induced viability changes and oxidative stress in in vitro cultivated swine testis (ST) cells. Cells were infected with PRV at a multiplicity of infection (MOI) of 1 TCID50 per cell and examined at 2, 12, 24, 36 and 48 h of culture. The results suggest a complex relationship between cell viability and oxidative stress during PRV infection. In the early stages of infection, cell viability was higher than in the control group and the state of cellular oxidative stress remained relatively stable. After 24 h, cell viability began to decrease, the amount of cellular malondialdehyde in ST cells increased significantly, and the activities of superoxide dismutase and catalase decreased significantly (P < 0.05). Meanwhile, rising concentrations of cellular hydrogen peroxide were detected prior to the changes in cell viability and oxidative stress. In conclusion, PRV infection of ST cells leads to oxidative stress, and this stress could play a crucial role in cell viability as infection progresses.

    CMS physics technical design report: Addendum on high density QCD with heavy ions

    Peer reviewed

    An embedding technique to determine ττ backgrounds in proton-proton collision data

    An embedding technique is presented to estimate standard model ττ backgrounds from data with minimal simulation input. In the data, the muons are removed from reconstructed μμ events and replaced with simulated τ leptons with the same kinematic properties. In this way, a set of hybrid events is obtained that does not rely on simulation except for the decay of the τ leptons. The challenges in describing the underlying event or the production of associated jets in the simulation are avoided. The technique described in this paper was developed for CMS. Its validation and the inherent uncertainties are also discussed. The demonstration of the performance of the technique is based on a sample of proton-proton collisions collected by CMS in 2017 at √s = 13 TeV, corresponding to an integrated luminosity of 41.5 fb⁻¹. Peer reviewed.
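    To illustrate the core idea of the embedding, the toy sketch below replaces the muons of a recorded μμ event with simulated τ leptons carrying the same kinematics and splices their decay products back into the event. It is not CMS software; the Particle structure, the PDG-id convention for selecting muons, and the simulate_tau_decay callback are illustrative assumptions.

```python
# Toy sketch of the embedding idea (not CMS software): take the muon
# kinematics from a recorded mu-mu event, replace each muon by a simulated
# tau with identical kinematics, and splice the tau decay products back
# into the rest of the event to form a hybrid event.
from dataclasses import dataclass, replace
from typing import Callable, List

@dataclass
class Particle:
    pdg_id: int   # PDG code: +/-13 for muons, +/-15 for taus (assumed convention)
    pt: float
    eta: float
    phi: float

def embed_taus(event: List[Particle],
               simulate_tau_decay: Callable[[Particle], List[Particle]]) -> List[Particle]:
    muons = [p for p in event if abs(p.pdg_id) == 13]
    rest = [p for p in event if abs(p.pdg_id) != 13]
    hybrid = list(rest)  # underlying event and associated jets are kept from data
    for mu in muons:
        # A tau with the same kinematics as the removed muon, then its decay products.
        tau = replace(mu, pdg_id=15 if mu.pdg_id > 0 else -15)
        hybrid.extend(simulate_tau_decay(tau))
    return hybrid
```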

    Performance of missing transverse momentum reconstruction in proton-proton collisions at root s=13 TeV using the CMS detector

    The performance of missing transverse momentum (p_T^miss) reconstruction algorithms for the CMS experiment is presented, using proton-proton collisions at a center-of-mass energy of 13 TeV, collected at the CERN LHC in 2016. The data sample corresponds to an integrated luminosity of 35.9 fb⁻¹. The results include measurements of the scale and resolution of p_T^miss, and detailed studies of events identified with anomalous p_T^miss. The performance is presented of a p_T^miss reconstruction algorithm that mitigates the effects of multiple proton-proton interactions, using the "pileup per particle identification" method. The performance is shown of an algorithm used to estimate the compatibility of the reconstructed p_T^miss with the hypothesis that it originates from resolution effects. Peer reviewed.
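    As a reminder of the quantity being reconstructed, the short sketch below computes the missing transverse momentum as the negative vector sum of the transverse momenta of all reconstructed particle candidates. This is the textbook definition only, not the CMS reconstruction algorithm; the (pt, phi) tuple representation of candidates is an assumption for illustration.

```python
# Minimal sketch of the missing transverse momentum definition:
# p_T^miss = - sum of the transverse momentum vectors of all candidates.
import math
from typing import Iterable, Tuple

def missing_pt(candidates: Iterable[Tuple[float, float]]) -> Tuple[float, float]:
    """candidates: iterable of (pt, phi) for reconstructed particle candidates.
    Returns (magnitude of p_T^miss, its azimuthal angle phi)."""
    cands = list(candidates)
    px = -sum(pt * math.cos(phi) for pt, phi in cands)
    py = -sum(pt * math.sin(phi) for pt, phi in cands)
    return math.hypot(px, py), math.atan2(py, px)
```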

    Workload-aware anomaly detection for web applications

    The failure of Web applications often affects a large population of customers and leads to severe economic loss. Anomaly detection is essential for improving the reliability of Web applications. Current approaches model correlations among metrics and detect anomalies when the correlations are broken. However, dynamic workloads cause the metric correlations to change over time. Moreover, modeling the various metric correlations is difficult in complex Web applications. This paper addresses these problems and proposes an online anomaly detection approach for Web applications. We present an incremental clustering algorithm for training workload patterns online, and employ the local outlier factor (LOF) within the recognized workload pattern to detect anomalies. In addition, we locate the anomalous metrics with the Student's t-test method. We evaluated our approach on a testbed running the TPC-W industry-standard benchmark. The experimental results show that our approach is able to (1) capture workload fluctuations accurately, (2) detect typical faults effectively, and (3) achieve higher accuracy than two contemporary approaches. © 2013 Elsevier Inc.
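    A minimal sketch of the detection and localization steps is given below, assuming the metric samples have already been grouped by recognized workload pattern. The incremental clustering from the paper is not reproduced; scikit-learn's LocalOutlierFactor stands in for the LOF computation and SciPy's Student's t-test for locating anomalous metrics, with illustrative function names and thresholds.

```python
# Sketch: LOF-based anomaly detection within one workload pattern, then a
# per-metric Student's t-test to locate which metrics behave anomalously.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.neighbors import LocalOutlierFactor

def detect_anomalies(metrics_in_pattern: np.ndarray, n_neighbors: int = 20) -> np.ndarray:
    """metrics_in_pattern: (n_samples, n_metrics) observations sharing one
    workload pattern. Returns a boolean mask marking anomalous samples."""
    lof = LocalOutlierFactor(n_neighbors=n_neighbors)
    labels = lof.fit_predict(metrics_in_pattern)  # -1 marks outliers
    return labels == -1

def locate_anomalous_metrics(normal: np.ndarray, anomalous: np.ndarray,
                             alpha: float = 0.05) -> np.ndarray:
    """Flag metric columns whose distributions differ significantly between
    normal and anomalous samples (Student's t-test per metric)."""
    _, pvals = ttest_ind(normal, anomalous, axis=0)
    return np.where(pvals < alpha)[0]
```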

    VM image update notification mechanism based on pub/sub paradigm in cloud

    A virtual machine image encapsulates the whole software stack, including the operating system, middleware, user applications and other software products. A failure occurring in any layer of the software stack is treated as an image failure. A virtual machine image with potential failures can be converted to a template and spread widely by means of template replication; this paper refers to this phenomenon as "image failure propagation". Patching is a widely adopted solution for resolving software failures. Nevertheless, virtual machine image patches are difficult to deliver to end users in a cloud computing environment because of its openness and multi-tenancy. This paper describes the image failure propagation model for the first time and proposes a notification mechanism based on the pub/sub computing paradigm to address the patch delivery problem.
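    The sketch below illustrates the pub/sub idea in its simplest form, not the mechanism from the paper: the image repository publishes a patch notification on a topic keyed by the template image, and every tenant whose VM was cloned from that template is notified. The class and topic names and the example patch URL are hypothetical.

```python
# Illustrative in-process pub/sub broker for image patch notifications.
from collections import defaultdict
from typing import Callable, Dict, List

Handler = Callable[[str, str], None]

class ImagePatchBroker:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Handler]] = defaultdict(list)

    def subscribe(self, template_id: str, handler: Handler) -> None:
        """A tenant registers interest in patches for a template image."""
        self._subscribers[template_id].append(handler)

    def publish_patch(self, template_id: str, patch_url: str) -> None:
        """The repository announces a patch; all subscribed tenants are notified."""
        for handler in self._subscribers[template_id]:
            handler(template_id, patch_url)

# Example: a tenant subscribes to the template its VMs were cloned from.
broker = ImagePatchBroker()
broker.subscribe("ubuntu-web-v1", lambda tid, url: print(f"patch for {tid}: {url}"))
broker.publish_patch("ubuntu-web-v1", "https://example.org/patches/ubuntu-web-v1.diff")
```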

    Detecting performance anomaly with correlation analysis for Internetware

    Internetware has become an emerging software paradigm for providing Internet services. Performance anomalies in Internetware services not only affect user experience, but also cause severe economic loss to service providers. Diagnosing performance anomalies has become one of the keys to improving the quality of service (QoS) of Internetware. Existing approaches create a system model to predict performance; the prediction from the model is then compared with the observation, and a significant deviation may signal the occurrence of a performance anomaly. However, these approaches require domain knowledge and parameterization effort. Moreover, dynamic workloads affect the accuracy of performance prediction. To address these issues, we propose a correlation-analysis-based approach to detecting performance anomalies in Internetware. We use kernel canonical correlation analysis (KCCA) to model the correlation between workloads and performance based on monitoring data. Furthermore, we detect anomalous correlation coefficients with XmR control charts, which flag anomalous coefficients and trends without a priori knowledge. Finally, we adopt a feature selection method (Relief) to locate the anomalous metrics. We evaluated our approach on a testbed running the TPC-W industry-standard benchmark. The experimental results show that our approach is able to capture performance anomalies and locate the metrics related to the cause of the anomaly. © 2013 Science China Press and Springer-Verlag Berlin Heidelberg.
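    The sketch below shows one way an XmR (individuals and moving-range) control chart can flag anomalous correlation coefficients: points outside the center line plus or minus 2.66 times the mean moving range are treated as out of control. The KCCA step that produces the coefficients is not reproduced, and the function is an illustrative assumption rather than the authors' implementation.

```python
# Sketch of the individuals-chart part of an XmR control chart applied to a
# time series of correlation coefficients.
import numpy as np

def xmr_out_of_control(coefficients: np.ndarray) -> np.ndarray:
    """coefficients: 1-D series of correlation coefficients over time.
    Returns a boolean mask of points outside the individuals-chart limits."""
    x = np.asarray(coefficients, dtype=float)
    moving_range = np.abs(np.diff(x))        # absolute difference of consecutive points
    center = x.mean()
    mr_bar = moving_range.mean()
    ucl = center + 2.66 * mr_bar             # 2.66 is the standard individuals-chart constant
    lcl = center - 2.66 * mr_bar
    return (x > ucl) | (x < lcl)
```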

    PRESC²: Efficient self-reconfiguration of cache strategies for elastic caching platforms

    Elastic caching platforms (ECPs) play an important role in accelerating the performance of Web applications. Several cache strategies have been proposed for ECPs to manage data access and distribution while maintaining service availability. In our earlier research, we demonstrated that there is no "one-fits-all" strategy for heterogeneous scenarios and that the selection of the optimal strategy is related to workload patterns, cluster size and the number of concurrent users. In this paper, we present a new reconfiguration framework named PRESC². It applies machine learning approaches to determine an optimal cache strategy and supports online optimization of the performance model through trace-driven simulation or semi-supervised classification. We also propose a robust cache-entry synchronization algorithm and a new optimization mechanism to further lower the adaptation costs. In our experiments, we find that PRESC² improves the elasticity of ECPs and brings significant performance gains compared with static configurations. © 2013 Springer-Verlag Wien.
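    As a rough illustration of strategy selection, the sketch below assumes the performance model is a classifier mapping workload features (read ratio, cluster size, concurrent users) to the best-performing cache strategy observed in traces. The feature names, strategy labels and training data are invented for illustration and are not taken from the paper.

```python
# Sketch: learn a mapping from workload characteristics to the cache
# strategy that performed best in (hypothetical) traces, then query it.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [read_ratio, cluster_size, concurrent_users]
X_train = np.array([[0.9, 4, 200], [0.5, 8, 1000], [0.2, 16, 5000]])
y_train = ["replication", "partitioning", "partitioning"]  # best strategy per trace

model = DecisionTreeClassifier().fit(X_train, y_train)

def choose_strategy(read_ratio: float, cluster_size: int, users: int) -> str:
    """Return the cache strategy predicted to perform best for this scenario."""
    return model.predict([[read_ratio, cluster_size, users]])[0]

print(choose_strategy(0.85, 4, 300))  # e.g. "replication"
```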

    Urban-scale SALSCS, Part II: A Parametric Study of System Performance

    Following the experimental and numerical investigations of the Xi'an demonstration unit in Part I, Part II presents a parametric study of the proposed urban-scale SALSCS using the validated numerical model. The study aims to understand the influence of different variables on system performance, namely the solar irradiation, ambient-air temperature and ground temperature at a 2-m depth as ambient parameters, and the inlet and outlet heights of the solar collector, collector width, tower width and tower height as geometric parameters. The effect of the pressure drop across the filters on the system flow rate has been evaluated as well. The parameters that considerably influence the system performance have been identified.