
    Detecting Time Correlations ...

    No full text
    ... extracting time-correlations among multiple time-series data streams is described. The time-correlations capture the relationships and dependencies among the streams. Statistical techniques and aggregation functions are applied to reduce the search space. The proposed method can detect time-correlations both between a pair of time-series data streams and among multiple streams. The generated rules describe how changes in the values of one set of streams influence the values in another set. These reusable rules can be stored digitally and fed into various analysis tools, such as forecasting, simulation, or impact-analysis tools, for further analysis of the data.
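    The pairwise case can be illustrated with a simple lagged-correlation scan; this is a sketch under assumptions, not the paper's actual rule-generation algorithm, and `best_lag_correlation` is a hypothetical helper name:

    ```python
    import numpy as np

    def best_lag_correlation(x, y, max_lag=10):
        """Scan candidate lags and return the one where x best predicts y.

        For each lag, correlates x[t] against y[t + lag] over the
        overlapping portion of the two series.
        """
        x, y = np.asarray(x, float), np.asarray(y, float)
        best_lag, best_r = 0, 0.0
        for lag in range(max_lag + 1):
            xs = x[:len(x) - lag] if lag else x
            ys = y[lag:]
            r = np.corrcoef(xs, ys)[0, 1]
            if abs(r) > abs(best_r):
                best_lag, best_r = lag, r
        return best_lag, best_r

    # Synthetic example: y follows x with a delay of 2 steps plus noise.
    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    y = np.roll(x, 2) + 0.1 * rng.normal(size=200)
    lag, r = best_lag_correlation(x, y, max_lag=5)
    ```

    A rule such as "stream x influences stream y at lag 2" could then be serialized and handed to a forecasting or simulation tool, as the abstract suggests.
    
    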

    Content Replication in Web++

    No full text
    Web++ is a prototype system that supports user-transparent wide-area replication of resources in order to improve the response time and reliability of the HTTP service. Our architecture, based on smart clients that can be dynamically downloaded as mobile code into a user's application, presents a number of advantages. Clients keep track of the average HTTP latency that they experience from various servers and use that information to choose the replica of a resource that is expected to deliver the best response time for them. The clients also provide feedback on the observed request latencies to the servers, which helps the servers determine which resources should be replicated and what the best locations for the replicas would be. We describe in this paper a distributed, server-initiated approach for resource replication in which each server can decide autonomously whether to replicate resources and where the replicas should be allocated. In addition to the novel use of smart clients, our algorithm avoids keeping track of complex network topologies by using the concept of logical segments. We present experimental results showing that our algorithm for resource allocation scales well with respect to the number of servers and the number of replicated resources.
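    The smart-client side of the design can be sketched as a latency tracker that picks the replica with the lowest observed average latency. This is an illustrative assumption — the class and its EWMA estimator are hypothetical, not the Web++ implementation:

    ```python
    class SmartClient:
        """Choose the replica server with the lowest observed mean latency.

        Keeps an exponentially weighted moving average (EWMA) of request
        latency per server; unmeasured servers are tried first so every
        replica eventually gets a latency estimate.
        """

        def __init__(self, alpha=0.3):
            self.alpha = alpha        # weight of the newest observation
            self.latency = {}         # server -> EWMA latency in seconds

        def record(self, server, seconds):
            prev = self.latency.get(server)
            if prev is None:
                self.latency[server] = seconds
            else:
                self.latency[server] = self.alpha * seconds + (1 - self.alpha) * prev

        def choose(self, replicas):
            unknown = [s for s in replicas if s not in self.latency]
            if unknown:
                return unknown[0]
            return min(replicas, key=lambda s: self.latency[s])

    # Hypothetical server names for illustration only.
    client = SmartClient()
    client.record("us.example.org", 0.12)
    client.record("eu.example.org", 0.45)
    best = client.choose(["us.example.org", "eu.example.org"])
    ```

    The same recorded latencies are what the client would report back to servers as the feedback that drives server-initiated replica placement.
    
    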

    Detecting aggregate bursts from scaled bins within the context of privacy

    No full text
    Abstract — In this paper, we consider burst detection within the context of privacy. In our scenario, multiple parties want to detect a burst in aggregated time series data, but none of the parties wants to disclose its individual data. We introduce two data perturbation approaches that alter the local data so that raw time series values are not shared and bursts can still be identified using a Shewhart threshold. The first involves lossy data compression via windowing. Unfortunately, windowing alone does not guarantee enough privacy because the envelope of the time series can still be determined. Therefore, we introduce a second data perturbation approach that employs scaled binning. This method transmits a value for each data point based on the distance of the data point from a local mean of the time series. The strength of this approach is its increased privacy. We empirically demonstrate the burst detection results using both real and synthetic distributed data sets. When attempting to optimize both privacy guarantees and burst detection accuracy, we find that a combined approach using both windowing and scaled binning balances burst accuracy and privacy better than either approach individually.
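    The windowing step and the Shewhart threshold can be sketched as follows; the window width, burst shape, and three-sigma limit are illustrative assumptions, not the paper's tuned parameters:

    ```python
    import statistics

    def window_means(series, width):
        """Lossy compression: replace each non-overlapping window by its mean,
        so raw per-point values are never transmitted."""
        return [sum(series[i:i + width]) / len(series[i:i + width])
                for i in range(0, len(series), width)]

    def shewhart_bursts(series, k=3.0):
        """Return indices whose value exceeds the upper Shewhart control
        limit, mean + k * standard deviation."""
        mu = statistics.fmean(series)
        sigma = statistics.pstdev(series)
        return [i for i, v in enumerate(series) if v > mu + k * sigma]

    # Two parties share only windowed summaries; the burst is detected
    # on the aggregate. Party b has a burst in its final stretch.
    a = [10.0] * 44
    b = [11.0] * 40 + [60.0] * 4
    wa, wb = window_means(a, 4), window_means(b, 4)
    agg = [va + vb for va, vb in zip(wa, wb)]
    bursts = shewhart_bursts(agg, k=3.0)
    ```

    Scaled binning would further replace each windowed value with a bin index derived from its distance to a local mean, trading a little detection accuracy for the stronger privacy the abstract describes.
    
    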

    Lipid peroxidation and antioxidant system in the blood of patients with Hodgkin's disease

    No full text
    Objectives: The purpose of this study was to measure the extent of lipid peroxidation and the status of antioxidants in patients with Hodgkin's disease.

    Business Process Cockpit

    No full text
    Business Process Cockpit (BPC) is a tool that supports real-time monitoring, analysis, management, and optimization of business processes running on top of HP Process Manager, the Business Process Management System developed by Hewlett-Packard. The main goal of the Business Process Cockpit is to enable business users to perform business-level quality analysis, monitoring, and management of business processes. The BPC visualizes process execution data according to different focus points, which identify the process entities that are the focus of the analysis, and different perspectives, which define a way to look at the information. The BPC also allows users to define new concepts, such as “slow” and “fast” executions, and to use those concepts to categorize the viewed data, making it much easier for users to interpret.
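    A user-defined concept like “slow” versus “fast” executions amounts to a labeling rule applied to execution data. A minimal sketch, assuming a simple duration threshold (the threshold, field names, and process IDs are hypothetical, not BPC's actual definition language):

    ```python
    def categorize(duration_s, slow_threshold_s=3600):
        """Label one process execution with a user-defined duration concept."""
        return "slow" if duration_s > slow_threshold_s else "fast"

    # Hypothetical execution log: process instance id -> duration in seconds.
    executions = {"ord-1": 120, "ord-2": 7200, "ord-3": 3599}
    labels = {pid: categorize(d) for pid, d in executions.items()}
    ```

    Once each execution carries a label, the viewer can group or filter by that concept instead of by raw durations, which is what makes the categorized view easier to interpret.
    
    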