11 research outputs found

    Adaptive Resource Management of a Virtual Call Center Using a Peer-to-Peer Approach

    Abstract: As the number and diversity of end-user environments increase, services should be able to adapt dynamically to the resources available in a given environment. In this paper, we present the concepts of migratory services and peer-to-peer connections as the means of facilitating adaptive service and resource management in distributed and heterogeneous environments. Our approach has been realized using object-oriented principles in the Adaptive Communicating Applications Platform (ACAP). The architectural design and implementation of a real-life high-level service, the Virtual Call Center (VCC), are used to illustrate adaptive service and resource management issues and to discuss our approach in ACAP in detail.
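
    ACAP's actual interfaces are not reproduced in this listing; as a rough sketch of the migratory-service idea under stated assumptions, the Python below serializes a service's state and resumes it on the least-loaded peer. All names here (Peer, MigratoryService, migrate, free_cpu) are illustrative, not part of ACAP.

```python
import pickle

class Peer:
    """A hypothetical execution environment with an advertised resource level."""
    def __init__(self, name, free_cpu):
        self.name = name
        self.free_cpu = free_cpu  # fraction of CPU available, 0.0-1.0

class MigratoryService:
    """Sketch of a service that can move its state to a better-resourced peer."""
    def __init__(self, state):
        self.state = state

    def snapshot(self):
        # Serialize service state so it can travel over a peer-to-peer link.
        return pickle.dumps(self.state)

    @classmethod
    def resume(cls, blob):
        # Reconstruct the service on the destination peer.
        return cls(pickle.loads(blob))

def migrate(service, peers, min_cpu=0.5):
    """Move the service to the least-loaded peer that meets the threshold."""
    candidates = [p for p in peers if p.free_cpu >= min_cpu]
    if not candidates:
        return None  # stay put: no peer can host the service
    target = max(candidates, key=lambda p: p.free_cpu)
    blob = service.snapshot()  # in a real system this crosses the P2P connection
    return target, MigratoryService.resume(blob)

# Example: a call-center session migrating between two environments.
peers = [Peer("desktop", 0.2), Peer("server", 0.9)]
svc = MigratoryService({"session": 42, "queue": ["caller-1"]})
result = migrate(svc, peers)
print(result[0].name if result else "no migration")  # -> server
```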

    On The Accuracy and Completeness of The Record Matching Process

    Abstract. Record matching, or linking, is one of the phases of the data quality improvement process, in which records from different sources are cleansed and integrated into a centralized data store to be used for various purposes. Both earlier and recent studies in data quality and record linkage focus on various statistical models that make strong assumptions about the probabilities of attribute errors. In this study, we evaluate different models for record linkage that are built from data alone. We use a program that generates data with known error distributions, and we train classification models that we use to estimate the accuracy and the completeness of the record linking process. The results indicate that automated learning techniques are adequate for this process and that both their accuracy and their completeness are comparable to those of other, mostly manual, processes.
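
    The paper's data generator and classifiers are not reproduced here; the sketch below illustrates the general approach under stated assumptions: generate record pairs with a known typo rate, train a classifier on a single similarity feature, and report precision as link accuracy and recall as link completeness. The error model, feature, and classifier choice are all stand-ins.

```python
import random
from difflib import SequenceMatcher
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, recall_score

random.seed(0)

def corrupt(name, p=0.3):
    """Inject a transposition typo with known probability p (a stand-in error model)."""
    if random.random() < p and len(name) > 1:
        i = random.randrange(len(name) - 1)
        return name[:i] + name[i + 1] + name[i] + name[i + 2:]
    return name

names = ["alice smith", "bob jones", "carol diaz", "dave kim", "erin wu"]
pairs, labels = [], []
for _ in range(500):
    a = random.choice(names)
    if random.random() < 0.5:                 # true match: same entity, maybe corrupted
        pairs.append((a, corrupt(a))); labels.append(1)
    else:                                     # non-match: two different entities
        b = random.choice([n for n in names if n != a])
        pairs.append((a, corrupt(b))); labels.append(0)

# One string-similarity feature per candidate pair.
X = [[SequenceMatcher(None, a, b).ratio()] for a, b in pairs]

clf = DecisionTreeClassifier(max_depth=3).fit(X[:400], labels[:400])
pred = clf.predict(X[400:])

# Precision ~ accuracy of the declared links; recall ~ completeness of the linkage.
print("accuracy (precision):", precision_score(labels[400:], pred))
print("completeness (recall):", recall_score(labels[400:], pred))
```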

    Cooperative Caching in Append-only Databases with Hot Spots

    We measure the performance of several cooperative caching policies for a database with hot spots. The workload consists of queries and append-only update transactions, and is modeled after a financial database of (historical) stock trading information. We show that cooperative caching is effective for this application. We show that selecting the correct set of peer servers when servicing a cache miss is crucial to achieving high performance, and we demonstrate a greedy algorithm that performs close to optimal for this workload. We also evaluate several cache replacement policies and show that a 2nd-chance algorithm performs best. In a 2nd-chance algorithm, replaced pages are transferred to a peer server rather than being discarded. When a page is selected for replacement a 2nd time, the page is discarded. Our results can be applied in the design of "proxy" servers for databases or web servers, where a layer of proxy servers is used to scale system performance.
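
    A minimal sketch of the 2nd-chance replacement policy as the abstract describes it: a page's first eviction forwards it to a peer server, and its second eviction discards it. The class and field names are illustrative; real servers would track far more state.

```python
from collections import OrderedDict

class CoopCache:
    """Sketch of 2nd-chance replacement: evicted pages get one stay at a peer."""
    def __init__(self, capacity, peer=None):
        self.capacity = capacity
        self.pages = OrderedDict()   # page id -> times the page has been evicted
        self.peer = peer             # a neighboring CoopCache, if any

    def insert(self, page, evictions=0):
        if page in self.pages:
            self.pages.move_to_end(page)  # refresh recency on a hit
            return
        if len(self.pages) >= self.capacity:
            victim, count = self.pages.popitem(last=False)  # evict LRU page
            if count == 0 and self.peer is not None:
                self.peer.insert(victim, evictions=1)  # 1st eviction: hand to a peer
            # 2nd eviction (count == 1): the page is simply discarded
        self.pages[page] = evictions

# Two peer servers; pages pushed out of `a` get a second chance in `b`.
b = CoopCache(capacity=2)
a = CoopCache(capacity=2, peer=b)
for p in ["p1", "p2", "p3", "p4"]:
    a.insert(p)
print(sorted(a.pages), sorted(b.pages))  # a holds p3,p4; b holds p1,p2
```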

    Cooperative Caching for Financial Databases with Hot Spots

    We measure the performance of several cooperative caching policies for a database with hot spots. The workload consists of queries and append-only update transactions, and is modeled after a financial database of (historical) stock trading information. We show that cooperative caching is effective for this application. We show that selecting the correct set of peer servers when servicing a cache miss is crucial to achieving high performance, and we demonstrate a greedy algorithm that performs close to optimal for this workload. We also evaluate several cache replacement policies and show that a 2nd-chance algorithm performs best. In a 2nd-chance algorithm, replaced pages are transferred to a peer server rather than being discarded. When a page is selected for replacement a 2nd time, the page is discarded. Our results can be applied in the design of "proxy" servers for databases or web servers, where a layer of proxy servers is used to scale system performance.
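
    The abstract does not spell out the greedy peer-selection algorithm; one plausible reading, sketched below, ranks peers by estimated hit probability per unit network cost and probes them in that order on a miss. All statistics, names, and costs here are assumptions for illustration.

```python
def greedy_peer_order(peers, page):
    """Rank peers by estimated benefit for this page, best first.

    `peers` maps a peer name to an estimated per-page hit probability and a
    fixed network cost; both are illustrative stand-ins for whatever
    statistics a real server would maintain.
    """
    return sorted(
        peers,
        key=lambda name: peers[name]["hit_prob"].get(page, 0.0) / peers[name]["cost"],
        reverse=True,
    )

def service_miss(peers, page, disk_cost=10.0):
    """Probe peers greedily; fall back to disk if no peer holds the page."""
    for name in greedy_peer_order(peers, page):
        if page in peers[name]["cached"]:
            return name, peers[name]["cost"]
    return "disk", disk_cost

peers = {
    "s1": {"cached": {"p7"}, "hit_prob": {"p7": 0.9}, "cost": 1.0},
    "s2": {"cached": {"p3"}, "hit_prob": {"p7": 0.2, "p3": 0.8}, "cost": 1.0},
}
print(service_miss(peers, "p7"))  # -> ('s1', 1.0)
print(service_miss(peers, "p9"))  # -> ('disk', 10.0)
```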

    Demonstration of Telcordia's Database Reconciliation and Data Quality Analysis Tool (VLDB 2000 Demonstration Session Proposal)

    This demonstration illustrates how a comprehensive database reconciliation tool can provide the ability to characterize data-quality and data-reconciliation issues in complex real-world applications. Telcordia's data reconciliation and data quality analysis tool includes rapid generation of appropriate pre-processing and matching rules applied to a training set created from samples of the data. Once tuned, the appropriate rules can be applied efficiently to the complete data sets. The tool uses a modular Java-based architecture that allows for customized matching functions and iterative runs that build upon previously learned information. It has been applied to several real database reconciliation problems. Telcordia has been able to provide significant insights to clients who recognize that they have data reconciliation problems but cannot determine root causes effectively using currently available off-the-shelf tools. A description of the analysis of a duplicate-record problem in a set of taxpayer databases is included in this report to illustrate the effective use of the tool.
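
    The tool's Java interfaces are not shown in this listing; as a language-neutral sketch of the tune-on-a-sample, apply-to-the-full-data workflow, the Python below fits a similarity threshold on a labeled training sample and then applies the tuned matching rule elsewhere. The function names, the pre-processing rule, and the threshold sweep are all illustrative assumptions.

```python
from difflib import SequenceMatcher

def preprocess(record):
    """Example pre-processing rule: normalize case and whitespace."""
    return " ".join(record.lower().split())

def make_matcher(threshold):
    """Build a matching rule with a tunable similarity threshold."""
    def match(a, b):
        return SequenceMatcher(None, preprocess(a), preprocess(b)).ratio() >= threshold
    return match

def tune(sample_pairs, labels):
    """Pick the threshold that classifies a labeled training sample best."""
    best = max(
        (t / 10 for t in range(1, 10)),
        key=lambda t: sum(
            make_matcher(t)(a, b) == y for (a, b), y in zip(sample_pairs, labels)
        ),
    )
    return make_matcher(best)

sample = [("J. Q. Taxpayer", "j q taxpayer"), ("Acme Corp", "Zenith LLC")]
matcher = tune(sample, [True, False])      # tune rules on the training sample...
print(matcher("ACME  corp", "acme corp"))  # ...then apply to the full data -> True
```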

    Telcordia's database reconciliation and data quality analysis tool

    This demonstration illustrates how a comprehensive database reconciliation tool can provide the ability to characterize data-quality and data-reconciliation issues in complex real-world applications. Telcordia's data reconciliation and data quality analysis tool includes rapid generation of appropriate pre-processing and matching rules applied to a training set created from samples of the data. Once tuned, the appropriate rules can be applied efficiently to the complete data sets. The tool uses a modular JavaBeans-based architecture that allows for customized matching functions and iterative runs that build upon previously learned information. Telcordia has been able to provide significant insights to clients who recognize that they have data reconciliation problems but cannot determine root causes effectively when using currently available off-the-shelf tools. A description of the analysis of a duplicate-record problem in a set of taxpayer databases is included in this report to illustrate the effective use of the tool.
