6 research outputs found

    Analytical response time estimation in parallel relational database systems

    Techniques for performance estimation in parallel database systems are well established for parameters such as throughput, bottlenecks and resource utilisation. Response time, however, is far harder to predict and has attracted research for a number of years. Simulation is one option for predicting response time, but it is a costly process. Analytical modelling is less expensive, but it requires approximations and assumptions about the queueing networks that arise in real parallel database machines; these assumptions are often questionable, and few papers on analytical approaches are backed by validation against real machines. This paper describes a new analytical approach to response time estimation based on a detailed study of different approaches and assumptions. The approach has been validated against two commercial parallel DBMSs running on actual parallel machines and is shown to produce acceptable accuracy.
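The abstract does not give the paper's actual model, but the flavour of analytical response-time estimation via queueing theory can be sketched with the classic M/M/1 single-server formula R = 1/(mu - lambda); the function name and rates below are illustrative assumptions, not taken from the paper.

```python
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time of an M/M/1 queue: R = 1 / (mu - lambda).

    Hypothetical illustration only; valid while the server is not
    saturated (arrival_rate < service_rate).
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Example: 80 queries/s arriving at a node that can serve 100 queries/s.
print(mm1_response_time(80.0, 100.0))  # 0.05 s mean response time
```

Real parallel database machines form networks of such queues (CPUs, disks, interconnect), which is where the approximations the abstract mentions come in.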

    The advantages and cost effectiveness of database improvement methods

    Relational databases have proved inadequate for supporting new classes of applications, and as a consequence a number of new approaches have been taken (Blaha 1998), (Harrington 2000). The most salient alternatives are denormalisation and conversion to an object-oriented database (Douglas 1997). Denormalisation can provide better performance but has deficiencies with respect to data modelling. Object-oriented databases can provide increased performance without those data-modelling deficiencies (Blaha 2000). Although various benchmark tests have been reported, none has compared normalised, object-oriented and denormalised databases. This research shows that a non-normalised database containing type-code complexity would be normalised in the process of conversion to an object-oriented database. This helps to correct badly organised data and so gives the performance benefits of denormalisation while improving data modelling. The costs of conversion from relational databases to object-oriented databases were also examined. Costs were based on published benchmark tests, a benchmark carried out during this study, and case studies. The benchmark tests were based on an engineering database benchmark; engineering problems such as computer-aided design and manufacturing have much to gain from conversion to object-oriented databases. Costs were calculated for coding and development, and also for operation. It was found that conversion to an object-oriented database was not usually cost effective, as many of the performance benefits could be achieved by the far cheaper process of denormalisation, by using the performance-improving facilities provided by many relational database systems (such as indexing or partitioning), or simply by upgrading the system hardware.
It is concluded, therefore, that while object-oriented databases are a better alternative for databases built from scratch, the conversion of a legacy relational database to an object-oriented database is not necessarily cost effective.
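The denormalisation trade-off described above can be sketched with a toy example: duplicating a customer name into each order row lets reads avoid a join, at the cost of redundancy and update anomalies. The relation names and data here are hypothetical, not drawn from the study's benchmark.

```python
# Normalised form: two relations, joined at query time.
customers = {1: {"name": "Acme"}}
orders_norm = [{"order_id": 10, "customer_id": 1, "total": 250.0}]

def order_report_norm():
    """Report requiring a join between orders and customers."""
    return [(o["order_id"], customers[o["customer_id"]]["name"], o["total"])
            for o in orders_norm]

# Denormalised form: one relation with the customer name pre-joined,
# so the same report needs no lookup into a second relation.
orders_denorm = [{"order_id": 10, "customer_name": "Acme", "total": 250.0}]

def order_report_denorm():
    """Same report, read directly from the denormalised relation."""
    return [(o["order_id"], o["customer_name"], o["total"]) for o in orders_denorm]

# Both forms yield the same answer; they differ in read cost vs. redundancy.
assert order_report_norm() == order_report_denorm()
```

The same read-time saving is what indexing, partitioning, or a hardware upgrade can often deliver more cheaply than a full conversion, which is the study's central cost argument.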

    Modelling parallel database management systems for performance prediction

    Abstract unavailable; please refer to PDF.

    A benchmark for performance analysis of geographical information systems

    Advisor: Geovane Cayres Magalhães. Master's dissertation (MSc in Computer Science), Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Ciência da Computação. Abstract: Geographical Information Systems (GIS) deal with data that are special in nature and size. The technologies developed for conventional database management systems, such as access methods, query optimisers and query languages, therefore have to be modified or extended to satisfy the needs of a GIS. These modifications, embedded in several GIS or proposed by research projects, need to be evaluated, which makes some mechanism for measuring the efficiency of these solutions indispensable for guiding future research. To that end, this dissertation uses the experimental technique of benchmarking: it proposes the workload and characterises the data of a benchmark aimed at GIS performance analysis. The workload is composed of a set of high-level primitive transactions that can be combined to derive transactions of any degree of complexity. These primitive transactions are predominantly oriented to spatial data and are, a priori, independent of the data format used (raster or vector). The benchmark data were characterised in terms of the data types required to represent georeferenced applications, together with procedures for generating complex and controlled synthetic data. Finally, a target application using synthetic data was defined in order to validate the proposed benchmark.
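The idea of composing high-level primitive transactions into more complex ones can be sketched as follows; the primitives chosen (a spatial window selection and an aggregate count) and all names are illustrative assumptions, not the dissertation's actual transaction set.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def select_in_window(points: List[Point], xmin: float, ymin: float,
                     xmax: float, ymax: float) -> List[Point]:
    """Primitive transaction: select points inside a rectangular window."""
    return [(x, y) for (x, y) in points
            if xmin <= x <= xmax and ymin <= y <= ymax]

def count_points(points: List[Point]) -> int:
    """Primitive transaction: aggregate count."""
    return len(points)

def window_count(points: List[Point]) -> int:
    """A more complex transaction built by composing the two primitives."""
    return count_points(select_in_window(points, 0.0, 0.0, 10.0, 10.0))

pts = [(1.0, 2.0), (5.0, 5.0), (20.0, 1.0)]
print(window_count(pts))  # 2
```

A benchmark workload then consists of timing such composed transactions against controlled synthetic data, independently of whether the underlying representation is vector or raster.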