
    Dynamic Scale-out Mechanisms for Partitioned Shared-Nothing Databases

    For a database system used in pay-per-use cloud environments, elastic scaling is an essential feature, allowing costs to be minimized while accommodating load fluctuations. One approach to scalability involves horizontal database partitioning and dynamic migration of partitions between servers. We define a scale-out operation as the combination of provisioning a new server followed by the migration of one or more partitions to the newly allocated server. In this thesis we study the efficiency of different implementations of the scale-out operation in the context of online transaction processing (OLTP) workloads. We design and implement three migration mechanisms featuring different data-transfer strategies. The first is based on SnowFlock, a modified version of the Xen hypervisor, and uses on-demand block transfers for both server provisioning and partition migration. The second is implemented inside a database management system (DBMS) and uses bulk transfers for partition migration, optimized for higher bandwidth utilization. The third is a conventional application that uses SQL commands to copy partitions between servers. We experimentally compare these scale-out mechanisms for disk-bound and CPU-bound configurations, analyzing their impact both on whole-system performance and on the experience of individual clients.
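    The scale-out operation defined above composes two steps: provision, then migrate. A minimal orchestration sketch of that composition (the helper functions are hypothetical placeholders, not any of the three mechanisms implemented in the thesis):

```python
# Hypothetical sketch of a scale-out operation: provision a new server,
# then migrate selected partitions to it. The helpers are assumed
# placeholders, not the thesis's actual transfer mechanisms.
from dataclasses import dataclass

@dataclass
class Server:
    host: str

def provision_server() -> Server:
    """Allocate and boot a new database server (placeholder)."""
    return Server(host="new-node.example.internal")

def migrate_partition(partition_id: int, target: Server) -> None:
    """Transfer one partition to the target server (placeholder)."""
    print(f"migrating partition {partition_id} -> {target.host}")

def scale_out(partitions_to_move: list[int]) -> Server:
    # A scale-out = provisioning followed by one or more migrations.
    target = provision_server()
    for pid in partitions_to_move:
        migrate_partition(pid, target)
    return target

scale_out([3, 7])
```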

    Bayesian Inference of Arrival Rate and Substitution Behavior from Sales Transaction Data with Stockouts

    When an item goes out of stock, sales transaction data no longer reflect the original customer demand, since some customers leave with no purchase while others substitute alternative products for the one that was out of stock. Here we develop a Bayesian hierarchical model for inferring the underlying customer arrival rate and choice model from sales transaction data and the corresponding stock levels. The model uses a nonhomogeneous Poisson process to allow the arrival rate to vary throughout the day, and accommodates a variety of choice models. Model parameters are inferred using a stochastic gradient MCMC algorithm that scales to large transaction databases. We fit the model to data from a local bakery and show that it makes accurate out-of-sample predictions and provides actionable insight into lost cookie sales.
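    The model couples a time-varying arrival process with a choice model over the items currently in stock. One way to write that structure down (the multinomial logit form is only one of the choice models the framework allows, and the notation is an assumption, not the paper's):

```latex
% Sketch of the model structure (assumed notation, not the paper's).
% Customers arrive by a nonhomogeneous Poisson process with rate \lambda(t):
\[
N(t_1, t_2) \sim \mathrm{Poisson}\!\left( \int_{t_1}^{t_2} \lambda(t)\, dt \right)
\]
% Each arrival chooses item j from the in-stock set S_t, or leaves with
% no purchase (j = 0); e.g., under a multinomial logit choice model:
\[
P(j \mid S_t) = \frac{e^{u_j}}{1 + \sum_{k \in S_t} e^{u_k}},
\qquad
P(0 \mid S_t) = \frac{1}{1 + \sum_{k \in S_t} e^{u_k}}
\]
% Observed sales are demand thinned by stockouts: when an item leaves
% S_t, its would-be buyers either substitute within S_t or are lost.
```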

    Project management systems in agriculture in the Northern Great Plain region of Hungary

    The modern production, processing, and trading of agricultural products require the adoption of newer technologies. Not only large enterprises but also members of the SME sector are increasingly looking for employees with knowledge of and practical experience in project management, owing to the growing number of projects. In our research we sought a comprehensive view of project management in agriculture. Our goal was to determine, through primary and secondary analysis of survey databases, which skills and abilities an agricultural project manager needs and what range of methods he or she typically uses.

    A Detail Based Method for Linear Full Reference Image Quality Prediction

    In this paper, a novel full-reference method is proposed for image quality assessment, using the combination of two separate metrics to measure the perceptually distinct impact of detail losses and of spurious details. To this purpose, the gradient of the impaired image is locally decomposed into a predicted version of the original gradient plus a gradient residual. It is assumed that the detail attenuation identifies the detail loss, whereas the gradient residual describes the spurious details. It turns out that the perceptual impact of detail losses is roughly linear in the loss of positional Fisher information, while the perceptual impact of spurious details is roughly proportional to a logarithmic measure of the signal-to-residual ratio. The affine combination of these two metrics forms a new index strongly correlated with the empirical Differential Mean Opinion Score (DMOS) for a significant class of image impairments, as verified on three independent popular databases. The method also allows DMOS data from these different databases to be aligned and merged onto a common DMOS scale by affine transformations. Unexpectedly, the DMOS scale can be set by analyzing a single image affected by additive noise.
    Comment: 15 pages, 9 figures. Copyright notice: the paper was accepted for publication in the IEEE Transactions on Image Processing on 19/09/2017 and the copyright has been transferred to the IEEE.
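    The index has the affine two-term structure described above. A schematic rendering (the symbols and weights are illustrative assumptions, not the paper's exact definitions):

```latex
% Schematic form of the index (assumed symbols): an affine combination
% of a detail-loss term and a spurious-detail term.
\[
Q = \alpha\, \Delta F + \beta \log \frac{E_{\mathrm{signal}}}{E_{\mathrm{residual}}} + \gamma
\]
% \Delta F: loss of positional Fisher information (detail losses),
%   whose perceptual impact is roughly linear;
% the log term: signal-to-residual ratio of the gradient decomposition
%   (spurious details); \alpha, \beta, \gamma are fit to empirical DMOS.
```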

    The End of a Myth: Distributed Transactions Can Scale

    The common wisdom is that distributed transactions do not scale. But what if distributed transactions could be made scalable using the next generation of networks and a redesign of distributed databases? Developers would no longer need to worry about co-partitioning schemes to achieve decent performance. Application development would become easier, as data placement would no longer determine how scalable an application is. Hardware provisioning would be simplified, as the system administrator could expect a linear scale-out when adding more machines rather than some complex, highly application-specific sub-linear function. In this paper, we present the design of our novel scalable database system NAM-DB and show that distributed transactions with the very common Snapshot Isolation guarantee can indeed scale using the next generation of RDMA-enabled network technology, without any inherent bottlenecks. Our experiments with the TPC-C benchmark show that our system scales linearly to over 6.5 million new-order (14.5 million total) distributed transactions per second on 56 machines.
    Comment: 12 pages.
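    Snapshot Isolation, the guarantee NAM-DB targets, can be illustrated with versioned reads against a start timestamp and a first-committer-wins check at commit. A minimal single-process sketch (illustrative only; NAM-DB realizes this over RDMA with a timestamp oracle and a distributed versioned store, not in-process structures):

```python
# Minimal single-process sketch of Snapshot Isolation (illustrative
# only; not NAM-DB's RDMA-based implementation).
import itertools

clock = itertools.count(1)   # stand-in for a global timestamp oracle
store = {}                   # key -> list of (commit_ts, value), ts-ordered

class Txn:
    def __init__(self):
        self.start_ts = next(clock)   # snapshot timestamp
        self.writes = {}

    def read(self, key):
        if key in self.writes:
            return self.writes[key]
        # Latest version committed at or before our snapshot.
        versions = [v for ts, v in store.get(key, []) if ts <= self.start_ts]
        return versions[-1] if versions else None

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        # First-committer-wins: abort on a write-write conflict with
        # any version committed after our snapshot was taken.
        for key in self.writes:
            if any(ts > self.start_ts for ts, _ in store.get(key, [])):
                raise RuntimeError("abort: write-write conflict")
        commit_ts = next(clock)
        for key, value in self.writes.items():
            store.setdefault(key, []).append((commit_ts, value))

t = Txn()
t.write("x", 42)
t.commit()
```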

    Older care-home residents as collaborators or advisors in research: a systematic review

    Background: Patient and public involvement (PPI) in research can enhance its relevance. Older care-home residents are often not involved in research processes, even when studies are care-home focused. Objective: To conduct a systematic review of the extent to which, and how, older care-home residents have been involved in research as collaborators or advisors. Methods: A systematic literature search of 12 databases, covering the period from 1990 to September 2014, was conducted, together with a lateral search. Standardised inclusion criteria were used and checked independently by two researchers. Results: 19 reports and papers were identified, relating to 11 different studies. Care-home residents had been involved in the research process in multiple ways. Two key themes were identified: (i) the differences in residents' involvement in small-scale and large-scale studies, and (ii) the barriers to and facilitators of involvement. Conclusions: Small-scale studies involved residents as collaborators in participatory action research, whereas larger studies involved residents as consultants in advisory roles. There are multiple facilitators of and barriers to involving residents as PPI members, and the reporting of PPI varies. While it is difficult to evaluate the impact of involving care-home residents on research outcomes, impact has been demonstrated from more inclusive research processes with care-home residents. The review shows that older care-home residents can be successfully involved in the research process.

    Working with Documents in Databases

    The increasingly widespread use of electronic documents within organizations and public institutions requires their storage and unified management by means of databases. The purpose of this article is to present how documents are loaded, managed, and visualized in a database, taking Microsoft SQL Server as the example DBMS. The modules for loading documents into the database and for visualizing them are presented through code sequences written in C#. Interoperability between environments is achieved by means of the ADO.NET database-access technology.
    Keywords: interoperability, documents, database, full text search.
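    The loading and visualization modules described above reduce to writing document bytes into a table column and reading them back. A minimal sketch of the same idea, here in Python with pyodbc rather than the article's C#/ADO.NET; the table name, column names, and connection string are assumptions:

```python
# Minimal sketch of storing and retrieving a document in SQL Server.
# Table/column names and the connection string are assumptions; the
# article's own loading/visualization modules use C# and ADO.NET.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=DocStore;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Load a document into a VARBINARY(MAX) column.
with open("report.pdf", "rb") as f:
    cur.execute(
        "INSERT INTO Documents (Name, Content) VALUES (?, ?)",
        "report.pdf", pyodbc.Binary(f.read()),
    )
conn.commit()

# Read it back for visualization.
row = cur.execute(
    "SELECT Content FROM Documents WHERE Name = ?", "report.pdf"
).fetchone()
with open("copy_of_report.pdf", "wb") as f:
    f.write(row.Content)
```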