30 research outputs found

    Neonatal sepsis and resistance pattern of isolates in Tertiary level neonatal unit: Time to evaluate the empirical antibiotics selection

    Objective: To identify the most common organisms causing neonatal sepsis and to observe the antibiotic sensitivity and resistance patterns of the bacterial isolates. Materials and Methods: This descriptive cross-sectional study was conducted at the Department of Paediatrics, Izzat Ali Shah Hospital, Wah Cantt. Of 420 patients admitted to the NICU with sepsis, the 19.5% with positive blood cultures were included in the study. A consecutive, non-probability sampling technique was used. Results: Of the 82 positive blood cultures, gram-positive bacteria were observed in 19 patients (23.2%) and gram-negative bacteria in 63 patients (76.8%). The most common gram-negative pathogens isolated were Acinetobacter (29.3%) and Klebsiella (24.4%); Staphylococcus aureus (12.2%) was the most common gram-positive organism. Gram-negative organisms showed maximum sensitivity to tigecycline and colistin and were resistant to cefixime, aztreonam, amoxicillin, and ceftriaxone. Gram-positive bacteria were sensitive to teicoplanin, linezolid, and vancomycin, while resistance was shown to penicillin and amoxicillin. Conclusion: The current study showed that gram-negative bacteria were the major contributors to sepsis in this setting and were resistant to first-line antibiotics such as penicillins and cephalosporins. Strict infection control measures need to be implemented to avoid the emergence of resistant strains of pathogens in NICUs and thereby reduce the incidence of neonatal sepsis leading to mortality.

    K8-Scalar: a workbench to compare autoscalers for container-orchestrated services (Artifact)

    This artifact is an easy-to-use and extensible workbench exemplar, named K8-Scalar, which allows researchers to implement and evaluate different self-adaptive approaches to autoscaling container-orchestrated services. The workbench is based on Docker, a popular technology for easing the deployment of containerized software, which has also been positioned as an enabler for reproducible research. The workbench also relies on a container orchestration framework: Kubernetes (K8s), the de facto industry standard for orchestrating and monitoring elastically scalable container-based services. Finally, it integrates and extends Scalar, a generic testbed for evaluating the scalability of large-scale systems, with support for evaluating the performance of autoscalers for database clusters. The associated scholarly paper (i) presents the architecture and implementation of K8-Scalar and shows how a particular autoscaler can be plugged in, (ii) sketches the design of a Riemann-based autoscaler for database clusters, (iii) illustrates how to design, set up, and analyze a series of experiments to configure and evaluate the performance of this autoscaler for a particular database (i.e., Cassandra) and a particular workload type, and (iv) validates the effectiveness of K8-Scalar as a workbench for accurately comparing the performance of different autoscaling strategies. Future work includes extending K8-Scalar with an improved research data management repository.
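
    To illustrate the kind of self-adaptive component such a workbench is meant to evaluate, the sketch below shows a generic threshold-based autoscaling control loop in Python. It is not the Riemann-based autoscaler from the paper: the metric source, actuator, service name, and thresholds are simulated placeholders chosen purely for illustration.

```python
import random
import time

# Hypothetical threshold-based autoscaler control loop. In a real setup,
# read_cpu_utilization() would query a monitoring backend and
# set_replica_count() would scale the Kubernetes deployment; here both are
# simulated so the sketch runs standalone.

SCALE_UP_THRESHOLD = 0.80    # scale out above 80% average CPU utilization
SCALE_DOWN_THRESHOLD = 0.30  # scale in below 30% average CPU utilization
MIN_REPLICAS, MAX_REPLICAS = 1, 10


def read_cpu_utilization(service: str) -> float:
    """Simulated metric source (a real autoscaler would query a monitoring system)."""
    return random.uniform(0.0, 1.0)


def set_replica_count(service: str, replicas: int) -> None:
    """Simulated actuator (a real autoscaler would patch the deployment's scale)."""
    print(f"Scaling {service} to {replicas} replicas")


def autoscale_once(service: str, current_replicas: int) -> int:
    """One control-loop iteration: observe the metric, decide, and (maybe) act."""
    utilization = read_cpu_utilization(service)
    desired = current_replicas
    if utilization > SCALE_UP_THRESHOLD:
        desired = min(current_replicas + 1, MAX_REPLICAS)
    elif utilization < SCALE_DOWN_THRESHOLD:
        desired = max(current_replicas - 1, MIN_REPLICAS)
    if desired != current_replicas:
        set_replica_count(service, desired)
    return desired


if __name__ == "__main__":
    replicas = MIN_REPLICAS
    for _ in range(5):  # a few iterations instead of an infinite loop
        replicas = autoscale_once("cassandra", replicas)
        time.sleep(1)  # shortened control interval for the demo
```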

    Evaluating NOSQL Technologies for Historical Financial Data

    Today, businesses and organizations are generating huge volumes of data; applications such as Web 2.0 and social networking require the processing of petabytes of data. Stock exchange systems are among those that process large amounts of quotes and trades on a daily basis. Limited database storage capability is a major bottleneck in meeting the challenge of providing efficient access to information. Furthermore, rapidly varying data are a major source of information for the financial industry. These data need to be read and written efficiently in the database, which is quite costly with a traditional Relational Database Management System (RDBMS). An RDBMS is good for many scenarios and can handle certain types of data very well, but it is not always the right choice. Innovative architectures now allow large volumes of data to be stored efficiently. “Not only SQL” (NoSQL) brings an effective solution through the provision of efficient information storage capabilities. NoSQL is an umbrella term for a variety of new data stores, and these databases are rapidly gaining popularity due to several factors, including their open-source nature, non-relational data stores, high performance, fault tolerance, and scalability. The main aim of this research is to find an efficient solution for storing and processing huge volumes of data for certain variants. The study concerns choosing a reliable, distributed, and efficient NoSQL database at Cinnober Financial Technology AB. The research explores NoSQL databases, discusses issues with RDBMSs, and eventually selects the database best suited for financial data management. It contributes to current research on NoSQL databases by comparing one such NoSQL database, Apache Cassandra, with Apache Lucene and the traditional relational database MySQL for financial data management. The main focus is to find out which database is the preferred choice for the different variants. In this regard, a performance test framework for the selected set of candidates has also been taken into consideration.
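
    As a minimal sketch of how historical market data might be modeled in one of the evaluated candidates, the example below creates a simple time-series table in Cassandra with the Python cassandra-driver. The keyspace, table, and column names, and the partitioning by instrument and trading day, are illustrative assumptions and are not taken from the thesis.

```python
import datetime

from cassandra.cluster import Cluster  # pip install cassandra-driver

# Illustrative time-series schema for historical quotes/trades: partition by
# (instrument, trade_date) so one day's ticks for an instrument are co-located,
# and cluster by timestamp for efficient range scans. Assumes a local node.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS marketdata
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.set_keyspace("marketdata")

session.execute("""
    CREATE TABLE IF NOT EXISTS trades (
        instrument text,
        trade_date date,
        ts timestamp,
        price double,
        volume bigint,
        PRIMARY KEY ((instrument, trade_date), ts)
    ) WITH CLUSTERING ORDER BY (ts DESC)
""")

# Writes and reads are then cheap single-partition operations.
session.execute(
    "INSERT INTO trades (instrument, trade_date, ts, price, volume) "
    "VALUES (%s, %s, %s, %s, %s)",
    ("ABC", datetime.date(2015, 6, 1), datetime.datetime(2015, 6, 1, 9, 30), 101.25, 500),
)
rows = session.execute(
    "SELECT ts, price, volume FROM trades WHERE instrument = %s AND trade_date = %s",
    ("ABC", datetime.date(2015, 6, 1)),
)
for row in rows:
    print(row.ts, row.price, row.volume)
```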

    Expressive data storage policies for multi-cloud storage configurations

    Software-as-a-Service (SaaS) providers increasingly rely on multi-cloud setups to leverage the combined benefits of different enabling technologies and third-party providers. Especially in the context of NoSQL storage systems, which are characterized by heterogeneity and quick technological evolution, adopting the multi-cloud paradigm is a promising way to deal with different data storage requirements. Existing data access middleware platforms that support this type of setup (polyglot persistence) commonly rely on (i) configuration models that describe the multi-cloud setup, and (ii) hard-coded logic in the application source code or data storage policies that define how the middleware platform should store data across the different storage systems. In practice, however, both models are tightly coupled: the hard-coded logic in the application source code and the data storage policies refer to specific configuration model elements, which leads to fragility issues (ripple effects) and hinders reusability. Especially in multi-cloud configurations that change often (e.g., dynamic cloud federations), this is a key problem. In this paper, we present a more expressive way to specify storage policies that involves (i) enriching the configuration models with metadata about the technical capabilities of the storage systems, (ii) referring to the desired capabilities of the storage system in the storage policies, and (iii) leaving the actual resolution to the policy engine. Our validation in the context of a realistic SaaS application shows how the policies accommodate a number of realistic policy change scenarios. We also evaluate the performance overhead of our approach. The results demonstrate that an application can benefit from expressive data storage policies while incurring a minimal performance overhead of less than 2%.
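
    To make the capability-based idea concrete, the sketch below shows a hypothetical resolution step: storage systems carry capability metadata in the configuration model, the policy requests capabilities instead of naming a system, and a resolver picks any match. The system names and capability labels are invented for illustration and do not reflect the actual middleware or policy language of the paper.

```python
# Hypothetical capability-based resolution of a data storage policy.
# The configuration model lists storage systems with capability metadata;
# the policy only states the capabilities it requires.

CONFIGURATION_MODEL = [
    {"name": "cassandra-eu", "capabilities": {"wide-column", "encryption-at-rest", "eu-region"}},
    {"name": "mongodb-us",   "capabilities": {"document", "secondary-indexes"}},
    {"name": "s3-archive",   "capabilities": {"object", "encryption-at-rest", "cheap-cold-storage"}},
]

STORAGE_POLICY = {
    "applies_to": "invoice-documents",
    "required_capabilities": {"encryption-at-rest", "eu-region"},
}


def resolve(policy, configuration_model):
    """Return the storage systems whose capabilities satisfy the policy.

    Because the policy refers to capabilities rather than concrete systems,
    adding or replacing a storage system in the configuration model does not
    require touching the policy, avoiding the ripple effects described above.
    """
    required = policy["required_capabilities"]
    return [s["name"] for s in configuration_model if required <= s["capabilities"]]


if __name__ == "__main__":
    print(resolve(STORAGE_POLICY, CONFIGURATION_MODEL))  # -> ['cassandra-eu']
```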

    Data management policies for GDPR compliance at run time

    Privacy by Design, as dictated by data protection regulations such as the GDPR, profoundly impacts organizations, not only in terms of their data processing activities, business models, organizational structures, or functional architectures, but also at the level of operational data management. Moreover, data tiers in modern applications are increasingly hybrid in their deployment model (combining both on-premise and third-party cloud resources) and dynamic in their configurations. Instead of hardcoding, we argue in favor of externalizing data management logic into configurable data storage policies that are enforced at run time. Such policies in turn become an important artifact to demonstrate compliance. We illustrate this vision in the context of an industrial application case, a document processing SaaS offering.
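
    A minimal sketch of what run-time enforcement of such an externalized policy could look like is given below. The policy is plain configuration rather than hard-coded logic, so it can be changed and audited independently of the application. All category names, stores, and rules are hypothetical and not taken from the paper.

```python
# Hypothetical externalized data management policy, enforced at run time.

POLICY = {
    "personal_data":     {"allowed_locations": ["on-premise"], "retention_days": 365},
    "non_personal_data": {"allowed_locations": ["on-premise", "public-cloud"], "retention_days": 1825},
}


def select_store(category: str, available_stores: dict) -> str:
    """Pick a storage target whose location is permitted for this data category."""
    allowed = POLICY[category]["allowed_locations"]
    for store_name, location in available_stores.items():
        if location in allowed:
            return store_name
    raise RuntimeError(f"No compliant store for category {category!r}")


def is_expired(category: str, age_days: int) -> bool:
    """Check the configurable retention rule for this data category."""
    return age_days > POLICY[category]["retention_days"]


if __name__ == "__main__":
    stores = {"local-postgres": "on-premise", "cloud-blob": "public-cloud"}
    print(select_store("personal_data", stores))        # -> local-postgres
    print(select_store("non_personal_data", stores))    # -> local-postgres (first compliant match)
    print(is_expired("personal_data", age_days=400))    # -> True
```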

    Leveraging NoSQL for scalable and dynamic data encryption in multi-tenant SaaS

    In the context of multi-tenant SaaS applications, data confidentiality support is increasingly being offered from within the application layer instead of the database layer or the storage layer, to accommodate the continuously changing requirements of multiple tenants. Application-level data management middleware platforms are becoming increasingly compelling for dealing with the complexity of multi-cloud or federated cloud storage architectures as well as multi-tenant SaaS applications. However, these platforms typically support traditional data mapping strategies that were created under the assumption of a fixed and rigorous database schema. Mapping data objects while supporting varying data confidentiality requirements therefore leads to fragmentation of data over distributed storage nodes. This introduces significant performance overhead at the level of individual database transactions (e.g., CRUD transactions) and negatively affects overall scalability. To address these challenges, we present a dedicated data mapping strategy that leverages the data schema flexibility of columnar NoSQL databases to accomplish dynamic and fine-grained data encryption in a more efficient and scalable manner. We validate the solution in the context of an industrial multi-tenant SaaS application and conduct a comprehensive performance evaluation. The results confirm that the proposed data mapping strategy indeed yields scalability and performance improvements.
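
    The sketch below illustrates the general idea of per-tenant, field-level encryption applied when mapping a row to a columnar store: only the fields a tenant marks as confidential are encrypted, so the remaining columns stay usable, and a flexible schema lets encrypted and plaintext columns coexist in one row. It uses the cryptography package's Fernet cipher for brevity; the tenant configuration, field names, and mapping function are invented for illustration and are not the paper's actual data mapping strategy.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical per-tenant, field-level encryption before persisting a row.
# Each tenant has its own key and its own set of confidential fields.
TENANT_CONFIG = {
    "tenant-a": {"confidential_fields": {"ssn", "salary"}, "key": Fernet.generate_key()},
    "tenant-b": {"confidential_fields": {"ssn"}, "key": Fernet.generate_key()},
}


def map_row(tenant_id: str, row: dict) -> dict:
    """Encrypt the tenant's confidential fields and leave the other fields as-is."""
    config = TENANT_CONFIG[tenant_id]
    fernet = Fernet(config["key"])
    mapped = {}
    for column, value in row.items():
        if column in config["confidential_fields"]:
            mapped[column] = fernet.encrypt(str(value).encode())
        else:
            mapped[column] = value
    return mapped


if __name__ == "__main__":
    print(map_row("tenant-a", {"name": "Alice", "ssn": "123-45-6789", "salary": 50000}))
    print(map_row("tenant-b", {"name": "Bob", "ssn": "987-65-4321", "salary": 60000}))
```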

    A Workload-driven Document Database Schema Recommender (DBSR)

    Database schema design requires careful consideration of the application’s data model, workload, and target database technology to optimize for performance and data size. Traditional normalization schemes used in relational databases minimize data redundancy, whereas NoSQL document-oriented databases favor redundancy and optimize for horizontal scalability and performance. Systematic NoSQL schema design involves multiple dimensions, and a database designer is in practice required to carefully consider (i) which data elements to copy and co-locate, (ii) which data elements to normalize, and (iii) how to encode data, while taking into account factors such as the workload and the data model. In this paper, we present a workload-driven document database schema recommender (DBSR), which takes a systematic, search-based approach to exploring the complex schema design space. The recommender takes as its main inputs the application’s data model and its read workload, and outputs (i) a suggested document schema (featuring secondary indexing), (ii) query plan recommendations, and (iii) a document utility matrix that encodes insights into their respective costs and relative utility. We evaluate the recommended schemas in MongoDB using YCSB and show significant benefits to read query performance.
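
    As a greatly simplified illustration of the search-based idea, the sketch below enumerates which related entities to embed (copy/co-locate) into a parent document and scores each candidate schema against a weighted read workload, trading estimated document reads against a crude redundancy penalty. The entities, workload, and cost model are invented and do not reproduce DBSR's actual cost functions or utility matrix.

```python
from itertools import combinations

# Toy search over document-embedding choices: which of the related entities
# should be embedded into the "customer" document, given a read workload?

ENTITIES = ["customer", "orders", "reviews"]

# Read workload: (entities touched by the query, relative frequency).
WORKLOAD = [
    ({"customer", "orders"}, 0.7),
    ({"customer", "reviews"}, 0.2),
    ({"customer", "orders", "reviews"}, 0.1),
]

REDUNDANCY_PENALTY = 0.4  # crude stand-in for extra storage/write cost per embedded copy


def estimated_reads(embedded: set, query_entities: set) -> int:
    """1 read for the root document plus 1 per referenced (non-embedded) entity."""
    return 1 + len(query_entities - embedded - {"customer"})


def recommend(entities, workload):
    """Exhaustively search embedding choices and return the lowest-cost schema."""
    others = [e for e in entities if e != "customer"]
    candidates = []
    for r in range(len(others) + 1):
        for choice in combinations(others, r):
            embedded = set(choice)
            read_cost = sum(freq * estimated_reads(embedded, q) for q, freq in workload)
            cost = read_cost + REDUNDANCY_PENALTY * len(embedded)
            candidates.append((cost, embedded))
    return min(candidates, key=lambda c: c[0])


if __name__ == "__main__":
    cost, embedded = recommend(ENTITIES, WORKLOAD)
    print(f"Best schema: embed {embedded or 'nothing'} into customer documents (score {cost:.2f})")
```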