
    Providing Transaction Class-Based QoS in In-Memory Data Grids via Machine Learning

    Elastic architectures and the "pay-as-you-go" resource pricing model offered by many cloud infrastructure providers may seem the right choice for companies running data-centric applications with highly variable workloads. In this context, in-memory transactional data grids have proven particularly well suited to exploiting the advantages of elastic computing platforms, mainly thanks to their ability to be dynamically (re-)sized and tuned. However, when specific QoS requirements must be met, such architectures have proven complex for humans to manage. In particular, their management is very difficult without mechanisms supporting run-time automatic sizing/tuning of the data platform and of the underlying (virtual) hardware resources provided by the cloud. In this paper, we present a neural network-based architecture in which the system is constantly and automatically re-configured, particularly in terms of computing resources.
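The control loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's system: `predict_latency` is a toy linear stand-in for the trained neural network, and all names and numbers are hypothetical.

```python
# Hypothetical sketch: pick the smallest (cheapest) node count whose
# predicted response time meets the QoS target. predict_latency() is a
# toy stand-in for the trained neural-network performance model.

def predict_latency(nodes: int, tx_rate: float) -> float:
    # Toy model: latency grows with load and shrinks as nodes are added.
    return 5.0 + tx_rate / (nodes * 100.0)

def pick_configuration(tx_rate: float, qos_ms: float, max_nodes: int = 16) -> int:
    for nodes in range(1, max_nodes + 1):
        if predict_latency(nodes, tx_rate) <= qos_ms:
            return nodes          # smallest configuration meeting the QoS target
    return max_nodes              # fall back to the largest allowed size

print(pick_configuration(tx_rate=2000.0, qos_ms=10.0))  # -> 4
```

Re-running this decision periodically against fresh workload measurements yields the constant automatic re-configuration the abstract refers to.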

    Using real options to select stable Middleware-induced software architectures

    The requirements that force decisions towards building distributed system architectures are usually of a non-functional nature. Scalability, openness, heterogeneity, and fault-tolerance are examples of such non-functional requirements. The current trend is to build distributed systems with middleware, which provides the application developer with primitives for managing the complexity of distribution and system resources, and for realising many of the non-functional requirements. As non-functional requirements evolve, the 'coupling' between the middleware and the architecture becomes the focal point for understanding the stability of the distributed software architecture in the face of change. It is hypothesised that the choice of a stable distributed software architecture depends on the choice of the underlying middleware and its flexibility in responding to future changes in non-functional requirements. Drawing on a case study that adequately represents a medium-size component-based distributed architecture, we report how a likely future change in scalability could impact the architectural structure of two versions, each induced with a distinct middleware: one with CORBA and the other with J2EE. An option-based model is derived to value the flexibility of the induced architectures and to guide the selection. The hypothesis is verified to be true for the given change. The paper concludes with some observations that could stimulate future research in the area of relating requirements to software architectures.
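The option-based comparison can be illustrated with a one-step real-option valuation: the flexibility of an architecture is worth the discounted expected payoff of exercising the change only when it is beneficial. This is a generic sketch of real-options reasoning with illustrative numbers, not the paper's actual model.

```python
# Hypothetical sketch: value architectural flexibility as a one-step real
# option on the benefit of absorbing a future scalability change.
# exercise_cost is the cost of accommodating the change in that architecture.

def option_value(benefit_up: float, benefit_down: float, p_up: float,
                 exercise_cost: float, discount: float = 0.95) -> float:
    # The essence of flexibility: exercise only when the payoff is positive.
    payoff_up = max(benefit_up - exercise_cost, 0.0)
    payoff_down = max(benefit_down - exercise_cost, 0.0)
    return discount * (p_up * payoff_up + (1 - p_up) * payoff_down)

# Illustrative only: suppose the change is costlier to absorb in one
# middleware-induced architecture than in the other.
arch_a = option_value(benefit_up=120.0, benefit_down=40.0, p_up=0.5, exercise_cost=80.0)
arch_b = option_value(benefit_up=120.0, benefit_down=40.0, p_up=0.5, exercise_cost=30.0)
print(arch_b > arch_a)  # the cheaper-to-change architecture carries more option value
```

Under this framing, selecting the more stable architecture amounts to comparing the option values the two middleware choices confer.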

    Predicting Intermediate Storage Performance for Workflow Applications

    Configuring a storage system to better serve an application is a challenging task, complicated by a multidimensional, discrete configuration space and the high cost of exploring that space (e.g., by running the application under different storage configurations). To enable selecting the best configuration in a reasonable time, we design an end-to-end performance prediction mechanism that estimates the turn-around time of an application using the storage system under a given configuration. Our approach focuses on a generic object-based storage system design; supports exploring the impact of optimizations targeting workflow applications (e.g., various data placement schemes) in addition to other, more traditional, configuration knobs (e.g., stripe size or replication level); and models the system's operation at the data-chunk and control-message level. This paper presents our experience to date with designing and using this prediction mechanism. We evaluate it using micro-benchmarks, synthetic benchmarks mimicking real workflow applications, and a real application. A preliminary evaluation shows that we are on track to meet our objectives: the mechanism scales to model a workflow application run on an entire cluster while offering an over 200x speedup factor (normalized by resource) compared to running the actual application, and, in the limited number of scenarios we study, achieves a prediction accuracy that enables identifying the best storage system configuration.
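The selection step the abstract implies can be sketched as scoring every point of a small discrete configuration space with the predictor instead of running the application. This is a minimal illustration: `predict_runtime` is a toy cost model standing in for the chunk- and message-level prediction mechanism, and the knob values are hypothetical.

```python
# Hypothetical sketch: search a discrete configuration space (stripe size x
# replication level) using a predictor rather than real application runs.
from itertools import product

def predict_runtime(stripe_kb: int, replicas: int) -> float:
    # Toy cost model: larger stripes help up to a cap; replication adds overhead.
    return 100.0 / min(stripe_kb, 1024) * 64 + 5.0 * replicas

def best_configuration(stripes, replica_levels):
    candidates = product(stripes, replica_levels)
    return min(candidates, key=lambda cfg: predict_runtime(*cfg))

print(best_configuration([256, 512, 1024, 2048], [1, 2, 3]))  # -> (1024, 1)
```

Because each candidate is scored in microseconds rather than by an actual run, even exhaustive enumeration of a modest space stays far cheaper than a single real execution, which is where the reported speedup comes from.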

    A Peer-to-Peer Middleware Framework for Resilient Persistent Programming

    The persistent programming systems of the 1980s offered a programming model that integrated computation and long-term storage. In these systems, reliable applications could be engineered without requiring the programmer to write translation code to manage the transfer of data to and from non-volatile storage. More importantly, this simplified the programmer's conceptual model of an application, and avoided the many coherency problems that result from multiple cached copies of the same information. Although technically innovative, persistent languages were not widely adopted, perhaps due in part to their closed-world model. Each persistent store was located on a single host, and there were no flexible mechanisms for communication or transfer of data between separate stores. Here we re-open the work on persistence and combine it with modern peer-to-peer techniques in order to provide support for orthogonal persistence in resilient and potentially long-running distributed applications. Our vision is of an infrastructure within which an application can be developed and distributed with minimal modification, whereupon the application becomes resilient to certain failure modes. If a node, or the connection to it, fails during execution of the application, the objects are re-instantiated from distributed replicas, without their reference holders being aware of the failure. Furthermore, we believe that this can be achieved within a spectrum of application programmer intervention, ranging from minimal to totally prescriptive, as desired. The same mechanisms encompass an orthogonally persistent programming model. We outline our approach to implementing this vision, and describe current progress. Comment: Submitted to EuroSys 200
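The transparent re-instantiation described above can be sketched as a reference holder that silently fails over to another replica. This is a generic illustration of the idea, not the paper's middleware; all names (`ResilientRef`, `NodeDown`) are hypothetical.

```python
# Hypothetical sketch: a reference that re-instantiates its object from the
# next available replica, so callers never observe a node failure.

class NodeDown(Exception):
    pass

class ResilientRef:
    def __init__(self, replicas):
        self._replicas = list(replicas)   # fetchers, one per replica host

    def get(self):
        for fetch in self._replicas:
            try:
                return fetch()            # first live replica serves the object
            except NodeDown:
                continue                  # fail over to the next replica
        raise NodeDown("all replicas unavailable")

def dead():
    raise NodeDown()                      # simulates a failed node

ref = ResilientRef([dead, lambda: {"state": 42}])
print(ref.get()["state"])                 # -> 42, despite the primary being down
```

The caller holds only `ref`; which replica actually supplied the object is invisible, matching the abstract's requirement that reference holders remain unaware of failures.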

    Support of Multiple Replica Types in FreeIPA

    LDAP and Kerberos together are widely used for the management of user accounts and for access control to computing infrastructure and services. Installing and administering a system built on these protocols, however, can be difficult and full of obstacles. One solution is the open-source FreeIPA application, an identity management and security policy system capable of handling the entire life cycle of such a deployment. FreeIPA significantly simplifies working with LDAP and Kerberos, from initial deployment through ongoing administration. This thesis focuses on extending the replication capabilities of FreeIPA by adding support for read-only replicas, which should make FreeIPA-controlled systems easier and more efficient to scale.
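The read-only replica idea can be sketched generically: reads are served locally (and so scale out across replicas), while writes are refused or referred to a writable master. This is an illustration of the concept only; the class and names below are hypothetical and do not reflect FreeIPA internals.

```python
# Hypothetical sketch: a read-only replica serves lookups from its local
# copy and rejects modifications, pointing clients at a writable master.

class ReadOnlyReplica:
    def __init__(self, data, master):
        self._data = dict(data)        # local copy kept current by replication
        self._master = master          # hostname of a writable server

    def search(self, dn):
        return self._data.get(dn)      # reads scale out across replicas

    def modify(self, dn, attrs):
        # A real directory server would return an LDAP referral here.
        raise PermissionError(f"read-only replica: write via {self._master}")

replica = ReadOnlyReplica({"uid=alice": {"shell": "/bin/bash"}}, "master.example.com")
print(replica.search("uid=alice")["shell"])  # -> /bin/bash
```

Because replicas never accept writes, they avoid multi-master conflict resolution entirely, which is what makes them cheap to add for scalability.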

    Dynamic execution of scientific workflows in cloud


    Materializing digital collecting: an extended view of digital materiality

    If digital objects are abundant and ubiquitous, why should consumers pay for, much less collect, them? The qualities of digital code present numerous challenges for collecting, yet digital collecting can and does occur. We explore the role of companies in constructing digital consumption objects that encourage and support collecting behaviours, identifying material configuration techniques that materialise these objects as elusive and authentic. Such techniques, we argue, may facilitate those pleasures of collecting otherwise absent in the digital realm. We extend theories of collecting by highlighting the role of objects, and of the companies that construct them, in materialising digital collecting. More broadly, we extend theories of digital materiality by highlighting processes of digital material configuration that occur in the pre-objectification phase of materialisation, acknowledging the role of marketing and design in shaping the qualities exhibited by digital consumption objects and, consequently, related consumption behaviours and experiences.