
    Permanent Objects, Disposable Systems

    4th International Conference on Open Repositories. This presentation was part of the session: Conference Presentations. Date: 2009-05-19, 01:00 PM – 02:30 PM.

    The California Digital Library (CDL) preservation program is re-envisioning its curation infrastructure as a set of loosely coupled, distributed micro-services. Many monolithic systems support a range of preservation activities but also require the user and the hosting institution to buy in to a particular system culture. The result is an institution that becomes, say, a DSpace, Fedora, or LOCKSS "shop", with a specific worldview and set of object flows and structures that will eventually need to be abandoned when it comes time to transition to the next system. Experience shows that these transitions are unavoidable, despite claims that once an object is in the system, it will be safe forever. In view of this, it is safer and more cost-effective to acknowledge from the outset the inevitable transience of systems and to plan on managing, rather than resisting, change. The disruption caused by change can be mitigated by basing curation services on simple universal structures and protocols (e.g., filesystems, HTTP) and micro-services that operate on them. We promote a "mix and match" approach in which appropriate content- and context-specific curation workflows can be nimbly constructed by combining necessary functions drawn from a granular set of independent micro-services. Micro-services, whether deployed in isolation or in combination, are especially suited to exploitation upstream towards content creators, who normally don't want to think about preservation, especially if it's costly; compared to buying into an entire curation culture, it is easy to adopt a small, inexpensive tool that requires very little commitment. We see digital curation as an ongoing process of enrichment at all stages in the lifecycle of a digital object.
Because the early developmental stages are so critical to an object's health and longevity, it is desirable to push curation "best practices" as far upstream towards the object creators as possible. If preservation is considered only when objects are close to retirement, it is often too late to correct the structural and semantic deficiencies that can impair object usability. The later the intervention, the more expensive the correction process, and it is always difficult to fund interventions for "has been" objects. Early-stage curation thus challenges traditional practice, in which preservation actions are based on end-stage processing: objects are deposited "as is" and kept out of harm's way by limiting access (i.e., dark archives). While some systems are designed to be dark or "dim", with limited access and little regard for versioning or object enrichment, enrichment and access are now seen as necessary curation actions, that is, interventions for the sake of preservation. In particular, the darkness of an entire collection can change in the blink of an eye, for example, as the result of a court ruling or an access rights purchase; turning the lights on for a collection should be as simple as throwing a switch, and should not require transferring the collection from a "preservation repository" to an "access repository". Effective curation services must be flexible and easily configurable in order to respond appropriately to the wide diversity of content and content uses. To be most effective, curation practices should be pushed not only upstream but also out to many different contexts. The micro-services approach promotes the idea that curation is an outcome, not a place: curation actions should be applied to content where it most usefully exists, for the convenience of its creators or users.
For example, high-value digital assets in access repositories, or even on scholars' desktops, would certainly benefit from such things as persistent identification or regular audits to discover and repair bit-level damage, functions usually available only in the context of a "preservation system" but now easily applied to content where it most usefully resides, without requiring transfer to a central location.
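To make the "small, inexpensive tool that requires very little commitment" idea concrete, here is a minimal sketch (hypothetical, not CDL's actual tooling) of a fixity micro-service: it operates directly on an ordinary filesystem, records checksums, and later audits for bit-level damage, with no repository system required.

```python
import hashlib
import json
from pathlib import Path

def record_fixity(directory: str, manifest: str = "manifest.json") -> dict:
    """Walk a directory and record a SHA-256 digest for every file."""
    digests = {
        str(p.relative_to(directory)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(directory).rglob("*") if p.is_file()
    }
    Path(manifest).write_text(json.dumps(digests, indent=2))
    return digests

def audit_fixity(directory: str, manifest: str = "manifest.json") -> list:
    """Re-hash files and report any that are missing or no longer match."""
    expected = json.loads(Path(manifest).read_text())
    damaged = []
    for name, digest in expected.items():
        p = Path(directory) / name
        if not p.is_file() or hashlib.sha256(p.read_bytes()).hexdigest() != digest:
            damaged.append(name)
    return damaged
```

Because such a tool speaks only "filesystem", it can run on a scholar's desktop or inside a larger workflow equally well, which is exactly the upstream adoption argument of the abstract.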

    Secure cloud micro services using Intel SGX

    The micro service paradigm targets the implementation of large and scalable systems while enabling fine-grained service-level maintainability. Due to their scalability, such architectures are frequently used in cloud environments, which are often subject to privacy and trust issues that hinder the deployment of services dealing with sensitive data. In this paper we investigate the integration of trusted execution based on Intel Software Guard Extensions (SGX) into micro service applications. We present Vert.x Vault, which supports SGX-based trusted execution in Eclipse Vert.x, a renowned toolkit for writing reactive micro service applications. With our approach, secure micro services can run alongside regular ones, interconnected via the Vert.x event bus to build large Vert.x applications that can contain multiple trusted components. Maintaining a full-fledged Java Virtual Machine (JVM) inside an SGX enclave is impractical due to its complexity, less secure because of a large Trusted Code Base (TCB), and would suffer performance penalties due to a high memory footprint. However, as Vert.x is written in Java, achieving a lean TCB requires integrating native enclave C/C++ code into Vert.x, for which we propose the use of the Java Native Interface (JNI). Vert.x Vault provides the benefits of micro service architectures together with trusted execution to support privacy and data confidentiality for sensitive applications in the cloud at scale. In our evaluation we show the feasibility of our approach, buying a significantly increased level of security for a low performance overhead of only ≈ 8.7%.
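The SGX and JNI machinery itself cannot be reproduced in a few lines, but the partitioning pattern the paper relies on can be: regular services reach the trusted component only through narrow messages on an event bus, so the trusted code base stays small. Everything below is a language-neutral toy sketch in Python, not Vert.x Vault's actual API; the bus, the `vault.seal` address, and the HMAC operation are all hypothetical stand-ins.

```python
import hashlib
import hmac

class EventBus:
    """Toy stand-in for a message bus such as the Vert.x event bus."""
    def __init__(self):
        self.handlers = {}

    def register(self, address, handler):
        self.handlers[address] = handler

    def request(self, address, message):
        return self.handlers[address](message)

def make_vault(bus, secret_key: bytes):
    """Trusted component: the only code that ever touches the key.
    In the paper this logic would live inside an SGX enclave reached
    via JNI; here it is an ordinary handler, to show only the narrow
    interface exposed to the rest of the application."""
    def seal(message):
        return hmac.new(secret_key, message["payload"], hashlib.sha256).hexdigest()
    bus.register("vault.seal", seal)

bus = EventBus()
make_vault(bus, b"enclave-provisioned-key")
# Untrusted callers see only an address and a message format, never the key.
tag = bus.request("vault.seal", {"payload": b"sensitive record"})
```

The design point is that widening the bus interface, not the vault internals, is what would grow the TCB.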

    Adaptive and Application-agnostic Caching in Service Meshes for Resilient Cloud Applications

    Service meshes factor out code dealing with inter-micro-service communication. The overall resilience of a cloud application is improved if constituent micro-services return stale data instead of no data at all. This paper proposes and implements application-agnostic caching for micro-services. While caching is widely employed for serving web service traffic, it is rarely used in inter-micro-service communication. Micro-service responses are highly dynamic, which requires carefully choosing adaptive time-to-live (TTL) caching algorithms. Our approach is application-agnostic, cloud-native, and supports gRPC. We evaluate our approach and implementation using the micro-service benchmark by Google Cloud called Hipster Shop. Our approach results in caching of about 80% of requests. Results show the feasibility and efficiency of our approach, which encourages implementing caching in service meshes. Additionally, we make the code, experiments, and data publicly available.
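The two ideas in the abstract, adaptive TTLs for dynamic responses and serving stale data when an upstream service fails, can be sketched generically. This is not the paper's algorithm or implementation; the class name, the multiplicative TTL adjustment, and the adaptation factor are all illustrative assumptions.

```python
import time

class AdaptiveTTLCache:
    """Per-key TTL cache that (a) adapts each key's TTL to how often its
    value actually changes and (b) serves a stale entry when the upstream
    micro-service fails, trading freshness for resilience."""
    def __init__(self, initial_ttl=1.0, alpha=0.5, clock=time.monotonic):
        self.initial_ttl, self.alpha, self.clock = initial_ttl, alpha, clock
        self.entries = {}  # key -> (value, stored_at, ttl)

    def get(self, key, fetch):
        now = self.clock()
        entry = self.entries.get(key)
        if entry:
            value, stored_at, ttl = entry
            if now - stored_at < ttl:
                return value                              # fresh hit
        try:
            fresh = fetch(key)                            # call the upstream service
        except Exception:
            if entry:
                return entry[0]                           # stale, but better than no data
            raise
        if entry and fresh == entry[0]:
            ttl = entry[2] * (1 + self.alpha)             # value stable: lengthen TTL
        elif entry:
            ttl = max(entry[2] * (1 - self.alpha), 0.1)   # value changed: shorten TTL
        else:
            ttl = self.initial_ttl
        self.entries[key] = (fresh, now, ttl)
        return fresh
```

In a mesh, logic like this would sit in the sidecar proxy rather than in application code, which is what makes the approach application-agnostic.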

    Curation Micro-Services: A Pipeline Metaphor for Repositories

    The effective long-term curation of digital content requires expert analysis, policy setting, and decision making, and a robust technical infrastructure that can effect and enforce curation policies and implement appropriate curation activities. Since the number, size, and diversity of content under curation management will undoubtedly continue to grow over time, and the state of curation understanding and best practices relative to that content will undergo a similar constant evolution, one of the overarching design goals of a sustainable curation infrastructure is flexibility. In order to provide the necessary flexibility of deployment and configuration in the face of potentially disruptive changes in technology, institutional mission, and user expectations, a useful design metaphor is provided by the Unix pipeline, in which complex behavior is an emergent property of the coordinated action of a number of simple independent components. The decomposition of repository function into a highly granular and orthogonal set of independent but interoperable micro-services is consistent with the principles of prudent engineering practice. Since micro-services are small and self-contained, they are individually more robust and collectively easier to implement and maintain. By being freely interoperable in various strategic combinations, any number of micro-services-based repositories can be easily constructed to meet specific administrative or technical needs. Importantly, since these repositories are purposefully built from policy-neutral and protocol- and platform-independent components to provide the function minimally necessary for a specific context, they are not constrained to conform to an infrastructural monoculture of prepackaged repository solutions.
The University of California Curation Center has developed an open source micro-services infrastructure that is being used to manage the diverse digital collections of the ten-campus University system and a number of non-university content partners. This paper provides a review of the conceptual design and technical implementation of this micro-services environment, a case study of initial deployment, and a look at ongoing micro-services developments.
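The Unix-pipeline metaphor can be made concrete: each micro-service is a small function over a simple shared structure, and a repository workflow is just their composition. The service names, the object-state dictionary, and the ARK-style identifier below are hypothetical illustrations, not the Curation Center's actual services.

```python
import hashlib
from functools import reduce

CATALOG = []  # toy in-memory registry used by the inventory service

# Each micro-service is a small function: object-state dict in, dict out.
def identify(obj):
    obj.setdefault("id", "ark:/hypothetical/" + obj["name"])  # mint an identifier
    return obj

def fixity(obj):
    obj["sha256"] = hashlib.sha256(obj["payload"]).hexdigest()  # record a digest
    return obj

def inventory(obj):
    CATALOG.append(obj["id"])  # register the object
    return obj

def pipeline(*services):
    """Compose micro-services the way a shell pipe composes filters."""
    return lambda obj: reduce(lambda acc, svc: svc(acc), services, obj)

curate = pipeline(identify, fixity, inventory)
record = curate({"name": "page1.tif", "payload": b"II*\x00"})
```

Because each stage is independent and orthogonal, a different context can reorder, drop, or add stages without touching the others, which is the flexibility argument of the abstract.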

    Supporting Knowledge Management Instruments with Composable Micro-Services

    Despite the fact that knowledge management (KM) challenges cannot be solved by installing a technical system alone, technical support for KM initiatives is still an important issue and nowadays requires handling of context, intelligent content analysis, and extended collaboration support. Since information systems have significantly improved in the last ten years with regard to implementing Web 2.0 features and semantic content analysis, knowledge workers can expect better support from IT than ever. After the human-oriented, technology-oriented (documents), process-oriented, and social KM phases, KM support now needs to integrate these beneficial technologies instead of hyping one and neglecting the others. The true nature and potential of social media manifests only when people incorporate them into their day-to-day work routines or even "live" the social media idea. The same is true for business process management (BPM): if BPM tools are not integrated into existing, well-known information systems, acceptance will be low. Practice shows that employees often do not even know in which process they are currently working.

    Integrating Game Engines into the Mobile Cloud as Micro-services

    Game engines have been widely adopted in fields other than games, such as data visualization and game-based education. As the number of mobile devices owned by each person increases, extra resources become available in personal device clouds, expanding the typical learning space beyond the classroom and increasing possibilities for teacher-student interactions. Owning multiple devices poses the problem of how to make use of idle resources on devices that are slightly dated or lack portability compared to newer models. Such resources include CPU power, display, and data storage. To solve this problem, an architecture is proposed that lets mobile applications access these resources on various mobile devices. The main approach is to divide an application into several modules and distribute them over a personal device cloud (formed by devices owned by the same user) as micro-services. In this architecture, a game engine is incorporated as a render module to tap into its rendering capability. Additionally, modules communicate using CoAP, which has minimal overhead. To evaluate the feasibility of this architecture, a prototype is implemented, deployed over a mobile device, and tested in a modest context similar to real-life settings.
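The placement decision at the heart of this architecture, matching a module's needs against the capabilities of idle same-user devices, can be sketched without the CoAP transport (which a real implementation would use for the actual messaging). All device names and capability labels below are invented for illustration.

```python
class Device:
    """A user-owned device advertising the resources it can contribute."""
    def __init__(self, name, capabilities, idle=True):
        self.name, self.capabilities, self.idle = name, set(capabilities), idle

class DeviceCloud:
    """Routes an application module (e.g. a game-engine render module) to a
    suitable idle device; the coordination that would run over CoAP in the
    paper's prototype is reduced here to a direct lookup."""
    def __init__(self, devices):
        self.devices = devices

    def place(self, module, needs):
        # Pick the first idle device whose capabilities cover the module's needs.
        for d in self.devices:
            if d.idle and needs <= d.capabilities:
                return d.name
        return None

cloud = DeviceCloud([
    Device("old-tablet", {"display", "storage"}),
    Device("phone", {"display", "gpu", "storage"}, idle=False),
    Device("laptop", {"display", "gpu", "cpu", "storage"}),
])
host = cloud.place("render", {"gpu", "display"})
```

Here the render module skips the under-equipped tablet and the busy phone and lands on the laptop, which is the "use idle resources on other devices" scenario the abstract describes.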