61 research outputs found

    Adaptive Caching of Distributed Components

    Locality of reference is an important property of distributed applications. Caching is typically employed during the development of such applications to exploit this property by locally storing queried remote data: subsequent accesses can be accelerated by serving their results immediately from the local store. Current middleware architectures, however, hardly support this non-functional aspect. The thesis at hand therefore outsources caching as a separate, configurable middleware service. Integration into the software development lifecycle provides for the early capturing, modeling, and later reuse of caching-related metadata. At runtime, the implemented system can additionally adapt to changed usage behavior with respect to data cacheability, thus healing misconfigurations and optimizing itself towards an appropriate configuration. Speculative prefetching of data that will probably be queried in the immediate future complements the presented approach.
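
    The abstract does not spell out the service's interface; the adaptive idea can still be illustrated. The following is a minimal Java sketch, assuming a hypothetical key-value view of the remote component: the class name AdaptiveCache, the remoteFetch function, the hit-rate threshold, and the probation window are all illustrative assumptions, not the thesis's actual API. It caches remote lookups per key and stops caching keys whose observed hit rate stays below a threshold.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // Sketch of an adaptive cache for remote component data (hypothetical design,
    // not the thesis's API). Items with a persistently low hit rate are marked
    // non-cacheable and fetched remotely on every access.
    public class AdaptiveCache<K, V> {
        private static class Entry<V> {
            V value;
            long hits, misses;
            boolean cacheable = true;
        }

        private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
        private final Function<K, V> remoteFetch;  // stands in for the remote component call
        private final double minHitRate;           // below this, stop caching the item
        private final long probation;              // accesses before re-evaluating cacheability

        public AdaptiveCache(Function<K, V> remoteFetch, double minHitRate, long probation) {
            this.remoteFetch = remoteFetch;
            this.minHitRate = minHitRate;
            this.probation = probation;
        }

        public V get(K key) {
            Entry<V> e = entries.computeIfAbsent(key, k -> new Entry<>());
            synchronized (e) {
                if (e.cacheable && e.value != null) {
                    e.hits++;
                    return e.value;
                }
                e.misses++;
                V v = remoteFetch.apply(key);
                if (e.cacheable) e.value = v;
                adapt(e);
                return v;
            }
        }

        // Called when the remote data changed; invalidates and penalizes the entry.
        public void invalidate(K key) {
            Entry<V> e = entries.get(key);
            if (e != null) synchronized (e) { e.value = null; e.misses++; adapt(e); }
        }

        private void adapt(Entry<V> e) {
            long total = e.hits + e.misses;
            if (total >= probation) {
                e.cacheable = (double) e.hits / total >= minHitRate;
                e.hits = 0; e.misses = 0;  // restart the observation window
                // Note: a production system would periodically re-enable caching
                // to re-test items that were marked non-cacheable.
                if (!e.cacheable) e.value = null;
            }
        }

        public static void main(String[] args) {
            AdaptiveCache<String, String> cache =
                    new AdaptiveCache<>(key -> "value-of-" + key, 0.5, 10);
            System.out.println(cache.get("order-17"));  // miss: fetched remotely
            System.out.println(cache.get("order-17"));  // hit: served locally
        }
    }

    In this sketch, invalidations count against an item's hit rate, so frequently changing data is eventually fetched remotely on every access, mirroring the adaptation to changed usage behavior described above.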

    Automatic performance optimisation of component-based enterprise systems via redundancy

    Component technologies, such as J2EE and .NET, have been extensively adopted for building complex enterprise applications. These technologies help address complex functionality and flexibility problems and reduce development and maintenance costs. Nonetheless, current component technologies provide little support for predicting and controlling the emergent performance of software systems that are assembled from distinct components. Static component testing and tuning procedures provide insufficient performance guarantees for components deployed and run in diverse assemblies, under unpredictable workloads and on different platforms. Often, there is no single component implementation or deployment configuration that can yield optimal performance in all possible conditions under which a component may run. Manually optimising and adapting complex applications to changes in their running environment is a costly and error-prone management task. The thesis presents a solution for automatically optimising the performance of component-based enterprise systems. The proposed approach is based on the alternate usage of multiple component variants with equivalent functional characteristics, each one optimised for a different execution environment. A management framework automatically administers the available redundant variants and adapts the system to external changes. The framework uses runtime monitoring data to detect performance anomalies and significant variations in the application's execution environment. It automatically adapts the application so as to use the optimal component configuration under the current running conditions. An automatic clustering mechanism analyses monitoring data and infers information on the components' performance characteristics. System administrators use decision policies to state high-level performance goals and configure system management processes. A framework prototype has been implemented and tested for automatically managing a J2EE application. The obtained results prove the framework's capability to successfully manage a software system without human intervention. The management overhead induced during normal system execution and through management operations indicates the framework's feasibility.
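
    The framework itself is not shown in the abstract. As a rough illustration of redundancy-based selection, the following hypothetical Java sketch (the name VariantManager and the moving-average policy are assumptions, not the thesis's design) times functionally equivalent variants and routes each call to the one with the best observed latency.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Supplier;

    // Sketch: alternate among functionally equivalent component variants,
    // preferring the one with the best observed latency under current conditions.
    public class VariantManager<T> {
        private static class Variant<T> {
            final String name;
            final Supplier<T> impl;
            double avgMillis = Double.NaN;  // exponential moving average of latency
            Variant(String name, Supplier<T> impl) { this.name = name; this.impl = impl; }
        }

        private final List<Variant<T>> variants = new ArrayList<>();
        private static final double ALPHA = 0.2;  // smoothing factor for the moving average

        public void register(String name, Supplier<T> impl) {
            variants.add(new Variant<>(name, impl));
        }

        // Runs the currently best-rated variant and updates its latency estimate.
        public T invokeBest() {
            Variant<T> best = variants.get(0);
            for (Variant<T> v : variants) {
                // Unmeasured variants are tried first; otherwise prefer the lowest average.
                if (Double.isNaN(v.avgMillis)) { best = v; break; }
                if (v.avgMillis < best.avgMillis) best = v;
            }
            long start = System.nanoTime();
            T result = best.impl.get();
            double millis = (System.nanoTime() - start) / 1e6;
            best.avgMillis = Double.isNaN(best.avgMillis)
                    ? millis : ALPHA * millis + (1 - ALPHA) * best.avgMillis;
            return result;
        }

        public static void main(String[] args) {
            VariantManager<String> mgr = new VariantManager<>();
            mgr.register("in-memory", () -> "result (memory-optimised variant)");
            mgr.register("disk-backed", () -> "result (low-memory variant)");
            for (int i = 0; i < 3; i++) System.out.println(mgr.invokeBest());
        }
    }

    The thesis's actual framework additionally uses clustering over monitoring data and administrator-defined decision policies; this sketch compresses all of that into a single latency metric purely for illustration.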

    QoS-Enabled B2B Integration

    Business-To-Business Integration (B2Bi) is a key mechanism for enterprises to gain competitive advantage. However, developing B2Bi applications is far from trivial. Inter alia, reaching agreement among integration partners about the business documents and the control flow of business document exchanges, as well as applying suitable communication technologies for overcoming heterogeneous IT landscapes, are major challenges. At the same time, choreography languages such as ebXML BPSS (ebBP), orchestration languages such as WS-BPEL, and Web Services promise to provide the foundations for seamless interactions among business partners. Automatically translating choreography agreements of integration partners into partner-specific orchestrations is an obvious idea for ensuring conformance of orchestration models to choreography models. Moreover, the application of such model-driven development methods facilitates productivity and cost-effectiveness, whereas applying a service-oriented architecture (SOA) based on WS-BPEL and Web Services leverages standardization and decoupling. So far, however, the realization of QoS attributes has not received the attention necessary to make such approaches suitable for B2Bi. In this report, we describe a proof-of-concept implementation of the translation of ebBP choreographies into WS-BPEL orchestrations that respects B2Bi-relevant QoS attributes.
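
    The report's actual translation rules are not reproduced in the abstract, but the core idea, deriving one orchestration activity per choreography exchange step, can be sketched. In the following hypothetical Java fragment, the class shape is illustrative and the QoS handling is reduced to a single retry count; a real ebBP-to-WS-BPEL translation must also cover control flow, correlation, and fault handling.

    // Sketch: deriving a partner-specific WS-BPEL activity from one
    // choreography business-document exchange (illustrative only).
    public class ChoreographyStep {
        final String partner;        // e.g. "buyer" or "seller"
        final String document;       // business document type exchanged
        final boolean sending;       // true: this partner sends the document
        final long reliableRetries;  // simplistic stand-in for a QoS attribute

        ChoreographyStep(String partner, String document, boolean sending, long retries) {
            this.partner = partner; this.document = document;
            this.sending = sending; this.reliableRetries = retries;
        }

        // Emits the WS-BPEL activity this step induces in the partner's orchestration.
        String toBpelActivity() {
            if (sending) {
                return "<invoke partnerLink=\"" + partner + "\" operation=\"send"
                        + document + "\"/> <!-- retry up to " + reliableRetries
                        + " times for reliable messaging -->";
            }
            return "<receive partnerLink=\"" + partner + "\" operation=\"receive"
                    + document + "\"/>";
        }

        public static void main(String[] args) {
            ChoreographyStep step = new ChoreographyStep("seller", "PurchaseOrder", false, 3);
            System.out.println(step.toBpelActivity());
        }
    }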

    Access to technology in Humboldt County: measuring digital preparedness at the start of a pandemic

    This research is based on summer 2020 online survey data from a stratified random sample of 573 clients and care providers of a rural Northern California government social services agency. The goal was to study information technology access in Humboldt County, California, and the range of digital preparedness of clients of a local government agency: Humboldt County In-Home Supportive Services (IHSS). IHSS serves several groups of rural residents with low incomes, foremost among them older adults and people with disabilities. In 2020, in compliance with federal requirements, IHSS discontinued systems for paper-based client/provider confirmation of services, moving to digital technology-based service record keeping. Findings were that adults with disabilities, with lower incomes, and/or who live in a rural location had less access to technology, were lower technology users, had less confidence with technology, needed more help with technology, and were less ready for technological change. Native Americans, non-binary folx, and those with lower education were also more likely to have less access or be lower users. Changes needed for personal use following the COVID-19 pandemic and shelter-in-place orders, as well as in anticipation of IHSS service changes, included new computers and phones and upgraded internet and phone services. The above groups were those who needed these changes the most. Overall, these people can be considered digitally unprepared, lagging behind the rest of the digital world. This research provides empirical evidence to support IHSS and Humboldt County adoption and implementation of National Digital Inclusion Alliance guidelines for client and provider training, technical assistance, and material assistance.

    IPv6 and IPsec Tests of a Space-Based Asset, the Cisco Router in Low Earth Orbit (CLEO)

    This report documents the design of network infrastructure to support testing and demonstrating network-centric operations and command and control of space-based assets, using IPv6 and IPsec. These tests were performed using the Cisco router in Low Earth Orbit (CLEO), an experimental payload onboard the United Kingdom Disaster Monitoring Constellation (UK-DMC) satellite built and operated by Surrey Satellite Technology Ltd (SSTL). On Thursday, 29 March 2007, NASA Glenn Research Center, Cisco Systems and SSTL performed the first configuration and demonstration of IPsec and IPv6 onboard a satellite in low Earth orbit. IPv6 is the next generation of the Internet Protocol (IP), designed to improve on the popular IPv4 that built the Internet, while IPsec is the protocol used to secure communication across IP networks. This demonstration was made possible in part by NASA's Earth Science Technology Office (ESTO) and shows that new commercial technologies such as mobile networking, IPv6 and IPsec can be used for commercial, military and government space applications. This has direct application to NASA's Vision for Space Exploration. The success of CLEO has paved the way for new space-based Internet technologies, such as the planned Internet Routing In Space (IRIS) payload at geostationary orbit, which will be a U.S. Department of Defense Joint Capability Technology Demonstration. This is a sanitized report for public distribution: all real addressing has been changed to pseudo addressing.

    Virtual Machine Image Management for Elastic Resource Usage in Grid Computing

    Grid Computing has evolved from an academic concept to a powerful paradigm in the area of high performance computing (HPC). Over the last few years, powerful Grid computing solutions were developed that allow the execution of computational tasks on distributed computing resources. Grid computing has recently attracted many commercial customers. To enable commercial customers to process sensitive data in the Grid, strong security mechanisms must be put in place to protect the customers' data. In contrast, the development of Cloud Computing, which entered the scene in 2006, was driven by industry: it was designed with security in mind from the beginning. Virtualization technology is used to separate users, e.g. by putting each user of a system inside a separate virtual machine, which prevents them from accessing other users' data. The use of virtualization in the context of Grid computing was examined early on and found to be a promising approach to counter the security threats that have appeared with commercial customers. One main part of the work presented in this thesis is the Image Creation Station (ICS), a component which allows users to administer their virtual execution environments (virtual machines) themselves and which is responsible for managing and distributing the virtual machines in the entire system. In contrast to Cloud computing, which was designed to allow even inexperienced users to execute their computational tasks easily, Grid computing is much more complex to use. The ICS makes the Grid easier to use by overcoming traditional limitations, such as the need to pre-install required software on the compute nodes on which users execute their computational tasks. This allows users to bring commercial software to the Grid for the first time, without the need for local administrators to install the software on computing nodes that are accessible by all users. Moreover, the administrative burden is shifted from the local Grid site's administrator to the users or to experienced software providers, enabling the provision of individually tailored virtual machines to each user. The ICS is not only responsible for enabling users to manage their virtual machines themselves; it also ensures that the virtual machines are available on every site that is part of the distributed Grid system. A second aspect of the presented solution focuses on the elasticity of the system by automatically acquiring free external resources depending on the system's current workload. In contrast to existing systems, the presented approach allows the system's administrator to add or remove resource sets during runtime without needing to restart the entire system. Moreover, the presented solution allows users not only to use existing Grid resources but also to scale out to Cloud resources and use these resources on demand. By ensuring that unused resources are shut down as soon as possible, the computational costs of a given task are minimized. In addition, the presented solution allows each user to specify which resources may be used to execute a particular job. This is useful when a job processes sensitive data, e.g. data that is not allowed to leave the company. To obtain comparable functionality in today's systems, a user must submit her computational task to a particular resource set, losing the ability to schedule automatically when more than one set of resources could be used.
    In addition, the proposed solution prioritizes each set of resources by taking different metrics into account (e.g. the level of trust or the computational costs) and tries to schedule the job to the resources with the highest priority first. Notably, the priority often mirrors the physical distance from the resources to the user: a locally available cluster usually has a higher priority due to its high level of trust and its computational costs, which are usually lower than the costs of using Cloud resources. This scheduling strategy therefore minimizes the costs of job execution while improving security at the same time, since data is not necessarily transferred to remote resources and the probability of attacks by malicious external users is reduced. Bringing both components together results in a system that adapts automatically to the current workload by using external (e.g. Cloud) resources together with existing locally available resources or Grid sites, and that provides individually tailored virtual execution environments to the system's users.
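
    The thesis's actual scheduler is not given in the abstract. The priority idea can be illustrated with a small hypothetical Java sketch, in which the class names, the priority field, and the elastic flag are assumptions: permitted resource sets are filtered by the user's data constraints, and the highest-priority set with capacity (or the ability to grow, as with Cloud resources) wins.

    import java.util.Comparator;
    import java.util.List;

    // Sketch: choose the permitted resource set with the highest priority and
    // enough free slots; elastic sets (e.g. Cloud) can acquire machines on demand.
    public class ResourceScheduler {
        public static class ResourceSet {
            final String name;
            final int priority;     // higher = more trusted / cheaper, tried first
            int freeSlots;
            final boolean elastic;  // can acquire additional machines on demand
            public ResourceSet(String name, int priority, int freeSlots, boolean elastic) {
                this.name = name; this.priority = priority;
                this.freeSlots = freeSlots; this.elastic = elastic;
            }
        }

        // Returns the chosen set, or null if the job's constraints cannot be met.
        public ResourceSet schedule(List<ResourceSet> sets, List<String> allowedByUser) {
            return sets.stream()
                    .filter(s -> allowedByUser.contains(s.name))    // user's data constraints
                    .filter(s -> s.freeSlots > 0 || s.elastic)      // elastic sets can grow
                    .max(Comparator.comparingInt(s -> s.priority))  // trust/cost priority
                    .orElse(null);
        }

        public static void main(String[] args) {
            ResourceScheduler sched = new ResourceScheduler();
            List<ResourceSet> sets = List.of(
                    new ResourceSet("local-cluster", 3, 0, false),
                    new ResourceSet("partner-grid", 2, 4, false),
                    new ResourceSet("public-cloud", 1, 0, true));
            // A job whose data may not leave the company excludes the public cloud:
            ResourceSet chosen = sched.schedule(sets, List.of("local-cluster", "partner-grid"));
            System.out.println("scheduled on: " + chosen.name);  // partner-grid (cluster is full)
        }
    }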

    Security Audit Compliance for Cloud Computing

    Cloud computing has grown rapidly over the past three years and is widely popular across today's IT landscape. In a comparative study among 250 IT decision makers at UK companies, respondents said that they already use cloud services for 61% of their systems. Cloud vendors promise "infinite scalability and resources" combined with on-demand access from everywhere. This lets cloud users quickly forget that there is still a real IT infrastructure behind a cloud. Due to virtualization and multi-tenancy, the complexity of these infrastructures is even higher than that of traditional data centres, while it is hidden from the user and outside of their control. This makes the management of service provisioning, monitoring, backup, disaster recovery and especially security more complicated. Due to this, and to a number of severe security incidents at commercial providers in recent years, there is a growing lack of trust in cloud infrastructures. This thesis presents research on cloud security challenges and how they can be addressed by cloud security audits. Security requirements of an Infrastructure as a Service (IaaS) cloud are identified, and it is shown how they differ from those of traditional data centres. To address cloud-specific security challenges, a new cloud audit criteria catalogue is developed. Subsequently, a novel cloud security audit system is developed, which provides a flexible audit architecture for frequently changing cloud infrastructures. It is based on lightweight software agents, which monitor key events in a cloud and trigger specific targeted security audits on demand, from both a customer and a cloud provider perspective. To enable these concurrent cloud audits, a Cloud Audit Policy Language is developed and integrated into the audit architecture. Furthermore, to address advanced cloud-specific security challenges, an anomaly detection system based on machine learning technology is developed. By creating cloud usage profiles, a continuous evaluation of events, both customer-specific and spanning customers, helps to detect anomalies within an IaaS cloud. The feasibility of the research is shown with a prototype, and its functionality is presented in three demonstrations. The results prove that the developed cloud audit architecture is able to mitigate cloud-specific security challenges.
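
    The Cloud Audit Policy Language itself is not shown in the abstract. As a rough illustration of the agent-and-policy idea, the following hypothetical Java sketch (the event types, record names, and policy shape are assumptions, not the thesis's design) triggers a targeted audit whenever an observed cloud event matches a registered policy.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;
    import java.util.function.Predicate;

    // Sketch: a lightweight agent matches cloud events against audit policies
    // and triggers the corresponding targeted audit on demand.
    public class AuditAgent {
        public record CloudEvent(String type, String tenant, String resource) {}

        // A policy pairs an event condition with the audit it should trigger.
        public record AuditPolicy(String name, Predicate<CloudEvent> condition,
                                  Consumer<CloudEvent> audit) {}

        private final List<AuditPolicy> policies = new ArrayList<>();

        public void register(AuditPolicy policy) { policies.add(policy); }

        // Called for every key event observed in the cloud infrastructure.
        public void onEvent(CloudEvent event) {
            for (AuditPolicy p : policies) {
                if (p.condition().test(event)) {
                    p.audit().accept(event);  // e.g. scan the migrated VM's new host
                }
            }
        }

        public static void main(String[] args) {
            AuditAgent agent = new AuditAgent();
            agent.register(new AuditPolicy("vm-migration-audit",
                    e -> e.type().equals("VM_MIGRATED"),
                    e -> System.out.println("auditing host of " + e.resource()
                            + " for tenant " + e.tenant())));
            agent.onEvent(new CloudEvent("VM_MIGRATED", "customer-42", "vm-0815"));
        }
    }

    The thesis additionally evaluates events against machine-learned usage profiles for anomaly detection; this sketch covers only the event-triggered audit path.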

    Cloud computing with an emphasis on PaaS and Google app engine

    Thesis on cloud computing with an emphasis on PaaS and Google App Engine.