
    Automatic performance optimisation of component-based enterprise systems via redundancy

    Component technologies such as J2EE and .NET have been extensively adopted for building complex enterprise applications. These technologies help address complex functionality and flexibility problems and reduce development and maintenance costs. Nonetheless, current component technologies provide little support for predicting and controlling the emergent performance of software systems assembled from separate components. Static component testing and tuning procedures provide insufficient performance guarantees for components deployed and run in diverse assemblies, under unpredictable workloads and on different platforms. Often, no single component implementation or deployment configuration yields optimal performance under all conditions in which a component may run. Manually optimising and adapting complex applications to changes in their running environment is a costly and error-prone management task. The thesis presents a solution for automatically optimising the performance of component-based enterprise systems. The proposed approach is based on the alternate use of multiple component variants with equivalent functional characteristics, each one optimised for a different execution environment. A management framework automatically administers the available redundant variants and adapts the system to external changes. The framework uses runtime monitoring data to detect performance anomalies and significant variations in the application's execution environment, and automatically adapts the application to use the optimal component configuration under the current running conditions. An automatic clustering mechanism analyses monitoring data and infers information about the components' performance characteristics. System administrators use decision policies to state high-level performance goals and to configure system management processes. A framework prototype has been implemented and tested for automatically managing a J2EE application. The results obtained demonstrate the framework's capability to manage a software system successfully without human intervention, and the management overhead induced during normal system execution and through management operations indicates that the framework is feasible.
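    As a rough illustration of the variant-selection idea described above, the sketch below (in Java, matching the J2EE setting) keeps several functionally equivalent variants behind one interface, smooths monitored response times, and activates whichever variant currently performs best. All class and method names are illustrative assumptions, not the thesis prototype's API.

        // Hypothetical sketch: redundancy-based performance adaptation.
        // Functionally equivalent variants register under a shared contract;
        // the monitoring layer reports latencies; the manager activates the
        // variant with the lowest smoothed latency.
        import java.util.HashMap;
        import java.util.Map;

        interface OrderService {              // functional contract shared by all variants
            void placeOrder(String item);
        }

        final class VariantManager {
            private final Map<String, OrderService> variants = new HashMap<>();
            private final Map<String, Double> avgLatencyMs = new HashMap<>();
            private volatile String active;

            void register(String name, OrderService variant) {
                variants.put(name, variant);
                if (active == null) active = name;
            }

            // Called by the monitoring layer with a fresh measurement;
            // an exponential moving average smooths out noise.
            void report(String name, double latencyMs) {
                avgLatencyMs.merge(name, latencyMs, (old, cur) -> 0.9 * old + 0.1 * cur);
                adapt();
            }

            // Adaptation step: switch to the best-performing measured variant.
            private void adapt() {
                active = avgLatencyMs.entrySet().stream()
                        .min(Map.Entry.comparingByValue())
                        .map(Map.Entry::getKey)
                        .orElse(active);
            }

            OrderService current() { return variants.get(active); }
        }

    In the thesis, the selection decision is driven by clustering of monitoring data and administrator-defined decision policies rather than the single moving-average heuristic shown here.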

    Web services strategy

    Thesis (S.M.M.O.T.), Massachusetts Institute of Technology, Sloan School of Management, Management of Technology Program, June 2003. Includes bibliographical references (p. 116-123). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. "Everything is connected to everything." El Aleph (1945), by Jorge Luis Borges[1]. This thesis addresses the need to simplify and streamline web service network infrastructure and to identify business models that best leverage Web services technology and industry dynamics to generate positive business results. Web services have evolved from the simple page-display protocol of their origin, beyond links that merely updated web data dynamically from corporate databases, to the point where systems can transact automatically. These Web services represent a series of network business technology standards and capabilities that irrevocably change the way in which businesses will do business. In fact, every business today is a networked business and has opportunities to grow using Web services. This study focuses on the implementation challenges in the financial services market, specifically the On-Line Transaction Processing (OLTP) sector, where legacy mainframes interface with multiple tiers of distribution through proprietary EDI links. The OLTP industry operates under stringent regulatory requirements for availability and auditability, covering not only who performed what transaction but also who had access to information about the information. In this environment, organizational demands on network infrastructure, including hardware, software and personnel, are changing radically, while Information Technology (IT) budgets are concurrently under pressure. The strategic choices for deploying web services in this environment may hold lessons for other industries where cost-effective large-scale processing, high availability, security, manageability and Intellectual Property Rights (IPR) are paramount concerns. In this paper we use a system dynamics model to simulate the impact of market changes, the adoption of innovative technologies and their commoditization on the industry value chain, with the aim of identifying business models and network topologies that best support the growth of an Open Systems network business. From the results of the simulation we derive strategic recommendations for networked business models and web services integration strategies to meet Line Of Business (LOB) objectives. By Stephen B. Miles.

    Dynamic Upgrade of Distributed Software Components

    Upgrading complex telecommunication software systems, which are characterized by their inherent distribution and a very high cost of system unavailability, is a difficult and error-prone maintenance activity. Even more challenging are software upgrades that must not compromise system availability. Dynamic upgrade is a maintenance technique that automates performing and managing upgrades so that the software system remains operational during the upgrade.
    In this thesis, dynamic upgrade is treated as a special case of software deployment in which a running service is replaced with a new version. The problems of dynamic upgrade are introduced using a novel taxonomy that classifies the design issues to be solved when building support for dynamic upgrade with regard to three system aspects: deployment, evolution and dependability. The taxonomy also provides a reference for comparing systems that support dynamic upgrade. An extensive and thorough survey of existing approaches to dynamic upgrade follows and serves as a starting point for designing a solution that supports dynamic upgrade in distributed component-based software systems. Derived from the problem analysis, a model called the Deployment and Upgrade Facility is developed using the Unified Process; it describes both the capabilities needed for managing and performing dynamic upgrades and the properties of upgradable software components. The model is platform-independent and can be used with a range of underlying middleware technologies. It is implemented in a Java-based prototypical framework and extended with platform-specific mechanisms on top of the JGroup/ARM middleware. The framework captures common design solutions and patterns for building support for dynamic upgrade, allows the life cycles of upgrade processes to be controlled and coordinated in the target system, and defines a number of supporting mechanisms and algorithms for upgrade processes that may run with different objectives and under different constraints. Special attention is given to an upgrade algorithm for replicated software components, which achieves a synergy of replication techniques and dynamic upgrade. The developed framework is used in a number of practical experiments to validate the feasibility of the approach and to measure the overhead that the mechanisms supporting dynamic upgrade impose on the performance of the system being upgraded. This quantitative evaluation leads to the specification of a simple benchmark for comparing systems that support dynamic upgrade.
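    A minimal Java sketch of the core replacement step in a dynamic upgrade is given below: clients call through a stable proxy while the implementation behind it is swapped for a new version after its state has been transferred. The names and the state-transfer scheme are assumptions for illustration; the thesis framework adds the deployment, coordination and replication machinery around this step.

        // Hypothetical sketch: hot-swapping a component version at runtime.
        import java.util.concurrent.atomic.AtomicReference;

        interface Counter {                   // service contract, stable across versions
            void increment();
            long value();                     // exposed state, reused for state transfer
        }

        final class CounterV1 implements Counter {
            private long n;
            public synchronized void increment() { n++; }
            public synchronized long value() { return n; }
        }

        final class CounterV2 implements Counter {  // new version, seeded with old state
            private long n;
            CounterV2(long initial) { n = initial; }
            public synchronized void increment() { n++; }
            public synchronized long value() { return n; }
        }

        final class UpgradableCounter implements Counter {
            private final AtomicReference<Counter> impl =
                    new AtomicReference<>(new CounterV1());

            // The upgrade step: capture old state, build the new version from it,
            // swap the reference so subsequent calls reach the new implementation.
            // A real framework would first quiesce in-flight calls, so that no
            // update is lost between the state capture and the swap; that
            // coordination is omitted here.
            void upgrade() {
                Counter old = impl.get();
                impl.set(new CounterV2(old.value()));
            }

            public void increment() { impl.get().increment(); }
            public long value()     { return impl.get().value(); }
        }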

    DBKnot: A Transparent and Seamless, Pluggable Tamper Evident Database

    Database integrity is crucial to organizations that rely on databases of important data, yet such databases are vulnerable to internal fraud. Database tampering by malicious insiders with high technical authorization over the infrastructure, or by insiders whose accounts have been compromised by external attackers, is an important attack vector. This thesis addresses this challenge for a class of problems where data is append-only and immutable. Examples of operations where data does not change include a) financial institutions (banks, accounting systems, stock markets, etc.), b) registries and notary systems, where important data is kept but never changed, and c) system logs, which must be kept intact for performance and, if needed, forensic inspection. The approach targets implementation seamlessness, with little or no change required in existing systems. Transaction tracking for tamper detection is done with a hash chain that serially and cumulatively hashes transactions together, while an external time-stamper and signer signs the linkages. This allows transactions to be tracked without any of the organization's data leaving its premises for a third party, which also reduces the performance impact of tracking. This is achieved by adding a tracking layer and embedding it inside the data workflow while keeping it as non-invasive as possible. DBKnot implements these features a) natively in databases, or b) embedded inside Object-Relational Mapping (ORM) frameworks, and c) outlines a direction for implementing them as a stand-alone microservice reverse proxy. A prototype ORM and database layer has been developed and tested for seamlessness of integration and ease of use. Additionally, different optimizations that introduce pipelining parallelism into the hashing/signing process were tested to check their impact on performance. Stock-market information was used for experimentation with DBKnot; the initial results showed slightly less than a 100% increase in transaction time with the most basic, sequential, synchronous version of DBKnot. The signing and hashing overhead per record does not grow significantly with the amount of data, and testing showed that several alternative optimizations to the design yielded significant performance gains.
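    The serial, cumulative hashing described above can be sketched in a few lines of Java: each appended record is hashed together with the digest of the previous link, so altering or removing any earlier record invalidates every digest after it. The external time-stamping/signing step is represented only by a comment, and class and method names are illustrative, not DBKnot's actual API.

        // Hypothetical sketch: a tamper-evident hash chain over appended records.
        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        final class HashChain {
            private byte[] lastDigest = new byte[32];   // genesis link: all zeros

            // Append a record: digest = SHA-256(previousDigest || record).
            byte[] append(String record) throws NoSuchAlgorithmException {
                MessageDigest sha = MessageDigest.getInstance("SHA-256");
                sha.update(lastDigest);                 // chain to the previous link
                sha.update(record.getBytes(StandardCharsets.UTF_8));
                lastDigest = sha.digest();
                // In DBKnot's design the running digest is periodically sent to an
                // external time-stamper/signer, so tampering is externally evident
                // without the records themselves leaving the premises.
                return lastDigest;
            }

            // Recompute the chain over the stored records and compare with the
            // externally signed head digest; any mismatch indicates tampering.
            static boolean verify(Iterable<String> records, byte[] signedHead)
                    throws NoSuchAlgorithmException {
                HashChain check = new HashChain();
                byte[] head = new byte[32];
                for (String r : records) head = check.append(r);
                return MessageDigest.isEqual(head, signedHead);
            }
        }

    Because each link depends on the entire prefix, verification needs only the records and one signed head digest, which is what makes the pipelined hashing/signing optimizations mentioned above worthwhile.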

    Computer-supported virtual collaborative learning and assessment framework for distributed learning environment

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2002. Includes bibliographical references (leaves 166-168). By Wei Wang.

    An appraisal of secure, wireless grid-enabled data warehousing

    In most research, appropriate collections of data play a significant role in aiding decision-making processes. This is even more critical when the data is accessed across organisational boundaries. Further, for the data to be mined and analysed efficiently in aid of decision-making, it must be harnessed in a suitably structured fashion. There is, for example, a need to perform diverse analyses and interpretation of structured (non-personal) HIV/AIDS patient data from various quarters in South Africa. Although this data does exist to some extent, it is autonomously owned, stored in disparate data stores, and not readily available to all interested parties. To put this data to meaningful use, it is imperative to integrate and store it in a manner in which it can be better utilised by all those involved in the field. This implies integration of (and hence interoperability between), and appropriate access to, the information systems of the autonomous organisations providing data and data processing. This is a typical problem scenario for a Virtual Inter-Organisational Information System (VIOIS), proposed in this study. The VIOIS envisaged is a hypothetical, secure, Wireless Grid-enabled Data Warehouse (WGDW) that enables IOIS interaction, such as the storage and processing of HIV/AIDS patient data for HIV/AIDS-specific research. The proposed WGDW offers a methodical approach for arriving at such a collaborative (HIV/AIDS research) integrated system. The proposed WGDW is a virtual community consisting mainly of data providers, service providers and information consumers. The WGDW basis resulted from a systematic literature survey covering a variety of technologies and standards that support data storage, data management, computation and connectivity between virtual community members in Grid computing contexts. A Grid computing paradigm is proposed for data storage, data management and computation in the WGDW. Informational and analytical processing will be enabled through data warehousing, while connectivity will be attained wirelessly (addressing the paucity of connectivity infrastructure in rural parts of developing countries like South Africa).
