30 research outputs found

    A technology reference model for client/server software development

    Get PDF
    In today's highly competitive global economy, information resources representing enterprise-wide information are essential to the survival of an organization. The development and growing use of personal computers and data communication networks are supporting or, in many cases, replacing the mainframe, corporations' traditional computing mainstay. The client/server model combines mainframe programming with desktop applications on personal computers. The aim of the research is to compile a technology model for the development of client/server software. A comprehensive overview of the individual components of a client/server system is given, and the different methodologies, tools and techniques that can be used are reviewed, along with client/server-specific design issues. The research is intended to create a road map in the form of a Technology Reference Model for Client/Server Software Development.
    Computing; M.Sc. (Information Systems)

    Component-based software engineering

    Get PDF
    To solve the problems associated with current software development methodologies, component-based software engineering has recently attracted many researchers' attention. In component-based software engineering, a software system is considered a set of software components assembled together, rather than a set of functions, as in the traditional perspective. Software components can be bought from third-party vendors as off-the-shelf components and assembled together. Component-based software engineering, though very promising, needs to resolve several core issues before it becomes a mature software development strategy. The goal of this dissertation is to establish an infrastructure for component-based software development. The author identifies and studies some of the core issues, such as component planning, component building, component assembly, component representation, and component retrieval. A software development process model is developed in this dissertation to emphasize the reuse of existing software components. The process model addresses how a software system should be planned and built to maximize the reuse of software components. It conducts domain engineering and application engineering simultaneously to map a software system onto a set of existing components, so that development can reuse existing software components to the fullest extent. Beyond the planning of software development based on component technology, the migration and integration of legacy systems, most of which are non-component-based, into component-based software systems are studied. A framework and several methodologies are developed to serve as guidelines for adopting component technology in legacy systems. Component retrieval is also studied in this dissertation: one of the most important issues in component-based software engineering is how to find a software component quickly and accurately in a component repository. A component representation framework is developed to represent software components. Based on this framework, an efficient search method that combines neural network, information retrieval, and Bayesian inference technology is developed. Finally, a prototype component retrieval system is implemented to demonstrate the correctness and feasibility of the proposed method.
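
    To make the retrieval problem concrete, the sketch below ranks repository components against a query using a crude bag-of-words relevance score. It is an invented toy for illustration, not the dissertation's neural-network/Bayesian method; all class and component names are hypothetical.

```java
import java.util.*;

// Hypothetical sketch: rank components in a repository by how well their
// free-text descriptions match a query. Names are illustrative only.
public class ComponentRetrieval {
    record Component(String name, String description) {}

    // Crude relevance: fraction of description words that are query terms.
    static double score(Component c, Set<String> queryTerms) {
        String[] words = c.description().toLowerCase().split("\\W+");
        long hits = Arrays.stream(words).filter(queryTerms::contains).count();
        return (double) hits / words.length;
    }

    public static void main(String[] args) {
        List<Component> repo = List.of(
            new Component("XmlParser", "parses xml documents into a tree"),
            new Component("HttpClient", "sends http requests and reads responses"),
            new Component("CsvReader", "reads csv files into records"));
        Set<String> query = Set.of("xml", "tree");
        repo.stream()
            .sorted(Comparator.comparingDouble((Component c) -> score(c, query)).reversed())
            .forEach(c -> System.out.printf("%.2f %s%n", score(c, query), c.name()));
    }
}
```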

    Web services strategy

    Get PDF
    Thesis (S.M.M.O.T.), Massachusetts Institute of Technology, Sloan School of Management, Management of Technology Program, June 2003. Includes bibliographical references (p. 116-123). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections.
    "Everything is connected to everything." El Aleph (1945), by Jorge Luis Borges[1]
    This thesis addresses the need to simplify and streamline Web services network infrastructure and to identify business models that best leverage Web services technology and industry dynamics to generate positive business results. Web services have evolved from the simple page-display protocol of their origin, past links that merely updated web data dynamically from corporate databases, to the point where systems can transact automatically. These Web services represent a series of network business technology standards and capabilities that irrevocably change the way in which businesses will do business. In fact, every business today is a networked business and has opportunities to grow using Web services. This study focuses on the implementation challenges in the financial services market, specifically the Online Transaction Processing (OLTP) sector, where legacy mainframes interface with multiple tiers of distribution through proprietary EDI links. The OLTP industry operates under stringent regulatory requirements for availability and auditability, covering not only who performed what transaction but who had access to the information about the information. In this environment, organizational demands on network infrastructure, including hardware, software and personnel, are changing radically, while Information Technology (IT) budgets are concurrently under pressure. The strategic choices for deploying Web services in this environment may hold lessons for other industries where cost-effective large-scale processing, high availability, security, manageability and Intellectual Property Rights (IPR) are paramount concerns. In this thesis we use a system dynamics model to simulate the impact of market changes on the adoption of innovative technologies, and of their commoditization on the industry value chain, with the aim of identifying business models and network topologies that best support the growth of an open-systems network business. From the results of the simulation we derive strategic recommendations for networked business models and Web services integration strategies that meet Line of Business (LOB) objectives.
    by Stephen B. Miles, S.M.M.O.T.
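
    The flavour of the system dynamics simulation can be illustrated with a toy diffusion model. The sketch below runs a classic Bass-style adoption curve of the kind such a study might use; the coefficients and market size are invented for illustration and are not taken from the thesis.

```java
// Illustrative sketch only: a minimal Bass-style diffusion model of
// technology adoption. All parameter values are invented.
public class AdoptionModel {
    public static void main(String[] args) {
        double market = 1000.0;  // potential adopters
        double adopters = 1.0;   // initial adopters
        double p = 0.03;         // innovation coefficient (external influence)
        double q = 0.38;         // imitation coefficient (word of mouth)
        for (int year = 1; year <= 10; year++) {
            // Bass model: new adoption driven by advertising plus imitation.
            double newAdopters = (p + q * adopters / market) * (market - adopters);
            adopters += newAdopters;
            System.out.printf("year %2d: %6.1f adopters%n", year, adopters);
        }
    }
}
```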

    State of Maine Information Technology Plans, 2000

    Get PDF

    BUILDING RELIABLE AND ROBUST SERVICE-BASED SYSTEMS FOR AUTOMATED BUSINESS PROCESSES

    Get PDF
    An exciting trend in enterprise computing lies in the integration of applications across an organisation and even between organisations. This allows the provision of services by automated business processes that coordinate business activity among several collaborating organisations. The best successes in this type of integrated distributed system come through the use of Web Services and service-based architecture, which allow interoperation between applications through open standards based on XML and SOAP. Still, there are unresolved issues when developers seek to build a reliable and robust system. An important goal for the designers of a loosely coupled distributed system is to maintain consistency for each long-running business process in the presence of failures and concurrent activities. Our approach to assisting developers in this domain is to guide them with the key principles they must consider, and to provide programming models and protocols which make it easier to detect and avoid consistency faults in service-based systems. We start by defining a realistic e-procurement scenario to illustrate the common problems that prevent developers from building a reliable and robust system. These problems make it hard to maintain the consistency of data and state during the execution of a business process in the presence of failures and interference from concurrent activities. Through the analysis of these common problems, we identify key principles developers must follow to avoid them. Based on these key principles, we then provide a framework called GAT in the orchestration infrastructure. GAT allows developers to express all the processing necessary to handle deviations, including those due to failures and concurrent activities. We discuss the GAT framework in detail, covering its structure and key features. Using an example taken from part of the e-procurement case study, we illustrate how developers can use the framework to design their business requirements, and we discuss how key features of the new framework help developers avoid consistency faults. We illustrate how systems based on our framework can be built using today's proven technology. Finally, we provide a unified isolation mechanism called Promises that is applicable not only to our GAT framework but to any application that runs in the service-based world. We discuss the concept, how it works, and how it defines a protocol, and we provide a list of potential implementation techniques. Using some of these techniques, we provide a proof-of-concept prototype system.
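
    To make the Promises idea concrete, here is a minimal sketch of a reservation-style protocol: a client first obtains a promise that holds back a unit of a resource, then later redeems or releases it. Class and method names are assumptions for illustration, not the thesis's API.

```java
import java.util.*;

// Sketch of a reservation-style isolation protocol in the spirit of
// "Promises": promise, then redeem or release. All names are invented.
public class PromiseManager {
    private final Map<String, Integer> stock = new HashMap<>();
    private final Map<UUID, String> promises = new HashMap<>();

    PromiseManager() { stock.put("widget", 5); }

    // Promise one unit of an item; the unit is held back from other clients.
    synchronized Optional<UUID> promise(String item) {
        int available = stock.getOrDefault(item, 0);
        if (available == 0) return Optional.empty();
        stock.put(item, available - 1);
        UUID token = UUID.randomUUID();
        promises.put(token, item);
        return Optional.of(token);
    }

    // Redeem consumes the held unit; release returns it to stock.
    synchronized boolean redeem(UUID token) { return promises.remove(token) != null; }
    synchronized void release(UUID token) {
        String item = promises.remove(token);
        if (item != null) stock.merge(item, 1, Integer::sum);
    }

    public static void main(String[] args) {
        PromiseManager pm = new PromiseManager();
        UUID t = pm.promise("widget").orElseThrow();
        System.out.println("promised: " + t + ", redeemed: " + pm.redeem(t));
    }
}
```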

    Integrating modern business applications with objectified legacy systems

    Get PDF

    Fault-tolerant distributed transactions for partitioned OLTP databases

    Get PDF
    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 103-112).
    This thesis presents Dtxn, a fault-tolerant distributed transaction system designed specifically for building online transaction processing (OLTP) databases. Databases have traditionally been designed as general-purpose data processing tools; by targeting only OLTP workloads, Dtxn can be more efficient. It is designed to support very large databases by partitioning data across a cluster of commodity servers in a data center. Combining multiple servers allows systems built with Dtxn to be cost-effective, highly available, scalable, and fault-tolerant. Dtxn provides three novel features. First, it provides reusable infrastructure for building a distributed OLTP database out of single-machine databases. This allows developers to take a specialized backend storage engine and use it across multiple machines without re-implementing the distributed transaction infrastructure. We used Dtxn to build four different applications: a simple key/value store, a specialized TPC-C implementation, a main-memory OLTP database, and a traditional disk-based OLTP database. Second, Dtxn provides a novel concurrency control mechanism called speculative concurrency control, designed for main-memory OLTP workloads that are primarily composed of transactions with a single round of communication between the application and the database. Speculative concurrency control executes one transaction at a time, with no concurrency control overhead. Where stalls may occur due to network communication, it speculatively executes future transactions. Our results show that this provides significantly better throughput than traditional two-phase locking, outperforming it by a factor of two on the TPC-C benchmark. Finally, Dtxn supports live migration, allowing part of the data on one server to be moved to another server while transactions are being processed. Our experiments show that our approach has nearly no visible impact on throughput or latency when moving data under moderate to high loads, and significantly less impact than the best commercially available systems when the database is overloaded. The period of reduced throughput is less than half as long as failing over to another replica or using virtual machine migration.
    by Evan Philip Charles Jones, Ph.D.
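
    The speculative idea can be sketched as follows: transactions execute serially against a working copy of the state, and while the first transaction awaits its distributed commit decision, later transactions run speculatively on its uncommitted results; if it aborts, the speculative chain is rolled back with it. This is an invented simplification to show the principle, not Dtxn's implementation.

```java
import java.util.*;

// Simplified sketch of speculative concurrency control: serial execution,
// no locks, speculation on uncommitted state, cascading rollback on abort.
// All names and the single-key "database" are invented for illustration.
public class SpeculativeScheduler {
    private Map<String, Integer> committed = new HashMap<>(Map.of("balance", 100));
    private Map<String, Integer> working = new HashMap<>(committed);

    // Run a transaction (here: add delta to balance) on the working state.
    void executeSpeculatively(int delta) {
        working.merge("balance", delta, Integer::sum);
    }

    // Commit decision arrived for the chain: make speculative state durable.
    void commitChain() { committed = new HashMap<>(working); }

    // The base transaction aborted: discard every speculative result.
    void abortChain() { working = new HashMap<>(committed); }

    public static void main(String[] args) {
        SpeculativeScheduler s = new SpeculativeScheduler();
        s.executeSpeculatively(-30);   // txn 1, awaiting its commit decision
        s.executeSpeculatively(+10);   // txn 2 speculates on txn 1's state
        s.abortChain();                // txn 1 aborts; txn 2's work is undone
        System.out.println(s.committed); // {balance=100}
    }
}
```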

    Adaptive Caching of Distributed Components

    Get PDF
    Locality of reference is an important property of distributed applications. Caching is typically employed during the development of such applications to exploit this property by locally storing queried data: subsequent accesses can be accelerated by serving their results immediately from the local store. Current middleware architectures, however, hardly support this non-functional aspect. This thesis therefore tries to outsource caching as a separate, configurable middleware service. Integration into the software development lifecycle provides for early capturing and modeling, and later reuse, of caching-related metadata. At runtime, the implemented system can adapt to changing access behaviour with respect to data cacheability, thus healing misconfigurations and optimizing itself toward an appropriate configuration. Speculative prefetching of data likely to be queried in the immediate future complements the presented approach.
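
    A toy version of such an adaptive cacheability decision might look like the following: the caching layer tracks reads and writes per key and only caches keys whose observed read/write ratio is high enough to make caching pay off. The threshold, names, and in-process "remote" store are invented, not taken from the thesis.

```java
import java.util.*;

// Invented sketch of an adaptively configured cache: per-key read/write
// statistics drive the decision whether a key is worth caching at all.
public class AdaptiveCache {
    private final Map<String, String> backing = new HashMap<>(); // "remote" store
    private final Map<String, String> cache = new HashMap<>();
    private final Map<String, int[]> stats = new HashMap<>();    // [reads, writes]

    String read(String key) {
        int[] s = stats.computeIfAbsent(key, k -> new int[2]);
        s[0]++;
        if (cache.containsKey(key)) return cache.get(key);       // cache hit
        String value = backing.get(key);                         // remote fetch
        if (s[1] == 0 || (double) s[0] / s[1] > 4.0)             // adaptive policy
            cache.put(key, value);                               // deemed cacheable
        return value;
    }

    void write(String key, String value) {
        stats.computeIfAbsent(key, k -> new int[2])[1]++;
        backing.put(key, value);
        cache.remove(key);  // invalidate; frequently written keys stay uncached
    }

    public static void main(String[] args) {
        AdaptiveCache c = new AdaptiveCache();
        c.write("user:1", "alice");
        for (int i = 0; i < 5; i++) c.read("user:1"); // ratio rises, key cached
        System.out.println(c.read("user:1"));
    }
}
```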

    Adaptive object management for distributed systems

    Get PDF
    This thesis describes an architecture supporting the management of pluggable software components and evaluates it against the requirements for an enterprise integration platform for the manufacturing and petrochemical industries. In a distributed environment, we need mechanisms to manage objects and their interactions. At the least, we must be able to create objects in different processes on different nodes; we must be able to link them together so that they can pass messages to each other across the network; and we must deliver their messages in a timely and reliable manner. Object-based environments which support these services already exist, for example ANSAware (ANSA, 1989), DEC's ObjectBroker (ACA, 1992), and Iona's Orbix (Orbix, 1994). Yet such environments provide limited support for composing applications from pluggable components. Pluggability is the ability to install and configure a component into an environment dynamically when the component is used, without specifying static dependencies between components when they are produced. Pluggability is supported to a degree by dynamic binding: components may be programmed to import references to other components and to explore their interfaces at runtime, without using static type dependencies. Yet this overloads the component with the responsibility for exploring bindings. What is still generally missing is an efficient general-purpose binding model for managing bindings between independently produced components. In addition, existing environments provide no clear strategy for dealing with fine-grained objects: the overhead of runtime binding and remote messaging will severely reduce performance where there are many objects with complex patterns of interaction. We need an adaptive approach to managing configurations of pluggable components according to the needs and constraints of the environment. Management is made difficult by embedding bindings in component implementations and by relying on strong typing as the only means of verifying and validating bindings. To solve these problems we have built a set of configuration tools on top of an existing distributed support environment. Specification tools facilitate the construction of independent pluggable components; visual composition tools facilitate the configuration of components into applications and the verification of composite behaviours. A configuration model is constructed which maintains the environmental state. Adaptive management is made possible by changing the management policy according to this state; such policy changes affect the location of objects, their bindings, and the choice of messaging system.
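
    The pluggability idea can be made concrete with a small registry sketch: components are installed under interface names and consumers bind to them by name at runtime, so no static dependency between producer and consumer is compiled in. All names are illustrative, not the thesis's architecture.

```java
import java.util.*;
import java.util.function.*;

// Invented sketch of runtime binding through a registry: install a
// component factory under an interface name, bind to it later by name.
public class ComponentRegistry {
    private final Map<String, Supplier<Object>> factories = new HashMap<>();

    // A component is installed ("plugged in") under an interface name.
    void install(String interfaceName, Supplier<Object> factory) {
        factories.put(interfaceName, factory);
    }

    // Consumers import a reference by name at runtime instead of linking
    // statically; the registry decides which implementation satisfies it.
    Object bind(String interfaceName) {
        Supplier<Object> f = factories.get(interfaceName);
        if (f == null) throw new NoSuchElementException(interfaceName);
        return f.get();
    }

    public static void main(String[] args) {
        ComponentRegistry registry = new ComponentRegistry();
        registry.install("Logger", () -> (Consumer<String>) System.out::println);
        @SuppressWarnings("unchecked")
        Consumer<String> log = (Consumer<String>) registry.bind("Logger");
        log.accept("bound at runtime, no static dependency");
    }
}
```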