
    Paper Session II-A - Operations and Maintenance Requirements Specifications - Automated Buy-Off System

    Project Engineering is the Kennedy Space Center (KSC) organization responsible for monitoring the progress of milestone closure as part of Space Shuttle processing, also known as a flow. An Operations Maintenance Plan (OMP) is the plan used to process a Space Shuttle while it is in a flow for a single mission. Each requirement line item in an OMP is known as an Operations and Maintenance Requirement Specification (OMRS). Each OMRS must be verified and closed separately by a configuration management person and an engineer. An Operations and Maintenance Instructions (OMI) document, which is a type of Work Authorization Document (WAD), consists of sequences of task steps performed on a Space Shuttle to satisfy a set of OMRSs. Performing the steps in an OMI is a prerequisite for an OMRS to be verified and closed.
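
    As a rough illustration of the relationships described above (not the paper's buy-off system; all class and field names are hypothetical), the Java sketch below models an OMP as a set of OMRS line items, each closed only after its OMI steps have been performed and both the configuration management and engineering sign-offs are recorded.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of the OMP/OMRS/OMI relationships described above.
    class Omrs {
        final String id;
        final List<String> performedOmiSteps = new ArrayList<>(); // OMI steps satisfying this OMRS
        boolean cmVerified;       // configuration management sign-off
        boolean engineerVerified; // engineering sign-off

        Omrs(String id) { this.id = id; }

        // An OMRS is closed only after its OMI steps are performed and
        // both parties have verified it independently.
        boolean isClosed() {
            return !performedOmiSteps.isEmpty() && cmVerified && engineerVerified;
        }
    }

    class Omp {
        final List<Omrs> lineItems = new ArrayList<>(); // one OMP per mission flow

        boolean flowComplete() {
            return lineItems.stream().allMatch(Omrs::isClosed);
        }
    }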

    Runtime Reconfiguration of J2EE Applications

    Runtime reconfiguration, considered as "applying required changes to a running system", plays an important role in providing high availability not only for safety- and mission-critical systems, but also for commercial web applications offering professional services. The main concerns here are maintaining the consistency of the running system during reconfiguration and minimizing the downtime caused by the reconfiguration. This paper focuses on the platform-independent subsystem that realises deployment and redeployment of J2EE modules based on the new J2EE Deployment API, as part of the implementation of our proposed system architecture enabling runtime reconfiguration of component-based systems. Our "controlled runtime redeployment" comprises an extension of hot deployment and dynamic reloading, complemented by allowing for structural changes.
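
    The redeployment subsystem builds on the J2EE Deployment API (JSR-88); a minimal, vendor-neutral redeployment sketch with that API could look like the following. The connection URI, credentials, file names and polling loop are placeholders, and this is not the controlled redeployment mechanism itself, which additionally preserves consistency and supports structural changes.

    import java.io.File;
    import javax.enterprise.deploy.shared.ModuleType;
    import javax.enterprise.deploy.shared.factories.DeploymentFactoryManager;
    import javax.enterprise.deploy.spi.DeploymentManager;
    import javax.enterprise.deploy.spi.TargetModuleID;
    import javax.enterprise.deploy.spi.status.ProgressObject;

    public class ControlledRedeploySketch {
        public static void main(String[] args) throws Exception {
            // A vendor-specific DeploymentFactory must already be registered with
            // DeploymentFactoryManager; the URI and credentials are placeholders.
            DeploymentManager dm = DeploymentFactoryManager.getInstance()
                    .getDeploymentManager("deployer:vendor:localhost:4848", "admin", "secret");

            File archive = new File("shop.ear");          // new version of the module
            File plan = new File("deployment-plan.xml");  // server-specific deployment plan

            // Locate the running EAR modules and redeploy them in place.
            TargetModuleID[] running = dm.getRunningModules(ModuleType.EAR, dm.getTargets());
            ProgressObject progress = dm.redeploy(running, archive, plan);

            // Poll until the operation finishes (a ProgressListener could be used instead).
            while (progress.getDeploymentStatus().isRunning()) {
                Thread.sleep(200);
            }
            System.out.println("Redeploy completed: " + progress.getDeploymentStatus().isCompleted());
            dm.release();
        }
    }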

    Reengineering Standalone C++ Legacy Systems into the J2EE Partition Distributed Environment

    Many enterprise systems are developed in C++ and most of them are standalone. Because standalone software cannot keep up with the new market environment, reengineering standalone legacy systems into a distributed environment has become a critical problem. Methods have been proposed for related topics such as design recovery, component identification, modeling component interfaces, and component allocation. Until now, however, there has been no reengineering process targeting the partition distributed environment, which offers distinct advantages in horizontal scalability and performance over conventional distributed solutions. This paper presents a new process to reengineer C++ legacy systems into the J2EE partition distributed environment. The process consists of four steps: translating C++ code to Java, extracting components using clustering techniques, modeling component interfaces, and partitioning the components in the J2EE distributed environment. It has been applied to a large equity-trading legacy system, where it proved successful.
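
    As an illustration of the "modeling component interfaces" step, a component cluster extracted from the C++ code might be exposed in J2EE behind an EJB remote interface along the lines of the sketch below; the interface and method names are invented for the example.

    import java.rmi.RemoteException;
    import javax.ejb.EJBObject;

    // Hypothetical remote interface for a component recovered from the C++
    // equity-trading code and wrapped as a J2EE session bean; the partition
    // step then decides on which J2EE partition this component runs.
    public interface OrderMatching extends EJBObject {
        String placeOrder(String symbol, int quantity, double limitPrice) throws RemoteException;
    }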

    Migration to PaaS Clouds - Migration Process and Architectural Concerns

    In the cloud computing technology stack, infrastructure has matured more than platform or software service technologies with respect to the languages and techniques used for architecting and managing the respective applications. Platform-as-a-Service (PaaS) therefore emerges as a focus for the near future, and it is the focus of this work. We look at software architecture and programming concerns in the context of migration to PaaS solutions, i.e. the transition of platform systems from on-premise to cloud solutions. We investigate best-practice approaches to cloud-aware coding in the form of patterns and formulate these as a migration process. While one-to-one mappings of software from on-premise to cloud platforms are possible, statelessness and the externalisation of data from stateful sessions and applications emerge as solutions if cloud benefits such as elasticity and performance are to be realised.
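
    As a minimal sketch of the statelessness and data-externalisation pattern described above (the SessionStore interface and its backing store are assumptions for illustration, not part of the paper), a handler can keep all conversation state in an external store so that any instance behind the load balancer can serve the next request:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical abstraction over an external store (e.g. a cache or database
    // service offered by the PaaS provider) for externalised session data.
    interface SessionStore {
        String get(String sessionId, String key);
        void put(String sessionId, String key, String value);
    }

    // In-memory stand-in, used only to keep the sketch self-contained.
    class InMemorySessionStore implements SessionStore {
        private final Map<String, String> data = new ConcurrentHashMap<>();
        public String get(String sessionId, String key) { return data.get(sessionId + ":" + key); }
        public void put(String sessionId, String key, String value) { data.put(sessionId + ":" + key, value); }
    }

    // A stateless request handler: all conversation state lives in the external
    // store, so any application instance can handle any request of the session.
    public class StatelessCartHandler {
        private final SessionStore store;
        StatelessCartHandler(SessionStore store) { this.store = store; }

        void addItem(String sessionId, String item) {
            String cart = store.get(sessionId, "cart");
            store.put(sessionId, "cart", cart == null ? item : cart + "," + item);
        }

        public static void main(String[] args) {
            SessionStore store = new InMemorySessionStore();
            StatelessCartHandler handler = new StatelessCartHandler(store);
            handler.addItem("session-42", "book");
            handler.addItem("session-42", "pen");
            System.out.println("externalised cart: " + store.get("session-42", "cart"));
        }
    }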

    Performance measurements of Web services

    Web services are rapidly evolving application-integration technologies that allow applications in heterogeneous environments to communicate with each other. In this thesis we perform a measurement-based study of an e-commerce application that uses web services to execute business operations. We use the TPC-W specification to generate a session-based workload. The component-level response times and the hardware resource usage on the different machines are measured, with the component-level response times extracted from the application server logs. The results show that as the workload increases, the response times of the web service components increase. The hardware resource usage makes clear that web service components require more processing time because of the XML processing involved in each web service call. The method used in this thesis allows us to study the impact that different components can have on the overall performance of an application.
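
    The component-level measurements described here come from the application server logs; a generic way to produce such log entries (not the exact instrumentation used in the thesis) is a servlet filter that timestamps each request, as sketched below.

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;

    // Generic timing filter: logs per-request response time so that
    // component-level measurements can be extracted from the server logs.
    public class ResponseTimeFilter implements Filter {
        public void init(FilterConfig config) { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            long start = System.nanoTime();
            try {
                chain.doFilter(req, res);
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                String uri = ((HttpServletRequest) req).getRequestURI();
                // In a real deployment this would go to the server's log facility.
                System.out.println("component=" + uri + " responseTimeMs=" + elapsedMs);
            }
        }

        public void destroy() { }
    }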

    A Blueprint for Applications in Enterprise Information Portals

    Electronic Commerce (E-Commerce, EC) is thoroughly changing the business models of organizations (governments, corporations, and communities) and the way individuals live and work. However, the major success will accrue to those companies that are willing to transform their organizations and business processes, which is the scope of e-Business. An Enterprise Information Portal (EIP) provides real-time information and integrated applications to knowledge workers, employees, customers, business partners and the general public. Effective applications of an EIP facilitate high-quality strategic decisions; that is, an EIP can enhance an organization’s productivity, improve collaboration to facilitate E-Commerce, and help gain competitive advantages. However, EIP solutions are usually too expensive for small businesses. Using an Enterprise Application Integration (EAI) approach, this paper presents an economical way to design a low-cost EIP that leverages existing systems, and a prototype is implemented to show its feasibility. For external data access, web mining technology is used to extract relevant and valuable web content from the Internet and place it in a document warehouse. By combining the textual information in the document warehouse with the numeric data from the data warehouse, competitive advantages can be gained over those who work with just the numbers.
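
    As a purely illustrative sketch of that last point (all interfaces and queries below are hypothetical, not the prototype's design), a portal component might place a numeric figure from the data warehouse next to the related text mined into the document warehouse:

    import java.util.List;

    // Hypothetical interfaces for the two back ends described above.
    interface DataWarehouse {
        double quarterlySales(String product); // numeric data
    }

    interface DocumentWarehouse {
        List<String> relatedDocuments(String product); // mined web content
    }

    // Portal component that presents both sources side by side, giving the
    // knowledge worker textual context around the raw numbers.
    class ProductDashboard {
        private final DataWarehouse dw;
        private final DocumentWarehouse docs;

        ProductDashboard(DataWarehouse dw, DocumentWarehouse docs) {
            this.dw = dw;
            this.docs = docs;
        }

        String render(String product) {
            StringBuilder view = new StringBuilder();
            view.append(product).append(" sales: ").append(dw.quarterlySales(product)).append('\n');
            for (String doc : docs.relatedDocuments(product)) {
                view.append("  related: ").append(doc).append('\n');
            }
            return view.toString();
        }
    }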

    A model-driven approach to broaden the detection of software performance antipatterns at runtime

    Performance antipatterns document bad design practices that have a negative influence on system performance. In our previous work we formalized such antipatterns as logical predicates over four views: (i) the static view, which captures the software elements (e.g. classes, components) and the static relationships among them; (ii) the dynamic view, which represents the interactions (e.g. messages) that occur between software elements to provide the system functionalities; (iii) the deployment view, which describes the hardware elements (e.g. processing nodes) and the mapping of the software elements onto the hardware platform; (iv) the performance view, which collects specific performance indices. In this paper we present a lightweight infrastructure that is able to detect performance antipatterns at runtime through monitoring. The proposed approach pre-evaluates these predicates, identifies the antipatterns whose static, dynamic and deployment sub-predicates are satisfied by the current system configuration, and defers the verification of the performance sub-predicates to runtime. The proposed infrastructure leverages model-driven techniques to generate probes for monitoring the performance sub-predicates and detecting antipatterns at runtime. (Comment: In Proceedings FESCA 2014, arXiv:1404.043)
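
    As a rough sketch of this split (hypothetical classes and thresholds, not the authors' infrastructure): the structural sub-predicates are evaluated once against the model, and only the performance sub-predicate is re-checked against the indices reported by the monitoring probes.

    import java.util.function.Predicate;

    // Hypothetical container for the performance indices collected by the
    // monitoring probes at runtime.
    class PerformanceIndices {
        final double utilisation;
        final double responseTimeMs;
        PerformanceIndices(double utilisation, double responseTimeMs) {
            this.utilisation = utilisation;
            this.responseTimeMs = responseTimeMs;
        }
    }

    // An antipattern predicate split into a part decided offline (static,
    // dynamic and deployment views) and a part verified at runtime.
    class AntipatternRule {
        final String name;
        final boolean structuralPartHolds;                   // pre-evaluated on the model
        final Predicate<PerformanceIndices> performancePart; // checked against monitored data

        AntipatternRule(String name, boolean structuralPartHolds,
                        Predicate<PerformanceIndices> performancePart) {
            this.name = name;
            this.structuralPartHolds = structuralPartHolds;
            this.performancePart = performancePart;
        }

        boolean detectedAt(PerformanceIndices observed) {
            return structuralPartHolds && performancePart.test(observed);
        }
    }

    class Detector {
        public static void main(String[] args) {
            // Example rule: structural conditions already hold in the model;
            // flag the antipattern only if the monitored indices cross the thresholds.
            AntipatternRule rule = new AntipatternRule("Blob", true,
                    p -> p.utilisation > 0.8 && p.responseTimeMs > 500);
            System.out.println(rule.name + " detected: "
                    + rule.detectedAt(new PerformanceIndices(0.9, 620)));
        }
    }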

    Reverse Engineering Heterogeneous Applications

    Nowadays a large majority of software systems are built using various technologies that in turn rely on different languages (e.g. Java, XML, SQL). We call such systems heterogeneous applications (HAs); by contrast, we call software systems written in a single language homogeneous applications. In HAs the information regarding the structure and the behaviour of the system is spread across various components and languages, and the interactions between different application elements can be hidden. In this context, applying existing reverse engineering and quality assurance techniques developed for homogeneous applications is not enough: these techniques were created to measure quality or provide information about one aspect of the system, and they cannot grasp the complexity of HAs. In this dissertation we present our approach to support the analysis and evolution of HAs, based on (1) a unified first-class description of HAs and (2) a meta-model that reifies the concept of horizontal and vertical dependencies between application elements at different levels of abstraction. We implemented our approach in two tools, MooseEE and Carrack. The former is an extension of the Moose platform for software and data analysis and contains our unified meta-model for HAs; the latter is an engine to infer derived dependencies that can support the analysis of associations among the heterogeneous elements composing an HA. We validate our approach and tools through case studies on industrial and open-source JEAs, which demonstrate how we can handle the complexity of such applications and solve problems deriving from their heterogeneous nature.
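
    The sketch below is a loose illustration of that idea (invented classes, not the actual MooseEE/Carrack meta-model): elements written in different languages are described uniformly, dependencies are reified as first-class horizontal or vertical links, and derived dependencies are obtained by following chains of such links.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Set;

    // Hypothetical first-class description of a heterogeneous application element.
    class Element {
        final String name;
        final String language; // e.g. "Java", "XML", "SQL"
        Element(String name, String language) { this.name = name; this.language = language; }
    }

    // A reified dependency between two elements; horizontal links stay within one
    // abstraction level, vertical links cross levels (e.g. Java code -> XML descriptor).
    class Dependency {
        enum Kind { HORIZONTAL, VERTICAL }
        final Element from, to;
        final Kind kind;
        Dependency(Element from, Element to, Kind kind) { this.from = from; this.to = to; this.kind = kind; }
    }

    class HeterogeneousModel {
        final List<Dependency> dependencies = new ArrayList<>();

        // A simple derived-dependency query: every element reachable from `start`
        // through any chain of horizontal or vertical links.
        Set<Element> reachableFrom(Element start) {
            Set<Element> seen = new LinkedHashSet<>();
            Deque<Element> work = new ArrayDeque<>();
            work.add(start);
            while (!work.isEmpty()) {
                Element current = work.poll();
                for (Dependency d : dependencies) {
                    if (d.from == current && seen.add(d.to)) work.add(d.to);
                }
            }
            return seen;
        }
    }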