4,567 research outputs found

    Intention-oriented programming support for runtime adaptive autonomic cloud-based applications

    The continuing high rate of advances in information and communication systems technology creates many new commercial opportunities, but it also engenders a range of new technical challenges around maximising systems' dependability, availability, adaptability, and auditability. These challenges are under active research, with notable progress made in support for dependable software design and management. Runtime support, however, is still in its infancy and requires further research. This paper focuses on a requirements model for the runtime execution and control of an intention-oriented Cloud-Based Application. A novel requirements modelling process, referred to as Provision, Assurance and Auditing, and an associated framework are defined and developed, in which a given system's functional and non-functional requirements are modelled in terms of intentions and encoded in a standard open mark-up language. An autonomic intention-oriented programming model, using the Neptune language, then handles their deployment and execution. © 2013 Elsevier Ltd. All rights reserved.
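
    As a rough, hypothetical illustration of the Provision, Assurance and Auditing idea, a minimal autonomic loop might re-provision any intention whose assurance check fails while recording an audit trail. The Intention, KeepReplicas and IntentionRuntime names below are invented for this sketch; they are not the paper's PAA schema or the Neptune language.

        // Illustrative only: invented names sketching the Provision/Assurance/Auditing idea,
        // not the paper's PAA schema or the Neptune language.
        import java.util.ArrayList;
        import java.util.List;

        /** A hypothetical runtime "intention": a goal, a check that it holds, and a way to restore it. */
        interface Intention {
            boolean assured();     // Assurance: does the running system still satisfy the goal?
            void provision();      // Provision: (re)configure resources so the goal holds again.
            String audit();        // Auditing: evidence of what was checked and done.
        }

        /** A toy intention: keep at least a minimum number of service replicas running. */
        class KeepReplicas implements Intention {
            private int running = 1;
            private final int required = 3;
            public boolean assured() { return running >= required; }
            public void provision() { running = required; }          // pretend to start replicas
            public String audit() { return "replicas running=" + running + " required=" + required; }
        }

        /** A toy autonomic control loop: re-provision whatever is no longer assured, keep an audit log. */
        class IntentionRuntime {
            public static void main(String[] args) {
                List<Intention> intentions = List.of(new KeepReplicas());
                List<String> auditLog = new ArrayList<>();
                for (Intention i : intentions) {
                    if (!i.assured()) i.provision();   // runtime adaptation
                    auditLog.add(i.audit());           // audit trail either way
                }
                auditLog.forEach(System.out::println); // replicas running=3 required=3
            }
        }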

    Automatic performance optimisation of component-based enterprise systems via redundancy

    Component technologies, such as J2EE and .NET, have been extensively adopted for building complex enterprise applications. These technologies help address complex functionality and flexibility problems and reduce development and maintenance costs. Nonetheless, current component technologies provide little support for predicting and controlling the emergent performance of software systems assembled from distinct components. Static component testing and tuning procedures provide insufficient performance guarantees for components deployed and run in diverse assemblies, under unpredictable workloads and on different platforms. Often, there is no single component implementation or deployment configuration that can yield optimal performance in all possible conditions under which a component may run. Manually optimising and adapting complex applications to changes in their running environment is a costly and error-prone management task. The thesis presents a solution for automatically optimising the performance of component-based enterprise systems. The proposed approach is based on the alternate use of multiple component variants with equivalent functional characteristics, each one optimised for a different execution environment. A management framework automatically administers the available redundant variants and adapts the system to external changes. The framework uses runtime monitoring data to detect performance anomalies and significant variations in the application's execution environment. It automatically adapts the application so as to use the optimal component configuration under the current running conditions. An automatic clustering mechanism analyses monitoring data and infers information about the components' performance characteristics. System administrators use decision policies to state high-level performance goals and configure system management processes. A framework prototype has been implemented and tested for automatically managing a J2EE application. The results obtained demonstrate the framework's capability to manage a software system successfully without human intervention. The management overhead induced during normal system execution and by management operations indicates that the framework is feasible.
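
    A minimal sketch of the core redundancy idea, under the assumption that each functionally equivalent variant has been profiled for a workload level it handles best; the ComponentVariant and VariantSelector names are invented here and are not the thesis framework's API.

        // Illustrative sketch only: pick the redundant variant whose profile best matches
        // the currently observed load, as reported by some monitoring probe.
        import java.util.Comparator;
        import java.util.List;

        /** A functionally equivalent implementation, profiled for the load level it handles best. */
        record ComponentVariant(String id, int bestForConcurrentUsers) {}

        /** Picks the variant whose profile is closest to the currently observed load. */
        class VariantSelector {
            ComponentVariant select(List<ComponentVariant> variants, int observedConcurrentUsers) {
                return variants.stream()
                        .min(Comparator.comparingInt(
                                v -> Math.abs(v.bestForConcurrentUsers() - observedConcurrentUsers)))
                        .orElseThrow();
            }

            public static void main(String[] args) {
                List<ComponentVariant> variants = List.of(
                        new ComponentVariant("cache-heavy", 50),
                        new ComponentVariant("low-memory", 500));
                // A monitoring probe would supply the observed load; 420 is a made-up reading.
                System.out.println(new VariantSelector().select(variants, 420).id()); // low-memory
            }
        }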

    Service-oriented architecture of adaptive, intelligent data acquisition and processing systems for long-pulse fusion experiments

    The data acquisition systems used in long-pulse fusion experiments need to implement data reduction and pattern recognition algorithms in real time. To accomplish these operations, it is essential to employ software tools that allow for hot-swap capabilities throughout the temporal evolution of the experiments. This is very important because processing needs are not the same during the different phases of an experiment. The intelligent test and measurement system (ITMS) developed by UPM and CIEMAT is an example of a technology for implementing scalable data acquisition and processing systems based on PXI and CompactPCI hardware. In the ITMS platform, a set of software tools allows the user to define the processing algorithms associated with the different experimental phases using state machines driven by software events. These state machines are specified in the State Chart XML (SCXML) language. The software tools are developed using Java, Jini, an SCXML engine and several LabVIEW applications. Within this schema, it is possible to execute data acquisition and processing applications in an adaptive way. The power of SCXML semantics and the ability to work with XML user-defined data types make programming the ITMS platform very easy. With this approach, the ITMS platform is a suitable solution for implementing scalable data acquisition and processing systems based on a service-oriented model, with the ability to easily implement remote participation applications.
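
    A plain-Java analogue of the event-driven phase handling described above might look as follows; the phases, event names and transition table are invented for illustration, and the sketch does not parse SCXML as the real ITMS tooling does.

        // Minimal plain-Java analogue of event-driven experiment phases; the real ITMS
        // platform specifies these state machines in SCXML and runs them in an SCXML engine.
        import java.util.Map;

        enum Phase { IDLE, ACQUIRING, PROCESSING }

        class ExperimentStateMachine {
            private Phase phase = Phase.IDLE;
            // Hypothetical transition table: (current phase, event) -> next phase.
            private static final Map<String, Phase> TRANSITIONS = Map.of(
                    "IDLE:start_pulse", Phase.ACQUIRING,
                    "ACQUIRING:pulse_end", Phase.PROCESSING,
                    "PROCESSING:results_stored", Phase.IDLE);

            /** Software events drive the transitions; unknown events leave the phase unchanged. */
            void onEvent(String event) {
                phase = TRANSITIONS.getOrDefault(phase + ":" + event, phase);
                System.out.println("event=" + event + " -> phase=" + phase);
            }

            public static void main(String[] args) {
                ExperimentStateMachine sm = new ExperimentStateMachine();
                sm.onEvent("start_pulse");     // IDLE -> ACQUIRING
                sm.onEvent("pulse_end");       // ACQUIRING -> PROCESSING
                sm.onEvent("results_stored");  // PROCESSING -> IDLE
            }
        }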

    Development of a virtualization systems architecture course for the information sciences and technologies department at the Rochester Institute of Technology (RIT)

    Virtualization is a revolutionary technology that has changed the way computing is performed in data centers. By converting traditionally siloed computing assets into shared pools of resources, virtualization provides a considerable number of advantages, such as more efficient use of physical server resources, more efficient use of data center space, reduced energy consumption, simplified system administration, simplified backup and disaster recovery, and a host of other benefits. Because of these advantages, companies and organizations of various sizes have either migrated their workloads to virtualized environments or are considering doing so. According to the Gartner Magic Quadrant for x86 Server Virtualization Infrastructure 2013, roughly two-thirds of x86 server workloads are virtualized [1]. The need for virtualization solutions has increased the demand for qualified professionals to plan, design, implement, and maintain virtualized infrastructures of different scales. Although universities are the main source of educated IT professionals, the field of information technology is so dynamic and changes so rapidly that not all universities can keep pace. As a result, covering the latest technologies used in the information technology industry is a significant advantage for university curricula. Taking into consideration the trend toward virtualization in computing environments and the great demand for virtualization professionals in the industry, the faculty of the Information Sciences and Technologies department at RIT decided to prepare a graduate course in the master's program in Networking and System Administration, entitled Virtualization Systems Architecture, which better prepares students to find a career in the field of enterprise computing. This research is composed of five chapters. It starts by briefly reviewing the history of computer virtualization, exploring when and why it came into existence and how it evolved. The second chapter goes through the challenges in virtualizing the x86 platform architecture and the solutions used to overcome them. In the third chapter, the various types of hypervisors are introduced and the advantages and disadvantages of each are discussed. In the fourth chapter, the architecture and features of the two leading virtualization solutions are explored. Finally, the fifth chapter goes through the contents of the Virtualization Systems Architecture course.

    A component-based approach to online software evolution

    Many software systems need to provide services continuously and without interruption. At the same time, these systems need to keep evolving to fix bugs, add functions, improve algorithms, adapt to new running environments and platforms, or prevent potential problems. This makes online evolution an important issue in the field of software maintenance and evolution. This paper proposes a component-based approach to online software evolution. Component technology is now widely adopted; it facilitates software evolution but also introduces some new issues. In our approach, an application server is used to evolve the application, without special support from the compiler or operating system. The implementation and a performance analysis of our approach are also covered.
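
    One common way to realise this kind of online replacement is to keep clients behind a stable interface and swap the implementation at runtime; the toy registry below is only a sketch of that idea, with invented names, and is not the paper's application-server mechanism.

        // Illustrative sketch of online component replacement behind a stable interface;
        // no restart or special compiler/OS support is needed to switch implementations.
        import java.util.concurrent.atomic.AtomicReference;

        interface GreetingComponent { String greet(String user); }

        class GreetingV1 implements GreetingComponent {
            public String greet(String user) { return "Hello, " + user; }
        }

        class GreetingV2 implements GreetingComponent {          // improved variant
            public String greet(String user) { return "Hello, " + user + "!"; }
        }

        /** Clients always call through the registry, so the implementation can be swapped live. */
        class ComponentRegistry {
            private final AtomicReference<GreetingComponent> current =
                    new AtomicReference<>(new GreetingV1());

            String greet(String user) { return current.get().greet(user); }

            void swap(GreetingComponent replacement) { current.set(replacement); }  // online evolution

            public static void main(String[] args) {
                ComponentRegistry registry = new ComponentRegistry();
                System.out.println(registry.greet("ada"));   // served by V1
                registry.swap(new GreetingV2());             // no downtime
                System.out.println(registry.greet("ada"));   // served by V2
            }
        }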

    Fog and Edge Oriented Embedded Enterprise Systems Patterns: Towards Distributed Enterprise Systems That Run on Edge and Fog Nodes

    Enterprise software systems enable enterprises to enhance business and management reporting tasks. The Internet of Things (IoT) makes interactions possible between large numbers of network-connected physical devices. The prominence of IoT sensors, together with multiple business drivers, has created a contemporary need for enterprise software systems to interact with IoT devices. Business process implementations, business logic and microservices have traditionally been centralized in enterprise systems. Constraints such as privacy, latency, bandwidth, connectivity and security pose a new set of architectural challenges. These can be addressed by designing enterprise systems differently, so that parts of the business logic and processes run on fog and edge devices to improve privacy, minimize communication bandwidth and promote low-latency business process execution. This paper proposes a set of patterns for expanding previously centralized enterprise systems to the edge of the network. The patterns are supported by a case study for contextualization and analysis.
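
    As a sketch of the kind of placement decision such patterns involve, a process step could be kept on an edge node when it is privacy-sensitive or its latency budget is tighter than the round trip to the central system. The ProcessStep and PlacementPolicy types and the threshold values are assumptions made for this example, not patterns taken from the paper.

        // Hedged sketch: decide whether a business-process step runs on the edge node
        // or in the central enterprise system, based on privacy and latency constraints.
        record ProcessStep(String name, boolean privacySensitive, long latencyBudgetMillis) {}

        enum Placement { EDGE_NODE, CENTRAL_SYSTEM }

        class PlacementPolicy {
            private final long roundTripToCloudMillis;

            PlacementPolicy(long roundTripToCloudMillis) { this.roundTripToCloudMillis = roundTripToCloudMillis; }

            Placement place(ProcessStep step) {
                if (step.privacySensitive()) return Placement.EDGE_NODE;   // data must not leave the site
                if (step.latencyBudgetMillis() < roundTripToCloudMillis) return Placement.EDGE_NODE;
                return Placement.CENTRAL_SYSTEM;
            }

            public static void main(String[] args) {
                PlacementPolicy policy = new PlacementPolicy(80);   // assumed 80 ms round trip to the cloud
                System.out.println(policy.place(new ProcessStep("sensor-filtering", false, 20)));  // EDGE_NODE
                System.out.println(policy.place(new ProcessStep("monthly-report", false, 60000))); // CENTRAL_SYSTEM
            }
        }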

    Strategic perspectives on modularity

    In this paper we argue that the debate on modularity has reached a point where a consensus is slowly emerging. However, we also contend that this consensus is clearly technology driven: no room is left for firm strategies, and technology is typically considered an exogenous variable to which firms have no choice but to adapt. Taking a slightly different perspective, our main objective is to offer a conceptual framework that sheds light on the role of corporate strategies in the process of modularization. Drawing on interviews with academic design engineers, we show that firms often consider product architecture a critical variable for meeting their strategic requirements. Based on design sciences, we build an original approach to product modularity. This approach, which leaves considerable space for firms' strategic choices, also proves able to capture a large part of the industrial reality of modularity. Our framework, a first step towards considering strategies within the framework of modularity, accounts for the diversity of industrial logics related to product modularization. Keywords: product modularity; corporate strategy; technological determinism

    Software Defined Networking Opportunities for Intelligent Security Enhancement of Industrial Control Systems

    In recent years, the cyber security of Industrial Control Systems (ICSs) has become an important issue due to the discovery of sophisticated malware that, by attacking Critical Infrastructures, could have catastrophic safety consequences. Researchers have been developing countermeasures to enhance the cyber security of pre-Internet era systems, which are extremely vulnerable to threats. This paper presents the potential opportunities that Software Defined Networking (SDN) provides for the security enhancement of Industrial Control Networks. SDN permits a high level of network configuration through the separation of the control and data planes. In this work, we describe the affinities between SDN and ICSs and discuss implementation strategies.
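
    One way such SDN-based control could be used is to whitelist the engineered communication flows of an industrial network and reject everything else. The sketch below invents its own FlowRequest type and host names and is not tied to any real controller API (such as OpenDaylight or ONOS); it only illustrates the policy check a controller application might apply before installing a flow rule.

        // Hedged sketch: admit only pre-approved ICS flows; everything else is rejected
        // before any forwarding rule would be pushed to the data plane.
        import java.util.Set;

        record FlowRequest(String srcHost, String dstHost, int dstPort) {}

        class IcsFlowWhitelist {
            // Only traffic matching an engineered, pre-approved flow is allowed.
            private final Set<FlowRequest> allowed = Set.of(
                    new FlowRequest("hmi-01", "plc-07", 502),     // Modbus/TCP from the HMI to one PLC
                    new FlowRequest("historian", "plc-07", 502));

            boolean admit(FlowRequest request) { return allowed.contains(request); }

            public static void main(String[] args) {
                IcsFlowWhitelist policy = new IcsFlowWhitelist();
                System.out.println(policy.admit(new FlowRequest("hmi-01", "plc-07", 502)));     // true
                System.out.println(policy.admit(new FlowRequest("laptop-42", "plc-07", 502)));  // false
            }
        }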