
    Large-scale Complex IT Systems

    This paper explores the issues around the construction of large-scale complex systems that are built as 'systems of systems'. It suggests that there are fundamental reasons, rooted in the inherent complexity of these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then proposes a research and education agenda for software engineering that identifies the major challenges and issues in the development of large-scale, complex, software-intensive systems. Central to this is the notion that we cannot separate software from the socio-technical environment in which it is used. Comment: 12 pages, 2 figures

    On the engineering of systems of systems: key challenges for the requirements engineering community!

    Software-intensive systems of the future will be ultra-large-scale systems of systems. Systems of systems engineering focuses on the interoperation of many independent, self-contained constituent systems to achieve a global need. The scale and complexity of systems of systems pose unique challenges for the Requirements Engineering community. Current requirements engineering techniques are inadequate for addressing these challenges, and new concepts, methods, techniques, tools and processes are required. This paper identifies some immediate key challenges for the Requirements Engineering community that need to be scoped, and describes some road-mapping activities that aim to address these challenges.

    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding the aspects of MTC applications that can be used to characterize the domain and understanding the implications of those aspects for middleware and policies. Many MTC applications do not fit neatly into the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, MTC applications are by definition structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications; in particular, different engineering constraints for hardware and software must be met in order to support them. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication- or data-intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited to MTC applications. However, HPC systems often lack dynamic resource provisioning, are not ideal for task communication via the file system, and have I/O systems that are not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
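A minimal sketch of the MTC structure the abstract describes: discrete tasks form a graph with explicit input/output dependencies as edges, and a task runs once all of its inputs are available. The names (`Task`, `run_graph`) are illustrative and not taken from the report; a real MTC runtime would dispatch ready tasks concurrently rather than in a single loop.

```python
# Illustrative many-task-computing sketch: a DAG of discrete tasks whose
# edges are explicit input/output dependencies (assumed names, not from
# the Blue Waters report).
from collections import deque

class Task:
    def __init__(self, name, deps, fn):
        self.name = name    # task identifier
        self.deps = deps    # names of tasks whose outputs this task consumes
        self.fn = fn        # work to perform once inputs are available

def run_graph(tasks):
    """Execute tasks in dependency order; returns {task name: output}."""
    done = {}
    pending = deque(tasks)
    while pending:
        t = pending.popleft()
        if all(d in done for d in t.deps):                 # inputs ready?
            done[t.name] = t.fn(*(done[d] for d in t.deps))
        else:
            pending.append(t)                              # re-queue until ready
    return done

results = run_graph([
    Task("a", [], lambda: 2),
    Task("b", [], lambda: 3),
    Task("sum", ["a", "b"], lambda x, y: x + y),
])
print(results["sum"])  # 5
```

In a dispatcher like this, every ready task could be handed to a worker pool; the per-task dispatch cost in that loop is exactly the overhead the report argues MTC middleware must minimize.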

    Privacy through Anonymisation in Large-scale Socio-technical Systems: Multi-lingual Contact Centres across the EU

    Large-scale socio-technical systems (STS) inextricably interconnect individual issues (e.g., the right to privacy), social issues (e.g., the effectiveness of organisational processes), and technology issues (e.g., the software engineering process). As a result, the design of the complex software infrastructure also involves non-technological aspects, such as legal ones, so that, e.g., law-abidingness can be ensured from the early stages of the software engineering process. Focussing on contact centres (CC) as relevant examples of knowledge-intensive STS, we elaborate on the intricate aspects of anonymisation: there, individual and organisational needs clash, so that only an accurate balancing of legal and technical aspects can ensure the system's efficiency while preserving the individual right to privacy. We first discuss the overall legal framework, then the general theme of anonymisation in CC. Finally, we give an overview of the technical process developed in the context of the BISON project.
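As a hedged illustration of one common building block in such pipelines, not the BISON project's actual process: keyed pseudonymisation replaces direct identifiers with stable tokens, keeping records linkable for analytics while the original identity is unrecoverable without the key. All names and values below are hypothetical.

```python
# Hypothetical pseudonymisation step for contact-centre records:
# an HMAC keyed hash is deterministic (same input -> same token, so
# records stay linkable) but irreversible without the secret key.
import hashlib
import hmac

SECRET_KEY = b"illustrative-key-store-separately"  # assumption, not from the paper

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a 16-hex-char keyed pseudonym."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"caller_id": "+39 055 123456", "duration_s": 214}
record["caller_id"] = pseudonymise(record["caller_id"])  # analytics fields untouched
```

Truncating the digest trades collision resistance for shorter tokens; a deployed system would size the token, rotate the key, and store it apart from the data, which is where the legal and technical balancing the abstract mentions comes in.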

    Engineering Multi-Agent Systems: State of Affairs and the Road Ahead

    The continuous integration of software-intensive systems, together with ever-increasing computing power, offers a breeding ground for intelligent agents and multi-agent systems (MAS) more than ever before. Over the past two decades, a wide variety of languages, models, techniques and methodologies have been proposed to engineer agents and MAS. Despite this substantial body of knowledge and expertise, the systematic engineering of large-scale and open MAS still poses many challenges. Researchers and engineers still face fundamental questions regarding theories, architectures, languages, processes, and platforms for designing, implementing, running, maintaining, and evolving MAS. This paper reports on the results of the 6th International Workshop on Engineering Multi-Agent Systems (EMAS 2018, 14th-15th of July, 2018, Stockholm, Sweden), where participants discussed these issues, focusing on the state of affairs and the road ahead for researchers and engineers in this area.

    Architecture-Performance Interrelationship Analysis in Single/Multiple CPU/GPU Computing Systems: Application to Composite Process Flow Modeling

    Current developments in computing have shown the advantage of using one or more graphics processing units (GPUs) to boost the performance of many computationally intensive applications, but there are still limits to these GPU-enhanced systems. The major factors that limit GPUs for high-performance computing (HPC) can be categorized as hardware- or software-oriented in nature. Understanding how these factors affect performance is essential to developing efficient and robust application codes that employ one or more GPU devices as powerful co-processors for HPC computational modeling. The present work analyzes the intrinsic interrelationship between the hardware and software categories and their effect on computational performance for single and multiple GPU-enhanced systems, using a computationally intensive application that is representative of a large portion of the challenges confronting modern HPC. The representative application uses unstructured finite element computations for transient composite resin infusion process flow modeling as its computational core; its characteristics and results reflect many other HPC applications via the sparse matrix system used for the solution of the linear system of equations. This work describes these various software and hardware factors and how they interact to affect the performance of computationally intensive applications, enabling more efficient development and porting of HPC applications, including current, legacy, and future large-scale computational modeling applications in various engineering and scientific disciplines.
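The computational core the abstract describes, solving a sparse linear system arising from a finite element discretisation, can be sketched in pure NumPy. The CSR storage layout and conjugate gradient solver below are illustrative stand-ins, not the authors' code; a 1D Laplacian-style tridiagonal matrix plays the role of the stiffness matrix, and a GPU port would replace the matrix-vector product with a device kernel.

```python
# Illustrative sparse solve: CSR storage plus conjugate gradient for an
# SPD "stiffness-like" system A x = b (assumed example, not the paper's code).
import numpy as np

n = 6
# Build a tridiagonal matrix [-1, 2, -1] in CSR form (data/indices/indptr).
data, indices, indptr = [], [], [0]
for i in range(n):
    for j, v in ((i - 1, -1.0), (i, 2.0), (i + 1, -1.0)):
        if 0 <= j < n:
            data.append(v)
            indices.append(j)
    indptr.append(len(data))
data, indices, indptr = np.array(data), np.array(indices), np.array(indptr)

def csr_matvec(x):
    """y = A @ x using only the CSR arrays; the hot kernel on CPU or GPU."""
    y = np.zeros_like(x)
    for i in range(n):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

def conjugate_gradient(b, tol=1e-10, max_iter=200):
    """Solve A x = b for SPD A, touching A only through csr_matvec."""
    x = np.zeros_like(b)
    r = b - csr_matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = csr_matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

b = np.ones(n)
x = conjugate_gradient(b)
print(np.allclose(csr_matvec(x), b))  # True
```

Because the solver touches the matrix only through `csr_matvec`, the hardware/software factors the paper analyzes (memory bandwidth, transfer overheads, kernel launch costs) concentrate in that single routine, which is what makes it the natural target for GPU offloading.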

    Influential factors of aligning Spotify squads in mission-critical and offshore projects – a longitudinal embedded case study

    Changing the development process of an organization is one of the toughest and riskiest decisions. This is particularly true if the known experiences and practices of the newly considered ways of working are relative and subject to contextual assumptions. The Spotify engineering culture is regarded as a new agile software development method that increasingly attracts large-scale organizations. The method relies on several small, cross-functional, self-organized teams (i.e., squads). Squad autonomy is a key driver in the Spotify method, where a squad decides what to do and how to do it. To enable effective squad autonomy, each squad must be aligned with a mission, a strategy, short-term goals, and other squads. Since little is known about the Spotify method, there is a need to answer the question: how can organizations work out and maintain the alignment needed to enable loosely coupled and tightly aligned squads? In this paper, we identify factors that support the alignment actually performed in practice but never before discussed in the context of the Spotify method. We also present Spotify tailoring, highlighting the processes that were modified or newly introduced to the method. Our work is based on a longitudinal embedded case study conducted in a real-world, large-scale, offshore, software-intensive organization that maintains mission-critical systems. Under the confidentiality agreement with the organization in question, we are not allowed to reveal a detailed description of the features of the explored project.

    Large-scale continuous integration and delivery: Making great software better and faster

    Since the inception of continuous integration, and later continuous delivery, the methods of producing software in the industry have changed dramatically over the last two decades. Automated, rapid and frequent compilation, integration, testing, analysis, packaging and delivery of new software versions have become commonplace. This change has had a significant impact not only on software engineering practice, but also on the way we as consumers, and indeed as a society, relate to software. Moreover, as we live in an increasingly software-intensive and software-dependent world, the quality and reliability of the systems we use to build, test and deliver that software is a crucial concern. At the same time, it has repeatedly been shown that the successful and effective implementation of continuous engineering practices is far from trivial, particularly in a large-scale context. This thesis approaches the software engineering practices of continuous integration and delivery from multiple points of view, and is split into three parts accordingly. Part I focuses on understanding the nature of continuous integration and the differences in its interpretation and implementation. In order to address this divergence and provide practitioners and researchers alike with better and less ambiguous methods for describing and designing continuous integration and delivery systems, Part II applies the paradigm of system modeling to continuous integration and delivery. Meanwhile, Part III addresses the problem of traceability: unique challenges to traceability in the context of continuous practices are highlighted, and possible solutions are presented and evaluated.