4,377 research outputs found

    Communication: key factor in multidisciplinary system design

    System design research often looks at ways to model the system under development. Many modelling techniques and model representations exist. These models can also be used to enable, facilitate and improve communication among the developers during the process. The young System Design Group at the faculty of Engineering Technology of the University of Twente, the Netherlands, aims to focus on this communication aspect of system design. In the paper, a few finished and running projects undertaken in close cooperation with industry are described concisely. From these projects three research themes are derived: the creation of high-level models, combining model representations, and condensing information. The paper ends with plans for future research.

    Function allocation theory for creative design

    Function structure influences systems architecture (or product architecture). This paper discusses a design method for creative design solutions that focuses on the allocation of functions. It first proposes a theory called “Function Allocation Theory” to allocate a function to an appropriate subsystem or component during the systems decomposition phase. By doing so, the complexity of design solutions can be reduced. The theory is applied to examples including collaborative robots and robotics maintenance. Finally, the paper illustrates a case study of designing a reaction-free fastening system using this theory.
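
    The abstract does not give the theory's formal procedure, so the following Python fragment is only an illustrative sketch of one way a function-to-subsystem allocation could be automated: a greedy heuristic that assigns each function to the candidate subsystem it interacts with most strongly, as a proxy for reducing cross-subsystem complexity. All names and interaction data are hypothetical.

    def allocate_functions(functions, subsystems, interaction):
        """Assign each function to the subsystem it interacts with most.

        interaction[(function, subsystem)] -> assumed coupling strength.
        """
        allocation = {}
        for func in functions:
            allocation[func] = max(
                subsystems, key=lambda sub: interaction.get((func, sub), 0))
        return allocation

    # Hypothetical example: allocating robot functions to two subsystems.
    functions = ["grip_part", "sense_force", "release_part"]
    subsystems = ["end_effector", "controller"]
    interaction = {("grip_part", "end_effector"): 3,
                   ("sense_force", "controller"): 2,
                   ("release_part", "end_effector"): 3}
    print(allocate_functions(functions, subsystems, interaction))
    # -> {'grip_part': 'end_effector', 'sense_force': 'controller', 'release_part': 'end_effector'}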

    Exploiting Inter- and Intra-Memory Asymmetries for Data Mapping in Hybrid Tiered-Memories

    Modern computing systems are embracing hybrid memory comprising DRAM and non-volatile memory (NVM) to combine the best properties of both memory technologies, achieving low latency, high reliability, and high density. A prominent characteristic of DRAM-NVM hybrid memory is that NVM access latency is much higher than DRAM access latency. We call this inter-memory asymmetry. We observe that parasitic components on a long bitline are a major source of high latency in both DRAM and NVM, and a significant factor contributing to high-voltage operations in NVM, which impact their reliability. We propose an architectural change, where each long bitline in DRAM and NVM is split into two segments by an isolation transistor. One segment can be accessed with lower latency and operating voltage than the other. By introducing tiers, we enable non-uniform accesses within each memory type (which we call intra-memory asymmetry), leading to performance and reliability trade-offs in DRAM-NVM hybrid memory. We extend existing NVM-DRAM OS support in three ways. First, we exploit both inter- and intra-memory asymmetries to allocate and migrate memory pages between the tiers in DRAM and NVM. Second, we improve the OS's page allocation decisions by predicting the access intensity of a newly-referenced memory page in a program and placing it in a matching tier during its initial allocation. This minimizes page migrations during program execution, lowering the performance overhead. Third, we propose a solution to migrate pages between the tiers of the same memory without transferring data over the memory channel, minimizing channel occupancy and improving performance. Our overall approach, which we call MNEME, to enable and exploit asymmetries in DRAM-NVM hybrid tiered memory improves both performance and reliability for both single-core and multi-programmed workloads.
    Comment: 15 pages, 29 figures, accepted at the ACM SIGPLAN International Symposium on Memory Management
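
    As an illustration of the kind of intensity-based initial placement described above (not the paper's MNEME implementation; tier names, thresholds and capacities are assumptions), the following Python sketch picks the fastest tier a page's predicted access intensity justifies, falling back to slower tiers when the preferred one is full.

    from enum import Enum

    class Tier(Enum):
        DRAM_FAST = 0   # short bitline segment: lowest latency
        DRAM_SLOW = 1
        NVM_FAST = 2
        NVM_SLOW = 3    # long bitline segment: highest latency, highest density

    def place_page(predicted_accesses, free_pages):
        """Return the tier for a newly referenced page (assumed thresholds)."""
        if predicted_accesses > 1000:       # hot page: prefer the fastest tiers
            order = [Tier.DRAM_FAST, Tier.DRAM_SLOW, Tier.NVM_FAST, Tier.NVM_SLOW]
        elif predicted_accesses > 100:      # warm page
            order = [Tier.DRAM_SLOW, Tier.NVM_FAST, Tier.NVM_SLOW]
        else:                               # cold page: keep DRAM free for hot data
            order = [Tier.NVM_SLOW, Tier.NVM_FAST]
        for tier in order:
            if free_pages.get(tier, 0) > 0:
                free_pages[tier] -= 1
                return tier
        return Tier.NVM_SLOW                # last resort: densest tier

    print(place_page(5000, {Tier.DRAM_FAST: 0, Tier.DRAM_SLOW: 2}))  # -> Tier.DRAM_SLOW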

    ISEL: An e-Taxation System for Employers

    In 2008 the State of Geneva modified its regulation on taxation at source in order to collect electronic fiscal data from employers. Indeed, the latter provide data on their employees directly to the tax administration (AFC) and furthermore pay taxes to the State on behalf of their employees. They subtract the corresponding amounts from employees' income and refund that money to the fiscal administration. The taxation-at-source system is applied to foreigners who work in Switzerland or who receive Swiss pensions, to people who live in Geneva but work in other Cantons, as well as to performers, artists or speakers who work occasionally in Geneva. More than 12'000 companies and 117'000 employees are concerned by the scheme, and large companies provide data on several thousand employees. In the past, the files provided by employers were handled semi-automatically by the AFC (at best). The new system (called ISEL for Impôt à la Source En Ligne) offers employers two electronic channels to provide data on employees: file transfer (.XSD) and an internet e-form. This case study describes the ISEL project and its context, and discusses the issues raised by the introduction of this e-taxation system. On the human side, our paper takes a qualitative approach, based on interviews with various stakeholders involved in the project. They were asked questions about ISEL's functionality, usability, performance, and so on. On the technical side, the paper presents the architecting principles of the e-government approach in Geneva (Legality, Responsibility, Transparency and Symmetry) and the workflow that was implemented on top of AFC's legacy system.
    Keywords: private public partnership; tax collection; e-services; e-government; data exchange; architecture; usability
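
    The real XSD schema behind the file-transfer channel is not reproduced in the case study, so the Python snippet below is only a hypothetical illustration of the kind of per-employee withholding record an employer system might serialize for electronic submission; every element name is invented.

    import xml.etree.ElementTree as ET

    def build_withholding_record(employer_id, employee, gross_income, tax_withheld):
        """Serialize one employee's withholding data (hypothetical element names)."""
        record = ET.Element("WithholdingRecord")
        ET.SubElement(record, "EmployerId").text = employer_id
        ET.SubElement(record, "Employee").text = employee
        ET.SubElement(record, "GrossIncome").text = f"{gross_income:.2f}"
        ET.SubElement(record, "TaxWithheld").text = f"{tax_withheld:.2f}"
        return ET.tostring(record, encoding="unicode")

    print(build_withholding_record("CHE-123.456.789", "A. Dupont", 6500.00, 780.00))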

    Migrating to Cloud-Native Architectures Using Microservices: An Experience Report

    Migration to the cloud has been a popular topic in industry and academia in recent years. Despite the many benefits that the cloud presents, such as high availability and scalability, most on-premise application architectures are not ready to fully exploit the benefits of this environment, and adapting them to it is a non-trivial task. Microservices have appeared recently as a novel architectural style that is native to the cloud. These cloud-native architectures can facilitate migrating on-premise architectures to fully benefit from cloud environments because non-functional attributes, like scalability, are inherent in this style. Most existing approaches to cloud migration do not consider cloud-native architectures as first-class citizens. As a result, the final product may not meet its primary drivers for migration. In this paper, we report our experience and lessons learned in an ongoing project on migrating a monolithic on-premise software architecture to microservices. We concluded that microservices are not a one-size-fits-all solution, as they introduce new complexities to the system, and many factors, such as distribution complexities, should be considered before adopting this style. However, if adopted in a context that needs high flexibility in terms of scalability and availability, they can deliver their promised benefits.

    Issues of Architectural Description Languages for Handling Dynamic Reconfiguration

    Dynamic reconfiguration is the action of modifying a software system at runtime. Several works have been using architectural specification as the basis for dynamic reconfiguration. Indeed, ADLs (architecture description languages) let architects describe the elements that could be reconfigured as well as the set of constraints to which the system must conform during reconfiguration. In this work, we investigate the ADL literature in order to illustrate how reconfiguration is supported in four well-known ADLs: pi-ADL, ACME, C2SADL and Dynamic Wright. From this review, we conclude that none of these ADLs: (i) addresses the issue of consistently reconfiguring both instances and types; (ii) takes into account the behaviour of architectural elements during reconfiguration; and (iii) provides support for assessing reconfiguration, e.g., verifying the transition against properties.
    Comment: 6ème Conférence francophone sur les architectures logicielles (CAL'2012), Montpellier, France (2012)
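
    None of the surveyed ADLs is specified in the abstract in enough detail to reproduce here, so the Python sketch below only illustrates the general idea the paper discusses: applying a runtime reconfiguration transactionally and rejecting it when a declared architectural constraint would be violated. Component names and the constraint are hypothetical.

    class Architecture:
        def __init__(self, components, connectors, constraints):
            self.components = set(components)
            self.connectors = set(connectors)     # (source, target) pairs
            self.constraints = list(constraints)  # predicates over the model

        def is_consistent(self):
            return all(check(self) for check in self.constraints)

        def reconfigure(self, add=(), remove=(), connect=()):
            """Apply a change; roll back if any constraint would be violated."""
            snapshot = (set(self.components), set(self.connectors))
            self.components = (self.components | set(add)) - set(remove)
            self.connectors = {c for c in (self.connectors | set(connect))
                               if c[0] in self.components and c[1] in self.components}
            if not self.is_consistent():
                self.components, self.connectors = snapshot
                raise ValueError("reconfiguration rejected: constraint violated")

    # Hypothetical constraint: a 'logger' component must always be present.
    arch = Architecture({"server", "logger"}, {("server", "logger")},
                        [lambda a: "logger" in a.components])
    arch.reconfigure(add={"cache"}, connect={("server", "cache")})   # accepted
    # arch.reconfigure(remove={"logger"})  # would raise ValueError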

    An empirical study of architecting for continuous delivery and deployment

    Recently, many software organizations have been adopting Continuous Delivery and Continuous Deployment (CD) practices to develop and deliver quality software more frequently and reliably. Whilst an increasing amount of the literature covers different aspects of CD, little is known about the role of software architecture in CD and how an application should be (re-)architected to enable and support CD. We have conducted a mixed-methods empirical study that collected data through in-depth, semi-structured interviews with 21 industrial practitioners from 19 organizations, and a survey of 91 professional software practitioners. Based on a systematic and rigorous analysis of the gathered qualitative and quantitative data, we present a conceptual framework to support the process of (re-)architecting for CD. We provide evidence-based insights about practicing CD within monolithic systems and characterize the principle of "small and independent deployment units" as an alternative to the monoliths. Our framework supplements the architecting process in a CD context by introducing the quality attributes (e.g., resilience) that require more attention and demonstrating the strategies (e.g., prioritizing operations concerns) to design operations-friendly architectures. We discuss the key insights (e.g., monoliths and CD are not intrinsically oxymoronic) gained from our study and draw implications for research and practice.
    Comment: To appear in Empirical Software Engineering

    Scenario-based system architecting: a systematic approach to developing future-proof system architectures

    This thesis summarizes the research results of Mugurel T. Ionita, based on work conducted in the context of the STW - AIMES project. The work presented in this thesis was carried out at Philips Research and coordinated by Eindhoven University of Technology. It resulted in six externally available publications and ten internal reports that are company confidential.

    The research concerned the methodology of developing system architectures, focusing in particular on two aspects of the early architecting phases: first, the generation of multiple architectural options that anticipate the most likely changes in the business environment, and second, the quantitative assessment of these options with respect to how well they contribute to the overall quality attributes of the future system, including cost and risk analysis. The main reason for looking at these two aspects of the architecting process is that architectures usually have to live for long periods of time, up to 5 years, and must therefore deal successfully with the uncertainty associated with the future business environment. A second reason is that the quality attributes, costs and risks of a future system are usually dictated by its architecture, so an early quantitative estimate of these attributes can prevent system redesign.

    The research resulted in two methods: a method for designing architecture options that are more future-proof, i.e. more resilient to future changes (the SODA method), and, within SODA, a method for the quantitative assessment of the proposed architectural options (the SQUASH method). Both methods were validated in the area of professional systems, where they were applied in a concrete case study from the medical domain.

    The SODA method is an innovative solution to the problem of developing system architectures that are designed to survive the most likely changes foreseen in the future business environment of the system. On one hand, the method enables the business stakeholders of a system to provide the architects with their knowledge and insight about the future when new systems are created; on the other hand, it enables the architects to take a long view and think strategically in terms of different plausible futures and unexpected surprises when designing the high-level structure of their systems.

    The SQUASH method is a systematic way of quantitatively assessing the proposed architectural options with respect to how well they deal with quality aspects, costs and risks, before the architecture is actually implemented. The method enables the architects to reason about the most relevant attributes of the future system and to make more informed decisions about their design, based on the quantitative data. Both methods, SODA and SQUASH, are descriptive in nature, rooted in best industrial practices, and hence propose better ways of developing system architectures.
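
    The thesis itself defines how SQUASH quantifies the assessment; since that procedure is not reproduced here, the following Python fragment is only a generic weighted-sum illustration of comparing architectural options against quality attributes, cost and risk. The weights, attributes and scores are invented for the example.

    # Assumed 1-10 scores; for cost and risk, higher means cheaper / less risky.
    weights = {"performance": 0.3, "modifiability": 0.3, "cost": 0.2, "risk": 0.2}
    options = {
        "centralized_option": {"performance": 8, "modifiability": 4, "cost": 7, "risk": 6},
        "distributed_option": {"performance": 6, "modifiability": 8, "cost": 5, "risk": 7},
    }

    def weighted_score(scores):
        return sum(weights[attr] * scores[attr] for attr in weights)

    for name, scores in sorted(options.items(), key=lambda kv: -weighted_score(kv[1])):
        print(f"{name}: {weighted_score(scores):.2f}")
    # distributed_option: 0.3*6 + 0.3*8 + 0.2*5 + 0.2*7 = 6.60
    # centralized_option: 0.3*8 + 0.3*4 + 0.2*7 + 0.2*6 = 6.20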

    Change-Impact driven Agile Architecting.

    Software architecture is a key factor in scaling up Agile Software Development (ASD) in large software-intensive systems. Currently, software architectures are more often approached through mechanisms that enable incrementally designing and evolving software architectures, a.k.a. agile architecting. Agile architecting should be a light-weight decision-making process, which could be achieved by providing knowledge to assist agile architects in reasoning about changes. This paper presents a novel solution that uses change-impact knowledge as the main driver for agile architecting. The solution consists of a Change Impact Analysis technique and a set of models to assist agile architects in the change decision-making process by retrieving the change-impact architectural knowledge resulting from adding or changing features iteration after iteration. To validate our approach, we have put our solution into practice by running a project of a metering management system in electric power networks in an i-smart software factory.
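
    The concrete Change Impact Analysis technique and models are defined in the paper, not in this abstract, so the Python sketch below only illustrates the underlying idea: given a dependency graph between architectural elements, compute the set of elements potentially impacted by a feature change by following reverse dependencies. The element names and the graph are hypothetical.

    from collections import deque

    def impact_set(depends_on, changed):
        """depends_on[x] = elements x depends on; return the changed elements
        plus everything that transitively depends on them."""
        dependents = {}
        for source, targets in depends_on.items():
            for target in targets:
                dependents.setdefault(target, set()).add(source)
        impacted, queue = set(changed), deque(changed)
        while queue:
            element = queue.popleft()
            for dependent in dependents.get(element, ()):
                if dependent not in impacted:
                    impacted.add(dependent)
                    queue.append(dependent)
        return impacted

    # Hypothetical slice of a metering management system's architecture.
    graph = {"billing_ui": {"metering_service"}, "metering_service": {"meter_driver"}}
    print(impact_set(graph, {"meter_driver"}))
    # -> {'meter_driver', 'metering_service', 'billing_ui'}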