
    DEVELOPING METHOD FOR ASSESSING FUNCTIONAL COMPLEXITY OF SOFTWARE INFORMATION SYSTEM

    Improving the methods and technologies of software configuration remains an important problem: existing information technology must be developed further so that software can be customized as end-user requirements change. Changing requirements call for an iterative software life cycle, within which additional methods are needed to simplify the software architecture and obtain software with minimal functional complexity, thereby avoiding growth in the time and labor costs of design, development, and support. To address the shortcomings of existing methods, a method is proposed for assessing the functional complexity of a software information system, based on an existing multilevel graph model of software architecture. The method calculates FP metrics for each architectural element at the corresponding level of the graph model. The metric values make it possible to choose the software modules with minimal functional complexity when configuring the software architecture to satisfy the end user's functional requirements.
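    The abstract does not give the FP formula or the graph model in detail; the sketch below is only a minimal illustration of the idea, assuming unadjusted function points computed from hypothetical per-module counts and a flat list of candidate modules per requirement.

```python
# Minimal sketch (assumed details): score candidate modules by unadjusted
# function points and pick the least complex module for each requirement.
from dataclasses import dataclass

# Standard IFPUG average weights for the five function types.
FP_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

@dataclass
class Module:
    name: str
    requirements: set[str]          # functional requirements the module covers
    counts: dict[str, int]          # e.g. {"EI": 3, "EO": 1, ...}

    def function_points(self) -> int:
        return sum(FP_WEIGHTS[t] * n for t, n in self.counts.items())

def least_complex(modules: list[Module], requirement: str) -> Module:
    """Among modules that satisfy the requirement, pick the one with minimal FP."""
    candidates = [m for m in modules if requirement in m.requirements]
    return min(candidates, key=Module.function_points)

modules = [
    Module("billing_v1", {"invoice"}, {"EI": 5, "EO": 3, "ILF": 2}),
    Module("billing_v2", {"invoice"}, {"EI": 3, "EO": 2, "ILF": 1}),
]
print(least_complex(modules, "invoice").name)   # -> billing_v2
```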

    Using Modularity Metrics to assist Move Method Refactoring of Large System

    For large software systems, refactoring can be a challenging task, since keeping component complexity under control requires considering the overall architecture as well as many details of each component. Product metrics are therefore often used to quantify several parameters related to the modularity of a software system. This paper devises an approach for automatically suggesting refactoring opportunities on large software systems. We show that by assessing metrics for all components, move-method refactorings can be suggested in such a way that the modularity of several components improves at once, without degrading any others. However, computing metrics for large software systems, comprising thousands of classes or more, can be time consuming when performed on a single CPU. We therefore propose a solution that computes the metrics on a GPU, greatly shortening computation time. With our approach, precise knowledge of several properties of the system can be gathered continuously while the system evolves, helping developers quickly assess candidate solutions for reducing modularity issues.
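    The paper's actual metrics and GPU kernels are not given in the abstract; below is a minimal CPU-only sketch, assuming a simple coupling-based heuristic: a method is suggested for moving when it references another class strictly more often than its own.

```python
# Minimal sketch (assumed heuristic, not the paper's actual metrics):
# suggest moving a method to the class it references most, if that class
# is referenced strictly more often than the method's current owner.
from collections import Counter

def suggest_move_methods(methods: dict[str, dict]) -> list[tuple[str, str]]:
    """methods maps 'Class.method' -> {'owner': str, 'refs': list of class names used}."""
    suggestions = []
    for name, info in methods.items():
        usage = Counter(info["refs"])
        if not usage:
            continue
        target, hits = usage.most_common(1)[0]
        if target != info["owner"] and hits > usage[info["owner"]]:
            suggestions.append((name, target))   # (method, suggested new class)
    return suggestions

methods = {
    "Order.print_invoice": {"owner": "Order",
                            "refs": ["Invoice", "Invoice", "Invoice", "Order"]},
    "Order.total":         {"owner": "Order", "refs": ["Order", "Order"]},
}
print(suggest_move_methods(methods))   # -> [('Order.print_invoice', 'Invoice')]
```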

    Towards a broader view on software architecture analysis of flexibility

    Software architecture analysis helps us assess the quality of a software system at an early stage. In this paper we describe a case study of software architecture analysis that we have performed to assess the flexibility of a large administrative system. Our analysis was based on scenarios representing possible changes to the requirements of the system and its environment. Assessing the effect of these scenarios provides insight into the flexibility of the system. One of the problems is to express the effect of a scenario in such a way that it provides insight into the complexity of the necessary changes. Part of our research is directed at developing an instrument for doing just that. This instrument is applied in the analysis described in this paper.
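    The instrument itself is not specified in the abstract; the sketch below only illustrates the general idea of scenario-based flexibility analysis, assuming each change scenario is scored by the number of components it touches directly and indirectly (the scenarios and weighting are illustrative assumptions).

```python
# Minimal sketch (assumed scoring, not the paper's instrument): express the
# effect of each change scenario as how many components it touches and how
# far the change ripples, then rank scenarios by estimated change complexity.
from dataclasses import dataclass

@dataclass
class Scenario:
    description: str
    touched_components: set[str]    # components changed directly
    ripple_components: set[str]     # components indirectly affected

    def complexity(self) -> float:
        # Direct changes weigh more than ripple effects (assumed weighting).
        return len(self.touched_components) + 0.5 * len(self.ripple_components)

scenarios = [
    Scenario("new tax rule", {"billing"}, {"reporting"}),
    Scenario("switch database vendor", {"persistence", "billing", "reporting"},
             {"ui", "batch"}),
]
for s in sorted(scenarios, key=Scenario.complexity, reverse=True):
    print(f"{s.description}: estimated change complexity {s.complexity():.1f}")
```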

    Assessing architectural evolution: A case study

    This is the post-print version of the article; the official published version can be accessed from the link below. Copyright @ 2011 Springer. This paper proposes to use a historical perspective on generic laws, principles, and guidelines, like Lehman's software evolution laws and Martin's design principles, in order to achieve a multi-faceted process and structural assessment of a system's architectural evolution. We present a simple structural model with associated historical metrics and visualizations that could form part of an architect's dashboard. We perform such an assessment for the Eclipse SDK, as a case study of a large, complex, and long-lived system for which sustained effective architectural evolution is paramount. The twofold aim of checking generic principles on a well-known system is, on the one hand, to see whether there are lessons that could be learned for best practice of architectural evolution, and on the other hand, to gain more insight into the applicability of such principles. We find that while the Eclipse SDK does follow several of the laws and principles, there are some deviations, and we discuss areas of architectural improvement and limitations of the assessment approach.
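    Neither the structural model nor the concrete metrics are given in the abstract; the sketch below only illustrates one kind of historical check the approach implies, assuming a table of size measurements per release (the numbers are invented, not Eclipse data) and Lehman's continuing-growth law as the property being checked.

```python
# Minimal sketch (assumed data and check): test Lehman's "continuing growth"
# law on a series of releases by flagging releases where system size shrank.
releases = [
    ("3.0", 1_200),   # (release id, number of classes) -- illustrative numbers
    ("3.1", 1_450),
    ("3.2", 1_700),
    ("3.3", 1_690),
]

def check_continuing_growth(history):
    """Return the release transitions where size shrank, i.e. deviations from the law."""
    deviations = []
    for (prev_id, prev_size), (cur_id, cur_size) in zip(history, history[1:]):
        if cur_size < prev_size:
            deviations.append((prev_id, cur_id, prev_size - cur_size))
    return deviations

print(check_continuing_growth(releases))   # -> [('3.2', '3.3', 10)]
```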

    Architecture of Environmental Risk Modelling: for a faster and more robust response to natural disasters

    Demands on the disaster response capacity of the European Union are likely to increase, as the impacts of disasters continue to grow both in size and frequency. This has resulted in intensive research on issues concerning spatially-explicit information and modelling and their multiple sources of uncertainty. Geospatial support is one of the forms of assistance frequently required by emergency response centres, along with hazard forecast and event management assessment. Robust modelling of natural hazards requires dynamic simulations under an array of multiple inputs from different sources. Uncertainty is associated with the meteorological forecast and the calibration of model parameters. Software uncertainty also derives from the data transformation models (D-TM) needed for predicting hazard behaviour and its consequences. On the other hand, social contributions have recently been recognized as valuable in raw-data collection and mapping efforts traditionally dominated by professional organizations. Here an architecture overview is proposed for adaptive and robust modelling of natural hazards, following the Semantic Array Programming paradigm so as to also include the distributed array of social contributors, called Citizen Sensor, in a semantically-enhanced strategy for D-TM modelling. The modelling architecture proposes a multicriteria approach for assessing the array of potential impacts, with qualitative rapid assessment methods based on a Partial Open Loop Feedback Control (POLFC) schema complementing more traditional and accurate a-posteriori assessment. We discuss the computational aspects of environmental risk modelling using array-based parallel paradigms on High Performance Computing (HPC) platforms, so that the implications of urgency can be introduced into the system (Urgent-HPC).

    Comment: 12 pages, 1 figure, 1 text box; presented at the 3rd Conference of Computational Interdisciplinary Sciences (CCIS 2014), Asuncion, Paraguay.
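    The abstract stays at the architecture level; as a purely illustrative companion, the sketch below shows one way a multicriteria rapid assessment could combine impact criteria from model output and citizen reports into a qualitative severity class. The criteria, weights, and thresholds are assumptions, not the paper's POLFC schema.

```python
# Minimal sketch (assumed criteria and thresholds, not the paper's POLFC schema):
# combine normalized impact criteria from model output and citizen reports
# into a single qualitative severity class for rapid assessment.
CRITERIA_WEIGHTS = {          # assumed weights, chosen to sum to 1
    "population_exposed": 0.5,
    "infrastructure_hit": 0.3,
    "citizen_reports":    0.2,
}

def severity(scores: dict[str, float]) -> str:
    """scores holds each criterion normalized to [0, 1]."""
    total = sum(CRITERIA_WEIGHTS[c] * scores.get(c, 0.0) for c in CRITERIA_WEIGHTS)
    if total >= 0.7:
        return "high"
    if total >= 0.3:
        return "medium"
    return "low"

print(severity({"population_exposed": 0.8, "infrastructure_hit": 0.6,
                "citizen_reports": 0.9}))   # -> high
```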

    Considering Structural Properties of Inter-organizational Network Fragments during Business-IT Alignment

    Value exchange models can be used to reason about possible networked business constellations. Such inter-organizational business settings are in most cases evaluated solely from a financial point of view, i.e. by assessing the economic sustainability of the constellation. In this paper we discuss other criteria that are relevant and should additionally be considered, namely the structural properties of the inter-organizational constellation itself. The multitude of possible inter-organizational business constellations – and underlying systems constellations respectively – makes it necessary to split such constellations into recurring structural patterns, which we call fragments. The structural properties help the designer reason about quality-related issues of the inter-organizational network, and may influence the design choices to be made. The paper suggests designing new e-business constellations not only on the basis of financial criteria, but also considering quality issues of the inter-organizational network.
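    Which structural properties the paper considers is not stated in the abstract; the sketch below is a generic illustration, assuming a fragment is a directed graph of value exchanges and that simple properties such as actor degree and reciprocity are of interest (the actors and exchanges are invented).

```python
# Minimal sketch (assumed properties, not the paper's catalogue): represent a
# fragment as directed value exchanges and derive simple structural properties.
from collections import defaultdict

exchanges = [                      # (provider, receiver, value object) -- illustrative
    ("Retailer", "Customer", "goods"),
    ("Customer", "Retailer", "payment"),
    ("Retailer", "Logistics", "fee"),
    ("Logistics", "Customer", "delivery"),
]

out_deg, in_deg = defaultdict(int), defaultdict(int)
pairs = set()
for provider, receiver, _ in exchanges:
    out_deg[provider] += 1
    in_deg[receiver] += 1
    pairs.add((provider, receiver))

actors = set(out_deg) | set(in_deg)
reciprocal = {frozenset(p) for p in pairs if (p[1], p[0]) in pairs}

for a in sorted(actors):
    print(f"{a}: out={out_deg[a]}, in={in_deg[a]}")
print("reciprocal relations:", [tuple(sorted(r)) for r in reciprocal])
```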

    High-Integrity Performance Monitoring Units in Automotive Chips for Reliable Timing V&V

    As software continues to control more system-critical functions in cars, its timing is becoming an integral element of functional safety. Timing validation and verification (V&V) assesses the software's end-to-end timing measurements against given budgets. The advent of multicore processors with massive resource sharing reduces the significance of end-to-end execution times for timing V&V and requires reasoning on (worst-case) access delays on contention-prone hardware resources. While Performance Monitoring Units (PMUs) support this finer-grained reasoning, their design has never been a prime consideration in high-performance processors, from which automotive-chip PMU implementations descend, since the PMU does not directly affect performance or reliability. To meet the PMU's instrumental importance for timing V&V, we advocate for PMUs in automotive chips that explicitly track activities related to the worst-case (rather than average) behavior of the software, are recognized as an ISO-26262 mandatory high-integrity hardware service, and are accompanied by detailed documentation that enables their effective use to derive reliable timing estimates.

    This work has also been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella has been partially supported by the MINECO under Ramon y Cajal postdoctoral fellowship number RYC-2013-14717. Enrico Mezzetti has been partially supported by the Spanish Ministry of Economy and Competitiveness under Juan de la Cierva-Incorporación postdoctoral fellowship number IJCI-2016-27396.
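    No concrete counter interface is defined in the abstract; the sketch below is only a conceptual illustration with hypothetical data (no real PMU API), showing how worst-case, rather than average, contention delays would feed a timing bound.

```python
# Minimal sketch (hypothetical counter data, no real PMU API): keep the
# worst-case contention delay observed per shared resource and use it,
# rather than the average, to pad an end-to-end timing estimate.
samples = [                    # (shared resource, observed access delay in cycles)
    ("L2_cache", 12), ("L2_cache", 48), ("bus", 7),
    ("bus", 31), ("L2_cache", 20), ("bus", 9),
]

worst = {}
accesses = {}
for resource, delay in samples:
    worst[resource] = max(worst.get(resource, 0), delay)
    accesses[resource] = accesses.get(resource, 0) + 1

isolation_wcet = 10_000        # assumed execution-time bound without contention
contention_pad = sum(worst[r] * accesses[r] for r in worst)
print("worst-case delays:", worst)
print("padded timing estimate:", isolation_wcet + contention_pad)
```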