3,503 research outputs found

    Value/Cost Analysis of Modularity Improvements

    Get PDF

    Understanding, Analysis, and Handling of Software Architecture Erosion

    Get PDF
    Architecture erosion occurs when a software system's implemented architecture diverges from the intended architecture over time. Studies show that erosion impacts development, maintenance, and evolution because it accumulates imperceptibly. Identifying early symptoms, such as architectural smells, makes it possible to manage erosion through refactoring. However, research still lacks a comprehensive understanding of erosion, it is unclear which symptoms are most common, and detection methods are limited. This thesis establishes an erosion landscape, investigates symptoms, and proposes identification approaches. A mapping study covers erosion definitions, symptoms, causes, and consequences. Key findings: 1) "Architecture erosion" is the most widely used term, with four perspectives on definitions and respective symptom types. 2) Both technical and non-technical reasons contribute to erosion, negatively impacting quality attributes; practitioners can advocate addressing erosion to prevent failures. 3) Detection and correction approaches are categorized, with consistency- and evolution-based approaches the most commonly mentioned. An empirical study explores practitioner perspectives through developer communities, surveys, and interviews. Findings reveal that associated practices such as code review and supporting tools identify symptoms, while the collected measures address erosion during implementation. Code review comments are then studied to analyze erosion in practice. One study reveals that architectural violations, duplicate functionality, and cyclic dependencies are the most frequent symptoms; symptoms decreased over time, indicating increased stability, and most were addressed after review. A second study explores violation symptoms in four projects, identifying 10 categories; refactoring and removing code address most violations, while some are disregarded. Machine learning classifiers using pre-trained word embeddings are then used to identify violation symptoms from code review comments. Key findings: 1) SVM with word2vec achieved the highest performance. 2) fastText embeddings also worked well. 3) 200-dimensional embeddings outperformed 100- and 300-dimensional ones. 4) An ensemble classifier further improved performance. 5) Practitioners found the results valuable, confirming the approach's potential. Finally, an automated recommendation system identifies qualified reviewers for violations using similarity detection on file paths and review comments. Experiments show that common similarity methods perform well, outperforming a baseline approach, and that sampling techniques impact recommendation performance
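
    As an illustration of the classifier setup summarized above (an SVM over pre-trained word2vec embeddings of review comments), a minimal Python sketch is shown below. The example comments, the vector file path, and the hyperparameters are assumptions for illustration, not the thesis's actual data or configuration.

    import numpy as np
    from gensim.models import KeyedVectors
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # Hypothetical labelled review comments: 1 = reports a violation symptom, 0 = does not
    comments = [
        "this change bypasses the service layer and calls the database directly",
        "nit: please rename this variable for clarity",
        "these two modules now have a cyclic dependency, can we break it?",
        "looks good to me, thanks for the quick fix",
    ]
    labels = [1, 0, 1, 0]

    # Pre-trained 200-dimensional word2vec vectors; the file name is an assumption
    wv = KeyedVectors.load("word2vec_200d.kv")

    def embed(text):
        # Average the vectors of in-vocabulary tokens; zero vector if none match
        vecs = [wv[tok] for tok in text.lower().split() if tok in wv]
        return np.mean(vecs, axis=0) if vecs else np.zeros(wv.vector_size)

    X = np.vstack([embed(c) for c in comments])
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.5, random_state=0, stratify=labels)

    clf = SVC(kernel="rbf")              # SVM classifier over the averaged embeddings
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))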

    Analysis of relationship between software metrics and process models

    Get PDF
    This thesis studies the correlation between software process models and software metrics. To support our studies we have defined a Process-Metric Evaluation Framework and derived an evaluation template from it. The template served as a basic tool in studying the relationships between various process models, artifacts, and software metrics. We have evaluated a number of process models according to our template and have identified suitable software metrics. We have also recommended a root cause analysis approach at various points of the process models. The suggested software metrics can be derived from various product and process artifacts. They can be used to curb the risks generated at each phase of the development process, identify issues, and support better planning and project management. The evaluation template can also be used to evaluate other models and identify effective metrics

    A Smart Products Lifecycle Management (sPLM) Framework - Modeling for Conceptualization, Interoperability, and Modularity

    Get PDF
    Autonomy and intelligence have been built into many of today’s mechatronic products, taking advantage of low-cost sensors and advanced data analytics technologies. Designing product intelligence (enabled by analytics capabilities) is no longer a trivial or optional part of product development. This research aims to address the challenges raised by the new data-driven design paradigm for smart products development, in which the product itself and its smartness need to be carefully co-constructed. A smart product can be seen as a specific composition and configuration of its physical components, which form the body, and its analytics models, which implement the intelligence, evolving along its lifecycle stages. Based on this view, the contribution of this research is to expand the “Product Lifecycle Management (PLM)” concept, traditionally applied to physical products, to data-based products. As a result, a Smart Products Lifecycle Management (sPLM) framework is conceptualized based on a high-dimensional Smart Product Hypercube (sPH) representation and decomposition. First, the sPLM addresses interoperability issues by developing a Smart Component data model to uniformly represent and compose physical component models created by engineers and analytics models created by data scientists. Second, the sPLM implements an NPD3 process model that incorporates a formal data analytics process into the new product development (NPD) process model, in order to support the transdisciplinary information flows and team interactions between engineers and data scientists. Third, the sPLM addresses issues related to product definition, modular design, product configuration, and lifecycle management of analytics models by adapting the theoretical frameworks and methods for traditional product design and development. An sPLM proof-of-concept platform was implemented to validate the concepts and methodologies developed throughout the research work. The sPLM platform provides a shared data repository to manage the product-, process-, and configuration-related knowledge for smart products development. It also provides a collaborative environment to facilitate transdisciplinary collaboration between product engineers and data scientists
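
    As a rough illustration of the Smart Component idea described above (a single data model that can represent and compose both physical component models and analytics models), a hypothetical Python sketch is shown below; all class and field names are assumptions, not the sPLM data model itself.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import Optional

    class ComponentKind(Enum):
        PHYSICAL = "physical"      # e.g. a CAD / eBOM item created by engineers
        ANALYTICS = "analytics"    # e.g. a trained predictive model created by data scientists

    @dataclass
    class SmartComponent:
        component_id: str
        kind: ComponentKind
        version: str
        lifecycle_stage: str                      # e.g. "design", "production", "in-service"
        geometry_ref: Optional[str] = None        # pointer to a CAD file for physical parts
        model_artifact_ref: Optional[str] = None  # pointer to a serialized analytics model
        children: list["SmartComponent"] = field(default_factory=list)

    # A smart product configuration composes physical and analytics components uniformly
    pump = SmartComponent("PUMP-001", ComponentKind.PHYSICAL, "A.2", "design",
                          geometry_ref="cad/pump_001.step")
    wear_model = SmartComponent("PUMP-001-RUL", ComponentKind.ANALYTICS, "0.3", "design",
                                model_artifact_ref="models/pump_rul.onnx")
    pump.children.append(wear_model)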

    Geographic Information Systems for Real-Time Environmental Sensing at Multiple Scales

    Get PDF
    The purpose of this investigation was to design, implement, and apply a real-time geographic information system for data-intensive water resource research and management. The research presented is part of an ongoing, interdisciplinary research program supporting the development of the Intelligent River® observation instrument. The objectives of this research were to 1) design and describe a software architecture for a streaming environmental sensing information system, 2) implement and evaluate the proposed information system, and 3) apply the information system for monitoring, analysis, and visualization of an urban stormwater improvement project located in the City of Aiken, South Carolina, USA. This research contributes to the fields of software architecture and urban ecohydrology. The first contribution is a formal architectural description of a streaming environmental sensing information system. This research demonstrates the operation of the information system and provides a reference point for future software implementations. Contributions to urban ecohydrology are in three areas. First, a characterization of soil properties for the study region of the City of Aiken, SC is provided. The analysis includes an evaluation of spatial structure for soil hydrologic properties. Findings indicate no detectable structure at the scales explored during the study. The second contribution to ecohydrology comes from a long-term, continuous monitoring program for bioinfiltration basin structures located in the study area. Results include an analysis of soil moisture dynamics based on data collected at multiple depths with high spatial and temporal resolution. A novel metric is introduced to evaluate the long-term performance of bioinfiltration basin structures based on soil moisture observation data. Findings indicate a decrease in basin performance over time for the monitored sites. The third contribution to the field of ecohydrology is the development and application of a spatially and temporally explicit rainfall infiltration and excess model. The model enables the simulation and visualization of bioinfiltration basin hydrologic response at within-catchment scales. The model is validated against observed soil moisture data. Results include visualizations and stormwater volume calculations based on measured versus predicted bioinfiltration basin performance over time
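
    For context on the rainfall infiltration and excess modeling mentioned above, the Python sketch below illustrates the general idea using Horton's classic infiltration-capacity equation; it is not the spatially and temporally explicit model developed in this work, and all parameter values are illustrative assumptions.

    from math import exp

    def horton_infiltration_excess(rainfall_mm_hr, dt_hr=0.25, f0=60.0, fc=10.0, k=2.0):
        # f0, fc: initial and final infiltration capacity (mm/hr); k: decay constant (1/hr)
        excess = []
        for i, rain in enumerate(rainfall_mm_hr):
            capacity = fc + (f0 - fc) * exp(-k * i * dt_hr)   # Horton capacity curve
            excess.append(max(0.0, rain - capacity) * dt_hr)  # mm of excess in this step
        return excess

    # A short design storm sampled every 15 minutes (intensities in mm/hr, illustrative)
    print(horton_infiltration_excess([5, 20, 80, 40, 10]))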

    Applying model-based systems engineering in search of quality by design

    Get PDF
    Model-Based Systems Engineering (MBSE) and Model-Based Engineering (MBE) techniques have been successfully introduced into the design process of many different types of systems. The application of these techniques can be reflected in the modeling of requirements, functions, behavior, and many other aspects. The modeled design provides a digital representation of a system, together with the supporting development data architecture and the functional requirements associated with that architecture, through the modeling of system aspects. Various levels of system and data architecture fidelity can be represented within MBSE environment tools. Typically, the level of fidelity is driven by crucial systems engineering constraints such as cost, schedule, performance, and quality. Systems engineering uses many methods to develop system and data architectures that provide a representative system meeting cost and schedule targets with sufficient quality while maintaining the customer's performance needs. The most complex and elusive of these constraints is quality: given a certain set of system-level requirements, the likelihood that those requirements will be correctly and accurately realized in the final system design. This research investigates the Department of Defense Architecture Framework (DoDAF) in use today to establish, and then assess, the relationship between the system, the data architecture, and the requirements in terms of Quality by Design (QbD). The term QbD was coined in 1992 in Quality by Design: The New Steps for Planning Quality into Goods and Services [1]. This research investigates and proposes a means to: contextualize high-level quality terms within the MBSE functional area, provide an outline for a conceptual but functional quality framework as it pertains to the MBSE DoDAF, provide tailored quality metrics with improved definitions, and then test this improved quality framework by assessing two corresponding case studies within the MBSE functional area to interrogate model architectures and assess the quality of system design. Developed in the early 2000s, the Department of Defense Architecture Framework (DoDAF) is still in use today, and its system description methodologies continue to impact subsequent system description approaches [2]. Two case studies were analyzed to demonstrate the proposed QbD evaluation for analyzing DoDAF CONOP architecture quality. The first case study addresses the DoDAF CONOP of the National Aeronautics and Space Administration (NASA) Joint Polar Satellite System (JPSS) ground system for the National Oceanic and Atmospheric Administration (NOAA) satellite system, with particular focus on the Stored Mission Data (SMD) mission thread. The second case study addresses the DoDAF CONOP of the Search and Rescue (SAR) naval rescue operation network System of Systems (SoS), with particular focus on the Command and Control signaling mission thread. The case studies help to demonstrate a new DoDAF Quality Conceptual Framework (DQCF) as a means to investigate the quality of DoDAF architectures in depth, including the application of the DoDAF standard, the UML/SysML standards, requirement architecture instantiation, and modularity to understand architecture reusability and complexity. By providing a renewed focus on a quality-based systems engineering process when applying the DoDAF, improved trust in the system and data architecture of the completed models can be achieved. The results of the case study analyses reveal how a quality-focused systems engineering process can be used during development to provide a product design that better meets the customer's intent and ultimately provides the potential for the best quality product

    A Requirements-Based Exploration of Open-Source Software Development Projects – Towards a Natural Language Processing Software Analysis Framework

    Get PDF
    Open source projects do have requirements; they are, however, mostly informal text descriptions found in requests, forums, and other correspondence. Understanding such requirements provides insight into the nature of open source projects. Unfortunately, manual analysis of natural language requirements is time-consuming and, for large projects, error-prone. Automated analysis of natural language requirements, even partial, will be of great benefit. Towards that end, I describe the design and validation of an automated natural language requirements classifier for open source software development projects. I compare two strategies for recognizing requirements in open forums of software features. The results suggest that classifying text at the forum post and sentence aggregation levels may be effective. Initial results suggest that it can reduce the effort required to analyze requirements of open source software development projects. Software development organizations and communities currently employ a large number of software development techniques and methodologies. This complexity is compounded by a wide range of software project types and development environments. The resulting lack of consistency in the software development domain leads to one important challenge that researchers encounter while exploring this area: specificity. This results in an increased difficulty of maintaining a consistent unit of measure or analysis approach while exploring a wide variety of software development projects and environments. The problem of specificity is exhibited most prominently in an area of software development characterized by dynamic evolution, a unique development environment, and a relatively young history of research when compared to traditional software development: the open-source domain. While performing research on open source and the associated communities of developers, one can notice the same challenge of specificity being present in requirements engineering research as in the case of closed-source software development. Whether research is aimed at performing longitudinal or cross-sectional analyses, or attempts to link requirements to other aspects of software development projects and their management, specificity calls for a flexible analysis tool capable of adapting to the needs and specifics of the explored context. This dissertation covers the design, implementation, and evaluation of a model, a method, and a software tool comprising a flexible software development analysis framework. These design artifacts use a rule-based natural language processing approach and are built to meet the specifics of a requirements-based analysis of software development projects in the open-source domain. This research follows the principles of design science research as defined by Hevner et al. and includes stages of problem awareness, suggestion, development, evaluation, and results and conclusion (Hevner et al. 2004; Vaishnavi and Kuechler 2007). The long-term goal of the research stream stemming from this dissertation is to propose a flexible, customizable, requirements-based natural language processing software analysis framework that can be adapted to meet the research needs of multiple different types of domains or different categories of analyses
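
    As an illustration of a rule-based natural language processing approach of the kind mentioned above (not the dissertation's actual rule set), the Python sketch below flags individual sentences of a forum post as requirement-like using simple lexical cues; the cue patterns and example post are assumptions.

    import re

    # Hypothetical lexical cues that often signal a requirement-like statement
    REQUIREMENT_CUES = [
        r"\bshould\b", r"\bmust\b", r"\bneeds? to\b",
        r"\bwould be (nice|great) if\b", r"\bplease add\b", r"\bsupport for\b",
    ]

    def is_requirement_sentence(sentence):
        # A sentence is a candidate requirement if any cue pattern matches
        return any(re.search(p, sentence, flags=re.IGNORECASE) for p in REQUIREMENT_CUES)

    def classify_post(post):
        # Split a forum post into sentences and keep the requirement-like ones
        sentences = re.split(r"(?<=[.!?])\s+", post.strip())
        return [s for s in sentences if is_requirement_sentence(s)]

    post = ("Thanks for the quick fix. The importer should support UTF-8 filenames. "
            "Also, it would be great if exports could be scheduled.")
    print(classify_post(post))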

    Data-Driven Feedback for Content Production in eHealth Services

    Get PDF
    Web analytics has shown significant potential for continuously improving web-based services and applications. By analyzing interaction data collected from web applications, it is possible to study in detail how the applications are used. The focus of this study is to analyze whether interaction data collected with the Piwik PRO web analytics platform using JavaScript tagging can provide sufficient detail about user behaviour and interaction in a modern single-page web application. Furthermore, the analysis seeks to answer whether the collected data can be refined in a way that helps the content managers of the web application to continuously improve the content and to spot dysfunctional content. The research is based on Omapolku, a Finnish public e-health service providing digital services for personalized healthcare. In this study, the analysis focuses on evaluating digital treatment pathways in Omapolku, which provide various types of information and utilities designed for the needs of specific patient groups. The evaluation is based on the graphical user interface of a treatment pathway view, analyzing a sample dataset consisting of actions performed by the users. The data is analyzed with general web analytics metrics and by applying statistical analyses from web usage mining. The results show that the interaction data can provide the necessary detail for evaluating general usage metrics and basic usage patterns. However, the results also show that the data does not provide the information needed to identify most actions performed by the users, which makes it practically impossible to link the data to the front-end components of the user interface. As an outcome of this study, it is recommended that additional identifiers be added to the front-end components of the treatment path interface and that the JavaScript tagging script be modified to record the corresponding identifiers and the action context. In addition, a novel prototype was designed as a solution to the identified challenges and to support the work of the content managers
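
    As an illustration of the kind of general usage metrics discussed above, the Python sketch below derives visitor, session, and action statistics from exported interaction events with pandas; the file name, column names, and 30-minute session gap are assumptions, not the study's actual data schema or analysis.

    import pandas as pd

    events = pd.read_csv("omapolku_events.csv", parse_dates=["timestamp"])
    events = events.sort_values(["visitor_id", "timestamp"])

    # Sessionize: a gap of more than 30 minutes starts a new session for a visitor
    gap = events.groupby("visitor_id")["timestamp"].diff() > pd.Timedelta(minutes=30)
    events["session_id"] = gap.groupby(events["visitor_id"]).cumsum()

    sessions = events.groupby(["visitor_id", "session_id"]).agg(
        actions=("action", "count"),
        duration=("timestamp", lambda t: t.max() - t.min()),
    )
    print("unique visitors:", events["visitor_id"].nunique())
    print("average actions per session:", sessions["actions"].mean())
    print("most common actions:\n", events["action"].value_counts().head())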