
    Empirical Study on the Effect of a Software Architecture Representation's Abstraction Level on the Architecture-Level Software Understanding

    Architectural component models represent high-level designs and are frequently used as the central view of architectural descriptions of software systems. Using an architectural component model, it is possible to perceive the interactions between the system's major parts and to understand the overall structure of the system. In this paper we present a study that examines the effect of the abstraction level of a software architecture representation on the architecture-level understandability of a software system. Three architectural representations of the same software system that differ in their level of abstraction (and hence in the number of components used in the architecture) are studied. Our results show that an architecture at an abstraction level sufficient to adequately map the system's relevant functionalities to corresponding architectural components (i.e., each component in the architecture corresponds to exactly one relevant functionality) significantly improves architecture-level understanding of the software system, compared with two other architectures that have a lower and a higher number of elements and hence, respectively, tangle or scatter the system's relevant functionalities across several architectural components.
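
    To make the one-to-one mapping criterion concrete, here is a minimal Java sketch (all functionality and component names are hypothetical, not taken from the study) that flags functionalities scattered across several components:

```java
import java.util.Map;
import java.util.Set;

// Toy check: does each relevant functionality map to exactly one component?
public class AbstractionCheck {
    public static void main(String[] args) {
        // Hypothetical functionality -> components-that-realize-it mapping.
        Map<String, Set<String>> mapping = Map.of(
            "order-processing", Set.of("OrderService"),
            "billing",          Set.of("BillingService"),
            "reporting",        Set.of("OrderService", "BillingService"));
        mapping.forEach((functionality, components) ->
            System.out.println(functionality + ": " + (components.size() == 1
                ? "one-to-one (adequate abstraction level)"
                : "scattered across " + components.size() + " components")));
    }
}
```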

    Software Evolution for Industrial Automation Systems. Literature Overview


    Component-based software engineering: a quantitative approach

    Dissertation presented to obtain the degree of Doctor in Informatics from the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa.
    Background: Often, claims in Component-Based Development (CBD) are supported only by qualitative expert opinion rather than by quantitative data. This contrasts with the normal practice in other sciences, where sound experimental validation of claims is standard practice. Experimental Software Engineering (ESE) aims to bridge this gap. Unfortunately, it is common to find experimental validation efforts that are hard to replicate and compare, which hinders building up the body of knowledge in CBD. Objectives: In this dissertation our goals are (i) to contribute to the evolution of ESE with respect to the replicability and comparability of experimental work, and (ii) to apply our proposals to CBD, thus contributing to a deeper and sounder understanding of it. Techniques: We propose a process model for ESE, aligned with current experimental best practices, and combine this model with a measurement technique called Ontology-Driven Measurement (ODM). ODM aims to improve the state of practice in metrics definition and collection by making metrics definitions formal and executable, without sacrificing their usability. ODM uses standard technologies that adapt well to current integrated development environments. Results: Our contributions include the definition and preliminary validation of a process model for ESE, and the proposal of ODM for supporting metrics definition and collection in the context of CBD. We use both the process model and ODM to perform a series of experimental works in CBD, including the cross-validation of a component metrics set for JavaBeans, a case study on the influence of practitioners' expertise in a sub-process of component development (component code inspections), and an observational study on reusability patterns of pluggable components (Eclipse plug-ins). These experimental works implied proposing, adapting, or selecting adequate ontologies, as well as the formal definition of metrics upon each of those ontologies. Limitations: Although our experimental work covers a variety of component models and, orthogonally, both process and product, the plethora of opportunities for using our quantitative approach to CBD is far from exhausted. Conclusions: The main contribution of this dissertation is the illustration, through practical examples, of how we can combine our experimental process model with ODM to support the experimental validation of claims in the context of CBD in a repeatable and comparable way. In addition, the techniques proposed in this dissertation are generic and can be applied to other software development paradigms.
    Supported by the Departamento de Informática of the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa (FCT/UNL); the Centro de Informática e Tecnologias da Informação of the FCT/UNL; the Fundação para a Ciência e Tecnologia through the STACOS project (POSI/CHS/48875/2002); the Experimental Software Engineering Network (ESERNET); the Association Internationale pour les Technologies Objets (AITO); and the Association for Computing Machinery (ACM).
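
    The idea behind ODM, metric definitions that are formal yet executable, can be sketched as follows. This is a hedged Java illustration; the one-concept "ontology" and the metric are stand-ins, not the dissertation's actual definitions:

```java
import java.util.List;

// Illustrative stand-in for Ontology-Driven Measurement (ODM): a tiny component
// ontology (reduced to a single concept) plus a metric that is both formally
// defined over it and directly executable. Names here are hypothetical.
public class OdmSketch {
    // Ontology concept: a component exposing a set of provided interfaces.
    record Component(String name, List<String> providedInterfaces) {}

    // Executable metric definition: number of provided interfaces (NPI).
    static int providedInterfaceCount(Component c) {
        return c.providedInterfaces().size();
    }

    public static void main(String[] args) {
        List<Component> model = List.of(
            new Component("ButtonBean", List.of("ActionSource", "Serializable")),
            new Component("TableBean", List.of("Serializable")));
        model.forEach(c ->
            System.out.println(c.name() + ": NPI = " + providedInterfaceCount(c)));
    }
}
```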

    Evaluating Visual Realism in Drawing Areas of Interest on UML Diagrams

    Areas of interest (AOIs) are defined as an addition to UML diagrams: groups of elements of system architecture diagrams that share some common property. Some methods have been proposed to automatically draw AOIs on UML diagrams. However, it is not clear how users perceive the results of such methods as compared to human-drawn areas of interest. We present here a process of studying and improving the perceived quality of computer-drawn AOIs. We qualitatively evaluated how users perceive the quality of computer- and human-drawn AOIs, and used these results to improve an existing algorithm for drawing AOIs. Finally, we designed a quantitative comparison for AOI drawings and used it to show that our improved renderings are closer to human drawings than the original rendering algorithm results. The combined user evaluation, algorithmic improvements, and quantitative comparison support our claim of improving the perceived quality of AOIs rendered on UML diagrams.
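
    The abstract does not spell out the rendering algorithms, but a common baseline for drawing an AOI outline is the convex hull of the contained diagram elements. The Java sketch below (Andrew's monotone chain over hypothetical element coordinates) illustrates only that baseline; improved renderings such as those studied here typically follow tighter, possibly non-convex outlines:

```java
import java.awt.geom.Point2D;
import java.util.*;

// Baseline AOI outline: convex hull of the AOI's element positions.
public class AoiHull {
    static double cross(Point2D.Double o, Point2D.Double a, Point2D.Double b) {
        return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
    }

    // Andrew's monotone chain; returns hull vertices in counter-clockwise order.
    static List<Point2D.Double> convexHull(List<Point2D.Double> pts) {
        List<Point2D.Double> p = new ArrayList<>(pts);
        p.sort(Comparator.comparingDouble((Point2D.Double q) -> q.x)
                         .thenComparingDouble(q -> q.y));
        int n = p.size();
        if (n < 3) return p;
        Point2D.Double[] hull = new Point2D.Double[2 * n];
        int k = 0;
        for (Point2D.Double pt : p) {                      // lower hull
            while (k >= 2 && cross(hull[k - 2], hull[k - 1], pt) <= 0) k--;
            hull[k++] = pt;
        }
        int lower = k + 1;
        for (int i = n - 2; i >= 0; i--) {                 // upper hull
            Point2D.Double pt = p.get(i);
            while (k >= lower && cross(hull[k - 2], hull[k - 1], pt) <= 0) k--;
            hull[k++] = pt;
        }
        // Last vertex repeats the first; drop it.
        return new ArrayList<>(Arrays.asList(hull).subList(0, k - 1));
    }

    public static void main(String[] args) {
        // Centres of the UML elements belonging to one hypothetical AOI.
        List<Point2D.Double> elems = List.of(
            new Point2D.Double(0, 0), new Point2D.Double(4, 1),
            new Point2D.Double(2, 3), new Point2D.Double(1, 1));
        System.out.println("AOI outline: " + convexHull(elems));
    }
}
```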

    Development of a Framework for Managing the Industry 4.0 Equipment Procurement Process for the Irish Life Sciences Sector

    Industry 4.0 (I4.0) brings unprecedented opportunities for manufacturing corporations poised to implement Digital Business models: DigitALIZAtion. Industry standards have been developed for the core technologies of I4.0 Digital Supply Chains, and manufacturing equipment must now be procured to integrate seamlessly at any point in these novel supply chains. The aim of this study is to determine whether an I4.0 Equipment Procurement Process (I4.0-EPP) can be developed that reduces the risk of equipment integration issues. It asks: can the form of the equipment be specified so that it fits correctly into the I4.0 Digital Supply Chain and facilitates the desired I4.0 Digital Business function? An Agile development methodology was used to design the I4.0-EPP techniques and tools for use by technical and business users. Significant knowledge gaps were identified during User Acceptance Testing (UAT) by technical practitioners over four equipment procurement case studies. Several iterations of UAT by MEng students highlighted the need for requirements guides and specialized workbooks; these additional tools raised the understandability of the technical topics to an acceptable level and delivered very accurate results across a wide spectrum of users. This research demonstrates that techniques and tools can be developed for an I4.0-EPP that are accurate, feasible, and viable but, as with Six Sigma, will only become desirable when mandated by corporate business leaders. Future research should focus on implementing the ALIZA Matrix with corporate practitioners in the business domain. This approach will bring the ALIZA techniques and tools developed during this study to the attention of corporate business leaders with the authority to sponsor them.

    ICE: Enabling Non-Experts to Build Models Interactively for Large-Scale Lopsided Problems

    Quick interaction between a human teacher and a learning machine presents numerous benefits and challenges when working with web-scale data. The human teacher guides the machine towards accomplishing the task of interest. The learning machine leverages big data to find examples that maximize the training value of its interaction with the teacher. When the teacher is restricted to labeling examples selected by the machine, this problem is an instance of active learning. When the teacher can provide additional information to the machine (e.g., suggestions on which examples or predictive features should be used) as the learning task progresses, the problem becomes one of interactive learning. To accommodate the two-way communication channel needed for efficient interactive learning, the teacher and the machine need an environment that supports an interaction language. The machine can access, process, and summarize more examples than the teacher could see in a lifetime. Based on the machine's output, the teacher can revise the definition of the task or make it more precise. Both the teacher and the machine continuously learn and benefit from the interaction. We have built a platform to (1) produce valuable and deployable models and (2) support research on both the machine learning and user interface challenges of the interactive learning problem. The platform relies on a dedicated, low-latency, distributed, in-memory architecture that allows us to construct web-scale learning machines with quick interaction speed. The purpose of this paper is to describe this architecture and demonstrate how it supports our research efforts. Preliminary results are presented as illustrations of the architecture but are not the primary focus of the paper.
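
    The machine-side selection step of such a loop can be illustrated compactly. The toy Java sketch below shows plain uncertainty sampling, querying the teacher on the unlabeled example nearest the decision boundary; it is a hypothetical illustration of active learning in general, not ICE's distributed, in-memory implementation, and all example IDs and scores are invented:

```java
import java.util.Comparator;
import java.util.Map;

// Toy uncertainty sampling: pick the unlabeled example whose predicted
// probability is closest to the 0.5 decision boundary, then ask the
// teacher to label it.
public class UncertaintySampling {
    public static void main(String[] args) {
        // Unlabeled examples with the model's current P(positive) estimates.
        Map<String, Double> scores = Map.of(
            "doc-17", 0.91,
            "doc-42", 0.52,
            "doc-08", 0.07);

        // The most uncertain example is the most informative query.
        String query = scores.entrySet().stream()
            .min(Comparator.comparingDouble(
                (Map.Entry<String, Double> e) -> Math.abs(e.getValue() - 0.5)))
            .map(Map.Entry::getKey)
            .orElseThrow();

        System.out.println("Ask the teacher to label: " + query); // doc-42
    }
}
```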

    Improving Reuse of Distributed Transaction Software with Transaction-Aware Aspects

    Implementing crosscutting concerns for transactions is difficult, even using Aspect-Oriented Programming Languages (AOPLs) such as AspectJ. Many of these challenges arise because the context of a transaction-related crosscutting concern consists of loosely coupled abstractions like dynamically generated identifiers, timestamps, and tentative value sets of distributed resources. Current AOPLs do not provide joinpoints and pointcuts for weaving advice into high-level abstractions or contexts, such as transaction contexts. Other challenges stem from the essential complexity in the nature of the data, the operations on the data, or the volume of data, while accidental complexity comes from the way the problem is solved, even with common transaction frameworks. This dissertation describes an extension to AspectJ, called TransJ, with which developers can implement transaction-related crosscutting concerns in cohesive and loosely coupled aspects. It also presents a preliminary experiment that provides evidence of improved reusability without sacrificing the performance of applications requiring transactions. This empirical study is conducted using an extended quality model for transactional applications to define measurements on transactional software systems. The quality model defines three goals: the first relates to code quality (in terms of reusability); the second to software performance; and the third to software development efficiency. Results from this study show that TransJ can improve reusability while maintaining performance, across all eight areas addressed by the hypotheses: better encapsulation and separation of concerns; looser coupling, higher cohesion, and less tangling; improved obliviousness; preserved software efficiency; improved extensibility; and a faster development process.
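
    As a baseline for what current AOPLs do offer, the sketch below is a plain annotation-style AspectJ aspect (valid Java, assuming the org.aspectj runtime and JTA's javax.transaction.Transactional are on the classpath). Its pointcut can only match program elements such as annotated methods; joinpoints for the runtime transaction context itself, which this style lacks, are precisely what TransJ adds:

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

// Baseline annotation-style aspect: it can advise methods marked @Transactional,
// but it has no joinpoint for the enclosing transaction context itself.
@Aspect
public class TransactionTraceAspect {

    // Runs before any join point whose subject is annotated with
    // javax.transaction.Transactional.
    @Before("@annotation(javax.transaction.Transactional)")
    public void beforeTransactionalMethod(JoinPoint jp) {
        System.out.println("Entering transactional work: " + jp.getSignature());
    }
}
```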