
    Integration of Quality Attributes in Software Product Line Development

    Different approaches for building modern software systems in complex and open environments have been proposed in recent years. Some efforts apply the Software Product Line (SPL) approach to take advantage of massive reuse for producing software systems that share a common set of features. Quality assurance is a crucial activity for success in the software industry in general, but it is even more important for Software Product Lines, since the intensive reuse of assets causes their quality attributes (a measurable physical or abstract property of an entity) to propagate across the whole SPL scope. However, despite the importance of quality in software product line development, most methodologies applied in Software Product Line Development focus only on managing the commonalities and variability within the product line and do not support the non-functional requirements that the products must satisfy. The main goal of this master's thesis is to introduce quality attributes in the early stages of software product line development processes by means of a production plan that, on the one hand, integrates quality as an additional view for describing the extension of the software product line and, on the other hand, introduces quality attributes as a decision factor during product configuration and when selecting among design alternatives. Our approach follows the Model-Driven Software Development paradigm. Therefore, all the software artifacts defined have corresponding metamodels, and the defined processes rely on automated model transformations. Finally, to illustrate the feasibility of the approach, we have integrated the quality view into an SPL example in the context of safety-critical embedded systems in the automotive domain.
    González Huerta, J. (2011). Integration of Quality Attributes in Software Product Line Development. http://hdl.handle.net/10251/15835
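    The idea of using quality attributes as a decision factor when selecting among design alternatives during product configuration can be illustrated with a minimal, hedged sketch. The alternative names, attribute names, and thresholds below are illustrative assumptions and do not reproduce the thesis' metamodels or model transformations.

```python
# Minimal sketch (hypothetical names): quality attributes as a decision factor
# when choosing among design alternatives during product configuration.
import operator
from dataclasses import dataclass

@dataclass
class Alternative:
    name: str
    quality: dict  # estimated impact on quality attributes, e.g. {"safety_level": 4}

def select_alternative(alternatives, constraints):
    """Return the first alternative whose quality estimates satisfy every
    constraint; constraints map attribute name -> (predicate, bound)."""
    for alt in alternatives:
        if all(pred(alt.quality.get(attr), bound)
               for attr, (pred, bound) in constraints.items()):
            return alt
    return None

if __name__ == "__main__":
    alternatives = [
        Alternative("triple-redundant-controller", {"latency_ms": 15.0, "safety_level": 4}),
        Alternative("single-controller", {"latency_ms": 5.0, "safety_level": 2}),
    ]
    # Safety-critical automotive product: require safety_level >= 4.
    constraints = {"safety_level": (operator.ge, 4)}
    chosen = select_alternative(alternatives, constraints)
    print(chosen.name if chosen else "no feasible alternative")
```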

    Software variability in service robotics

    Robots artificially replicate human capabilities thanks to their software, the main embodiment of intelligence. However, engineering robotics software has become increasingly challenging. Developers need expertise from different disciplines and are faced with heterogeneous hardware and uncertain operating environments. To this end, the software needs to be variable—to customize robots for different customers, hardware, and operating environments. However, variability adds substantial complexity and needs to be managed—yet, ad hoc practices prevail in the robotics domain, challenging effective software reuse, maintenance, and evolution. To improve the situation, we need to enhance our empirical understanding of variability in robotics. We present a multiple-case study on software variability in the vibrant and challenging domain of service robotics. We investigated drivers, practices, methods, and challenges of variability in industrial companies building service robots. We analyzed the state of the practice and the state of the art: the former via an experience report and eleven interviews with two service robotics companies; the latter via a systematic literature review. We triangulated from these sources, reporting observations with actionable recommendations for researchers, tool providers, and practitioners. We formulated hypotheses to explain our observations and compared the state of the art from the literature with the state of the practice we observed in our cases. We learned that the level of abstraction in robotics software needs to be raised to simplify variability management and software integration, while keeping a sufficient level of customization to boost efficiency and effectiveness in the robots' operation. Planning and realizing variability for specific requirements and implementing robust abstractions permit robotic applications to operate robustly in dynamic environments, which are often only partially known and controllable. With this aim, our companies use a number of mechanisms, some of them based on formalisms used to specify robotic behavior, such as finite-state machines and behavior trees. To foster software reuse, the service robotics domain will greatly benefit from having software components—completely decoupled from hardware—with harmonized and standardized interfaces, and organized in an ecosystem shared among various companies.
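    As a hedged illustration of one of the formalisms mentioned above, the sketch below shows a tiny behavior-tree interpreter; the node types and docking actions are assumptions made for illustration, not taken from the studied companies. Variability is expressed here simply by swapping subtrees per product variant.

```python
# Toy behavior tree: Sequence and Selector composites over leaf Actions.
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return SUCCESS if self.fn() else FAILURE

class Sequence:
    """Ticks children in order; fails on the first failing child."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Ticks children in order; succeeds on the first succeeding child."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

if __name__ == "__main__":
    # Hypothetical variant: prefer vision-based docking, fall back to laser docking.
    dock_vision = Action("dock_with_camera", lambda: False)
    dock_laser = Action("dock_with_laser", lambda: True)
    deliver = Action("deliver_item", lambda: True)
    variant_a = Sequence(Selector(dock_vision, dock_laser), deliver)
    print(variant_a.tick())  # SUCCESS: camera docking fails, laser docking succeeds
```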

    Automatic generation of software interfaces for supporting decision-making processes. An application of domain engineering & machine learning

    Data analysis is a key process to foster knowledge generation in particular domains or fields of study. With a strong informative foundation derived from the analysis of collected data, decision-makers can make strategic choices with the aim of obtaining valuable benefits in their specific areas of action. However, given the steady growth of data volumes, data analysis needs to rely on powerful tools to enable knowledge extraction. Information dashboards offer a software solution to analyze large volumes of data visually, to identify patterns and relations, and to make decisions according to the presented information. But decision-makers may have different goals and, consequently, different necessities regarding their dashboards. Moreover, the variety of data sources, structures, and domains can hamper the design and implementation of these tools. This Ph.D. thesis tackles the challenge of improving the development process of information dashboards and data visualizations while enhancing their quality and features in terms of personalization, usability, and flexibility, among others. Several research activities have been carried out to support this thesis. First, a systematic literature mapping and review was performed to analyze different methodologies and solutions related to the automatic generation of tailored information dashboards. The outcomes of the review led to the selection of a model-driven approach in combination with the software product line paradigm to deal with the automatic generation of information dashboards. In this context, a meta-model was developed following a domain engineering approach. This meta-model represents the skeleton of information dashboards and data visualizations through the abstraction of their components and features, and it has been the backbone of the subsequent generative pipeline for these tools. The meta-model and generative pipeline have been tested through their integration in different scenarios, both theoretical and practical. Regarding the theoretical dimension of the research, the meta-model has been successfully integrated with another meta-model to support knowledge generation in learning ecosystems, and used as a framework to conceptualize and instantiate information dashboards in different domains. In terms of practical applications, the focus has been put on how to transform the meta-model into an instance adapted to a specific context, and how to finally transform this latter model into code, i.e., the final, functional product. These practical scenarios involved the automatic generation of dashboards in the context of a Ph.D. programme, the application of Artificial Intelligence algorithms in the process, and the development of a graphical instantiation platform that combines the meta-model and the generative pipeline into a visual generation system. Finally, different case studies have been conducted in the employment and employability, health, and education domains. The number of applications of the meta-model across theoretical and practical dimensions and domains is a result in itself. Every outcome associated with this thesis is driven by the dashboard meta-model, which also proves its versatility and flexibility when it comes to conceptualizing, generating, and capturing knowledge related to dashboards and data visualizations.
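    A minimal sketch of the generative idea described above follows: a declarative dashboard model (a stand-in for a meta-model instance) is transformed into code by a small template-based generator. The component types, data-source names, and HTML target are illustrative assumptions, not the thesis' actual meta-model or pipeline.

```python
# Toy model-to-code step: a dashboard "model" rendered through per-component templates.
dashboard_model = {
    "title": "PhD Programme Dashboard",  # hypothetical instance
    "components": [
        {"type": "bar_chart", "data_source": "theses_per_year", "title": "Theses per year"},
        {"type": "kpi", "data_source": "active_students", "title": "Active students"},
    ],
}

TEMPLATES = {
    "bar_chart": '<div class="chart" data-src="{data_source}"><h2>{title}</h2></div>',
    "kpi": '<div class="kpi" data-src="{data_source}"><h2>{title}</h2></div>',
}

def generate_dashboard(model):
    """Transform the dashboard model into a (very small) HTML page."""
    body = "\n".join(TEMPLATES[c["type"]].format(**c) for c in model["components"])
    return f"<html><head><title>{model['title']}</title></head>\n<body>\n{body}\n</body></html>"

if __name__ == "__main__":
    print(generate_dashboard(dashboard_model))
```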

    Impact of product complexity on inventory levels

    Thesis (M. Eng. in Logistics) -- Massachusetts Institute of Technology, Engineering Systems Division, 2006. Includes bibliographical references (leaves 62-63). In this thesis we consider a manufacturing and distribution supply chain for a roll-based product whose width comes in 1-cm increments. We formulate a computer model subject to stochastic, inelastic demand to determine the relationship between the width interval and finished-goods inventory levels. Assuming that the supply chain operates with the same set of policies regardless of the width-interval value, we illustrate that the value of risk pooling diminishes as the interval widens. Due to the presence of a counteracting effect, we also demonstrate that increasing the width interval does not always reduce inventory requirements. Lastly, we show that the supply chain can operate with lower inventory levels without compromising the service level by pushing inventory down the chain. By Ying-Lai (Chandler) See [and] Jin Namkoong. M.Eng. in Logistics.
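    The risk-pooling effect mentioned above can be made concrete with the textbook safety-stock approximation; this is a hedged, standalone illustration (standard formula, hypothetical numbers), not the thesis' simulation model. Pooling n width variants with independent demand into one wider SKU reduces total safety stock by roughly a factor of sqrt(n).

```python
# Risk pooling with the classic safety-stock approximation: SS = z * sigma_d * sqrt(L).
import math

def safety_stock(z, sigma_demand, lead_time):
    """Safety stock for one SKU given service factor z, demand std dev, and lead time."""
    return z * sigma_demand * math.sqrt(lead_time)

if __name__ == "__main__":
    z, sigma, lead_time = 1.65, 100.0, 4   # ~95% service level, weekly demand std dev, weeks
    n_skus = 10                            # e.g. ten 1-cm width variants stocked separately
    separate = n_skus * safety_stock(z, sigma, lead_time)
    # Wider width interval: one SKU serves all ten widths; independent variances add.
    pooled = safety_stock(z, sigma * math.sqrt(n_skus), lead_time)
    print(f"separate SKUs: {separate:.0f} units, pooled SKU: {pooled:.0f} units")
    # pooled / separate == 1 / sqrt(n_skus): the value of risk pooling.
```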

    Model driven product line engineering: core asset and process implications

    Reuse is at the heart of major improvements in productivity and quality in Software Engineering. Both Model Driven Engineering (MDE) and Software Product Line Engineering (SPLE) are software development paradigms that promote reuse. Specifically, they promote systematic reuse and a departure from craftsmanship towards an industrialization of the software development process. MDE and SPLE have established their benefits separately. Their combination, here called Model Driven Product Line Engineering (MDPLE), gathers together the advantages of both. Nevertheless, this blending requires MDE to be recast in SPLE terms. This has implications for both the core assets and the software development process. The challenges are twofold: (i) models become central core assets from which products are obtained, and (ii) the software development process needs to cater for the changes that SPLE and MDE introduce. This dissertation proposes a solution to the first challenge following a feature-oriented approach, with an emphasis on reuse and early detection of inconsistencies. The second part is dedicated to assembly processes, a clear example of the complexity MDPLE introduces in software development processes. This work advocates for a new discipline inside the general software development process, i.e., Assembly Plan Management, which raises the abstraction level and increases reuse in such processes. Different case studies illustrate the presented ideas. This work was hosted by the University of the Basque Country (Faculty of Computer Sciences). The author enjoyed a doctoral grant from the Basque Government under the “Researchers Training Program” during the years 2005 to 2009. The work was co-supported by the Spanish Ministry of Education and the European Social Fund under contracts WAPO (TIN2005-05610) and MODELINE (TIN2008-06507-C02-01).
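    The feature-oriented treatment of models as core assets can be sketched as below; the model fragments, feature names, and composition rule are illustrative assumptions, not the dissertation's actual assembly tooling.

```python
# Toy feature-oriented assembly: a feature selection picks model fragments
# (the core assets) that a composition step assembles into a product model.
MODEL_FRAGMENTS = {
    "base":     {"classes": ["Order", "Customer"]},
    "payment":  {"classes": ["CreditCardPayment"]},
    "tracking": {"classes": ["Shipment", "TrackingEvent"]},
}

FEATURE_TO_FRAGMENT = {"payment": "payment", "tracking": "tracking"}

def assemble_product_model(selected_features):
    """Compose the base model with the fragments enabled by the selected features."""
    classes = list(MODEL_FRAGMENTS["base"]["classes"])
    for feature in sorted(selected_features):
        fragment = FEATURE_TO_FRAGMENT.get(feature)
        if fragment:
            classes.extend(MODEL_FRAGMENTS[fragment]["classes"])
    return {"classes": classes}

if __name__ == "__main__":
    print(assemble_product_model({"payment"}))
    # {'classes': ['Order', 'Customer', 'CreditCardPayment']}
```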

    Derivation and consistency checking of models in early software product line engineering

    Dissertation submitted for the degree of Doctor in Informatics Engineering. Software Product Line Engineering (SPLE) should offer the ability to express the derivation of product-specific assets while checking for their consistency. The derivation of product-specific assets is possible using general-purpose programming languages in combination with techniques such as conditional compilation and code generation. On the other hand, consistency checking can be achieved through consistency rules in the form of architectural and design guidelines, programming conventions, and well-formedness rules. Current approaches present four shortcomings: (1) they focus on code derivation only, (2) they ignore consistency problems between the variability model and other complementary specification models used in early SPLE, (3) they force developers to learn new, difficult-to-master languages to encode the derivation of assets, and (4) they offer no tool support. This dissertation presents solutions that contribute to tackling these four shortcomings. These solutions are integrated in the approach Derivation and Consistency Checking of models in early SPLE (DCC4SPL) and its corresponding tool support. The two main components of our approach are the Variability Modelling Language for Requirements (VML4RE), a domain-specific language and derivation infrastructure, and the Variability Consistency Checker (VCC), a verification technique and tool. We validate DCC4SPL by demonstrating that it is appropriate for finding inconsistencies in early SPL model-based specifications and for specifying the derivation of product-specific models. European Project AMPLE, contract IST-33710; Fundação para a Ciência e Tecnologia - SFRH/BD/46194/2008.
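    The kind of consistency rule such a checker enforces can be sketched as follows; the rule, data shapes, and names are illustrative assumptions and do not reproduce VML4RE or VCC syntax.

```python
# Toy consistency check: every feature referenced by a model-derivation rule
# must exist in the variability (feature) model.
feature_model = {"features": {"Robot", "Navigation", "LaserNav", "VisionNav"}}

derivation_rules = [
    {"when_feature": "LaserNav", "include_model": "laser_nav.uml"},
    {"when_feature": "GPSNav",   "include_model": "gps_nav.uml"},  # deliberately inconsistent
]

def check_consistency(fm, rules):
    """Return human-readable descriptions of the inconsistencies found."""
    known = fm["features"]
    return [
        f"Rule includes '{r['include_model']}' for unknown feature '{r['when_feature']}'"
        for r in rules
        if r["when_feature"] not in known
    ]

if __name__ == "__main__":
    for problem in check_consistency(feature_model, derivation_rules):
        print(problem)
```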

    Smart Society and Artificial Intelligence: Big Data Scheduling and the Global Standard Method Applied to Smart Maintenance

    The implementation of artificial intelligence (AI) in a smart society, in which the analysis of human habits is mandatory, requires automated data scheduling and analysis using smart applications, a smart infrastructure, smart systems, and a smart network. In this context, which is characterized by a large gap between training and operative processes, a dedicated method is required to manage and extract the massive amount of data and to mine the related information. The method presented in this work aims to reduce this gap with near-zero-failure advanced diagnostics (AD) for smart management, which is exploitable in any context of Society 5.0, thus reducing the risk factors at all management levels and ensuring quality and sustainability. We have also developed innovative applications for a human-centered management system to support scheduling in the maintenance of operative processes, to reduce training costs, to improve production yield, and to create a human–machine cyberspace for smart infrastructure design. The results obtained in 12 international companies demonstrate a possible global standardization of operative processes, leading to the design of a near-zero-failure intelligent system that is able to learn and upgrade itself. Our new method provides guidance for selecting the new generation of intelligent manufacturing and smart systems in order to optimize human–machine interactions, with the related smart maintenance and education.