
    Technical product portfolio: standardization by component commonality

    Abstract. One key element when considering research and development (R&D) design decisions is the final cost of the product. A large part of the product cost comes from the component costs, also known as the bill of materials (BOM) costs. In industry, focusing on reducing the cost of every single component can lead to a situation where almost identical solutions are implemented with different components. Furthermore, the growing product variety demanded by the markets increases the number of components that need to be managed even further. This research focuses on the use of common components across different products. The research consists of a literature review and a case study. The aim of the study is to understand the benefits and limitations of component commonality and how it can be used to a company’s benefit. The aim of the case study is to analyze the current state of component commonality in the case company’s products, identify opportunities to improve it, and estimate their effects. Based on the results, solutions for improving component management in the case company are suggested. The research was conducted by reviewing the scientific literature on product and component management and, more specifically, on the component commonality approach. The literature provided insights into what the benefits and limitations of component commonality could be. On this basis, the current state of the company’s component commonality was analyzed. The products were analyzed using a commonality index chosen from the literature, with bill of materials (BOM) data as input. The analyses suggested that using common components is indeed possible for the products: higher commonality was observed among products developed at a single R&D site than among products from separate R&D sites. A development point was identified in improving cooperation between R&D sites, so that common components can be used throughout the company’s product portfolio. A technologically feasible opportunity to improve this cross-site cooperation was found in connector components. The monetary gains of using common connectors were estimated with a heuristic approach, and a notable cost reduction was observed. An organizational body able to improve the cooperation was identified, and the research findings were discussed with it.
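
    The abstract does not say which commonality index was chosen; a standard option in this literature is Collier's degree of commonality index (DCI), which can be computed directly from flat BOM data. The sketch below is a minimal Python illustration under that assumption, with made-up product and component identifiers.

        from collections import Counter

        def degree_of_commonality_index(boms):
            """Collier's DCI over flat BOMs: the average number of parent
            products per distinct component. A value of 1 means no sharing
            at all; higher values indicate more component commonality."""
            parent_counts = Counter()
            for components in boms.values():
                for component in set(components):
                    parent_counts[component] += 1
            return sum(parent_counts.values()) / len(parent_counts)

        # Hypothetical BOMs: two products share a connector, a third
        # product uses its own variant.
        boms = {
            "product_a": ["conn_x", "pcb_1", "case_1"],
            "product_b": ["conn_x", "pcb_2", "case_2"],
            "product_c": ["conn_y", "pcb_3", "case_3"],
        }
        print(degree_of_commonality_index(boms))  # 9 links / 8 parts = 1.125

    Comparing the index for products grouped by R&D site against the portfolio as a whole is one way to surface the cross-site gap the thesis reports.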

    Microservice Transition and its Granularity Problem: A Systematic Mapping Study

    Microservices have gained wide recognition and acceptance in the software industry as an emerging architectural style for autonomic, scalable, and more reliable computing. The transition to microservices has been highly motivated by the need for better alignment of technical design decisions with the improving value potentials of architectures. Despite microservices' popularity, research still lacks a disciplined understanding of the transition and consensus on the principles and activities underlying "micro-ing" architectures. In this paper, we report on a systematic mapping study that consolidates various views, approaches and activities that commonly assist in the transition to microservices. The study aims to provide a better understanding of the transition; it also contributes a working definition of the transition and the technical activities underlying it. We term the transition and the technical activities leading to microservice architectures microservitization. We then shed light on a fundamental problem of microservitization: microservice granularity and reasoning about its adaptation as a first-class entity. This study reviews the state of the art and practice related to reasoning about microservice granularity; it reviews modelling approaches, aspects considered, guidelines and processes used to reason about microservice granularity. Finally, it identifies opportunities for future research and development related to reasoning about microservice granularity.
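
    The paper surveys how granularity is reasoned about rather than prescribing a metric; purely as an illustrative sketch (the metric, call weights and service names below are assumptions, not taken from the study), one recurring style of granularity reasoning scores a candidate decomposition by the call traffic that crosses service boundaries.

        # Observed call volumes between modules of a hypothetical system.
        calls = {("auth", "users"): 30, ("users", "orders"): 4,
                 ("orders", "billing"): 25}

        def coupling(partition):
            """Sum of call weights crossing service boundaries (lower is better)."""
            return sum(weight for (a, b), weight in calls.items()
                       if partition[a] != partition[b])

        coarse = {m: "s1" for m in ("auth", "users", "orders", "billing")}
        fine = {"auth": "s1", "users": "s1", "orders": "s2", "billing": "s2"}
        print(coupling(coarse), coupling(fine))  # 0 4: the split cuts only the cheap edge

    A real granularity decision would weigh such structural scores against runtime, organizational and deployment concerns, which is exactly the multi-aspect reasoning the mapping study catalogues.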

    Power Grid Network Evolutions for Local Energy Trading

    The shift towards an energy grid dominated by prosumers (consumers and producers of energy) will inevitably have repercussions on the distribution infrastructure. Today it is a hierarchical infrastructure designed to deliver energy from large-scale facilities to end users. Tomorrow it will be a capillary infrastructure at the medium- and low-voltage levels that supports local energy trading among prosumers. In our previous work, we analyzed the Dutch power grid and made an initial analysis of the economic impact that topological properties have on decentralized energy trading. In this paper, we go one step further and investigate how different network topologies and growth models facilitate the emergence of a decentralized market. In particular, we show how connectivity plays an important role in improving reliability and reducing path costs. From the economic point of view, we estimate how topological evolutions facilitate local electricity distribution, taking into account the main cost ingredient of increasing network connectivity, i.e., the price of cabling.
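
    As a rough sketch of this kind of comparison (using the networkx library and synthetic growth models rather than the authors' Dutch grid data, so the numbers are illustrative only), one can trade average path length, a proxy for the cost of routing energy between prosumers, against edge count, a proxy for cabling cost.

        import networkx as nx

        def topology_stats(g):
            """Crude proxies: path length for trading cost, edges for cabling."""
            return {"avg_path_len": round(nx.average_shortest_path_length(g), 2),
                    "edges": g.number_of_edges()}

        n = 100
        candidates = {
            "small_world": nx.connected_watts_strogatz_graph(n, k=4, p=0.1, seed=1),
            "scale_free": nx.barabasi_albert_graph(n, m=2, seed=1),
            "random": nx.gnm_random_graph(n, 200, seed=1),
        }
        for name, g in candidates.items():
            if nx.is_connected(g):  # disconnected graphs have no finite path length
                print(name, topology_stats(g))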

    Protocol Layering and Internet Policy

    An architectural principle known as protocol layering is widely recognized as one of the foundations of the Internet’s success. In addition, some scholars and industry participants have urged using the layers model as a central organizing principle for regulatory policy. Despite its importance as a concept, a comprehensive analysis of protocol layering and its implications for Internet policy has yet to appear in the literature. This Article attempts to correct this omission. It begins with a detailed description of the way the five-layer model developed, introducing protocol layering’s central features, such as the division of functions across layers, information hiding, peer communication, and encapsulation. It then discusses the model’s implications for whether particular functions are performed at the edge or in the core of the network, contrasts the model with the way that layering has been depicted in the legal commentary, and analyzes attempts to use layering as a basis for competition policy. Next, the Article identifies certain emerging features of the Internet that are placing pressure on the layered model, including WiFi routers, network-based security, modern routing protocols, and wireless broadband. These developments illustrate how every architecture inevitably limits functionality as well as the architecture’s ability to evolve over time in response to changes in the technological and economic environment. Together these considerations support adopting a more dynamic perspective on layering and caution against using layers as a basis for a regulatory mandate for fear of cementing the existing technology into place in a way that prevents the network from innovating and evolving in response to shifts in the underlying technology and consumer demand.
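
    The layering mechanics the Article builds on (division of functions, information hiding, encapsulation) can be made concrete with a toy sketch; the bracketed headers below are placeholders, not real protocol formats.

        # Toy encapsulation down the five-layer stack: each layer adds and
        # removes only its own header, treating everything above it as an
        # opaque payload (information hiding).
        LAYERS = ["application", "transport", "network", "link", "physical"]

        def encapsulate(payload):
            for layer in LAYERS[1:]:  # application data is the initial payload
                payload = f"[{layer}]".encode() + payload
            return payload

        def decapsulate(frame):
            for layer in reversed(LAYERS[1:]):  # strip outermost header first
                header = f"[{layer}]".encode()
                assert frame.startswith(header)
                frame = frame[len(header):]
            return frame

        print(decapsulate(encapsulate(b"GET /")) == b"GET /")  # True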

    Ontological approach for DSL development

    This paper presents a project whose main objective is to explore the ontology-based development of Domain Specific Languages (DSL), more precisely, of their underlying grammar. After reviewing the basic concepts characterizing ontologies and DSLs, we introduce a tool, Onto2Gra, that takes advantage of the knowledge described by the ontology and automatically generates a grammar for a DSL that allows one to discourse about the domain described by that ontology. This approach represents a rigorous method to create, in a secure and effective way, a grammar for a new specialized language restricted to a concrete domain. The usual process of creating a grammar from scratch is, as every creative action, difficult, slow and error-prone; so this proposal is, from a grammar engineering point of view, of the utmost importance. After the grammar generation phase, the grammar engineer can manipulate it to add syntactic sugar to improve the final language quality, or even to add specific semantic actions. The Onto2Gra project is composed of three engines. The main one is OWL2DSL, the component that converts an OWL ontology into a complete attribute grammar for the construction of an internal representation of all the input data. The two additional modules are Onto2OWL, which converts ontologies written in OntoDL into standard OWL, and DDesc2OWL, which converts domain instances written in the new DSL into the initial OWL ontology. This work has been supported by FCT Fundação para a Ciência e Tecnologia within the Project Scope: UID/CEC/00319/2013.
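
    As a hypothetical illustration of the OWL2DSL step (the real tool emits a complete attribute grammar; the toy ontology and rule format here are assumptions), each ontology class can be mapped to a nonterminal whose production mentions its object and data properties.

        # Toy ontology: class -> object properties (links to other classes)
        # and data properties (literal-valued attributes).
        ontology = {
            "Library": {"object": ["Book"], "data": ["name"]},
            "Book": {"object": ["Author"], "data": ["title", "year"]},
            "Author": {"object": [], "data": ["name"]},
        }

        def class_to_rule(name, props):
            """Emit one grammar production per ontology class."""
            rhs = [f"'{name.lower()}'"]
            rhs += props["object"]  # object properties become nested nonterminals
            rhs += [f"'{d}' LITERAL" for d in props["data"]]
            return f"{name} : {' '.join(rhs)} ;"

        for cls, props in ontology.items():
            print(class_to_rule(cls, props))
        # Library : 'library' Book 'name' LITERAL ;  ...and so on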

    Traveling of Requirements in the Development of Packaged Software: An Investigation of Work Design and Uncertainty

    Software requirements, and how they are constructed, shared and translated across software organizations, express uncertainties that software developers need to address through appropriate structuring of the process and the organization at large. To gain new insights into this important phenomenon, we rely on the theory of work design and the travelling metaphor to undertake an in-depth qualitative inquiry into recurrent development of packaged software for the utility industry. Using the particular context of software provider GridCo, we examine how requirements are constructed, shared, and translated as they travel across vertical and horizontal boundaries. In revealing insights into these practices, we contribute to theory by conceptualizing how requirements travel, not just locally, but across organizations and time, thereby uncovering new knowledge about the responses to requirement uncertainty in the development of packaged software. We also contribute to theory by providing narrative accounts of in situ requirements processes and by revealing practical consequences of organization structure on managing uncertainty.

    Logic Programming: Context, Character and Development

    Logic programming has been attracting increasing interest in recent years. Its first realisation in the form of PROLOG demonstrated concretely that Kowalski's view of computation as controlled deduction could be implemented with tolerable efficiency, even on existing computer architectures. Since that time logic programming research has intensified. The majority of computing professionals have remained unaware of the developments, however, and for some the announcement that PROLOG had been selected as the core language for the Japanese 'Fifth Generation' project came as a total surprise. This thesis aims to describe the context, character and development of logic programming. It explains why a radical departure from existing software practices needs to be seriously discussed; it identifies the characteristic features of logic programming, and the practical realisation of these features in current logic programming systems; and it outlines the programming methodology which is proposed for logic programming. The problems and limitations of existing logic programming systems are described and some proposals for development are discussed.

    The thesis is in three parts. Part One traces the development of programming since the early days of computing. It shows how the problems of software complexity which were addressed by the 'structured programming' school have not been overcome: the software crisis remains severe and seems to require fundamental changes in software practice for its solution.

    Part Two describes the foundations of logic programming in the procedural interpretation of Horn clauses. Fundamental to logic programming is shown to be the separation of the logic of an algorithm from its control. At present, however, both the logic and the control aspects of logic programming present problems; the first in terms of the extent of the language which is used, and the second in terms of the control strategy which should be applied in order to produce solutions. These problems are described and various proposals, including some which have been incorporated into implemented systems, are discussed.

    Part Three discusses the software development methodology which is proposed for logic programming. Some of the experience of practical applications is related. Logic programming is considered in the aspects of its potential for parallel execution and in its relationship to functional programming, and some possible criticisms of the problem-solving potential of logic are described.

    The conclusion is that although logic programming inevitably has some problems which are yet to be solved, it seems to offer answers to several issues which are at the heart of the software crisis. The potential contribution of logic programming towards the development of software should be substantial.
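
    The separation of logic from control that Part Two centres on can be sketched in a few lines of Python (a heavy simplification: real PROLOG works over first-order terms with unification, not propositions). The clause list below is the logic; the depth-first, textual-order search in prove is the control.

        # Horn clauses as (head, body) pairs; an empty body is a fact.
        clauses = [
            ("mortal", ["human"]),
            ("human", ["greek"]),
            ("greek", []),
        ]

        def prove(goal):
            """Try clauses in textual order, subgoals left to right, depth
            first -- the control strategy standard PROLOG commits to."""
            return any(head == goal and all(prove(g) for g in body)
                       for head, body in clauses)

        print(prove("mortal"))  # True: greek -> human -> mortal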