1,174 research outputs found

    Challenge of validation in requirements engineering

    Get PDF
    This paper reviews the evolution of validation techniques and their current status in Requirements Engineering (RE). We start by answering the following questions: What is validated? Why are requirements validation activities beneficial during the RE process? Who are the stakeholders involved in the requirements validation process? Where is validation applied in the RE process? And how have the techniques and approaches of requirements validation evolved?

    Development of a resource agent for an e-manufacturing system

    Get PDF
    Due to globalisation and distributed manufacturing systems, the development and manufacture of products is no longer an isolated activity undertaken by one discipline or a single organization, but has become a global process. Using e-manufacturing, companies can now outsource to manufacturers outside their geographical area, which makes them dependent on the production capabilities and responsiveness of those suppliers. Hence there is a need for suppliers to provide reliable information on the state of the orders being processed. E-manufacturing promises to let companies exchange the required information with their suppliers by increasing visibility into the shop floor and providing a platform for information interchange. The paper discusses the development of an e-manufacturing resource agent that enables manufacturers to predict the probability of their outsourced machinery being available and the probability of completing an order without a breakdown. The Maintenance Free Operation Period (MFOP) method is used to develop the agent. This means that the manufacturer can expect a guarantee that no unscheduled maintenance activities will occur during each defined period of operation, with a predefined level of confidence
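
To illustrate the kind of prediction such a resource agent could make, here is a minimal sketch of MFOP survivability under an assumed Weibull failure model; the function names and parameter values are illustrative assumptions, not the paper's implementation:

```python
import math

def weibull_reliability(t: float, shape: float, scale: float) -> float:
    """Probability that a machine survives to age t under an assumed Weibull failure model."""
    return math.exp(-((t / scale) ** shape))

def mfop_survivability(age: float, period: float, shape: float, scale: float) -> float:
    """Probability that a machine already aged `age` hours completes the next
    maintenance-free operating period of `period` hours without a breakdown:
    MFOPS = R(age + period) / R(age)."""
    return weibull_reliability(age + period, shape, scale) / weibull_reliability(age, shape, scale)

# Example (hypothetical parameters): a milling machine with shape=1.8, scale=2000 h
# that has already run 500 h; probability of finishing a 100 h order with no breakdown.
print(round(mfop_survivability(age=500, period=100, shape=1.8, scale=2000), 3))
```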

    Preserving the Quality of Architectural Tactics in Source Code

    Get PDF
    In any complex software system, strong interdependencies exist between requirements and software architecture. Requirements drive architectural choices while also being constrained by the existing architecture and by what is economically feasible. This makes it advisable to concurrently specify the requirements, to devise and compare alternative architectural design solutions, and ultimately to make a series of design decisions in order to satisfy each of the quality concerns. Unfortunately, anecdotal evidence has shown that architectural knowledge tends to be tacit in nature, stored in the heads of people, and lost over time. Therefore, developers often lack comprehensive knowledge of underlying architectural design decisions and inadvertently degrade the quality of the architecture while performing maintenance activities. In practice, this problem can be addressed by preserving the relationships between the requirements, architectural design decisions and their implementations in the source code, and then using this information to keep developers aware of critical architectural aspects of the code. This dissertation presents a novel approach that utilizes machine learning techniques to recover and preserve the relationships between architecturally significant requirements, architectural decisions and their realizations in the implemented code. Our approach for recovering architectural decisions includes two primary stages: training and classification. In the first stage, the classifier is trained using code snippets of different architectural decisions collected from various software systems. During this phase, the classifier learns the terms that developers typically use to implement each architectural decision. These "indicator terms" represent method names, variable names, comments, or the development APIs that developers inevitably use to implement various architectural decisions. A probabilistic weight is then computed for each potential indicator term with respect to each type of architectural decision. The weight estimates how strongly an indicator term represents a specific architectural tactic/decision. For example, a term such as "pulse" is highly representative of the heartbeat tactic but occurs infrequently in the authentication tactic. After learning the indicator terms, the classifier can compute the likelihood that any given source file implements a specific architectural decision. The classifier was evaluated through several different experiments, including classical cross-validation over code snippets of 50 open source projects and on the entire source code of a large-scale software system. Results showed that the classifier can reliably recognize a wide range of architectural decisions. The technique introduced in this dissertation is used to develop the Archie tool suite. Archie is a plug-in for Eclipse and is designed to detect a wide range of architectural design decisions in the code and to protect them from potential degradation during maintenance activities. It has several features for performing change impact analysis of architectural concerns at both the code and design level and for proactively keeping developers informed of underlying architectural decisions during maintenance activities. Archie is at the stage of technology transfer at the US Department of Homeland Security, where it is used purely to detect and monitor security choices. Furthermore, this outcome is integrated into the Department of Homeland Security's Software Assurance Market Place (SWAMP) to advance research and development of secure software systems
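
A minimal sketch of the indicator-term idea described above, assuming precomputed per-tactic term weights; the weights, tactic names and tokenizer are illustrative assumptions, not taken from the dissertation or the Archie tool:

```python
import re
from typing import Dict, List

# Hypothetical indicator-term weights, standing in for the probabilistic weights
# learned during the training stage (values are illustrative only).
INDICATOR_WEIGHTS: Dict[str, Dict[str, float]] = {
    "heartbeat":      {"pulse": 0.9, "ping": 0.6, "alive": 0.5, "interval": 0.3},
    "authentication": {"login": 0.9, "credential": 0.8, "token": 0.6, "password": 0.7},
}

def tokenize(source: str) -> List[str]:
    """Split identifiers (including camelCase), comments and API names into lowercase terms."""
    return [t.lower() for t in re.findall(r"[A-Z]?[a-z]+", source)]

def score_file(source: str) -> Dict[str, float]:
    """Sum the weights of the indicator terms found in a file for each tactic;
    the tactic with the highest score is the most likely one implemented."""
    terms = tokenize(source)
    return {
        tactic: sum(weights.get(term, 0.0) for term in terms)
        for tactic, weights in INDICATOR_WEIGHTS.items()
    }

snippet = "void sendPulse() { /* tell the monitor this node is still alive */ }"
print(score_file(snippet))  # the heartbeat score should dominate for this snippet
```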

    Distributed manufacturing systems and the internet of things : a case study

    Get PDF
    In order to stay competitive in today's global market, manufacturing companies need to be flexible. To ensure flexible production, shorten processing times, and reduce time-to-market, companies are utilizing the distributed manufacturing system paradigm, wherein geographically distributed, local resources are used for product development and production. In this context, the Internet of Things (IoT) has emerged as a concept which uses existing communication technologies, such as local wireless networks and the Internet, to ensure visibility of anything from anywhere and at any time. In the paper, a case study of applying the IoT to the manufacturing domain is discussed. A distributed agent-based system for virtual monitoring and control of 3-axis CNC milling machine tools is designed and developed. The machines' 3D models and process states are shown through a web interface in real time. The potential and challenges of implementing this system and the basic building blocks for decentralized value creation are discussed
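
A minimal sketch of a machine-status agent in the spirit of the system described above, assuming each machine periodically pushes its process state as JSON to whatever transport the web interface consumes; the field names and the `publish` callback are illustrative assumptions, not the paper's actual design:

```python
import json
import time
from typing import Callable

def read_machine_state(machine_id: str) -> dict:
    """Placeholder for reading program/axis state from the CNC controller (hypothetical values)."""
    return {
        "machine_id": machine_id,
        "state": "RUNNING",            # e.g. IDLE, RUNNING, ALARM
        "program": "part_4711.nc",
        "axis_position": {"x": 120.5, "y": 42.0, "z": -3.2},
        "timestamp": time.time(),
    }

def run_agent(machine_id: str, publish: Callable[[str, str], None],
              interval_s: float = 1.0, cycles: int = 3) -> None:
    """Periodically publish the machine state so a web front end can render it in near real time."""
    topic = f"factory/machines/{machine_id}/state"
    for _ in range(cycles):
        publish(topic, json.dumps(read_machine_state(machine_id)))
        time.sleep(interval_s)

# Example: print the messages instead of sending them to a broker or web server.
run_agent("cnc-01", publish=lambda topic, payload: print(topic, payload))
```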

    A Search Engine for Finding and Reusing Architecturally Significant Code

    Get PDF
    Architectural tactics are the building blocks of software architecture. They describe solutions for addressing specific quality concerns, and are prevalent across many software systems. Once a decision is made to utilize a tactic, the developer must generate a concrete plan for implementing the tactic in the code. Unfortunately, this is a non-trivial task even for experienced developers. Developers often resort to using search engines, crowd-sourcing websites, or discussion forums to find sample code snippets to implement a tactic. A fundamental problem in finding implementations of architectural patterns/tactics is the mismatch between the high-level intent reflected in the descriptions of these patterns and their low-level implementation details. To reduce this mismatch, we created a novel Tactic Search Engine called ArchEngine (ARCHitecture search ENGINE). ArchEngine can replace this manual Internet-based search process and help developers reuse proper architectural knowledge and accurately implement tactics and patterns from a wide range of open source systems. ArchEngine helps developers find implementation examples of a tactic for a given technical context. It uses information retrieval and program analysis techniques to retrieve applications that implement these design concepts. Furthermore, the search engine lists the code snippets where the patterns/tactics are located. Our case study with 21 professional software developers shows that ArchEngine is more effective than other search engines (e.g. SourceForge and Koders) in helping programmers quickly find implementations of architectural tactics/patterns
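
A minimal sketch of the information-retrieval step, assuming an index of tokenized source files and a simple TF-IDF ranking; the toy corpus, tokenizer and scoring are illustrative assumptions, and ArchEngine's program-analysis stage is omitted here:

```python
import math
import re
from collections import Counter

# Toy corpus: file path -> source text (illustrative only).
CORPUS = {
    "Monitor.java": "class Monitor { void sendPulse() { heartbeat.ping(); } }",
    "Login.java":   "class Login { boolean checkCredential(String token) { return auth.verify(token); } }",
    "Util.java":    "class Util { int add(int a, int b) { return a + b; } }",
}

def tokenize(text: str) -> list:
    return [t.lower() for t in re.findall(r"[A-Z]?[a-z]+", text)]

DOC_TOKENS = {path: tokenize(src) for path, src in CORPUS.items()}

def tf_idf_score(query: str, path: str) -> float:
    """Score a file against the query: term frequency weighted by inverse document frequency."""
    tokens = DOC_TOKENS[path]
    counts = Counter(tokens)
    score = 0.0
    for term in tokenize(query):
        df = sum(1 for toks in DOC_TOKENS.values() if term in toks)
        if df:
            score += (counts[term] / len(tokens)) * math.log(len(CORPUS) / df)
    return score

query = "heartbeat pulse"
ranked = sorted(CORPUS, key=lambda p: tf_idf_score(query, p), reverse=True)
print(ranked)  # Monitor.java should rank first for this heartbeat-tactic query
```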

    The benefits and challenges of integrating ERP and Business Intelligence

    Get PDF
    Organizations have invested a significant amount of resources in the implementation of Business Intelligence (BI) and Enterprise Resource Planning (ERP) systems. In today's competitive business environment, ERP and BI have become vital strategic tools, which impact directly on the success of any project implementation. The benefit of combining ERP and BI is that BI systems add intelligence to ERP data. The IT performance and decision-making capability inside the organization can be significantly improved by integrating ERP and BI systems. Little attention has been given to the integration of Enterprise Resource Planning and Business Intelligence. Even though there have been studies which explain the integration of these systems, the literature is still diversified and fragmented. This study attempts to review and evaluate articles related to the integration of BI and ERP. It aims to examine how the integration between ERP and BI systems affects business performance and what the benefits and challenges of integrating these systems are. The thesis was carried out as an empirical, qualitative case study, and semi-structured interviews were used to collect data. Five interviews were conducted; six people from four different companies were interviewed, drawn from departments relevant to this topic. Observations and interviews indicate that the concept of direct integration between ERP and BI is quite outdated. As a result of technological advancements, different software is now used between ERP and BI to ensure that data is transferred and stored effectively between the two programs. In the studied companies, the integration of ERP and BI does not always go smoothly, and companies have run into various issues in this area. For instance, managing the increased data volume, problems with data accuracy, and technical compatibility are challenges related to ERP-BI integration. The empirical data also shows that ERP-BI integration has many benefits. For example, it can enhance reporting, decision-making, and process efficiency because of its automation features; for instance, it can provide up-to-date reports on key performance indicators in real time. Furthermore, because the company's figures now arrive automatically thanks to ERP-BI integration, employees spend less time performing routine tasks, freeing them up to concentrate more on data analysis

    Optimizing recovery protocols for replicated database systems

    Full text link
    Nowadays, information technology and computing systems have great relevance in our lives. Among current computer systems, distributed systems are among the most important because of their scalability, fault tolerance, performance improvements and high availability. Replicated systems are a specific case of distributed systems. This Ph.D. thesis is centered on the field of replicated databases due to their extended usage, requiring among other properties: low response times, high throughput, load balancing among replicas, data consistency, data integrity and fault tolerance. In this scope, the development of applications that use replicated databases raises some problems that can be reduced using other fault-tolerant building blocks, such as group communication and membership services. Thus, the usage of the services provided by group communication systems (GCS) hides several communication details, simplifying the design of replication and recovery protocols. This Ph.D. thesis surveys the alternatives and strategies used in replication and recovery protocols for database replication systems. It also summarizes different concepts about group communication systems and virtual synchrony. As a result, the thesis provides a classification of database replication protocols according to their support for (and interaction with) recovery protocols, always assuming that both kinds of protocol rely on a GCS. Since current commercial DBMSs allow programmers and database administrators to sacrifice consistency with the aim of improving performance, it is important to select the appropriate level of consistency. Regarding (replicated) databases, consistency is strongly related to the isolation levels assigned to transactions. One of the main proposals of this thesis is a recovery protocol for a replication protocol based on certification. Certification-based database replication protocols provide a good basis for the development of their recovery strategies when a snapshot isolation level is assumed. At that level, readsets are not needed in the validation step, so they do not need to be transmitted to other replicas. Additionally, these protocols hold a writeset list that is used in the certification/validation step. That list maintains the set of writesets needed by the recovery protocol. This thesis evaluates the performance of a recovery protocol based on the writeset list transfer (basic protocol) and of an optimized version that compacts the information to be transferred. The second proposal applies the compaction principle to a recovery protocol designed for weak-voting replication protocols. Its aim is to minimize the time needed for transferring and applying the writesets lost by the recovering replica, obtaining in this way a more efficient recovery. The performance of this recovery algorithm has been checked by implementing a simulator. To this end, the Omnet++ simulation framework has been used.
The simulation results confirm that this recovery protocol provides good results in multiple scenarios. Finally, the verification of the correctness of both recovery algorithms is presented in Chapter 5.
García Muñoz, LH. (2013). Optimizing recovery protocols for replicated database systems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31632
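
To illustrate the compaction idea behind the optimized recovery protocol, here is a minimal sketch; the writeset representation (a list of per-transaction key/value maps) and the function name are assumptions for illustration, not the thesis's actual data structures:

```python
from typing import Dict, List

# Each writeset records the rows written by one committed transaction: key -> new value.
WritesetList = List[Dict[str, str]]

def compact(missed_writesets: WritesetList) -> Dict[str, str]:
    """Collapse the writesets missed by a recovering replica into a single map
    that keeps only the latest value per key, so less data is transferred and applied."""
    state: Dict[str, str] = {}
    for ws in missed_writesets:          # writesets are applied in commit order
        state.update(ws)                 # later writes overwrite earlier ones
    return state

missed = [
    {"acct:1": "100", "acct:2": "50"},
    {"acct:1": "80"},                    # acct:1 rewritten; only "80" needs to be transferred
    {"acct:3": "10"},
]
print(compact(missed))   # {'acct:1': '80', 'acct:2': '50', 'acct:3': '10'}
```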